#42 – Amanda Askell on tackling the ethics of infinity, being clueless about the effects of our actions, and having moral empathy for intellectual adversaries
By Robert Wiblin and Keiran Harris · Published September 11th, 2018
Consider two familiar moments at a family reunion.
Our host, Uncle Bill, is taking pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But were it viewed as a moral position rather than a personal preference – as it might be if Becky were instead avoiding meat on religious grounds – it would usually receive a very different reaction.
An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table: the family mostly think he has no business trying to foist his regressive preferences on other people’s personal lives. But if considered not as a matter of personal taste, but rather as a moral position – that Bill genuinely believes he’s opposing mass murder – his comment might start a serious conversation.
Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. Across the political spectrum, we’re unable to get inside the mindset of people who express views that we disagree with, and see the issue from their point of view.
A common cause of conflict, as above, is confusion between personal preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.
One seeming path to progress involves contraception. A lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception – so why can’t we compromise and agree to have much more contraception available?
According to Amanda, a charitable explanation is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.
So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world. Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually figure out why we disagree and perhaps even converge on agreement.
Today’s episode blends such practical topics with cutting-edge philosophy. We cover:
- The problem of ‘moral cluelessness’ – our inability to predict the consequences of our actions – and how we might work around it
- Amanda’s biggest criticisms of social justice activists, and of critics of social justice activists
- Is there an ethical difference between prison and corporal punishment? Are both or neither justified?
- How to resolve ‘infinitarian paralysis’ – the inability to make decisions when infinities get involved
- What’s effective altruism doing wrong?
- How should we think about jargon? Are a lot of people who don’t communicate clearly just trying to scam us?
- How can people be more successful while they remain within the cocoon of school and university?
- How did Amanda find her philosophy PhD, and how will she decide what to do now?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours podcast is produced by Keiran Harris.
Highlights
…I often think that we should have norms where if you don’t understand people relatively quickly, you’re not required to continue to engage. It’s the job of communicators to clearly tell you what they mean. And if they feel like it’s your job to-
Robert Wiblin: They impose such large demands on other people.
Amanda Askell: Yeah. … if you communicate in a way that’s ambiguous or that uses a lot of jargon, what you do is you force people to spend a lot of time thinking about what you might mean. If they’re a smart and conscientious reader, they’re going to be charitable and they’re going to attribute the most generous interpretation to you.
And this is actually really bad because it can mean that … ambiguous communication can actually be really attractive to people who are excited about generating interpretations of texts. And so you can end up having these really perverse incentives to not be clear. …
… there are norms in philosophy. They’re not always followed, but one thing that I always liked about the discipline is that you’re told to always just basically state the thing that you mean to state as clearly as possible. And I think that’s a norm that I live by. And I also think that people appreciate it when reading.
Robert Wiblin: Yeah, this is getting close to a hobby horse of mine. I’m quite an extremist on this communication issue. When I notice people who I think are being vague or obscurantist – that they’re not communicating as clearly as they could – my baseline assumption is that they’re pulling a scam. They’re pulling the scam where they’re expecting other people to do the work for them and they’re trying to cover up weaknesses in what they’re saying by not being clear.
Maybe that’s too cynical. Maybe that’s too harsh an interpretation. We were saying we should be charitable to other people, but honestly my experience very often has been that even after looking into it more, my conclusion remains that people who can’t express things clearly, but claim that they have some extremely clear idea of what they’re trying to say, are just pulling a con.
I think the reason why these questions are important is because they demonstrate inconsistencies with fundamental ethical principles. And those inconsistencies arise and generate problems even if you’re merely uncertain about whether the world is like this. And the fact that the world could in fact be like this means that I think we should find these conflicts between fundamental ethical axioms quite troubling. Because you’re going to have to give up one of those axioms then. And that’s going to have ramifications on your ethical theory, presumably also in finite cases. Like if you rejected the Pareto principle, that could have a huge effect on which ethical principles you think are true in the finite case.
But, I do have sympathy for the concern. I don’t think that this question is an urgent one, for example, and so I don’t think that people should necessarily be pouring all of their time into it. I think it could be important because I think that ethics is important and this generates really important problems for ethics. But I don’t necessarily think it’s urgent.
And so I think that one thing that people might be inclined to say is, “Oh this is just so abstract and just doesn’t matter.” I tend to think it does matter, but the thing maybe that you’re picking up on is it’s possibly not urgent that we solve this problem. And I think that’s probably correct.
If you think that there are really important unresolved problems, then things that give you the space at some point in the future to research the stuff can be more important.
These issues might not be urgent, but at some point it would be really nice to work through and resolve all of them, and so you want to make sure that you leave space for that and you don’t commit to one theory being true in this case. And I think that the important lessons of impossibility theorems in ethics are mainly that ethics is hard and that you shouldn’t act as if one ethical theory or principle or set of principles is definitely true, because there are a lot of inconsistencies between really plausible ones. And so I think that’s a more general principle that one should live by, and maybe these impossibility results just kind of strengthen that.
I’ve talked a little bit about moral value of information in the past and I think that the main thing that I kind of concluded from it was that it’s very easy to take this kind of evidence based mindset when it comes to doing the most good. We are like, let’s just take these interventions, for which I have the most evidence about what the nature of their impact is, and let’s just invest in those or you can take a kind of more expectations-based kind of approach where you say, “Well, actually, what we should do is we should run some experiments and we should try out various things and see if they work because we just don’t have a huge amount of information in this domain.”
And if you take that kind of attitude, you can end up, kind of, investing in things a bit more experimentally and I think that there’s potentially a better case to be made for that than people have appreciated and so that might be one consequence of this is just, “Hey, the ethical value of information is actually higher than we thought and maybe we should just be trying to find new ways of gaining a bunch of new information about how we can do good.”
We often seem to betray a kind of complete lack of what I call moral empathy, where moral empathy is trying to get inside the mindset of someone who expresses views that we disagree with and see that from their point of view, what they’re talking about is a moral issue and not merely a preference. The first example is vegetarianism where you’ll sometimes see people basically get very annoyed, say, with their vegetarian family member because the person doesn’t want to eat meat at a family gathering or something like that. I think the example I give is, this makes sense if you just think of vegetarianism like a preference.
It’s just like, “Oh, they’re being awkward. They just have this random preference that they want me to try and accommodate.” It’s much less acceptable if you think of it as a moral view. You see this where people are a bit more respectful of religious views. So if someone eats halal, I think that it would be seen as unacceptable to … people wouldn’t have the same attitude of, oh, how annoying and how terrible of them.
…I find people in conversation much more happy and just much more willing to discuss with you if you show that you actually have cared enough to go away and research their worldview and you might be like, “Look, I looked into your worldview and I don’t agree with it, but I’ll demonstrate to you that I understand it.” It just makes for a much more friendly discussion basically because it shows that you’re not like, “I don’t even need to look at the things that you’ve been raised with or understood or researched. I just know better without even looking at them.”
Articles, books, and other media discussed in the show
- Amanda’s blog
- Amanda’s PhD Thesis: Pareto Principles in Infinite Ethics
- Amanda’s BPhil Thesis: Objective Epistemic Consequentialism
- The Moral Value of Information, Amanda’s talk at EA Global 2017: Boston
- Cluelessness by Hilary Greaves
- Pareto Principle on Wikipedia
- Multi-armed bandit on Wikipedia
- Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths
- When Brute Force Fails: How to Have Less Crime and Less Punishment by Mark Kleiman
- Infinite Utility by James Cain (introducing The Sphere of Suffering)
- With Infinite Utility, More Needn’t Be Better by Hamkins and Montero
- Down Girl by Kate Manne
Latest 80,000 Hours articles:
- Our career review of working as a congressional staffer
- Randomised experiment: If you’re genuinely unsure whether to quit your job or break up, then you probably should
- Psychology experiments in top journals – are they true?
- Should you play to your comparative advantage?
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Before we get to today’s episode I just wanted to mention a few articles we’ve released lately.
I’ll put links to these in the show notes and blog post associated with this episode. If you want to skip this section jump ahead a minute or two.
Last week we launched a career review of working as a congressional staffer which covers the impact you might expect to have, various other pros and cons, and what are indicators that it’s a good personal fit. If you could see yourself pursuing a career in US politics you should check it out.
A few weeks ago I wrote up a summary of a randomised controlled trial that recruited people who were on the fence about whether to quit their jobs or make other changes to their lives. It then advised some of them to make the change, and others to stay the course. It found that in general people who changed their lives were happier six months later. The write-up goes into more detail and you should certainly read it before taking action on this basis.
You may have heard about a paper published 2 weeks ago reporting on the results of an effort to replicate 21 psychology papers published in the best journals, to figure out which effects were real and which weren’t. I made a quiz that describes the results of these 21 papers and invites you to guess whether that particular effect replicated or not.
It’s quite fun and so far 6,000 people have used it. We’re collecting data on what kinds of people have the most accurate guesses, which we’ll write up soon.
Finally, just yesterday we published a new and quite advanced article on whether or not it’s important to focus on finding your comparative advantage relative to other people in your professional community.
Alright, here’s Amanda.
Robert Wiblin: Today I’m speaking with Amanda Askell. Amanda recently completed a PhD in philosophy at NYU, one of the world’s top philosophy grad schools, with a thesis focused on infinite ethics. Before that, she did a BPhil at Oxford University, with her thesis being focused on objective epistemic consequentialism, quite a mouthful. She’s been involved in the effective altruism community since its inception and blogs at rationalreflection.net.
Thanks for coming on the podcast, Amanda.
Amanda Askell: Thank you for having me.
Robert Wiblin: So, we plan to talk a bunch about your philosophy research and I guess, your views on philosophy PhDs and academic careers in general. But first, you finished your PhD defense a couple of weeks ago, right?
Amanda Askell: Yeah, last week.
Robert Wiblin: Okay, yeah. How did you find the PhD experience? Is it six years you spent at NYU?
Amanda Askell: Six and a half years I think, altogether.
Robert Wiblin: I mean I’ve heard that PhDs in the US are pretty painful. Is that kind of your experience?
Amanda Askell: I think it depends on your disposition. In some ways I think I maybe don’t have the perfect disposition for a PhD. You have to be able to focus on many things at once if you want to kind of get a lot out of the program, I think. I tend to be much more kind of singular in my thinking. And so, I find it a little bit hard to spread my research across multiple topics. Whereas in the US you start out kind of doing many topics and then eventually focus in on the thesis.
Robert Wiblin: I thought the challenge that most people had was focusing in their PhD. Because they kind of want to graze intellectually, but then they have to spend potentially years just getting to the forefront of one particular topic.
Amanda Askell: I think it depends on the kind of person you are. So, some people have this kind of magical ability to do a PhD and at the same time produce many different kinds of research while they’re doing their thesis. And I think I always saw that and thought, “Oh, that’s what I want to do. I want to be the kind of person that just produces many things while I’m doing my PhD.” And then I found towards the end that it was like, actually, if I want to get this PhD finished and do the research, I have to really just focus on this one thing.
So some people have this ability to focus on multiple things. But I don’t have a problem with just focusing in on a single research topic. It’s just that I wish I could kind of multitask, in a way I seem unable to.
Robert Wiblin: Yeah, I think you seem like one of the most conscientious people I know. Is this a potential downside of that? I’m trying to find any justification for my lack of conscientiousness.
Amanda Askell: This feels surprising to me. Because I think I can be conscientious about work but this can also mean neglecting things in ways that other people don’t. So I can be very non-conscientious about emails for example-
Robert Wiblin: Yeah.
Amanda Askell: As a result of this. So I just trade off. I just take conscientiousness from one area and I like erase it and I apply this to some other area, like my research. So that’s how I work. I’m like I have a pool of conscientiousness. And I have lots of emails that I haven’t responded to.
Robert Wiblin: I think most of the philosophers I know seem … They really fit the philosopher’s stereotype of having their heads in the clouds a bit. Perhaps, like, quite bad at life admin: filing their taxes and answering their emails and buying food and cleaning their room.
Amanda Askell: Yeah.
Robert Wiblin: I guess you seem a bit like that.
Amanda Askell: Yes, I’m very like that.
Robert Wiblin: Do you think there’s a systematic reason why philosophers have to be that way?
Amanda Askell: I think that you have to carve out a space for research. So the kind of intense research that is involved both in PhDs but also later in research jobs, just needs kind of single minded focus on one topic. And I just find if I’m having to think about other things, it just divides my attention. And so I compartmentalize really heavily.
So I’m the kind of person where I’m like, I completely get rid of my emails. I’ll just snooze all of them until a given task is done. And I think if you don’t have that space, it’s just like you can’t get to that point where you can just focus fully on this very difficult problem in front of you. So I think it’s that, that people are inclined to just get rid of the other stuff in order to focus on problems.
Robert Wiblin: So, having finished your PhD, are you glad that you started it in the first place?
Amanda Askell: Oh, that is a tough question. I think in retrospect, I’m unsure whether I would do a PhD again, were I faced with the same choice that I had say, like six or seven years ago. Mainly … Not because I haven’t enjoyed the program and not because I haven’t learned a lot. It’s just a huge time investment. And it’s a time investment in the case of philosophy that’s quite singularly focused on one outcome. Namely, people are mainly focused on getting academic jobs. It’s somewhat unusual for people to do other things.
And so, if you have any uncertainty about whether that’s what you want to do, it can be quite risky. And, given the way that the job market is at the moment, it can be quite risky even if you think that that’s definitely what you want to do. So, I’m not sure that I … Yeah. I’m basically not sure that I would do it over again and perhaps not on the topics that I chose to focus on.
Robert Wiblin: Okay, well we’ll come back to talking about philosophy as a career track and what you’re going to do next later on in the episode.
But for now I wanna move on to the issue of moral cluelessness. So what is that problem?
Amanda Askell: Cluelessness is this problem that arises when you’re trying to make an ethical decision and there are immediate ramifications to your actions that you can just understand quite well. So I can distribute twenty malaria nets in this region and I can estimate the impact that will have in terms of malaria for those people.
But there are lots of effects of your actions that are just very difficult to predict. An example is: you save the life of a woman, and you don’t realize that the child that she’s carrying is going to grow up to be a terrible dictator who murders many people. So this was just a very unforeseen consequence of the action of saving the woman. Similarly, you could save someone whose child grows up to save billions of people, but that was an unforeseen consequence of the act of saving the woman, whose direct impact was just saving that woman and saving her unborn child.
And the response to this worry is essentially that you may think, well, maybe things just kind of cancel out. So I have action A, which is saving the woman, and action B, which is not. It seems very obvious that I should help the woman, and I’m inclined to agree with that. So you say: well, she could have given birth to this person who ends up being a terrible dictator, but she also could have given birth to this person who saves millions of lives, and so the probabilities of these kind of outlier events cancel out.
The novel problem of cluelessness that has been talked about by Hilary Greaves, for example, is that in some cases it doesn’t seem like we can use this kind of principle of indifference. So it could be that my actions have ongoing ramifications that are simply not foreseen, but I don’t have any reason to think that they’re equal across both of my actions.
So you think about the consequences of having this huge impact in a country by donating a huge amount of money and affecting its economy and affecting its people. There may be consistent impacts of that action that are not such that I can just think, oh yeah and it’s equally likely that that wouldn’t have happened or that the opposite would have happened. Rather it’s just I don’t know.
And so the problem of cluelessness is something like: I shouldn’t necessarily think that there are equal probabilities of these outlier or unintended effects, but nor do I have any information to go on about the likelihood or otherwise of these good long term effects versus these negative long term effects. And so it’s this real worry about our degree of uncertainty about the long term unintended effects of the direct impact of our actions.
Robert Wiblin: So for this to be a real problem does it have to be the case that these long term or indirect effects are much larger than the direct effect? That they’re likely to swamp it?
Amanda Askell: I’m not sure if it’s necessary for the problem. I’m trying to think about whether you can generate a smaller version of the problem. I think it’s likely that this is what’s generating the key worry. It’s just that the consequences of my actions are actually quite likely to be large as well, because when we think about the causal chain that you’re setting off when you undertake an action like intervening in another country, it’s not one where you actually expect the ramifications to be kind of small. It’s one where you do expect a kind of important long term impact, and you may not be sure about the sign of that impact if you think that there are unforeseen outcomes that are quite negative and quite positive and you do not have enough information to be able to see how likely they are. So I think the presupposition is more that almost all of the outcomes are fairly large, and as soon as you get further than a few years from now you start to be really uncertain about what they look like.
Robert Wiblin: Yeah. I wonder if it’s worth pointing out how easy it would be for saving a single life to change the entire course of history – that it’s not only possible but perhaps even probable that it could completely change the identities of all people in future generations.
Amanda Askell: Well yeah, a super fun … I think it gets called the ancestor’s paradox. It’s essentially: think about how many grandparents you have and how many great grandparents you have, and imagine that tree branching outwards. It gets bigger and bigger as you go back in the generations. But then imagine the number of people who have existed in history: our population has been increasing, so as you go into the past it actually decreases. And obviously the reason for this is that there’s a lot of overlap between relations, so maybe it’s the case that your great great great grandparent is also your great great great something else. And because of this, if you go far enough back in history you can say: if this person has any living descendant, then they are in fact the ancestor of everyone on earth. And this is a really interesting effect when you think about it going forward, because you should expect the same thing.
So one person, if they have living descendants far into the future, will in fact be the ancestor of everyone. So changing who they give birth to, or changing whether they have children and whether they have living descendants, can actually change the entire population of the world in the future. So identities of agents are actually super delicate, basically. And so yes, if you save the life of one person, they’re going to have children and have children’s children, etc. There’s a really good chance you’ve just changed the identity of everyone who exists, the entire population of humans that exist in the future, which is very interesting, but it’s an example of how the causal ramifications of your actions can actually be fairly massive.
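A quick arithmetic sketch of the branching Amanda describes. The generation length and historical population figures below are rough illustrative assumptions, not figures from the conversation:

```python
# Naive ancestor count: 2 parents, 4 grandparents, ... so 2^n
# ancestor "slots" n generations back, pretending there's no overlap.

GENERATION_YEARS = 25            # assumed average generation length
WORLD_POP_1000AD = 300_000_000   # rough order-of-magnitude estimate

for n in (10, 20, 30, 40):
    slots = 2 ** n
    print(f"{n} generations (~{n * GENERATION_YEARS} years ago): "
          f"{slots:,} ancestor slots")

# 40 generations (~1,000 years ago) gives ~1.1 trillion slots, vastly
# more than the ~300 million people alive then. The same people must
# fill many slots, which is the overlap Amanda describes, and why one
# person with surviving descendants eventually appears in everyone's tree.
```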
Robert Wiblin: And also extremely unpredictable [crosstalk 00:52:16].
Amanda Askell: And very unpredictable. Yeah.
Robert Wiblin: Are there different forms of cluelessness that we should worry about, different kind of classifications?
Amanda Askell: The main classification that I’m aware of is this kind of new puzzle of cluelessness versus the original puzzle, where the new puzzle is pointing out that it’s very difficult to use the principle of indifference in some of these cases. Specifically, the example used is effective altruism, where you expect to have large and fairly consistent effects. So it’s not merely: what’s the possibility that this woman gives birth to a dictator versus she gives birth to someone fantastic – where maybe you think that you should use the principle of indifference, because the person could have been a dictator or they could have been a real benefit to humanity – but rather that your actions have consistent effects, but we just don’t know what they are.
So that was an unpredictable outcome versus one that is a consistent outcome of your action, where you shouldn’t think it was equally likely that the opposite outcome would have happened. So in changing a population – improving the lives right now of a population and therefore changing everything about the future economy of that country – that will in fact be good or bad for that country, but you shouldn’t just say, oh well, fifty-fifty, it could either be good or be bad. Rather, if you investigated you would find reasons for thinking that it’s more likely to be good than bad, but right now you just have complete uncertainty about which is the case.
Robert Wiblin: But for it to matter, you either have to believe already that it’s probably positive or negative, or it has to be possible for you to find out, right?
Amanda Askell: Yeah. If you think the principle of indifference isn’t true here – that’s a principle that lets you just kind of assign really precise values to outcomes and just say: well, I have this really very positive outcome and this really very negative outcome, both of them are possible, and I’m gonna assign them the same probability. If you think that this cluelessness problem is a real problem, one thing you might say is: I just can’t assign probabilities to these outcomes given my current evidence, and in that case you should perhaps try to use imprecise probabilities or probability intervals or something like that.
Robert Wiblin: Wouldn’t you always have some kind of credence, some kind of probability attached to each outcome? Do you wanna move away from this simple Bayesianism?
Amanda Askell: I mean, I like this, and the question mainly here is: can we say that what we’re doing is actually rational? And so one possible response to this is: actually, you do have reasons, and we’re not merely appealing to a principle of indifference, but rather we are thinking of all of the possible long term ramifications of our actions given our current evidence, and we’re using that to make decisions, and we’re going to try to discover more about what those long term ramifications are.
So I think if you do it within a kind of precise framework, you would probably just end up denying that we’re reasoning using the principle of indifference, and you’d try and say: no, we actually have evidence about these long term outcomes, and we either are or should be taking it into account. And the major update that we get from the problem of cluelessness is that we should really be trying to figure out more about what the long term ramifications of our actions are. Because the fact that we can even look at cases like this and be unsure about the effects that our actions will have in, like, seventy years is fairly bad. Because it could be that there are just these things that we could gain evidence about that are at the moment unforeseen and that are gonna negatively impact future populations.
Robert Wiblin: Yeah. So, what’s this principle of indifference?
Amanda Askell: That’s the principle that says: take the really bad outcome, the outcome where the agent was a terrible despot, and then this other outcome, which seems also kind of implausible, where they are going to save humanity from something terrible and save millions of people.
Robert Wiblin: You just cancel them out?
Amanda Askell: Yeah. I don’t really have any more reason to think that their child is going to be a terrible despot than I do to think that they’re going to save the world, and so, sure, let’s just say that these effects just get cancelled out. I’m just indifferent between them.
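In expected value terms, the move being described is to give the two symmetric tail outcomes equal probability so that they cancel. A minimal sketch with made-up numbers:

```python
# Toy expected value of saving one life, with two symmetric
# "outlier" outcomes. All numbers are illustrative.

p_tail = 1e-6        # same probability assigned to each tail event
v_despot = -1e9      # value if the child becomes a terrible despot
v_saviour = 1e9      # value if the child saves millions of lives
v_direct = 1.0       # direct value of saving the woman

# Principle of indifference: equal weight on symmetric tails,
# so their contributions cancel and only the direct effect remains.
ev = v_direct + p_tail * v_despot + p_tail * v_saviour
print(ev)  # 1.0

# The newer cluelessness worry: if your evidence gives you no grounds
# for setting these two probabilities equal, the cancellation step is
# exactly the part that needs justifying.
```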
Robert Wiblin: So, I’ve heard people in the past say that this issue of being totally unable to predict the long term consequences of your actions shows that consequentialism is wrong or it’s very problematic. I’ve even heard people say because no one even thinks about this that just shows that no one is a consequentialist. Which is kind of amusing given how much the effective altruism community stresses about this. What do you think of those arguments?
Amanda Askell: I think it’s a problem for consequentialism. One thing that’s worth noting is this is a problem that arises … Maybe this is a philosopher’s point. There’s a problem that arises in a context of making decisions rather than in a context of ranking actions. So some people are going to think about consequentialism as a theory that’s more about how you should rank actions given their actual outcomes, and in that case cluelessness doesn’t arise, because it’s specifically a problem about uncertainty. But you might think that it’s a problem for people who want to try to internalize a kind of consequentialist procedure – they’re really trying to work out what the best action for them to undertake is if consequentialism is true, and it turns out that that’s really hard, if not impossible, to do.
I’m inclined to think that it is a problem if we can’t use principles to actually give kind of well defined expected values to outcomes. I suppose I’m more optimistic in the case of cluelessness that we can, given our evidence, give more precise estimates of how good the outcomes are.
Robert Wiblin: It seems more like a practical problem than an in-principle problem.
Amanda Askell: Yeah-
Robert Wiblin: Even if that’s a very challenging practical problem.
Amanda Askell: Yeah, I think that’s how I … And maybe other people perceive it differently, but that’s how I perceive the cluelessness problem.
Robert Wiblin: I suppose you could imagine us constructing a world in the future where things are much less chaotic and much more predictable and so the cluelessness problem somewhat goes away.
Amanda Askell: Yeah, and we have a lot of evidence about long term ramifications. Maybe one worry for that is going to be that even if you have a huge amount of evidence, you will need something close to omniscience, because you could just have random factors that have huge causal ramifications. So like we said about identity: changing which children someone has affects the population of the entire future of humanity. You might just think that random events could have huge impacts on the outcomes of your actions.
Robert Wiblin: Yes, but I suppose at the very extreme end you could just have one non human agent that never reproduces remaining and then it would be much easier to predict the consequences of their [crosstalk 00:58:48] actions.
Amanda Askell: Yeah, we can just imagine worlds where it’s like: and also there’s one agent in a box who is completely separate from the rest of the universe, and so there’s no chancy behavior. And then we can maybe extrapolate from that. Yeah, the amount of data we would need would be huge, but in principle we could solve this problem by knowing about everything that’s going to happen, given the different things that we could do.
Robert Wiblin: So having mapped out this issue, do you think it’s a challenge for philosophers, or is it now just a challenge for social scientists and economists and things like that?
Amanda Askell: I think there’s a key challenge of figuring out whether we can actually have rational, precise credences in these kinds of cases, especially if we reject this kind of principle of indifference, and if so, how we can make decisions under this form of uncertainty. And so for the people who think that you should just have imprecise credences in this case, the key challenge is going to be giving a good decision theory for imprecise credences, which is already a big challenge that philosophers are focused on, and that could be something that philosophers and economists could contribute to. And finding ways to demonstrate what the rational precise credence to have in cases like this is – that’s also something that I think both philosophers and others can definitely contribute to.
Robert Wiblin: What’s an imprecise credence?
Amanda Askell: An imprecise credence is where you don’t … With credences, the idea is often that we have very precise probabilities that we assign to different states of the world, and this seems like an idealization. It seems like I don’t actually have real valued … a credence of 0.7149231… in some given state of the world. With an imprecise credence, the value of your credence is in fact an interval between zero and one. And so maybe, instead of saying that I have a credence of precisely 0.6 that this thing is going to occur, maybe I actually have a credence that’s between 0.5 and 0.7, or maybe my credence is in fact the interval [0.5, 0.7]. And so it’s just cases where we don’t have precise credences, but rather we just have intervals.
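A minimal sketch of the difference in representation, with illustrative numbers only: a precise credence is a single number, an imprecise one an interval, and sweeping over the interval turns a point-valued expected value into an interval:

```python
# One bet: gain 100 if the good outcome occurs, lose 50 otherwise.

# Precise credence: a single number.
p = 0.6
print(p * 100 + (1 - p) * -50)    # 40.0

# Imprecise credence: the interval [0.5, 0.7]. Each admissible
# probability gives its own expected value, so we get a range.
lo, hi = 0.5, 0.7
print(lo * 100 + (1 - lo) * -50)  # 25.0
print(hi * 100 + (1 - hi) * -50)  # 55.0

# A decision theory for imprecise credences has to say what to do
# when two options' expected-value intervals overlap, which is the
# open challenge mentioned above.
```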
Robert Wiblin: Okay, so it sounded like you were saying that if we adopt imprecise credences then we have a challenge at the decision theory end: how do these credences then interface with our decision procedure? But why would imprecise credences really help? I guess it helps with the problem of it seeming arbitrary to give kind of a point estimate of the likelihood of kind of every possible outcome that could-
Amanda Askell: Yeah, to say that they’re equal. So the first question is: what is the rational attitude to have towards these potential long term ramifications that are very positive or very negative and that are kind of unforeseen at the moment? One answer is that, given the lack of evidence, but the fact that there might be a consistent effect in one direction rather than another, you should have an interval-valued credence over these outcomes. At the very extreme end of interval-valued credences, you can have an interval that is just the entire zero-to-one interval.
Or you may think, no, there is a precise credence that you have about these outcomes given your current evidence. There’s just a way of partitioning the world such that you’re like: yep, I have that specific hypothesis, maybe even consciously formulated in my brain – about having these positive effects on the economic situation, which leads to this person being elected, which leads to this person adopting this healthcare program in this country, etc. Given that really precise state of the world, I should have a very precise credence that that state of the world will be the outcome of my action.
So the idea is you first ask what’s rational in this case and then you have to ask how does this affect our decision making and if it’s precise I think that the answer is hopefully just going to be that it’s going to massively increase the value of gaining information about these kinds of effects. If it’s imprecise then we need a decision theory that can deal with imprecise credences.
Robert Wiblin: Okay. Is it possible for this cluelessness to kind of have a funny interaction with moral uncertainty, or other moral theories you might put some credence on? So you can imagine moral theories saying it’s very bad if your actions have any possibility of creating a negative outcome, like violating someone’s rights … If you’re only thinking about the direct effects of your actions, then it seems kind of easy to avoid murdering someone or causing someone to be murdered. But if you think about this spiraling uncertainty and all the chaos that your actions create and how they change the entire course of history, it seems like any action that you take has some possibility of causing some horrific outcome in the future, and so they might all be forbidden.
Amanda Askell: Yeah, one result is that you could think that this ends up in dilemmas. So take a theory that says you should basically never risk violating someone’s rights far into the future, and then I can say that every action available to me has a risk above some threshold of doing this. If your theory just says that if the probability is above some threshold – and it is in this case – then you just shouldn’t undertake the action, that theory would presumably end up with just dilemmas, given cluelessness worries.
Robert Wiblin: Which I guess on pragmatic grounds is a reason to prefer more linear theories than ones with strict prohibitions where something is like infinitely bad or very bad or just totally impermissible.
Amanda Askell: Yes-
Robert Wiblin: Because if it’s twice as likely, it’s twice as bad.
Amanda Askell: And I think that most theories would have that as part of them. So take a theory like a kind of moderate deontological theory: it’s not clear to me that they would actually have the same kind of problem with cluelessness that, say, consequentialists have. Because they might just say that the thing that matters is the direct causal effects of your actions, and not necessarily things later in the causal chain which, although they would not have happened had you not done the thing, are not in fact things that you are responsible for.
Robert Wiblin: Because another agent has touched them and now they’re responsible-
Amanda Askell: Yeah, there’s an agent there. It’s like: well, it’s not the case that if foreseeably this action will lead to Jane being born, and then Jane goes on to commit a robbery, that I was somehow culpable for the action of Jane’s robbery because-
Robert Wiblin: Indeed like all of her ancestors are culpable.
Amanda Askell: Exactly. And so theories that deny that, as presumably a lot of theories are going to, just might have less of a problem with cluelessness. They might have some problem, but they might not think that things like future rights violations are the responsibility of current agents. They might say: yes, we do want to work out this problem, because we also care about the causal impact of our actions, as a lot of non-consequentialists do; we just don’t think that the key thing is going to be something like rights violations that occur in the future.
Robert Wiblin: Do you see more people working on this general research question?
Amanda Askell: I think that I would classify a lot of questions in this area as … This feels to me again like an important but not necessarily urgent question. So it depends on what people would be doing otherwise I suppose.
Robert Wiblin: I suppose another way of looking at this is just that people may be trying to research what the flow-through effects or the long term effects of our actions are, and often they end up working on existential risk or long term future projects. And they’ve in some cases found things that they can work on now whose effects they’re satisfactorily confident are positive in the long run. But that might just be a more practical way of approaching the question.
Amanda Askell: Yeah. So there is this question of … I am not going to give a great answer to that: how useful is very theoretical research in areas that can have very real world impacts? And I think maybe I could kind of step back from my earlier answer, because maybe if you find a very good response to this problem it can lead to further insights that are actually themselves very useful. So things like how to quantify how valuable information about the long term effects of our actions is, is kind of difficult without an answer to this slightly more abstract seeming problem.
And so one response someone might have is something like, oh well, just do the practical work, just try and work out flow-through effects and just do all of that kind of stuff. And I’m like, yes, that is really important, but actually maybe if you could just generate a fairly neat solution to the abstract problem it would give this really good grounding to all of the other practical work that occurred in this area, and that could in fact be kind of helpful. So yeah, I think that sometimes this theoretical research can really create very good foundations for later practical research.
Robert Wiblin: So, given that these long term effects of our actions might be very large and also very uncertain, does that imply that this should be one of the main things that we’re researching, ’cause just the value of further information about them is so huge?
Amanda Askell: Yeah, so I’ve thought before that one thing that is kind of unfortunate is that value of information is often just kind of a side consideration when it comes to thinking about how we can do good in the world.
If we think that the long term effects of our actions are very large, it could be that finding out more about the expected long term consequences of what we do is actually an extremely valuable part of investing in an intervention. And so if you think that the value of information is very high in the ethical domain, this can favor a couple of things.
One is that it can favor kind of doing more research so just trying to investigate how the world is actually going to be and what the impact of a given policy has been in the past and all of this kind of stuff, but I think another thing that it means is that investing in interventions could itself be valuable mainly because we then get information about the impacts of those interventions and so we can kind of run experiments basically, and we can try out things and see if they work.
And I’ve talked a little bit about moral value of information in the past and I think that the main thing that I kind of concluded from it was that it’s very easy to take this kind of evidence based mindset when it comes to doing the most good. We are like, let’s just take these interventions, for which I have the most evidence about what the nature of their impact is, and let’s just invest in those or you can take a kind of more expectations-based kind of approach where you say, “Well, actually, what we should do is we should run some experiments and we should try out various things and see if they work because we just don’t have a huge amount of information in this domain.”
And if you take that kind of attitude, you can end up, kind of, investing in things a bit more experimentally and I think that there’s potentially a better case to be made for that than people have appreciated and so that might be one consequence of this is just, “Hey, the ethical value of information is actually higher than we thought and maybe we should just be trying to find new ways of gaining a bunch of new information about how we can do good.”
Robert Wiblin: So this is a somewhat generalized argument in favor of doing things that have very uncertain outcomes, as long as you can learn from them in some generalizable way?
Amanda Askell: Yeah. I think an interesting, kind of, consequence of this, that is perhaps somewhat counterintuitive, is the case where you have two interventions and one of them is very well evidenced – we know really precisely how much good it does – and we have another intervention that has a kind of plausible mechanism but can be worse in expectation. So there’s a plausible mechanism for doing a bunch of good, but it has a huge range: it could do virtually nothing or it could actually be really fantastic.
So an example of the first thing might be antimalarial nets – insecticide-treated bed nets – where there are a lot of randomized controlled trials about how effective they are. But you could have another intervention on malaria that’s much more experimental, and we just don’t know how effective it’s going to be. You could actually think that, in expectation, it’s less effective than the first one in terms of its direct impact, and yet overall you should invest in the second one, because you’ll get information about where it lies on the scale of value, and that will mean you can either reinvest in it or just never invest in it again.
And so, yeah, it’s like actually, maybe you should prefer interventions with less evidential support over those with more evidential support.
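A toy two-period model of the comparison Amanda makes, with made-up numbers. Intervention A is well-evidenced; B looks worse in expectation, but its true value is unknown until tried, after which you can reallocate:

```python
import random

random.seed(0)

A_VALUE = 10.0             # well-evidenced intervention, per period
B_OUTCOMES = (0.0, 16.0)   # experimental intervention: dud or fantastic
# E[B] = 8.0 per period, so B looks worse than A in direct expectation.

def always_fund_a():
    return A_VALUE * 2                # two periods of funding A

def try_b_then_best():
    b = random.choice(B_OUTCOMES)     # period 1: fund B, learn its value
    return b + max(A_VALUE, b)        # period 2: fund whichever is better

n = 100_000
avg = sum(try_b_then_best() for _ in range(n)) / n
print(always_fund_a())   # 20.0
print(round(avg, 1))     # ~21.0 = 0.5*(0 + 10) + 0.5*(16 + 16)

# Experimenting wins overall: where B turns out fantastic you reinvest
# in it, and where it's a dud you switch back to A. That surplus is
# the value of information.
```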
Robert Wiblin: Yeah. So that makes sense as long as you’re getting a good measurement of what the impact is. So this is kind of an argument in favor of: we should do a lot of science and technology and should spend a lot of money on R&D, because it will have really beneficial long-term consequences. But I guess in the case of cluelessness anyway, it seems like we might just learn basically nothing. We do a whole lot of work and then we still don’t … like, we’re just as clueless at the end of the process.
Amanda Askell: Yeah. You still just haven’t touched these long-term-
Robert Wiblin: Because the problems are so fundamental and the future is so chaotic.
Amanda Askell: Yeah. You’re just like … I actually just think that I can’t really predict what the very long-term outcomes will be. So I think this is a good argument for why, if you can get information, especially about very long-term impacts of your actions, that information is especially valuable. But I don’t think this is a solution to the problem of cluelessness, because I think even if you can get information of that form, you would probably still have this problem, because you’d still be dealing with chaos. These long-term unforeseen consequences of my actions can still occur.
Robert Wiblin: Okay. So we’re kind of stuck with cluelessness. I mean, how would you feel about the kind of practical solution that a lot of people have got on board with: just trying to reduce the probability of civilization collapsing, in the hope that that is a good signpost to a good future. Is that at all satisfactory in your mind?
Amanda Askell: I think that’s, kind of, a satisfactory answer to a lot of things, just because I’m like: these problems are difficult, but if you generate this space where you can reflect on them and work on them, that’s almost always a good thing. So I’m sympathetic to that kind of approach to most very fundamental ethical problems. It seems very plausible to me that you should try and secure the lives of people living now and people living in the near future, because you can’t solve these problems if you don’t have people who can work on them. And so, yeah, I’m sympathetic to this being the approach that people take.
Robert Wiblin: Okay. That’s somewhat reassuring.
Amanda Askell: Yeah.
Robert Wiblin: So just coming back to the value of information issue, you gave a talk about that at EA Global last year, right?
Amanda Askell: Yeah.
Robert Wiblin: So, it sounds like one of the conclusions is that you should spend more resources than you otherwise would doing things that are, in a sense, not very evidence-backed, where you’re unsure what the outcome is going to be, so long as you can measure and learn from it?
Amanda Askell: Yeah.
Robert Wiblin: Are there any other, kind of, key conclusions that people should take away from this value of information consideration?
Amanda Askell: I think just thinking about the different ways in which you can gain information … so I think, often, when people think about information, they really do just think about the research component, and I think it’s important to know that a really good way of getting information is just by doing the thing and then getting the data yourself.
Sometimes, the data just doesn’t exist. I think this generalizes – I mean, we haven’t talked about careers so much, but I think this is actually really important in one’s career as well: if you have an opportunity to simply try things, this can be a really good way of getting information about how good they are. People can get kind of paralyzed by the evidence and think that the thing to do is analyze existing evidence.
I think that one of the other conclusions that I came to with this is: think about investment as something that’s mining value of information, not just direct value. So I think that was a main one.
Robert Wiblin: And I guess this is also an argument for the community as a whole, kind of, sending one person into lots of sort of different areas so that they can learn about whether it’s promising and bring that information back to the group.
Amanda Askell: Yeah, I think it’s important. I’m not saying this always overwhelms things. Maybe there are just really important things for people to be doing, and really important career tracks that are fairly narrow, because immediate needs or something outweigh this consideration. But all else being equal, I think it’s quite good for people to be trying lots of different things and seeing what the impact of them is, if there’s a plausible mechanism for it being fairly high impact. So obviously, there has to be a plausible mechanism. You might just think that there are certain paths that people can go down that are just not likely to be super high-impact.
Robert Wiblin: Clearly, we shouldn’t send someone into everything. It’s only the things that seem promising enough, where you can learn a lot by putting someone-
Amanda Askell: Yeah. That seems right to me.
Robert Wiblin: Is there anyone who has, kind of, a good process for going through and estimating the value of information from different actions or is this still, kind of, an unsolved practical problem?
Amanda Askell: There’s a lot of research on how you should evaluate the value of information. There are a lot of results that should make us a little bit pessimistic about this. So the thing I’d recommend that people read on it – it’s just very interesting – is stuff on multi-armed bandits, the multi-armed bandit problem. And one interesting and, kind of, relevant puzzle for practical, real world stuff is puzzles where the probabilities of success for each thing that you’re trying change.
So it’s like: imagine you’re playing a couple of multi-armed bandits, but the probability that they will … the expected payout actually changes over time. This is an extremely difficult problem, and it’s extremely difficult to know when you should explore and when you should exploit when you have problems of this form. And I think a lot of real world problems have that form, where the amount of value you get from working in a given domain might change drastically depending on the way things in the world are going.
And so I can’t offer a huge amount of practical advice here, but I do think it would be quite possible for someone to do very applied work in this area, actually trying to assess how we should assign information value to, say, working on a given problem.
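A minimal epsilon-greedy sketch of the non-stationary bandit Amanda describes: two arms whose payout probabilities drift over time, so estimates go stale and you can never entirely stop exploring. All parameters are illustrative:

```python
import random

random.seed(1)

def drift(p):
    # Each arm's payout probability random-walks within [0.05, 0.95]:
    # this drift is what makes the problem non-stationary.
    return min(0.95, max(0.05, p + random.gauss(0, 0.01)))

probs = [0.3, 0.6]       # true (hidden) payout probabilities per arm
estimates = [0.5, 0.5]   # our running estimate of each arm's payout
EPSILON = 0.1            # fraction of pulls spent exploring
ALPHA = 0.05             # constant step size, so recent pulls count
                         # more than stale ones

total = 0
for t in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(2)              # explore
    else:
        arm = estimates.index(max(estimates))  # exploit current best
    reward = 1 if random.random() < probs[arm] else 0
    total += reward
    estimates[arm] += ALPHA * (reward - estimates[arm])
    probs = [drift(p) for p in probs]

print(total)  # rewards earned while tracking arms that keep moving

# With stationary arms you could explore less and less over time; the
# drift means exploration never stops paying, which is why these
# problems are so hard to solve optimally.
```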
Robert Wiblin: I’m planning to interview the authors of a book called Algorithms to Live By. There’s a chapter about these multi-armed bandits.
Amanda Askell: Yeah. It’s also a really great book. I recommend it.
Robert Wiblin: Yeah, it is very good. Unfortunately, I think they finished that chapter talking about how this is a very difficult, somewhat unsolved problem in computer science of what you do when the payoffs are changing over time. Maybe we’ll see if in the meantime they’ve managed to come up with any other answers.
Amanda Askell: See if they’ve just solved a potentially unsolvable problem.
Robert Wiblin: Okay. Let’s push on to a new topic. One, kind of, theme that I’ve noticed in some of the things you’ve written online is, kind of, what seems to me like an attempt to, kind of, synthesize the views or arguments that are often associated with social justice activism with, kind of, a more rational or analytical, philosophical style.
Amanda Askell: Yep.
Robert Wiblin: Is that something that you’re consciously trying to do?
Amanda Askell: I think it’s partly just that I often agree with many of the arguments or conclusions of people who are advocating for greater social justice, and I see a lot of common themes between that and people who are effective altruists: this idea of expanding your, kind of, moral circle beyond people who are just like you and in your, kind of, local area, to lots of different people in society, and looking back at the, kind of, history and the effects that those people have undergone, and trying to basically improve the lives of as many people as possible. And I think that, in some ways, this can get, kind of, … there are really good arguments for many of the positions that I think social justice advocates are putting forward.
And so I always just want to, kind of, make those arguments because I think, sometimes, they can be caricatured or bad forms of them can end up being released on the internet and suddenly everyone thinks that that’s … I don’t want it to be the case that people start to think, “Oh, the only defense of these, kind of, social justice movements are these arguments that I disagree with.” I’m like, “No. There are actually really good arguments there and so we should search for those and look at the merits of the best arguments and not merely dismiss things because we don’t like the way that it’s put”.
Robert Wiblin: So it’s a bit surprising that there are so many, kind of, conclusions that you, kind of, agree with that are often being justified on bad grounds. How is it that the conclusions are good if you think that the typical arguments being made are not good? Is this an example of moral convergence, where different theories, when thought through properly, kind of, reach the same ideas?
Amanda Askell: I think that there are actually good arguments. So it’s hard to talk about it without specifics, but something that I am really interested in is some issues in things like criminal justice reform, and I think that the arguments in favor of that, that are quite effective, are ones that look at the history of, say, the criminal justice system in the U.S.: who it currently affects, the fact that it affects minorities really strongly even when there’s, kind of, parity in the crimes being committed.
And so, those arguments are out there, and I think that what people sometimes do is they don’t … or maybe they disagree with a conclusion, or they just see a bad form of the argument made somewhere, and don’t think: well, actually, maybe there’s a really good case to be made for this. Or maybe there’s a really good case for us to be a bit humble about what we think in these areas, because we’ve just come out of really quite terrible periods in history, and the idea that we should maybe think our society isn’t set up in this really great way for everyone seems pretty plausible to me.
And instead, they’re just, kind of, seeing an argument they don’t like made by someone on Twitter or something and they’re taking that to be representative of all of the work that’s gone into this, when, actually, I think that the work that’s gone into this from historians and philosophers and various other people is often way better than the thing that you’re reading on Twitter.
So it’s, kind of, like: remember that there’s actually really good work here, and I think there really is quite robust stuff here, and we shouldn’t just dismiss things because we don’t like the way that one person puts it. So I feel very strongly about that, I guess.
Robert Wiblin: So, to make it more concrete and, I guess, possibly more provocative to some group, I’m not sure which one yet: are there any specific debates around social justice activism that you think deserve a steel man philosophical defense and that haven’t been defended as well as you’d like?
Amanda Askell: So, I think, maybe I want to focus on a, kind of, reframing of some issues that I hope that people can agree on. So sometimes I think that it’s really unfortunate that policies that could potentially be good and that we’re really just trying out … for example, positive discrimination. Positive discrimination is quite controversial, I think. I see that more as a, kind of, social experiment, something that we should try out for a long time and see if it works and see if it makes people’s lives better and if it does, then that’s great and if it doesn’t, then we’ve performed an experiment and we found out that this wasn’t the way to actually improve people’s lives.
And so sometimes, I think that you can defend a lot of policies and you can find convergence on policies if you explain to people that this is an area where it’s important for us to just try various things and to get the information on whether they work. And sometimes I think people are doing the thing that we talked about earlier, where they’re looking for something like, “I must have established that this intervention works really well before I try it,” whereas my attitude is like, “Hey, here’s a defense of these policies.
We should just try them for a fairly long time and see if the long-term ramifications of them are good, because that’s what we’re trying to improve. It isn’t just the life of this one person right now, but rather trying to make society more equitable and function better for everyone.” Yeah. I think that’s a steel man of specific policies that are controversial. Yeah, I have lots of very pro, kind of, social justice views that I think are very steel man-able, I guess.
Robert Wiblin: I guess, with experimentation, the usual approach is to experiment on a smaller scale and then increase the scale as the evidence base gets better. Why do you think we should experiment with a policy like that on a broader scale first, or for such a long time?
Amanda Askell: Yeah, so I’m not sure about the broad scale versus … with a bunch of interventions, you probably want to experiment with them. There are ethical issues with experimentation if you think that it’s likely that this policy will succeed, because then you’re harming the people in the areas where you’re not performing the experiment. I’m more in favor of just trying to gather more information, but I kind of understand that people might have worries about that kind of thing.
I think that long-term, the structures that we have in society now, took a long time to build up and the idea that our goals should just be to, kind of, in the short-term, just change the lives of people and everything will be fine, seems implausible to me. Rather, I think that we want to, kind of, slowly make adjustments to society that will make everyone better off and that will involve doing things that are better in the long-term.
I am not certain that we should just be doing broad scale things rather than experimenting and seeing what works. That could be really good. Maybe you have one university that tries one thing to make their classes more inclusive and then you have another university that tries another thing and then you get more information about which of those things was better. That seems quite good to me, but I also do think it’s good to take this long-term attitude towards these things and be like, “We want to change society incrementally in the long run and not just in the immediate next two years or something like that.”
Robert Wiblin: So, it seems to me like there’s often kind of a tension between people who have a really analytical, philosophical style of reasoning and social justice activists. Do you think this is indicative of really fundamental disagreements, or is it more a matter of how they speak and how they like to communicate, and perhaps things being lost in translation?
Amanda Askell: I mean, I tend to think that it’s more the latter, but I also have … I’ve been in this and I know other people who are similar. I didn’t go to college in the U.S., for example, and a lot of people talk about the specific U.S. college experience that I just didn’t have. So all of my exposure to things like feminism and social justice came really via academics and people who were making extremely reasonable and sound arguments that I agreed with … obviously, there’s lots of reasonable disagreement on all of these issues, but I never encountered anything that I thought was inconsistent with just analytical, kind of, careful arguing styles … or at least for the most part, I didn’t.
And insofar as there’s tension being created in the, kind of, public discourse, I suspect that it’s not because one side has logic and reason on its side. I think there’s just a cluster of really reasonable disagreements here. And it could also just be that people are dividing into political tribes, and that’s, I think, a very damaging thing that can happen, and that could also just be the source of it here.
Robert Wiblin: Yeah, how impactful do you think it would be for someone to try to do a rational synthesis of social justice ideas, or try to carve out conclusions that are appealing to one side using reasoning that’s appealing to the other side, or to help people understand one another? I guess it seems like that would be quite a personally challenging thing to do, because you’d be attacked by all kinds of people whenever you touch these issues.
Amanda Askell: I don’t know; that might be unfair to academic work in this area, because I have looked into this a little bit and there are readers on some of these issues. So there are historians who look at the history of U.S. policy; historians of U.S. housing policy, for example, can give you a lot of information about why you see very entrenched divisions in housing in the U.S. now, and that work is good and accessible. I do think that there could be … maybe it would be good to have more work that is engaging directly with the public on some of this stuff, and I think we’re seeing more of that.
I’ve definitely seen a couple of books come out recently which were targeted at a general audience and were just trying to slowly go through all the arguments in favor of, say … there was a recent book on misogyny and I read through some of it (‘Down Girl’ by Kate Manne). And it was like, yeah, this is just reasoned arguments about the nature of misogyny aimed at a general audience, and I think it does a decent job of communicating these ideas in a way that … not everyone is going to be sympathetic to it, but at least it’s not obscure. It’s a very standard, kind of, analytical style, I guess.
Robert Wiblin: So, when I mentioned on Facebook that I was going to be interviewing you, someone asked whether you think utilitarianism is compatible with social justice causes, and to what degree you think they’re actually in tension.
Amanda Askell: Yeah, so I think that one thing that happens a lot with people who think that utilitarianism is correct, is that you can end up having to prioritize causes based on how much harm you think that they’re causing. And so this can mean that you have a kind of ranking of things in terms of badness, like the things that you want to work on. So it took me a long … When I was younger, I became vegan. I was very interested in animal ethics.
I think one of the first books I read in ethics was Peter Singer’s Animal Liberation and then when I heard about effective altruism, I was really convinced by these arguments that global poverty was very important. And that’s actually where I’ve put most of my money, for example, and then slowly, I was, kind of, reluctantly convinced that these issues, like reducing existential risks were actually really important.
And I think that’s a good process to go through. There’s a sense in which, if you reach these unusual conclusions about what’s most important non-reluctantly, I kind of trust them less. I came to that conclusion kicking and screaming and trying to find every argument against it. But it can make it look like you think that the other stuff is less important. So if you come to the conclusion, “Oh, I should be working on reducing existential risk, so I don’t work on global poverty and I don’t work on issues that affect animals,” it can look like you somehow think that those things are not important.
And I think the same can be true of social justice causes, and I think it’s important to emphasize that that’s not the case. I think criminal justice reform in the U.S. is an incredibly important topic and one that people should be tackling. I think improving the lives of women, both within the U.S. and around the world, is incredibly important and something that people should be tackling. And so I don’t think that there is a tension fundamentally, because I think that often, people working on social justice issues have the, kind of, core thing that effective altruists or utilitarians, kind of, agree with.
Namely, they’re trying to expand their moral circle and they’re trying to benefit the lives of people. The tension comes at the level of what they prioritize, and I think this is both in terms of the causes that they end up investing in and also things like whether they think that systematic versus incremental change is important. So I see those as being two of the, kind of, tensions.
I think utilitarians are often more inclined to favor incremental change rather than sweeping change if they think that it’s implausible that we can actually get sweeping change, whereas I think a lot of social justice movement work is focused, not solely but more so, on systematic change. And so I think it’s sort of unfortunate, because I would rather have this attitude of: there’s a cluster of things that are super important, and I want people working on those things. Just because I’m doing this weird ranking and then working on the things that I think are most important, it doesn’t in any way diminish the ethical work that other people are doing.
Robert Wiblin: I guess, imagining that there’s kind of two groups: social justice activists and people who are skeptical of social justice activism. What would be your biggest criticism of each group? How would you like to see them change and improve?
Amanda Askell: So, I suspect that I would like more of a, kind of, prioritization attitude in social justice activism. It’s not to say it’s not there, but prioritization is not a common tool that’s wielded in most places, I guess, so it would be nice to see that, and also maybe a broadening of the scope of issues that people work on, and that’s sort of happening. I think you’re seeing people care a bit about things like immigration issues, for example, more than in the past, and I hope that we’ll also see this with caring about global poverty issues as a, kind of, social justice concern.
And so I think a mix of really trying to target the things that will have the most impact would be really good, and also, yeah, broadening the issues that people consider, which is happening and I think is going to be a good thing. I suspect, based more on testimony from other people than on any personal interaction I’ve had, that the thing people mainly find off-putting about social justice activists is the methods of engagement of some of them.
And maybe some people feel, kind of, attacked when they just don’t understand these issues, or they want to get to know them and they feel like they make mistakes and then they get, kind of, eviscerated, and they’re like, “Well, I just don’t know anything about this issue and I don’t feel like I’m being engaged with fairly.” And if that’s the case, it’s probably not a good way to have a dialogue with people. It’s good to be like, “Look, some people just won’t understand this stuff or have encountered it, and it’s important to be kind and considerate while they’re learning about it, and if we aren’t, that’s just bad for discourse.”
And I think from the, kind of, more … from people who are anti-social justice activism, what I would want to see more of is a combination of epistemic humility and probably historical research. So in some ways, for me, a large influence was simply looking at the history, at how recent the history of a lot of this stuff is. We just had an incredibly unequal society, and we’ve had laws enforcing really unacceptable inequality up until very recently; school segregation was happening until way more recently than I thought.
And when I looked into school segregation in the U.S., there were schools that were fighting this really not that long ago. And so I think having a, kind of, more historically informed attitude towards why we should expect society to be unequal, and taking that historical information and using it to be like, “I’m going to be a bit humble about this issue, so I’m not going to assume by default that everything is equal. Instead, I’m going to assume that it is just very likely that people are doing worse than they otherwise would if we had just had a kind of fair society.”
So, yeah, historical research and a little bit of epistemic humility is probably the other good thing.
Robert Wiblin: So, how do you think the quality of discourse between those groups could be improved the most so that they can actually gain some kind of mutual understanding and, perhaps, even be able to work together on solving some problems that they both accept?
Amanda Askell: Yeah, so I think that one phenomenon that I’ve seen happening sometimes in these debates is that people will … they’ll have a controversial view that they want to, kind of, put forward or at least have in the eyes of their reader, kind of, supported, but they won’t want to take ownership over the controversial view and so they’ll assert something that seems to, kind of, strongly implicate the controversial view. So an example of this might be saying something like, “Oh, most of the wage gap between men and women can be explained by women’s choices.”
And so someone might post something about a study that just talks about the fact that women taking more time off work to do childcare explains the difference in income. And this can seem to imply that there’s no problem, basically: that there’s no bias against women in the workplace, nor is there any need for systematic change in the way that we do childcare or the nature of taking time off for maternity leave or anything.
And they might not want to assert, there is no problem and there is no bias against women in the workplace and there is no need for change, but that’s often, kind of, implicated if you don’t cancel it, if you don’t say, “I’m not saying that this explains everything or that there’s no need for change, but we should note that some of this is explained by this other phenomenon.” And so if you express your values and you show that you really care about women and that this is just a way of finding out the best way to improve the lives of women, I think a lot of people are actually really sympathetic to that kind of claim.
So if I were to say, “Oh, it turns out that this is the thing that’s causing women to earn less, so we should be focusing on that thing and how to improve it and what we can do here,” that’s just a very different thing than asserting the fact with the full awareness that what people are going to pick up on is the standard, kind of, cluster of views that people who assert those facts have. And so I think that this is quite damaging, because you want to just engage with people at the level of their actual views and to be able to criticize those views.
And doing this, kind of, thing where you implicate a controversial view without asserting it and then if someone says, “I don’t believe in the controversial view,” you say, “I never said that. I just said this fact.” It just means that people can’t actually engage at the right level with one another. And so, I guess, when I see that kind of thing happening, it makes me sad because it just doesn’t lead to, kind of, good discourse.
And so I think that one thing that I … It’s maybe a kind of obscure thing that I want to see happen a bit more, is people kind of taking ownership over the things that are implicated by what they say and either canceling it by saying I don’t actually think that thing or just embracing it and saying the thing that I think is this more controversial thing and I think it’s supported by this piece of evidence that I just gave you. Because it just feels like a more honest discourse then.
I know what someone’s view is, and I like to think I try to do that. I try to strongly express my values before I state something, and if I’m aware that the thing that I say could be interpreted in a way that is not consistent with my values, then I try to eliminate that interpretation or try to clarify it later. If I say something and people are like, “Oh, it sounds like you’re saying this terrible thing,” then I’ll be like, “Oh, I totally didn’t mean that. I see how you thought I was saying that.”
It seems like an important part of good discourse to me. It seems like an important part of honest discourse and so I would want to see people not doing, kind of, yeah, this discourse via implication or something.
Robert Wiblin: Yeah, because this is consistent with like … one way of shortening that advice is, kind of, the more controversial, the more you’re talking about a hot button issue, the more you have to be extremely careful to be clear about exactly what you are saying and what you are not saying?
Amanda Askell: Yeah. I think that’s right. I-
Robert Wiblin: And I guess, also, show concern for the other side and their interests. So you can completely disagree about … you might share more values and more goals than might be immediately apparent.
Amanda Askell: Yeah, yeah. And strongly share your values. In some ways, the thing that I don’t like is when I think someone’s values are bad. So if I think that someone genuinely cares about all of the people that I care about and they just think that there’s a different way of helping them, our disagreement is in many ways much less strong, right? That’s a productive disagreement to have, where they’re like, “Look, I really want these people to flourish, but this policy just isn’t helping them right now. So I want the poor to flourish, but this taxation policy is actually harming them, and so I don’t agree with this taxation policy.”
That’s a much better discourse to have than having a discourse with someone where you’re like, “I’m not even sure you care about these people.” So I think both being very clear about your views, but also being really clear about your values can be very helpful here.
Robert Wiblin: That reminds me of this blog post you wrote a while back, which I really loved about vegetarianism and abortion and, I guess, trying to get good moral discourse between groups with actually different values, I guess, in this case. Do you want to explain the argument that you were making then?
Amanda Askell: Yes. The argument was basically that we often seem to betray a kind of complete lack of what I call moral empathy, where moral empathy is trying to get inside the mindset of someone who expresses views that we disagree with and see that from their point of view, what they’re talking about is a moral issue and not merely a preference. The first example is vegetarianism where you’ll sometimes see people basically get very annoyed, say, with their vegetarian family member because the person doesn’t want to eat meat at a family gathering or something like that. I think the example I give is, this makes sense if you just think of vegetarianism like a preference.
It’s just like, “Oh, they’re being awkward. They just have this random preference that they want me to try and accommodate.” It’s much less acceptable if you think of it as a moral view. You see this where people are a bit more respectful of religious views. So if someone eats halal, I think that it would be seen as unacceptable to … people wouldn’t have the same attitude of, oh, how annoying and how terrible of them, but I also think that this is … So, I wanted to use a couple of examples in this post of this phenomenon.
And the one on the other side is the issue of abortion, where a lot of people who are anti-abortion, the criticisms of them are things like, people will respond to anti-abortion arguments by saying things like, “Well, if you don’t like abortions, just don’t have one.” And you’re like, “But this doesn’t make any sense because if I take this person at their word and they genuinely believe that abortion is murder, it doesn’t make sense to say, if you’re against murder, just don’t murder people.”
That’s not an argument that anyone would find convincing. We’re like, “No, we think this is morally wrong and so we should all be against it.” And I think that when you engage with people at the level of their actual moral views, you can actually find ways of having a productive dialogue. You might think that they are being disingenuous, and that is something that you should definitely assign some probability to. Sometimes people will say, “Oh, this person says that they’re anti-abortion, but they’re actually just misogynistic or something.”
But it’s like, absolutely, that’s a possibility, but it also is just possible that they genuinely just believe that abortion is murder and then you have to try and engage with them at that level and actually engage with that belief. And I think that when you do that, you can just have a slightly more productive dialogue, where you’re not treating … I think it must be frustrating for someone with that genuine belief to have people treat them like they’re being disingenuous or that they just have some preference against abortion and it’s probably much more productive to just assume, kind of, good faith on the part of the person you disagree with. I think it’s probably more productive.
Robert Wiblin: Yeah. Well, I mean, it’s highly likely to be more persuasive.
Amanda Askell: Yeah, because you can engage with them at the level of the arguments that they actually agree with so-
Robert Wiblin: Or, at least the arguments that they are giving.
Amanda Askell: Yeah, yeah. Exactly. And like-
Robert Wiblin: Do you think people are too quick to produce these alternative explanations for why people who disagree with them are doing what they’re doing?
Amanda Askell: I mean, sometimes I think that it can look … and maybe I’m inclined to be overly charitable towards people because I’m kind of like, if I can find the best version of this person’s argument and I can show that that’s wrong, then I’m safe. We’ve just changed their mind, ideally. Sometimes I think it’s that people see, kind of, inconsistencies and their best explanation is some kind of disingenuousness. So in the case of abortion, an example is that a lot of people who are anti-abortion are also anti-contraception, even if the contraception doesn’t result in fertilization.
And that can seem a bit strange because it’s like, well, even if you’re against abortion and contraception, you surely think that abortion is much worse than contraception and so surely, here’s a point of agreement. We can all just agree to have much more contraception available. Why can’t we just move forward with that being roughly the policy? I think a more charitable explanation of this is that the people who are anti-abortion and anti-contraception are kind of thinking about what, in their minds, is the best possible world.
And the best possible world is one, for them, where people don’t use contraception and people don’t get abortions. There are positions in ethics that say that that’s the relevant world: the best possible world.
And if that’s correct, that kind of charitable interpretation where it’s like they’re not malicious, you can again engage with that, because I think that belief is false. I don’t think that the best possible world is the one that’s relevant. And so maybe by breaking down that belief, you can actually find convergence on a key point of agreement.
Sometimes, people are just seeing inconsistencies in the position of their opponents and they’re treating it as evidence of bad faith. And sometimes people can in good faith argue for things that are ethically wrong, and it’s fine to just engage with them on the assumption that they’re arguing in good faith.
Robert Wiblin: Well, one thing is that they could be mistaken for a complicated reason that they haven’t realized. Another thing would be when you’re debating with people who come from a very different school of philosophy, perhaps like a more theological and religious one, then there may just be all kinds of arguments on that terrain that you are not familiar with and you don’t understand.
And so for you to say, “Well, I looked at these things and I couldn’t find a way of making them consistent,” might just be a measure of your ignorance rather than a measure of the inconsistency of their view.
Amanda Askell: And you should obviously … as with all things, it’s almost certainly the case that sometimes there just is no good argument; someone’s just saying something on the Internet. But I tend to think that a good procedure is to try as much as possible to find the best version of these arguments, and to be aware that some people are just operating with different assumptions and different pieces of evidence than you, and that you’re just in a much better position to convince people if you can demonstrate that.
I find people in conversation much more happy and just much more willing to discuss with you if you show that you actually have cared enough to go away and research their worldview and you might be like, “Look, I looked into your worldview and I don’t agree with it, but I’ll demonstrate to you that I understand it.” It just makes for a much more friendly discussion basically because it shows that you’re not like, “I don’t even need to look at the things that you’ve been raised with or understood or researched. I just know better without even looking at them.”
Robert Wiblin: “I’m going to tell you that you don’t believe what you believe for the reasons that you say-”
Amanda Askell: Exactly. Yeah. It’s a really bad dynamic for debate, I think. So in terms of things that would improve discourse: assuming, generally, that people are operating in good faith, and trying to then reconstruct their view given that assumption, can just be really helpful in a lot of moral debates.
Robert Wiblin: Yeah. There have been a few cases in the media recently of people expressing very controversial moral views, and very distinctively moral views, and actually I disagreed with all of them. But I really didn’t like them getting shut down, because I felt like if they were right, then it was extremely important that they be able to voice these views, because it would mean that we were making a huge moral mistake.
And there are areas where I have controversial views where I think society is making big moral errors and people might shut me down. And I feel, “Well, I’m happy to let other people express their controversial moral views for the sake of us all learning from one another and potentially correcting these errors, even insofar as they’re wrong, in part because they might be right.” But also because I want to have a social norm that allows me to express my controversial views as well without being shut down.
And when these are moral issues, they might be the most important kind of things going on. Inasmuch as we can identify a moral catastrophe that people aren’t recognizing, it’d be so important that it might be worth accepting some degree of discomfort or some degree of controversy in order to ensure there is a process that will allow us to bring those to light.
Amanda Askell: Yeah. I’m inclined to agree. Most people actually to some extent agree on this issue: there’s this important norm of soft freedom of speech. So not the stuff that’s guaranteed by government, but rather that we allow people to express controversial views without having disproportionate responses to them in a way that just disincentivizes them from talking.
And one thing that I think is really unfortunate about the Internet generally, or maybe there’s this really difficult transition that people have to make from the non-Internet era to the Internet era, and one example of this is that people don’t often think about proportionate feedback. So sometimes, and I’ve expressed this before, I will often look at what someone said, and I might think the thing that you said was wrong and I disagree with it, and then I look at the amount of feedback you’ve had from people already.
And often, if they’ve expressed it publicly on the Internet, they’ve had this huge amount of negative feedback. They’ve basically had people swarming them, telling them that they hate them, and I’m like, insofar as you deserve to be punished for, say, expressing something that was actually immoral, I don’t want to just add to that, because it can really have this damaging effect where people are so scared of saying anything that people might disagree with, because they know that they’re going to get this disproportionate response.
And so I then just choose not to add to it, even though I disagree with them, and even though I agree with the people who are saying this view is false. And I think that there are not a lot of norms that we currently have which tell people to do that, which indicate that we should be proportionate in our response to people, and that’s unfortunate.
A further thought that I have on that is that the thing I’m often inclined to criticize, more than just the content of someone’s view, is how they express it. Because if you do express a controversial view and you do it without care, and you do the thing that I was talking about earlier where you do it via implicature and in this indirect way, you create this huge amount of work for the person who wants to disagree with you.
So I’ve seen views where I wanted to disagree with them, but I’m aware that in order to disagree with their 300 word statement, I’d have to give like a 2000 word response. That’s like, “Look, here’s precisely what you’re saying, because you weren’t clear. Here’s precisely what your statements implicate, because that wasn’t clear. And here’s why all those things are false.” And so there is this thing where I’m like, people also need to take responsibility for the way that they say things, and that’s why I’m often inclined to criticize that much more than even the content of the views.
Robert Wiblin: Yeah. I like that. But I have this half-formed thought that I’ve been having recently, which is … so I think for the sake of being able to figure out what’s true, we should have a lot of forbearance towards people who express things that we don’t like: even if it makes us angry, we should try to stay calm. But I realized that this also implies that when someone says … this is kind of this outrage cycle, right?
So someone says something outrageous and then someone gets outraged at them, and then people get outraged at the outrage, and then there’s outrage at the outrage at the outrage, and it just goes on. I realized that I should also potentially have a lot of forbearance for the people who were hurt by the original claim, who were made very angry and perhaps shouted at them, or said things that someone else finds offensive, and that it’s understandable.
Someone says something that might be particularly hurtful to an individual because it affects them directly, and they get angry. I should also have forbearance towards that and try not to get too angry in response. If more people tried to adopt this view, maybe at every point in the outrage cycle, you could potentially be tamping things down a little bit. But at the same time, some people might respond that you need to have outrage at the outrage to preserve the freedom at the first level of discussion. I don’t know. It gets very complicated.
Amanda Askell: Yeah. I often think that … I wish we could develop, and maybe they already exist, norms here that are friendly. So when I say that people should take responsibility for the way that they express things, I also think we should have a lot of tolerance for people who fail to do that, and give them an opportunity to clarify. So one thing I would never do is just be like, “Your statement implicated this thing. This thing is morally repugnant. You’re morally repugnant and no one should ever listen to anything you say.”
I would just say, “Hey, the reason why people are disagreeing with you so strongly is because the thing that you said implicated this thing,” and then you can clarify it and you can remove that implication. And so we find it very difficult to know where to draw the line between friendliness and outrage. And the danger that you talked about is a kind of real one, where sometimes we can find it hard to see whether a view that is controversial or out there is in fact just past the edge of current ethics and we’re not even seeing it.
So historically, arguments like, “Hey, maybe animals actually matter and we shouldn’t just harm them arbitrarily,” might have looked really absurd. Similarly for arguments that you should favor people regardless of whether they’re from your own country or not: they’d have looked anti-patriotic and possibly would have been perceived as really controversial and bad.
Robert Wiblin: Seditious.
Amanda Askell: Yeah. And so because we know that in the past, some views that we now think are correct were extremely controversial and would have been shut down, we should have an attitude of caution. We really should understand that any view that someone puts forward could be of that form. And so we do want to protect some ability for people to talk controversially. And then, is there a line at which you simply refuse to engage with someone? And I think there is.
It’s just that what we’re trying to determine is where that line is, and some people are seeing views that they disagree with and shutting them down. And it’s like, “Well, maybe even if we feel like that’s appropriate, we should move the line back and allow for more room for some views of this form, or for people to clarify their views or whatever, and reserve extreme outrage for genuinely and obviously egregious views.”
Robert Wiblin: Yeah. Yeah. Although I’m in favor of forbearance, it’s certainly not absolute. You do see people who stir controversy or say things that predictably will make people feel threatened or insulted, and they’re not even driving at some potentially really important moral truth. The thing may not even really be relevant to any decisions that we could make.
And they don’t seem to be trying, in good faith, to reduce the threateningness of what they’re saying as much as possible, which is what they would do if they were really just concerned with the truth of the matter, and if the offense they were causing was just an unintended and unavoidable consequence. So in those cases, maybe we shouldn’t just stoke the outrage cycle, but they should be strongly discouraged from doing that kind of thing.
Amanda Askell: Yeah, and I think this is the thing about taking responsibility for the way that you say things, because it can feel disingenuous if it seems like you’re just trying to create kind of intellectual clickbait. And it’s like, you could have views that are a little bit out there, but whenever I have views of that form, I like to think that I try and put them carefully. I try and explain precisely why, and bring people along.
You reached this point where you thought that this controversial thing was true, and ideally you reached it with values that were good and from an earlier starting point. And I’m like, “If you genuinely think that that controversial thing is true, you can give these reasoned arguments from the starting point that you started at to that conclusion.”
That’s what I think a lot of the good work did. If you look at the kind of controversial views that we now think are correct, like, “Hey, you should actually care about people in other countries and you should care about animals and you should expand your moral circle far beyond just your kin,” a lot of the work that went into that wasn’t intellectual clickbait. It wasn’t like, “Here’s this fun thing that’s super controversial, but I’m going to claim it’s true.”
It was careful, reasoned arguments that people were giving. And so that’s more powerful, and we should probably really praise that. And as a result, we should focus more on … it’s fine to just say, “The way that you did this was wrong. Even if your conclusion turns out to be correct, the fact that you didn’t give these reasoned arguments is actually, in and of itself, something that we should be giving you negative feedback about.”
Robert Wiblin: So that’s a nice segue to the next section of three questions that I ask a lot of guests. The first one is: what’s your most unusual view, kind of philosophical or otherwise, perhaps especially relative to the listeners of the show?
Amanda Askell: My most unusual view is very hard for me to say, because I do have some unusual views that I’ve argued for. One is that I don’t think there’s an obvious distinction between things like prison and corporal punishment. And so I find it a bit strange that people are so fine with prison and not okay with corporal punishment.
Robert Wiblin: Yeah. Flesh this one out. I imagine a lot of people won’t have heard this argument.
Amanda Askell: Yeah. So the idea is, just think about … there are various things that punishment is intended to do, and one of the arguments for prison is that it’s intended to keep people isolated from society, where they can’t do harm. But there are all these other functions that punishment is supposed to serve. It’s supposed to, for example, simply punish the person for having done something wrong.
Robert Wiblin: It’s kind of retribution.
Amanda Askell: Retribution, and also preventing other people from doing the same wrong in future, so creating these disincentives. And if you just care about keeping people away from society, then it doesn’t seem obvious that prisons should in any way be bad places. But a lot of people seem okay with the idea that you can group together all of the punitive aspects of punishment and put them into the prison system.
And that is what prison is: it’s this very punitive thing. The thought experiment I sometimes give is: you can have three years in a US prison, or you can lose your pinky finger. Which would you prefer? I would prefer to lose my pinky finger. And you can do this trade-off with people, where you’re like, “How much corporal punishment would you be willing to take on in order to avoid being in a US prison for many years?”
And we can use that to measure roughly how punitive it is to be sent to prison. And then a lot of people seem to think that the idea of corporal punishment is really terrible, but that somehow prison is okay, even though imprisonment has this extremely detrimental effect on the lives of the people who go to prison. And I don’t quite see how you can maintain that view. And so you can go in many directions with this. The direction I tend to go in is: we should rethink the punitive aspects of prison.
If we think that corporal punishment is bad, and I think we should, then we should have an equally negative attitude towards running prison systems in a way that’s extremely harmful for the people who go there. I come at this from this kind of consistency thing of, “I just can’t maintain a difference there.” If I would prefer to lose my pinky finger than go to prison for three years, then hasn’t society harmed me more by sending me to prison than it would have by just cutting off my finger?
Robert Wiblin: I imagine that some people might attack you for this, saying, “How can you say that corporal punishment could possibly be acceptable?” You might say, “How can you say that prison can possibly be acceptable?”
Amanda Askell: Yeah. To my mind, this isn’t an argument for the acceptability of corporal punishment. It’s really to emphasize that we’re not taking into account how bad prison is for people. And maybe you think it’s fine and maybe you think the punishment is justified, but when you see a news story where someone has gone to prison for 10 years, think about what you would be willing to undergo in the way of physical harm to yourself in order to avoid that punishment, and remember that what is happening to them is as bad as that.
So my thought is, I want to take the visceral negative reaction that people have to corporal punishment and attribute that to a prison sentence, and be like, “If you think that it would be really terrible to lose your left hand, but you’d be willing to lose your left hand to avoid eight years in prison, then when you hear that someone received an eight-year prison sentence, that same feeling of horror should apply to that sentence.”
And obviously, that isn’t always going to mean that you don’t think that the sentence is just. It’s just to say: let’s make sure that we’re being proportional in our understanding of this punishment, and not pretend that it’s nothing just because we’re only hearing these numbers. This is more true of more minor crimes, for example: you might hear a sentence that’s extremely high for a fairly minor offense. And ideally, this would get our emotions more in tune with what’s actually happening to the person in question.
Robert Wiblin: I recall you wrote some article about this years ago so we can try to dig that up and-
Amanda Askell: Yeah, I think I have a blog post on it.
Robert Wiblin: I think that this topic is also discussed in a book called When Brute Force Fails by an academic who studies criminal justice reform, Mark Kleiman. So I’ll stick up a link to that book if you’re interested in learning more. Okay. So, second common question: what do you think effective altruism is doing wrong? It doesn’t have to be the worst thing that’s happening, but something that you think is underappreciated that you’ve noticed.
Amanda Askell: I’d say don’t put too much confidence in what I say on this issue, because I would need to reflect more on it to give better views. My immediate response is that there are two conceptions of effective altruism: a thin one and a thick one. On the thin conception, it’s just that you want to do the most good and you want to use evidence to do the most good.
For me, it’s hard to object to that. People might disagree about what in fact does the most good, and so some people think that more systematic change is feasible; they’re not necessarily disagreeing with effective altruism, they’re just disagreeing about what is effective. But the thicker notion is more like the beliefs in the community and the way that the community operates.
I suspect I would like to see more inclusion of people with different backgrounds and views. Effective altruism has attracted a certain type of person, and I hope that broadens out, because that would really increase the general level of expertise that the community can appeal to, and it would also potentially increase its broader appeal.
I also think that being a bit more careful about how things are presented would be good because you can alienate potential allies. I’ve talked a bit about the fact that I think that a lot of very altruistically motivated people might look at effective altruism and think that you don’t care about social issues for example. And you can actually just correct that by making clear that you do think that these issues are really important.
It’s just that you think that there are other areas that are really neglected, and that’s why you’re working on them. So you can cancel some of that stuff that is going to alienate people, like, “Oh, you only care about this cause area and you think everything else is rubbish.” And related to the issue of appealing to a broader group of people, or at least bridging gaps: I think that it’s easy for this in-group talk to start occurring within effective altruism, and it’s a really natural thing to have. People want to start using technical language, but one has to be aware that you’re trying to talk to many people.
Effective altruism is not just trying to appeal to one set of academics who already have a technical language; it’s trying to appeal to both members of the public and members of various different disciplines. And I think that in order to do that effectively, you really have to keep your communication as accessible as possible. And that means avoiding terms that are just not common English terms, for example, without fully explaining them. And so I think that trying to be more inclusive in the way that things are described could also be quite good for effective altruism.
Robert Wiblin: Yeah. I’m also pretty worried about jargon. It’s not that unusual for me to hear conversations between people involved in the community that I can’t follow, because they’re just using all of these obscure terms. One thing that particularly bothers me is coming up with new terms that don’t have any natural meaning where there’s already a word or a short phrase that could be used to describe them, which I see happening reasonably often now. It seems almost deliberately exclusionary to come up with these, and really unnecessary.
Amanda Askell: I understand the instinct. I do like the fact that in philosophy it’s really common, if you think that an English language term just doesn’t quite describe the thing that you mean, to just make up a term and then attach meaning to it, and the idea is just to avoid ambiguity. But really that’s only okay within certain discourses. And really it’s only okay if you actually define the term: very commonly, even when people are doing this in talks, they’ll actually give the explicit definition.
And so when people just start using these terms in a kind of offhand way, the cost of that is exclusion, and it really has to be worth the cost: “There is no other way of describing this concept, so I’m going to take on the burden of explaining, every time I use this term, what I mean by it, because it’s so useful.” And that’s true of some things. There are certainly some concepts that are just like that, but they’re not that common. And so I feel quite negatively about discourse that I can’t just enter and understand.
And I don’t think it’s a mark of a good writer or a good communicator that it takes a huge amount of effort to understand them. In fact, I often think that we should have norms where if you don’t understand people relatively quickly, you’re not required to continue to engage with them, for many reasons, but mainly because it’s the job of communicators to clearly tell you what they mean. And if they fail, it’s not your job to-
Robert Wiblin: They impose such large demands on other people.
Amanda Askell: Yeah. And actually it can be damaging. So people can do this in ways that are not just about jargon but about ambiguity. And so if you communicate in a way that’s ambiguous or that uses a lot of jargon, what you do is force people to spend a lot of time thinking about what you might mean, and suppose that they generate like 14 interpretations of what you mean. If they’re a smart and conscientious reader, they’re going to be charitable and they’re going to attribute the best interpretation to you.
And this is actually really bad because it can mean that … ambiguous communication can actually be really attractive to people who are excited about generating interpretations of texts. And so you can end up having these really perverse incentives to not be clear in a way that’s just going to alienate a bunch of people and make some people really attracted to the message that you’re putting across.
And so there are a lot of things operating here that I’m super wary of. And I think that there are norms in philosophy, and they’re not always followed, but one thing that I always liked about the discipline is you’re told to always just basically state the thing that you mean to state as clearly as possible. And I think that’s a norm that I live by. And I also think that people appreciate it when reading.
Robert Wiblin: Yeah, this is getting close to a hobby horse of mine; I’m quite an extremist on these communication issues. So when I notice people who I think are being vague or obscurantist, who are not communicating as clearly as they could, my baseline assumption is that they’re pulling a scam. They’re pulling the scam where they’re expecting other people to do the work for them, and they’re trying to cover up weaknesses in what they’re saying by not being clear. Maybe that’s too cynical.
Maybe that’s too harsh an interpretation. We were saying we should be charitable to other people, but honestly, my experience very often has been, even after looking into it more, that that was my conclusion, especially with people who can’t express things clearly but claim that they have some extremely clear idea of what they’re trying to say. I feel that they’re just pulling a con.
Amanda Askell: Yeah. Something that happens is things can be very difficult to read or understand because they involve a lot of prior knowledge, or assume a lot of prior knowledge. And so if you come to something without that knowledge, it can be really hard to interpret what the person is saying. And this can be, in some ways, through very little fault of the person who is communicating. It’s just that this is a really technical area of engineering, and if you’re a new engineer and you’ve never read any papers in this field, you might not understand this paper.
But it will refer you back to things. It will continuously refer you back until you can basically give yourself the education that you need to understand it. But if you’re coming from the outside, it can be hard to differentiate between someone who’s just so far ahead of you that you just don’t understand what they’re writing and someone who is just being obscure.
Even if you understood everything relevant to this area, you would still think that this person is being obscure. Taking advantage of the difficulty of differentiating those things is really bad. And in a lot of ways, I’m pretty good at telling the difference between the two things. I have to do a lot of literature reviews for my thesis, and some of the papers are quite technical and shorter than they need to be.
Now, I read them and I’m like, “I wish you had made this three pages longer and just explained everything, because that would really help the reader a lot. But I know that the thing that you were saying was true, and you referred me back to lots of things. You gave me the citations, you made reference to the theorems that I needed to know to be able to understand this, and so on. And so you gave me the tools and you weren’t being obscure. It was just difficult to read you without that background.”
As a result, I think both that we should criticize this in academic work and be inclined to make our work much more accessible to people, not just within our field but in other fields, and that we should be extra inclined to criticize it in people who are writing for a general audience, because there’s just no need to do it then.
Robert Wiblin: One pattern I’ve noticed that makes me especially annoyed is when these writers criticize the people who can’t understand them, when they look down on them. And that’s a real red flag for me that something odd socially is going on here. Maybe one reason why I’m particularly skeptical here is that I just rarely find that I encounter this problem myself.
But even in quite complicated areas, I usually find that I can at least get people to grasp what I’m pointing towards even if I can’t explain all the technical details. There’s usually some core that people can conceptualize.
Amanda Askell: I think that’s right. And sometimes it can be hard to communicate in one medium and easier in another. So I find a lot of the work that I do easier to communicate if I have diagrams, for example, and so harder to communicate verbally, easier to communicate on paper. But I’ve almost never met a topic where I’m like, “I could in no way describe this to someone so that they have a basic understanding of it.”
And I would see it as a failing on my part if I couldn’t. I wouldn’t assume that there are just these mysterious areas that are too obscure to ever bring someone up to speed on. I would just assume that, as is often the case when I am in that position, I don’t fully understand the issue.
And so I’m quite inclined to be like, “It’s really our job to make things clear to people, and people appreciate it and then understand it.” And also, it’s completely acceptable for people to ask what things mean and to ask really simple questions. Maybe this is too much of an aside, but there’s this interesting phenomenon that I think does happen, which is that when people become sort of secure that people aren’t going to look down on them, they start asking really simple questions again.
So, often when someone talks about an issue and there’s a word that they use that I just don’t understand, or I don’t understand how they’re using it, I’ll ask what they mean, and I feel completely comfortable doing that. And it’s because I’m pretty confident that they’re not going to think I’m stupid, or that they’re going to sneer at me and be like, “How can you not know that word?”
They know that it’s important for me to understand the word, and I just feel comfortable asking the question. And this is kind of pernicious, because the people who don’t feel comfortable are the people who feel scared that people are going to look down on them. And that means that the people who would find it really useful to have clarification feel like they don’t even have the tools for finding that clarification.
And so I think that worries me especially, because this practice doesn’t alienate people who are sufficiently confident that they’re willing to ask. But it really alienates people who are a little bit less confident or who worry that they don’t know an area. And those are exactly the people that would benefit from clarification of what the person means. And so it’s this negative thing where you certainly shouldn’t look down on people, because it creates that really bad spiral and it really alienates people in a way that makes me quite annoyed.
Robert Wiblin: I’ve noticed that this technique of using vague and confusing language to trick people into thinking that you’re really smart works better with people who are intellectually insecure, for this reason: they don’t feel okay challenging you. And they feel like they have to pretend to understand, and they can’t probe about each point, like, what did you mean by this? What did you mean by that?
It’s really frustrating to see it happen. It’s like, “No, just call bullshit. They’re not saying anything real.” But this is kind of a huge topic that maybe deserves its own section.
Amanda Askell: You know, I think you should really resist this. I resist it a lot, and maybe this is controversial, but in things like the history of philosophy, we have to study historical figures, and I feel like often the historical figures that we study can be the ones that are more obscure, because the ones that were super clear don’t really require much interpretation.
But this can also mean that there’s an incentive to come up with these theories about what the person meant that are just consistent with the thing that they said. The thing that they said was sufficiently vague that there are lots of really great consistent interpretations, and then you say that that’s actually what they meant.
I’m just like, “We can avoid all of this by reacting negatively to, and not engaging with, discourse that’s insufficiently clear.” Just being like, “I read it a few times, and I think that what you’re saying is too vague. It’s your job to clarify, and at the moment it’s not really doing anything for me.”
Robert Wiblin: Alright. That was a big diversion. The third question I wanted to ask is: what is the best argument against effective altruism, or perhaps against what you’re working on, or what the community as a whole is working on?
Amanda Askell: Oh, that’s an interesting question. So again, I want to distinguish the thick and thin notion of effective altruism. I find it hard to think about arguments against the idea that you should be altruistic. And if you’re going to be altruistic, you should try and be quite effective. And I’ve already mentioned some problems with the community that could be kind of arguments against some practices within the community.
I think that, in terms of what people are working on and what I’m working on, a lot of what we’re moving towards presupposes to some extent things like the importance of the far future. So this is something that people are getting really concerned with in effective altruism, and I think quite fairly, but it does rely on views within population ethics that are not uncontroversial.
And so I think that one thing that would be nice to see, and we will see a lot more of, is much more public defense of the population ethics assumptions underlying a lot of the reason to focus on issues that are long run rather than affecting people that are alive today. So I’d like to see more of that. I think it’s important and it’ll be great to see more of it.
Robert Wiblin: Okay. So we’ve talked about a couple of blog posts that you’ve written. Do you think that blogging has helped your career, or is it something that philosophers look down upon?
Amanda Askell: I’m not sure about philosophers. It’s funny: there’s a good chance that more people will have read a given blog post of mine than will ever read anything that’s in my thesis. And so one potential negative effect of doing things like blogging is that it means the things that represent me are pieces that I wrote in 25 minutes or 50 minutes or a few hours, and not the things that I spent four years working on, which are also much harder to communicate.
In general, it’s good. I wouldn’t mind doing more of it, because it forces you to … it’s a different style of writing. I don’t get to assume a bunch of previous knowledge, and it forces you to be really clear about your ideas. So I think that publishing blog posts is generally fairly good, but people have to be a bit more tolerant that you’re probably going to say false things and make mistakes. In general, though, it’s quite good for people to just be able to refer to things. The negative aspect of blog posts is that there are a lot of benefits that come with completing a longer piece.
There’s actually a lot of benefit to the kind of detailed work that you go into when you write an academic paper, and it just doesn’t come out in a blog post, because you never need to do it. And so there could be some really great objection to something that you’re stating, and you’re just not going to find it if you’re only spending a few hours writing up an idea that you had. So blogging is not the only thing that we should be doing, but I don’t think it’s in and of itself negative.
Robert Wiblin: Let’s talk for a while now about your thesis topic, which you’ve been working on for so many years. Which is, broadly speaking, I guess, infinite ethics, the problems it presents, and how we might solve them.
Amanda Askell: Yep.
Robert Wiblin: So, what is the fundamental issue raised by infinite ethics?
Amanda Askell: So there are several problems raised by infinite ethics. Infinite ethics is just ethics where the future contains infinitely many agents who can have positive or negative levels of wellbeing. And most plausibly, it affects ethics when the causal ramifications of your actions are infinite. So the future light cone is infinite. A lot of ethical theories care about the wellbeing levels of different agents, even if that’s not all they care about. So, utilitarianism is a theory that cares only about how your actions affect the wellbeing of agents in the future. But you also have things like non-absolutist deontological theories, which might care about things like rights violations, but also care about how many agents are affected by your actions.
So, I might not be allowed to, say, break a promise, but if it’s the case that, in order to save 100 lives, I have to break a promise, maybe it’s acceptable then. And so, in cases where there are only finitely many agents affected by your actions, there are a lot of principles that we can just endorse without problems. Here’s an example of two classes of principles that are really important in ethics. One class is sometimes called sensitivity principles. These are principles that say, if I make at least one agent better off and I don’t make any agents worse off, then I’ve made the world better. So you’re sensitive to local improvements.
And another class of principles that people think are important are what we might call equity principles. These basically say that if the only difference I make is to who is affected, but not, say, to the total level of how happy everyone is, then my theory shouldn’t care about that. So if I have three agents, one at utility three, one at utility two, and one at utility one, then I should perhaps be somewhat indifferent as to which of the agents has which of those utility levels. And the thought is that it’s a fairness principle, because I’m not biased in favor of one specific agent.
And in finite cases, those principles are generally consistent with one another. In infinite cases, they can often be somewhat in tension. Or at least people have started to identify tensions between these principles. So the reason for this is, one way to formulate equity principles is to say something like, “Well, if the distribution of wellbeing is the same at world one and at world two, then world one is as good as world two.”
But then, I can give a case … So I think this is from Hamkins and Montero. Imagine that you have agents whose names are just their utility levels, and you have all of the integers. So there are infinitely many agents: zero, one, two, three, four, et cetera, and also minus one, minus two, minus three, minus four, et cetera. And then I take each of these agents and I improve their utility by one. And now I still have the same infinite series of integers: zero, one, two, three, four, and minus one, minus two, minus three, minus four, et cetera. And so, by the sensitivity principle, I’ve made the world better. But if our equity principle is one that says these two worlds are equally good, because the distribution of utility levels at the worlds is the same, then we end up with a kind of contradiction.
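To make the structure of the example explicit, here’s one way to write it down; the notation is illustrative rather than Hamkins and Montero’s own:

```latex
% World w_1: for each integer n, agent a_n is at utility n.
% World w_2: everyone is boosted by one, so a_n is at utility n + 1.
\forall n \in \mathbb{Z}: \quad u_{w_1}(a_n) = n, \qquad u_{w_2}(a_n) = n + 1

% The set of utility levels realized is all of Z in both worlds, so an
% equity principle keyed to distributions says w_1 and w_2 are equally good:
\{\, u_{w_1}(a_n) : n \in \mathbb{Z} \,\} \;=\; \{\, u_{w_2}(a_n) : n \in \mathbb{Z} \,\} \;=\; \mathbb{Z}

% But every agent is strictly better off in w_2, so any sensitivity
% (Pareto-style) principle says w_2 is strictly better: a contradiction.
```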
So this is just one example of the kinds of problems that you get in infinite ethics that don’t arise in finite ethics. And most of the problems are of that class. They’re of the class of, we have these principles in finite ethics that are perfectly consistent. And as soon as you move to the infinite world, we actually realize that these principles are no longer consistent.
Robert Wiblin: Okay. So, these problems arise when you have an infinite number of agents, potentially? Or a finite number of agents, some of whom have infinite levels of welfare?
Amanda Askell: Yeah. So, infinite worlds are usually worlds where there’s potentially infinite positive utility, say, and infinite negative utility. I focus on worlds that contain infinitely many agents with finite utility levels. You can also generate similar problems by having one agent with infinite positive utility. I think the set of problems you generate in that case are going to be slightly different, and so it’s important to say what you’re focusing on.
Robert Wiblin: Sure. Okay. So how likely should we think it is that we live in a universe where there could be infinite welfare?
Amanda Askell: Yeah, so … There are a lot of hypotheses that seem to give us reason to at least not be certain that this is not the case. So some theories in physics seem to predict infinite universes. Eternal inflation theory is a notable theory on which the areas of the universe that are conducive to life can be infinite in extent and could contain infinitely many agents with the potential to have positive and negative levels of wellbeing.
You also get much more unusual theories. So the kind of theories that you need in order to generate a non-zero credence can be quite weird. You can have simulation hypotheses and all of this kind of stuff. And that would be enough to make you at least not be certain that it’s not the case. Actually, I think that there are much better arguments for saying, “We should take this very seriously,” because at least on some theories, some of them empirically testable, this could certainly be the case.
Similarly, if the curvature of the universe is zero and there’s some indication that that might be the case, you’d expect a universe that’s infinite in extent.
Robert Wiblin: Okay. So there’s some physical reasons to think that the universe might continue forever or be infinite in space.
Amanda Askell: Yep.
Robert Wiblin: And I guess that’s also true when you think of a multiverse, or like all possible things existing, and other theories that would say there’s an infinite number of beings.
Amanda Askell: Yep. Although we do have to discriminate, I think, between theories that say that there are infinitely many agents and theories that say you can causally affect the wellbeing levels of infinitely many agents. That’s not to say that these problems wouldn’t arise even if it were merely the case that there were these isolated universes that we couldn’t affect but that contain infinitely many agents. But I think one can then start to appeal to the causal ramifications of your actions to try and solve that.
But importantly there are some theories in which the causal ramifications are themselves non-finite. And so we can’t get around it simply by doing that, I think.
Robert Wiblin: And you’re saying we’re not sure that this is false. And I guess that’s because, if you think that there’s any probability that you might affect an infinite number of agents, then a lot of these problems go through even if you’re not sure about it, right?
Amanda Askell: Yeah. So what you want to try to do is figure out first what happens just if you assume that this is correct. So let’s just assume that the universe is infinite and has all these properties. And then what you want to try and do is step back and say, “What if I’m just uncertain about whether this is the case or not?” And so, one of the things I try to do in my thesis is to at least indicate that there are analogous puzzles that arise for agents who are merely uncertain over these infinite outcomes.
And there’s also something a little bit concerning if you can show that some of these hypotheses that you have nonzero credences in generate these very bad problems in ethics.
Robert Wiblin: Yeah, so I always heard about infinite ethics as this problem that … Let’s say it’s possible that there’s an infinite amount of utility in the world, or that the future will have an infinite amount of utility. Because if you add a finite number to infinity, it’s still infinity, and if you multiply infinity by a finite number, it’s still infinity, it seems like you can’t have any effect on the future total of utility in the world. Every action seems equally good. And potentially it’s also undefined, because you can have both infinite positive and infinite negative utility.
Amanda Askell: Yeah.
Robert Wiblin: And so there’s just nothing you can really say about the relative goodness of the consequences of your actions.
Amanda Askell: Yeah. And so I think we can avoid the first problem, this problem of, like, I just add some amount to this infinite positive utility and I don’t get any difference. That’s based on the idea that we should be measuring the goodness of outcomes by something like the cardinality of the utility, where cardinality is just what can be put into one-to-one correspondence. Because, if I take an infinity and I boost two agents up, it’s still the case that I have the same total cardinality of utility, or whatever.
I don’t think that’s how we should think about utility in infinite worlds. So I’m more sympathetic to the idea that you should care about improvements, and you can simply say that if I make any agents better off, then, relative to the world in which they’re not made better off, I have, in fact, made a difference. So imagine I have infinitely many agents at utility one, and then I boost two of them up to utility two. You might think I haven’t made a difference. But I have, if I adopt sensitivity principles like Pareto. So Pareto says, if I make some agents better off and I make no agents worse off, then I’ve made the world better. That principle is going to say that the world where I improve the lives of the two agents is better.
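For concreteness, the principle being invoked is usually stated along the following lines (a standard strong-Pareto formulation, in illustrative notation):

```latex
% Strong Pareto: if no agent is worse off in w_1 than in w_2, and at
% least one agent is strictly better off, then w_1 is the better world.
\Bigl( \forall i:\ u_{w_1}(i) \ge u_{w_2}(i) \Bigr)
\;\wedge\;
\Bigl( \exists j:\ u_{w_1}(j) > u_{w_2}(j) \Bigr)
\;\Rightarrow\; w_1 \succ w_2
```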
The key problem is that, from my point of view, once we bring that principle in, we can generate this inconsistency with these really plausible equity principles. So, yeah, we solve one problem and we generate a bunch of other problems, basically.
Robert Wiblin: Okay, so you want to say that the classic infinitarian paralysis, this thing where the consequences of all actions look equally good, and maybe undefined … that that isn’t a major problem in your mind.
Amanda Askell: I think that’s not a major problem if you accept something like the Pareto principle, or some kind of extension of the Pareto principle.
Robert Wiblin: So why should we accept the Pareto principle, rather than just kind of summing it all up and saying, well, it’s infinite no matter what?
Amanda Askell: I think the question then is, what are our fundamental ethical commitments? So, to my mind, the idea that improving the lives of some agents while not making any agents worse off results in a better world is a pretty fundamental ethical principle. Whereas the principle that says something like, sum the total wellbeing and then just look at what the sum says … especially in infinite cases, where this isn’t well defined, you’re just identifying the sum with infinity. To me, the more fundamental principle is the sensitivity one. When you really reflect on it, it’s quite a strange principle to say: sum the total utility, and if it’s infinite, just label it infinity, and say that anything that gets that label is equal to anything else that gets that label. That seems really implausible to me, whereas Pareto seems like a very plausible ethical principle. So yeah, I would happily ditch that sum principle in exchange for keeping the Pareto principle.
Robert Wiblin: Do other philosophers in this area, I guess, all two or three of them, agree with you? Should this be regarded as a problem that we should stop worrying about so much? Because I imagine there are potentially hundreds of listeners who are aware of this infinitarian paralysis issue and might be somewhat troubled by it. And you’re saying, “No, don’t worry about this. There are other issues that we need to worry about.”
Amanda Askell: Yeah. So my impression is that the key problems that people have focused on in this literature often do involve a commitment to the Pareto principle first and foremost. There are multiple literatures where this stuff is happening. Some of it’s happening in philosophy, and some of it’s happening in the intergenerational equity literature in economics, and there’s discussion that goes back and forth there. But most of the discussion in that literature does presuppose Pareto, and doesn’t presuppose something like, you’re just looking at the sum total, and if the sum total is positive infinity, then just treat all of those things as equal.
And that principle is just inconsistent with really plausible notions of what makes a world better. So imagine infinitely many agents at plus one utility, whatever that means on our scale. And then take half of those agents and just bring them down to zero. And it’s like, well, the total’s still infinite and positive. Should I really say that I’ve not made the world worse by making infinitely many agents worse off? I mean that seems like a bad result. So I think if your theory says I can make infinitely many agents worse off without making any difference to how good the world is, that doesn’t feel like a good ethical theory to me.
Robert Wiblin: So, instead of getting rid of this desire to kind of sum all the utilities, are there any different ways of treating infinity or conceptualizing infinity that would get around this problem?
Amanda Askell: So, I think that the issue is not with how we are conceptualizing infinity. There are decision rules that can be used to try and discriminate between different worlds that contain infinite utility. One important problem in infinite ethics is what you’re summing over. In finite worlds, it doesn’t matter whether you sum over times or over agents. So I can take a roomful of people … Suppose there are two people in the room, and they’re going to be in the room for two hours. At each hour, I look at how much wellbeing there is in the room in that hour. In the first hour, there’s four total, and in the second hour there’s six total. And so I say, okay, this two-person, two-hour room contained ten total units of wellbeing. Or, I could look at how much wellbeing each agent has over the course of the two hours. It turns out that the first agent has five over the course of the two hours, and the second agent also has five. And I’ll get the same result.
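Laying out one assignment of numbers consistent with that example (the particular values are illustrative assumptions, chosen to match the hour totals of four and six and the agent totals of five and five):

```latex
% u_{i,t} = agent i's wellbeing in hour t.
u_{1,1} = 2, \quad u_{2,1} = 2, \quad u_{1,2} = 3, \quad u_{2,2} = 3

% Summing over hours: (2 + 2) + (3 + 3) = 4 + 6 = 10.
% Summing over agents: (2 + 3) + (2 + 3) = 5 + 5 = 10.
\sum_{t} \sum_{i} u_{i,t} \;=\; \sum_{i} \sum_{t} u_{i,t} \;=\; 10

% With finitely many terms, the order of summation never matters;
% with infinitely many agents and times, it can.
```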
In infinite worlds, it matters whether you’re summing over times or over agents. So I could give a known example that generates this problem. This is a very famous thought experiment in infinite ethics, which says: imagine I have a world where there are infinitely many agents distributed in space-time, and all of them are really happy. And I drop a sphere of suffering into this world, and it expands at a finite rate. Once you’re within the sphere of suffering, you will never leave it, and you’ll be unhappy thereafter. And these agents live for an infinite amount of time. So this means that in this world, every agent starts off happy for a finite period of time, and then they end up unhappy for an infinite period of time, because they’re within this terrible sphere of suffering that’s expanding.
And then I could take a different world with the same setup, but where all of the agents start off deeply unhappy, and I drop a sphere of happiness into the world and it expands at a finite rate. And once you’re within the sphere, you’re happy. And so each agent in this world is unhappy for a finite period of time and then happy for an infinite period of time.
Now, if you were to just look at any given time in the first world … so just imagine we take a time slice. We look, and there are infinitely many happy agents and this finite sphere of suffering agents. So at any given time, this world has finitely many unhappy people and infinitely many happy people. And so, if you just looked at times, you might say actually this first world, with the sphere of suffering, is better. Then take the second world: at any given time, you’re going to see finitely many happy people and infinitely many unhappy people, and so you’re going to say the sphere of happiness world is worse. But if you look at each agent’s life, in the sphere of suffering world they have infinitely bad lives, and in the sphere of happiness world they have infinitely good lives. And so if you were to sum across lives, you would say that the sphere of happiness world is better.
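Schematically, the two aggregation methods pull apart like this (a sketch of the structure, not Cain’s own formalism):

```latex
% Sphere-of-suffering world: at every time t, only finitely many agents
% are inside the sphere, while infinitely many remain outside it.
\text{for all } t: \quad |\{\text{unhappy at } t\}| < \infty,
\qquad |\{\text{happy at } t\}| = \infty

% Yet each agent is happy for only a finite stretch and unhappy forever
% after, so every individual life sums to negative infinity:
\text{for each agent } i: \quad \sum_{t} u_i(t) = -\infty

% The sphere-of-happiness world reverses both verdicts, so time-slice
% aggregation and whole-life aggregation disagree about which is better.
```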
And so you get this conflicting assessment in infinite cases. And this actually is a problem for how you are going to add up utility, if that’s what you want to do in infinite worlds. Because space-time has this nice feature: it comes with structure. So I can look at one region and then there’s a natural second region. I can look at the first area, and then I can look at the second area. Agents don’t come with this kind of structure. I can move agents around in space-time, and they don’t come with a natural ordering.
If you have a natural ordering of agents, you can start to use all of the standard tools that we use to compare these infinite sequences. I should say that the sphere of suffering and sphere of happiness case is a case by Cain. It’s a good paper, and I recommend people read it. But you can take these infinite streams of times, and you can do things like look at the difference between stream one and stream two, and then see whether … if the limit of the difference is positive, for example, then you can just say that one world is better than the other. You can start to use principles like this, which people have proposed in the intergenerational equity literature and elsewhere. If you’re summing over agents, and agents lack natural structure, you can kind of see why those principles aren’t going to help you very much, because now you have to say things like “under all possible orderings of sequences.”
So you’d say: if, under all possible orderings of agents, the first stream is better than the second, then the first world is better than the second. But that’s such a weak principle, because there will virtually always be some ordering under which the difference diverges. So that’s why I don’t think that the problem is necessarily with our conception of infinity. It arises because, if the things we’re summing over lack a natural ordering, then a lot of the tools that we have for comparing them really don’t work as well.
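One family of comparison rules from the intergenerational equity literature that’s being alluded to here can be sketched roughly as follows (a simplified overtaking-style criterion; the exact statements in the literature vary):

```latex
% Fix an ordering of locations (times, generations) and compare the
% partial sums of the utility difference between worlds w_1 and w_2:
D_n \;=\; \sum_{t=1}^{n} \bigl( u_{w_1}(t) - u_{w_2}(t) \bigr)

% If the partial sums are eventually and stably positive, judge w_1
% better than w_2:
\liminf_{n \to \infty} D_n > 0 \;\Rightarrow\; w_1 \succ w_2

% The rule leans entirely on the fixed ordering of t. If the locations
% are agents with no natural ordering, demanding D_n > 0 under every
% possible ordering almost never delivers a verdict.
```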
Robert Wiblin: Okay. I just want to take a quick diversion from this to get you to answer people who might be listening to this and are saying, “We’ve lost the plot worrying about infinite worlds.” Like, “This has just become absurdly philosophical. How can this have anything to do with doing the most good?”
Amanda Askell: Yep.
Robert Wiblin: What do you say to that? To that criticism?
Amanda Askell: So I have sympathy for that response. I think the reason why these questions are important is that they demonstrate inconsistencies between fundamental ethical principles. And those inconsistencies arise and generate problems even if you’re merely uncertain about whether the world is like this. And the fact that the world could in fact be like this means that I think we should find these conflicts between fundamental ethical axioms quite troubling. Because you’re going to have to give up one of those axioms, and that’s going to have ramifications for your ethical theory, presumably also in finite cases. Like, if you rejected the Pareto principle, that could have a huge effect on which ethical principles you think are true in the finite case.
But, I do have sympathy for the concern. I don’t think that this question is an urgent one, for example, and so I don’t think that people should necessarily be pouring all of their time into it. I think it could be important because I think that ethics is important and this generates really important problems for ethics. But I don’t necessarily think it’s urgent.
And so I think that one thing that people might be inclined to say is, “Oh this is just so abstract and just doesn’t matter.” I tend to think it does matter, but the thing maybe that you’re picking up on is it’s possibly not urgent that we solve this problem. And I think that’s probably correct.
Robert Wiblin: Okay. So let’s come back to your thesis and trying to solve infinite ethics. So what did you end up concluding with your thesis?
Amanda Askell: So I started out trying to solve the problems of infinite ethics, some of which I’ve indicated here and some of which existed in the literature. And I kept hitting very similar problems. The thing that I realized then is, perhaps I have an impossibility result on my hands, and the reason why I keep hitting these same problems is that there’s some fundamental tension here that I can try to identify. So I feel like I did identify that fundamental tension, and it’s between the Pareto principle, which I described earlier and which I think of as a very fundamental axiom in ethics, and a collection of further axioms.
So, among the further axioms, one of them is a permutation principle, which just says that I can permute the populations of worlds: I can find other worlds where agents play different qualitative roles. So, for example, I might play the role of an agent three years from now, or something like that, instead of the role that I play in this very world. I also have a principle which is the claim that this at-least-as-good-as relation between worlds is a qualitative relation. That just means that if I have two worlds, w1 and w2, and I have another pair of worlds that is a qualitative duplicate of that pair of worlds, w3 and w4, then w1 is at least as good as w2 if and only if w3 is at least as good as w4.
Where qualitative duplicates just means duplicates in all qualitative respects. So everyone’s wearing the same color shorts, the sky’s the same color, everyone has the same levels of wellbeing. And this seems really plausible to me, because the alternative would be that ethics has to depend on non-qualitative facts. The nature of non-qualitative facts is controversial, but basically that includes who I am. So suppose I care deeply about the fact that Obama is in a world. It doesn’t seem like it should matter whether it’s Obama or just someone who’s qualitatively identical to Obama. So it seems like ethics shouldn’t care about such facts, and I think that’s a really plausible principle. There are two other axioms that are also quite plausible. The first is transitivity: it’s not the case that world one is better than world two, world two is better than world three, and world three is better than world one. It just prevents these cycles.
And then the final axiom is a kind of minimal completeness axiom, which just says that there isn’t ubiquitous incomparability between infinite worlds. Because I was generating these incomparability results from the first four axioms: Pareto, the permutation principle, the qualitativeness of the at-least-as-good-as relation, and transitivity. We then can’t also accept those four axioms and the idea that we can offer complete rankings of infinite worlds. And so we have a kind of case of genuine ethical incomparability with respect to infinite worlds. That creates a lot of problems in subjective ethics, the ethics of permissibility: I think we get analogous problems there, and it seems problematic in and of itself. It’s not incomparability as a result of ignorance about the way that the world is. It’s just that some worlds are fundamentally incomparable ethically.
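Gathering the five conditions in one place (paraphrased in illustrative notation; the thesis gives the precise statements):

```latex
% 1. Pareto: everyone at least as well off and someone strictly better
%    implies a strictly better world.
% 2. Permutation: for any permutation of agents across qualitative
%    roles, some world realizes the permuted assignment.
% 3. Qualitativeness: the at-least-as-good-as relation depends only on
%    qualitative facts, so for qualitative duplicate pairs (w_1, w_2)
%    and (w_3, w_4):
w_1 \succeq w_2 \iff w_3 \succeq w_4
% 4. Transitivity: no cycles, i.e. never
w_1 \succ w_2, \quad w_2 \succ w_3, \quad w_3 \succ w_1
% 5. Minimal completeness: incomparability between infinite worlds is
%    not ubiquitous.
% The impossibility claim: no betterness ranking of infinite worlds
% satisfies all five at once.
```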
I then explore what it means to give up one of these principles in ethics, but the key argument is, we can’t accept all of these things.
Robert Wiblin: So, there’s a lot to deal with there. Does this come back and have any implications for finite worlds as well?
Amanda Askell: Yeah. So I think these puzzles affect finite ethics insofar as, even if we think that the world has a very high probability of being finite, we can still generate similar puzzles for subjective permissibility. That is, what I should do given my uncertainty about the way that the world is, if I’m uncertain about whether the world is infinite or not. So that’s one way in which it could affect finite ethics.
Another, slightly more abstract way that it could affect finite ethics is that you might think ethical theories are a lot like theories in physics, say. So if you’re more of a realist, you might just think that something like utilitarianism or deontology is akin to a theory about the way that the world is. And if you find that there are ways the world could be where your theory completely breaks down, that’s a kind of failure of your theory.
And I’m quite sympathetic to that, to the extent that I actually do get a little bit worried by these puzzles. I feel like the right ethical theory should be consistent with all the ways that the world could be that are at least consistent with our laws of physics, say. And the fact that a lot of our theories are not, that they’re generating these problems, is concerning.
So those are the two main respects in which I think it can affect what you should do, even if you’re very confident that the world is finite.
Robert Wiblin: Okay. So you listed those five different things that we’d like to have, but we can’t have all five. So, which ones would you consider dropping? Which ones are least painful, and what would that imply?
Amanda Askell: So this is a tricky question. I find each of these axioms highly plausible. Moreover, dropping many of them doesn’t even help as much as you’d want it to. So you might reject the Pareto principle, for example, because you don’t think that agents are this special unit over which you should be doing ethics. Maybe you think that subjective experiences are the thing that matters. But in fact, this kind of worry is going to apply to things like subjective experiences too, assuming you can identify the same subjective experience across worlds. Or, if you just care about subjective experiences, you can end up being basically unable to rank infinite worlds anyway, because it’s not clear how you could possibly compare infinite worlds with infinitely many positive subjective experiences and infinitely many negative subjective experiences.
So, I like Pareto, and dropping it isn’t obviously going to help you. The same is true of some of the other axioms, like the permutation principle. You might reject it for various reasons, but we can actually accept weakened versions of it and generate the same puzzle. So, I am kind of uncertain about which of these axioms I would give up. I really don’t like it that we get non-comparability out of these results. If I wanted to just avoid non-comparability, I might be inclined to say that we have to reject Pareto, but I really don’t like that result.
Because I think if you reject transitivity, you just get cycles, and ethics operates really badly if you have cyclical orderings. The other principles seem really fundamental, and Pareto is the other big ethical principle. So I don’t want to give up Pareto, but that might be one way of getting around these problems.
Robert Wiblin: So, accepting non-comparability is kind of like accepting defeat ethically, right? Just saying, whoa, everything’s incomparable. I suppose in a sense that’s an ethical result, but it’s also kind of a concession that ethics doesn’t have anything to say.
Amanda Askell: Yeah. At least in a large number of worlds. It could be the case that we find ways of doing like, subjective ethics. So we say, “Yes, many of these infinite worlds are incomparable. But, we have some credence that the universe is finite and when the universe is finite, we can compare worlds.”
And so, you might be able to come up with subjective principles that give a way of screening off the worlds that you’re uncertain over that are incomparable. I generated problems for those theories at the very end of my thesis, but I think that’s a kind of incomplete project, basically. And so one thing that I would like to do going forward, if I ever get a chance to work on this specific problem, is to try and see what subjective principles we could use even if we accept this incomparability result.
Robert Wiblin: Okay. And if we drop transitivity, then we end up just with some other kind of nonsense outputs where we can’t really say that something’s better than something else, because they’re just running in circles?
Amanda Askell: Yeah. You end up with cycles. And some people have argued against transitivity in ethics, so it’s not something that never occurs. But it can make it, I think, quite hard to generate principles that tell you what you ought to do. So if I just say that, you know, outcome one is better than outcome two, which is better than outcome three, which is better than outcome one … And you know …
Robert Wiblin: What should you do?
Amanda Askell: Yeah. What should I do in that case? And it might just be that, when you have these cyclical rankings of the outcomes of your actions, you say all things are permissible or all things are impermissible. But that seems like a really bad result for ethics. We have these things that we want ethics to satisfy, and one of them is that it’s not the case that all actions are permissible. And so, giving up transitivity means that you also have this big task of trying to generate really plausible principles about what you ought to do.
Robert Wiblin: Okay. And it sounded like getting rid of Pareto would only help you in particular cases anyway, and it’s also pretty unpleasant, because Pareto is so intuitive to you?
Amanda Askell: Yep.
Robert Wiblin: And the other two conditions, I don’t actually understand, so I maybe won’t follow up on those. But, it sounds like it’s a pretty bitter pill to swallow, the impossibility result here.
Amanda Askell: Yeah.
Robert Wiblin: So I guess you’ve made things worse. Or, you’ve solved the infinite ethics problem I was concerned about before, but given us an even more challenging one.
Amanda Askell: Yeah. I think I did; my thesis just made things worse. That was the main outcome that I didn’t want. But in some ways, just being clear on exactly what you can’t have can be really helpful.
So, to be more charitable to the idea of giving up Pareto over agents, you can think that what we care about is the amount of wellbeing at spatiotemporal regions. And then we do actually have theories that can generate rankings of actions that are fairly intuitively acceptable, if you think that that’s what we’re aggregating over.
Basically, each person who has a favorite ethical theory or view of the world is going to say, “Yes, I don’t like giving up any of these things, but I know which one I’m going to give up.” So the people who already didn’t like transitivity may give up transitivity in this case. The people who like aggregating over space-time might just say, sure, I give up Pareto over agents, because what I cared about was the space-time regions.
Robert Wiblin: Yeah, I gotta say I don’t find Pareto that intuitive, because I don’t think of agents as being fundamental. I just wanna think of the world as kind of scattered experiences, and the idea that all of the experiences attached to ‘Rob’ form a natural grouping seems very strange to me.
Amanda Askell: Yeah, and so I’m sympathetic to that view. I think the best theories that get around this problem, and that are inconsistent with agent-based Pareto principles, are these spatiotemporal region views. But if you have this view which says, I don’t actually care about how wellbeing is distributed across space-time, but I do care about subjective experiences, it’s actually super unclear how you can possibly rank infinite worlds, because these subjective experiences have even less structure than agents do.
Suppose that each agent is composed of thousands of subjective experiences, and suppose that there’s no way of identifying those experiences across worlds, so it’s not the case that my experience right now can be identified with some other subjective experience I could be having right now. Then it’s just not clear how you can rank infinite worlds, because most worlds are just going to contain infinitely many positive experiences and infinitely many negative experiences.
Robert Wiblin: There’s no ordering or correspondence-
Amanda Askell: Exactly.
Robert Wiblin: -to any of this.
Amanda Askell: Yes. So this subjective experiences view, I kind of am sympathetic to it from a philosophical point of view, but it’s not clear that it helps us at all. If anything, it makes us kind of throw up our hands and be like, I have no idea how to rank these worlds now, because I have even less structure than agents or space-time give me. It’s a reason to reject Pareto, but it doesn’t necessarily give us a result [crosstalk 00:35:53] in infinite worlds.
Robert Wiblin: You’re just going to end up with another impossibility proof.
Amanda Askell: Yeah, that’s the concern.
Robert Wiblin: So, it seemed like earlier you might be gesturing towards a possible solution, which would be to say that in infinite worlds everything is incomparable and in finite worlds things potentially are comparable. Which would suggest that we should do that trick where we just imagine that the world is finite, because if it’s infinite then there’s nothing that we can do, and then try to maximize the world assuming that it’s finite. So we end up with infinity goggles, where we’re just blind to any infinite outcomes. Is that an option?
Amanda Askell: Yeah, so you could just have a principle which says “ignore all possibilities that the world is infinite.” So just care about the worlds that are finite. And then suppose that I have two actions that I can undertake. The first action will certainly improve the life of this one agent, Bob, by plus one. And the other action, which is just not doing that thing, will not improve Bob’s life. Now suppose that if the world is infinite, then not improving Bob’s life will actually make infinitely many agents better off than they would have been, and improving Bob’s life will not; it has a guarantee of not improving the lives of infinitely many agents. It seems like you should potentially just not improve Bob’s life, then. You have some chance of making infinitely many agents better off. And if you don’t find that plausible, then imagine a chance of rescuing infinitely many agents from hell or something like this.
It seems like infinities, even small probabilities of them, should potentially trump in a case like that. But if you just ignore infinities, you’re going to say, just help Bob, and just don’t care about all these potential infinite benefits or losses.
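In expected-value terms, the worry is easy to state (a sketch, where p is an arbitrarily small but nonzero credence that the infinite scenario obtains):

```latex
% Helping Bob: a certain finite gain.
EV(\text{help Bob}) \;=\; 1

% Not helping: probability p of benefiting infinitely many agents.
EV(\text{don't help}) \;=\; p \cdot \infty \;=\; \infty
\quad \text{for any } p > 0

% Taken at face value, any nonzero chance of infinite benefit trumps a
% sure finite gain; the "infinity goggles" rule instead ignores the
% infinite branch entirely. Both verdicts look bad.
```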
Now, a different principle that you might have is one that says: okay, ignore the outcomes of your actions that are incomparable. So basically, if an action in some state generates a result that’s incomparable with the result of any other action in any other state, then ignore that outcome. And this leads to a bunch of other puzzles that I discuss a little bit. An obvious one is that it’s not clear there’s always going to be any outcome left that you don’t ignore. We can imagine that for every action there is some other action such that the outcomes of both actions are incomparable in at least one possible state of the world, and so then it would be like, okay, there’s no outcome that I actually pay attention to here.
So, for reasons that I go into in more detail elsewhere, I think that these puzzles come up, and it’s just very difficult to actually generate good principles which tell you what to ignore without having really bad further consequences.
Robert Wiblin: So this all sounds like a counsel of despair to some extent. How much of an update is this against moral realism or naturalism, or at least against consequentialism?
Amanda Askell: Yeah, so the interesting thing for me is that I think that this affects all moral theories and not just consequentialism, because the impossibility result is going to apply to any theory, and of course different theories are going to bite different bullets. So if you’re an absolutist deontologist, you might just say, I should ignore the fact that my action may harm infinitely many people if, in performing the action, I’m satisfying some duty that I have, say. But I think that’s a real problem. A lot of people think that the case where you have to tell the Nazi who comes to your door where you’re hiding Jews, because you’re obligated not to lie, is a really terrible counterexample to Kant’s theory, and I’m inclined to agree. I think that’s a terrible thing to do, and here the view says yes, and you shouldn’t even care that you could be saving infinitely many people. I think that would also be a very bad result.
So, you can bite bullets but I think every moral theory has to face the fact that when our actions affect potentially infinitely many people we’re going to have to give up some plausible axiom or some axiom that at least most people think is plausible.
That is a kind of general worry for ethical theories. Should it make us more inclined to be anti-realists, and to think that this is a real blow to ethics? I’m not sure. I do think that I updated slightly away from something like moral realism on the basis of this, because it starts to look like you can’t have a moral theory that has all the properties that you want. I don’t know how much of an update it was, but it was a partial update for me, I guess, and a partial update away from consequentialist theories as well.
Robert Wiblin: I guess, given that it’s only a partial update, it sounds like you think there’s a possibility that you’re mistaken, or that someone else in the future will come up with a more satisfactory resolution of these issues that would make it work again on some moderately intuitive level?
Amanda Askell: Yeah, or just … When you get impossibility results in ethics, they do affect everyone, and they are just showing that there are certain axioms that we can’t jointly satisfy. And so I don’t think that people are going to be able to show that you can somehow jointly satisfy those axioms.
I think one of the things that’s happening in this literature is that you’re really showing that certain axioms are just fundamentally in tension with one another. We might discover that there are actually ways of giving up one of these axioms that are much more palatable. An example might be, just to generate a possible one: we find a really good way of dealing with intransitivity in ethics. We just say, actually, intransitivity is correct, but here’s a really good decision theory for what you should do when there are these intransitive orderings.
That’s why it’s kind of a partial update: we don’t yet know what the best theories that are consistent with only some of these axioms are going to look like, and they could actually look quite good.
Robert Wiblin: Okay, so before we started here, I was expecting to worry that infinite ethics would lead to some kind of moral fanaticism, where people would just be trying to create infinite amounts of good and they wouldn’t care about anything else. But it seems like we’ve gone almost in the entire opposite direction. We wish that we had such a strong conclusion that we could work with, but in fact it’s just leading to incomparability across lots of reasonable worlds.
Amanda Askell: Yeah. In many ways I do worry about fanaticism concerns, this idea that I should care only about really small probabilities of potentially infinite value over a really high probability of finite value. That seems bad to me. But then, as I’ve worked on this stuff, I’ve realized that we have a problem generating any ranking of actions or outcomes in infinite worlds. So first we need to worry about generating any ranking, and then we need to worry about the plausibility of that ranking. Ideally, at the very end, we somehow come up with a plausible and complete ranking; that’s the goal, but we’re seeing problems with doing any of that. So those are what I think are the key problems in infinite ethics at the moment.
Robert Wiblin: Yeah, we’re more paralyzed than fanatical.
Amanda Askell: Yeah.
Robert Wiblin: In infinite worlds, would you possibly be able to make comparisons between really simple and boring worlds, like one with an infinite number of positive experiences and one with an infinite number of negative experiences?
Amanda Askell: Sure, there are spaces where … And this is why, if the causal ramifications of your actions are finite, then we can rank worlds: we can have principles which say that if one world is better than another by Pareto, then it’s simply better. You also get these mild extensions of Pareto, which say, for example, that if a world is better for infinitely many people and worse for only finitely many, then the first world is better, as long as those infinitely many people aren’t each better off by some merely infinitesimal amount.
So there are worlds that are comparable by Pareto or very mild extensions of it. The concern is that, if you just take the space of all possible worlds, or all kind of random-walk worlds, these comparable worlds are not going to be very common.
If the world is finite, if you can be certain that it’s finite, then you just happen to be in a little pocket of those comparable worlds. So yes, if you suddenly became certain that the world is finite, our standard theories for comparing worlds would actually just generate complete rankings and would be like, here you go, here are the actions you should undertake.
Robert Wiblin: So, what does infinite ethics mean, if anything, for priorities, for the effective altruism community or listeners’ day-to-day actions?
Amanda Askell: I kind of like to think that it doesn’t mean too much. I think these are very interesting puzzles, and I hope that they show people that we can find these fascinating problems with ethical theories in physically possible worlds.
I do think that it means that these things might be important to sort out in the long term, in the same way that any major problem for a scientific theory is really interesting and important to sort out in the long term. But it doesn’t necessarily mean that we should be working on something different right now.
One possible caveat is that it might mean that work which allows us to do that work in the future is a bit more important. So if you think that there are really important unresolved problems, then things that give you the space at some point in the future to research this stuff can be more important.
Robert Wiblin: Okay yeah, so we shouldn’t strongly commit to something that prevents us from further reflecting on infinite ethics or indeed all these ethical paradoxes that-
Amanda Askell: Yes, exactly. You want to leave this space of research open, and that’s quite important. These issues might not be urgent, but at some point it would be really nice to work through and resolve all of them, and so you want to make sure that you leave space for that and you don’t commit to one theory being true. And I think the important lessons of impossibility theorems in ethics are mainly that ethics is hard, and that you shouldn’t act as if one ethical theory or principle or set of principles is definitely true, because there are a lot of inconsistencies between really plausible ones. That’s a more general principle that one should live by, and maybe these impossibility results just strengthen it.
Robert Wiblin: Okay, so your thesis is very long and you’ve got a-
Amanda Askell: Yes it’s very long.
Robert Wiblin: -and you’ve got a bunch of papers. So in a sense we’ve only talked about one result, I guess the most important result, and we’ll stick up a link to a bunch of papers, draft papers, and your thesis on this topic, if that’s okay, so listeners who are keen for more can go and check that out.
Let’s talk a bunch about philosophy as a career. Most listeners are not about to become academic philosophers, but a few of them might, and people might be interested to learn what that kind of lifestyle is like. So what do you think people misunderstand about careers in philosophy?
Amanda Askell: This is a hard question, because it depends a lot on the assumptions that people have going in. My impression now is that there are far fewer jobs in philosophy than there used to be, and that there are a lot of philosophy PhDs coming out. So maybe something like the difficulty of that process, and also how demanding and competitive it is. You’re going to have to dedicate a lot of time: I spent two years doing a master’s and six and a half years doing a PhD.
If I wanted to be an academic philosopher, I’d probably have to do a postdoc to spend some time publishing the things that I’ve already written, and then you’ve got however many years on the tenure track. A huge amount of your time during the tenure track is going to be taken up with teaching and admin, with a relatively small amount of time for research. And you need to publish enough to get tenure, which means publishing in good journals, which means that the topics that you’re writing on are determined by what can be published in a journal.
And obviously, I’m talking about how I perceive the US system to be at the moment. One thing I’ve worried about is, if you really want to focus on doing effective research, for example, are these jobs as conducive to that as you think? Be aware that you might not spend as much of your time as you expect doing that work. I think people might have this image of just sitting doing research all the time, researching whatever you want, and I think that’s probably not true until you’re quite senior in your career.
Robert Wiblin: Do you think more people should be trained to become philosophers or fewer?
Amanda Askell: I think fewer, by quite a long way, I guess. My impression of the job market is that there are a lot more temporary posts, a lot more adjunct teaching positions, and fewer permanent positions, and that there are a lot of people still going into PhD programs. So, yeah, I think it’s very tough for people right now. I don’t know the numbers, so I’d need to go look at how many PhDs in philosophy are produced each year, and how many of the jobs that people actually wanted going into their PhD exist. Those will probably not be adjunct teaching jobs and temporary positions, but tenure-track jobs. I wouldn’t be surprised if the numbers were horrifying. And so, yeah, I think it’s tough to recommend this career path if that’s the case, which it almost certainly is.
Robert Wiblin: Yeah. Is there a case for doing a philosophy PhD, even if you think you’re probably not going to get one of these desirable academic jobs at the end?
Amanda Askell: There’s a question of what people do after their PhD, and a key question there is also what they could do otherwise. I think you can do things with a PhD in philosophy. It can be a little bit tough, because I think a lot of people don’t know what philosophers do. The work that I do often intersects really strongly with economics, for example, and sometimes I feel more comfortable describing my research as being on the border of economics, because it makes people engage with the kind of questions that I’m going to be engaging with. But philosophy is so broad as a discipline that if you say that you’ve got a PhD in philosophy, that can mean that you’ve been doing work on free will, or on neuroscience, or in physics, or in decision theory. I think it can be tough because people don’t necessarily know what you’ve done, and there’s not a good shared model of what a PhD in philosophy involves.
I also suspect that, given that cost, if one can do a PhD in another discipline that is more tailored to whatever they want to do, if that’s not an academic job, that’s probably going to be an advantage. So, if you know you want to work for a think tank for which an economics PhD would be really useful, you could potentially do that with a philosophy PhD, but it’s probably going to be easier with a more tailored PhD. So, if you don’t want to go into academic philosophy, I think there will often be an alternative degree that is better.
Robert Wiblin: I guess it’s just also an enormous time investment.
Amanda Askell: It’s a huge time investment.
Robert Wiblin: And you’re not getting paid in the meantime. It’s quite a difficult lifestyle it sounds like.
Amanda Askell: Yeah, you get your graduate stipend, but relative to what you would be earning, it’s probably extremely low.
Robert Wiblin: Even if you’re just trying to gain skills and credibility, it might be possible to get more of that at less cost somewhere else I would imagine.
Amanda Askell: Yeah, especially because a lot of the programs are tailored towards academic teaching, or at least being able to teach, so you have to do a broad array of courses. Ideally, in a lot of the programs, you come out with a really broad knowledge of the subject matter: you’ll have to have done metaphysics, history of philosophy, value theory, some philosophy of math, and logic, because you have to be able to teach all of these areas. If you think that you’re not going to want to become an academic, this is highly tailored training for something that you might never use again.
Sure, that builds up general skills. Philosophy is really great for being able to analyze arguments, clarify concepts, and reason generally. But is there a more efficient way to get that than a five-to-seven-year PhD? I suspect that there is. I think that for a lot of jobs that aren’t academic philosophy, there are just going to be better things you could do than a philosophy program.
Robert Wiblin: So, you’re basically at the end of your PhD now, and you’re leaving with a very prestigious credential, graduating from NYU, and as listeners can hear, you’re very smart. So you’ve probably got a lot of different options in front of you. You could try to go into philosophy now and become an academic. I guess you could try to become a public intellectual or something like that, or do some kind of advocacy work. I guess you could also become a researcher at an effective altruist organization like Open Phil. Are there some other options that are at the top of your mind at the moment?
Amanda Askell: Also, you can end up doing research that’s relevant to industry, for example. So, if you end up doing technology research, this could be relevant to technology companies. And I think people shouldn’t rule out going into research or think tanks that they just think are really important, but that aren’t necessarily effective altruism oriented or whatever. Those are the kinds of areas that I think are in the option set.
Robert Wiblin: How are you going to weigh these up? It would be interesting to walk through the decision that you’re making now. Even if you might not want to share all of your thoughts, we can at least get some of the considerations out, so people can see how you think this through. Do you see becoming an academic as potentially high impact?
Amanda Askell: I think that becoming an academic can be high impact long term, and also you have a lot of interaction with students, and that can be quite impactful, I imagine. But I think it’s things like the public intellectual route, where you do important work often later on in your career or when you’re more senior, that are the route to impact for a lot of academics. Or at least, to clarify, I think that’s probably true of philosophy academics, but that’s mainly because philosophy academics are less likely to interact directly with other fields and with policy. There are lots of fields, like economics and policy, where academics can actually have a big influence on things like policy and governance. That’s why I say I think philosophy is more public-intellectual-heavy in its impact, basically.
Robert Wiblin: How does going into academia compare with working at an effective altruist organization, or in a think tank or a foundation that you feel shares your values?
Amanda Askell: I think this is a really hard question to answer, because there's a route to impact via doing important academic work, especially important academic work that is accessible to the public, and you're in a better position to do that if you're working within the university system. But it could also be the case that there are some really important and urgent research topics that we just need people working on now, and then figuring out the best way to continue working on those topics even if it's not within academic departments, at least in the short term.
I guess my plan is to spend some time doing research on the topics that I think are really important, then to figure out which of those topics I'm best at and where I think I would have the largest impact, and then to use that to guide the decision I make. So, if it's the case that I can do that better within an academic post, then that's what I would aim for. And if I think I could do it better at other organizations, then that's what I'd aim for.
Robert Wiblin: You mentioned working in technology companies or in policy. Are there any specific options that you have in mind there?
Amanda Askell: No. If anything, I feel like I'm putting my money where my mouth is when it comes to value of information and the explore-exploit trade-off. When you start to work somewhere with a fairly prescribed role, in many ways you're in exploit mode, because you have to work within the confines of the role that you selected. Whereas if you're not sure what the best thing for you to be doing is, I think it can make more sense to have an exploratory period where you explicitly don't commit to a given role, in order to be in a better position to go to someone later and say, "This is what I think I'm really good at. This is what I think is really important. Do you think it's important? Do you want someone working on this?" Then you go into exploit mode and do that thing.
The main thing I'm trying to do at the moment is actually leave my options completely open and have this phase where I can explore really important topics and see if I can do good work in those areas.
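To make the explore-exploit framing concrete, here is a minimal sketch of an epsilon-greedy multi-armed bandit, one standard formalization of the trade-off. The "career options" and their payoff numbers are purely hypothetical placeholders, not anything Amanda endorses:

```python
import random

# Hypothetical "arms" with made-up true payoffs, unknown to the decision-maker.
TRUE_PAYOFFS = {"academia": 0.6, "think_tank": 0.7, "industry_research": 0.5}

def pull(arm):
    """Simulate one noisy observation of how well an option works out."""
    return TRUE_PAYOFFS[arm] + random.gauss(0, 0.3)

def epsilon_greedy(arms, rounds=1000, epsilon=0.2):
    """Estimate each arm's payoff, exploring with probability epsilon."""
    counts = {a: 0 for a in arms}
    estimates = {a: 0.0 for a in arms}
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(arms)                    # explore: try anything
        else:
            arm = max(arms, key=lambda a: estimates[a])  # exploit: best so far
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

print(epsilon_greedy(list(TRUE_PAYOFFS)))
```

Roughly, an early career phase with a high epsilon corresponds to the "exploratory period" Amanda describes, while committing to a role corresponds to dropping epsilon toward zero.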
Robert Wiblin: Okay. So, it sounds like for a while now you’re planning to not really make any commitments and instead just work on writing what you think is most important?
Amanda Askell: Making no commitments is probably a bit too strong, and it's not like I'm going to work on just anything. I've done some research into the topics that I think are potentially really important, and I will work on those topics in the near term and see if I'm any good at it, basically. I'm not going off to do research into the best shape of surfboards; I'm sure there's already a lot of research on that, and I can't contribute much to it.
Robert Wiblin: Yeah, actually almost certainly much more than on infinite ethics. But when you finish a PhD, don't you usually have to immediately apply for academic jobs?
Amanda Askell: You would usually apply in the fall. A big question for me is whether I will apply in the fall. I think there's some chance I would apply for postdoctoral positions in the fall, but not necessarily tenure track jobs. And there's some chance I just won't apply for anything. In many ways, I feel like I'm exposing myself a little bit here, because it feels very risky to me. There's this idea that academia, or at least philosophy, is very punitive towards people who leave for any given period of time. So if there's a gap, people will assume that you were applying for jobs and not getting them, or they'll assume that you're somehow not committed to academia. This has always seemed really negative to me, because if someone goes away and spends some time thinking about what they want to do, and then comes to the conclusion that these academic jobs actually are the right thing for them, they're in many ways in a much better position than someone who just applied because it was the thing that was expected of them.
It's risky. But if I was going to apply, I'd have to apply in the fall, and at the moment it's looking more likely that I won't do that.
Robert Wiblin: It sounds like one of the most important things for you at the moment is keeping your options open and being able to go down other paths, or reverse track.
Amanda Askell: Yeah.
Robert Wiblin: How much do you think you'll decide between these options based on their expected social impact? Is that a top consideration for you, or just one of several?
Amanda Askell: I think for me it's now the top consideration. At the moment I'm not making career decisions that are self-interested, and I think I'm taking on risks that I wouldn't otherwise take if I didn't think that this was actually the way to do the most good.
Robert Wiblin: What kinds of risks are you referring to?
Amanda Askell: The potential of not applying for academic jobs is a big one. Academia is the thing that I've optimized my life for at least since I was in my early 20s, maybe since I was 21. Also, I'm accepting not having stable employment in order to have this ability to explore topics a little bit more, and I don't think I would have been happy with that otherwise. I've really tried to internalize the idea that I should just be trying to do a bunch of good with my career, and I think if I hadn't, then my career path would be quite different at the moment.
Robert Wiblin: You’d be kind of playing it safe and doing the-
Amanda Askell: I'd be playing it much safer. Yeah, I think I'm naturally extremely risk averse with my career. I think some people are, and it can be quite damaging in some ways. I'm trying to be much more risk neutral than I would otherwise be.
Robert Wiblin: When you're thinking about the options you have in front of you, what are some of the key questions you're asking to figure out which one of them is going to be highest impact?
Amanda Askell: I've mainly been thinking about research, and I should potentially branch out from that; it's hard for me to get out of that research mindset. But the questions are often: How important is this topic? How important is it that we solve it really soon? I think academics can be inclined to go towards whatever is interesting as a puzzle, and if it also seems important, that's great. Whereas actually, you shouldn't care as much about whether it's interesting, and a topic can be important but not urgent. Instead we should be focusing on a mix of urgency and importance.
I tend to think about that, and then I combine it with: what am I good at, and what are my skills? It might be that I'm better at, say, communicating ideas, and that there's a lack of good, clear write-ups of things, in which case you have this opportunity to come in and fill that gap. So it's figuring out what I'm best at, plus what I think are important and urgent research topics.
Robert Wiblin: Makes sense. How do you figure out what your comparative advantage is? What's the best personal fit?
Amanda Askell: It's a mix. I often just ask people, I suppose.
Robert Wiblin: What do they think you’d be good at?
Amanda Askell: Or you just do things and then you get feedback from people of the form, "Oh, that thing was really good," or, "We've got a million people who are already good at this other thing, and you're also good at it, but it's not a particularly amazing skill." Plus, probably, a really high amount of intuition.
I think other people will often explicitly reason more than I do. I often think that my implicit reasoning brain is a little bit stronger than my explicit reasoning brain unless I do some of the work.
Robert Wiblin: When it comes to these kind of practical decisions?
Amanda Askell: When it comes to just most things. I often just think that I solve all my research problems while sleeping or something, or just like while walking down the street thinking about flowers or something. I just feel like the thing that’s turning in the background is the thing that’s doing most of the work sometimes.
Robert Wiblin: Yeah. So, you're very interested in exploration and flexibility, but how much are you also thinking about which next roles will skill you up the most?
Amanda Askell: This is a big one. I'm explicitly trying to carve out a bunch of time in my life both to learn about things that are relevant to the research I want to do, and also to explore areas that are new to me, because it seems extremely important that I get a sense of how much I would have to learn to be really up to date in various important topics. I strongly favor things that I think would increase my skill set. I'm much more inclined to work on research topics that I think are at the edge of my ability than on topics that I feel very comfortable with and think are important, just because I get much more information, plus I actually learn some skills along the way.
Robert Wiblin: Having invested so much time in formal education, though, do you just feel like now's the time to cash out and do something, rather than focusing on getting better?
Amanda Askell: Strangely not, but that's because I've just done the PhD. I have been learning things for several years now, but when you learn a new area, it's like learning a skill: you go from being terrible to pretty good really quickly, and then you go from pretty good to really, really good incredibly slowly. During the PhD I've had this one topic where I'm trying to go from pretty good to really, really good; that's what a lot of the PhD is. You're getting really diminishing returns in terms of what you're learning, because you're having to reach the peak of understanding, getting to the edge of a given literature so you can say new things.
I love the kind of learning that is going back to things I knew very little about, because it's just super satisfying. Right now I'm almost the opposite: I want to learn a bunch of things I'm just terrible at.
Robert Wiblin: What properties of the jobs you consider taking matter other than their social impact? What are the most important other considerations for you?
Amanda Askell: I think that sometimes we can be a bit … I, at least, can be a little bit indifferent to my own needs and fail to recognize that this can really impact my life in a negative way. Things like having job security, having some income security, having things like health insurance. I can ignore those things until I realize that I actually can't do good work without some sense of psychological security that I'm not going to have no job and no place to stay in a few months. For me, realizing that was like, okay, I actually should focus a little bit more on making sure that I'm just secure. It's the scatterbrained philosopher thing of just not thinking about this and then being like, "Oh,-"
Robert Wiblin: You’re probably anxious.
Amanda Askell: Yeah, exactly. Like, maybe because I forgot to buy health insurance and-
Robert Wiblin: I could be fired at any moment.
Amanda Askell: Yes, exactly. I have no backup plan and all this kind of stuff. So yeah, I think for me it’s a mix of creating a space where I can actually do good work and that involves taking care of your basic human needs. I’m trying to be a bit more attentive to that than I have been in the past. So, I’m slowly learning.
Robert Wiblin: Okay. You talked a bit about the urgency of questions. I guess you're thinking that some questions are important to solve, but they don't have to be solved right now or by this generation, whereas with these other questions, even if they're not as important, we need to know the answers immediately because they're guiding our current decisions. What do you think are the most urgent research questions within philosophy or related areas?
Amanda Askell: Some things that I think are going to be really important, and that are getting increased attention, are ethics and policy questions surrounding new technology. I know that you had Miles on to talk about AI policy. I think that's a prime example of an area that is going to be increasingly important, but that also strikes me as urgent in the sense that we've got to have more people working on it right now.
I think other questions that are important and somewhat urgent could be questions that relate to things like cause prioritization and how to decide which causes to focus on. Some of that is abstract work and some of it is more concrete. I've talked in abstract terms about value of information, but actually working out how much information you get from either investing in or doing research on a given topic is valuable and important. It would be really nice to be able to quantify that.
There are a bunch of questions about where you should actually focus your resources that I think could be really important, and it would be really good to have people working on them right now. Those are the two that strike me as important direct research questions. Then there are others where I think the communication around them can be important. So, maybe some stuff in population ethics, or things about the long run future and the arguments in favor of focusing on it. I like it when I see people both working on those questions and also communicating them in a way that's clear and makes the arguments explicit.
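One standard way to quantify what Amanda is gesturing at is the expected value of perfect information (EVPI): compare the expected payoff of acting now with the expected payoff if research first revealed the true state of the world. Here is a minimal sketch, with entirely made-up probabilities, causes, and payoffs:

```python
# Hypothetical numbers: two funding options, two possible states of the world.
p = {"state_A": 0.5, "state_B": 0.5}           # P(state)
payoff = {
    "fund_cause_1": {"state_A": 10, "state_B": 2},
    "fund_cause_2": {"state_A": 4, "state_B": 6},
}

def expected(action):
    """Expected payoff of an action under current uncertainty."""
    return sum(p[s] * payoff[action][s] for s in p)

# Best you can do by acting now, without further research.
best_now = max(expected(a) for a in payoff)

# Best you could do if research first revealed the true state.
best_informed = sum(p[s] * max(payoff[a][s] for a in payoff) for s in p)

evpi = best_informed - best_now   # upper bound on what the research is worth
print(best_now, best_informed, evpi)   # 6.0 8.0 2.0
```

Here the research could be worth at most 2 units of payoff. Real cause prioritization involves imperfect information and much messier models, but the structure of the calculation is the same.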
Robert Wiblin: It seems like in choosing between the options in front of you, you're likely to end up doing research, and the question is where you want to be doing that research. Because there are some tradeoffs between, perhaps, the prestige of the organization you're associated with, what topics you can study, what kind of freedom you have to work on different things, where you have to publish, and who you have to impress. What are the considerations that you have in mind there?
Amanda Askell: A lot of people talk about the importance of research freedom, and by that they mean something like the freedom to work on any topic you want. But it's important to note that even within academic jobs, that doesn't strike me as the kind of freedom you have, because there's this constraint of having to publish, and to publish in top journals within your field, and the papers that are acceptable to top journals in your field might cover a fairly constrained set of topics. So by research freedom there, you don't mean "I can literally work on anything," because if you want to get tenure, you're going to have to work on topics that are publishable. I think it's similar if you're trying to do research outside of this traditional path, outside of universities, for organizations for example. The question isn't "do I have complete autonomy over what I research?" but rather "are my values and what I want to work on aligned with this organization, and will I be able to do the work that I think is really important within this role that I have?"
That could mean that you have more research freedom at a think tank that is more aligned with your values than you would at an academic department where technically you can work on anything, just because at the think tank you can actually write papers that you think are great and important but that no journal in your field would publish. If that's the case, then you have more freedom to work on the things you really want to work on. So I think it can be easy to think of research freedom as this thing where I just wake up in the morning and write on anything I want to write on. Instead, I always want my writing and my work to be useful for some end. It's just: where can you do the best work that's actually useful, whether for a community, for the think tank you're working for, or for whatever ends your research is intended to meet?
That's not always going to mean that you have the most freedom in an academic job. It might mean that you have more freedom working within a fairly constrained position in a think tank, for example. I think it's really important to bear in mind that it can be easy to hear "research freedom" and think of this kind of naive "do what you want." I just don't think that's how to conceive of it, I guess.
Robert Wiblin: I was actually thinking the reverse: that probably academia would have the greatest constraints on you, along the lines you're saying. Whereas if you could find a private organization that shares your values, or a donor who wants to fund what you actually care about, then in some ways it's easier to do it, a bit more as an independent agent, or as part of an organization that you've specifically chosen for this purpose.
Amanda Askell: Yeah, and you can also get that within academia; there are an increasing number of academic institutes that are effective altruist affiliated or are trying to do work in important areas. So it's not that you can't do this within academic departments. But at least if you're going into jobs in the U.S., my big worry is just that you have this pressure to publish, and to publish in top journals, until you have tenure, and that means that for five to seven years you're really constrained to writing on the topics that are publishable within those journals. If you have a really important paper that's practically relevant, say on how to assess the value of information of an intervention, to use one of the examples from earlier, and you just can't get it published, and you know it's not publishable, it's going to be really hard for you to work on that paper. In fact, you may just need to set it aside.
I think that in many ways you can have more freedom just going somewhere that’s a bit more aligned with your values, whether within academia or outside of it I guess.
Robert Wiblin: It sounded like maybe you’re very interested in technology and strategies around the development of new technologies, AI being the most prominent example. Can you imagine working on that? Like going into an organization focused on AI specifically?
Amanda Askell: Oh, yeah. This is the topic that I'm primarily moving my research into at the moment. I think there are a lot of issues in both ethics and policy surrounding artificial intelligence that are tractable and important and interesting, and that I look forward to working on, and have started doing some research on already. So, yeah, I'm very interested in those areas, and-
Robert Wiblin: You’re dipping your toe in the water at the moment?
Amanda Askell: Yeah. I have been dipping my toe in, and I hope to basically start doing a lot of research in that area.
Robert Wiblin: So, I’ve noticed that friends of mine who studied philosophy seem to be very clear thinkers and unusually clear communicators. Do you think that’s because they learned a lot whilst studying philosophy or were they just unusually rational to begin with?
Amanda Askell: I think this is actually an incredibly hard question to answer a priori, because there's definitely a selection effect: if you like analyzing arguments and coming up with counterarguments, and you're attracted to that clear, rigorous style of thinking, then philosophy can be an attractive field. But I also think that the extent to which it's a selection effect can sometimes be a bit overplayed.
When I'm teaching, I think it's easy to assume that students either get it or they don't. And that's because the people teaching are the people who just got it: they naturally took to philosophy like a duck to water. So when they see students struggling, they can't really empathize with why they would be struggling. One thing I've found really interesting when teaching is that if I felt a student wasn't understanding something, I tried to really see why they might not understand it, rather than assuming it's just some natural inability. The number of times you can make a huge amount of progress with someone that way, by realizing there's something fundamental that they're not quite seeing or getting, helping them get that thing, and then watching them make a huge amount of progress, has made me a bit more inclined to think that maybe we're actually doing something that helps people see something some people just see naturally, like the structure of an argument. And I love trying to find different ways of breaking down arguments so that they meet different people's understandings and styles.
I think I once talked about this as sequences versus clouds. The idea is that we sometimes teach arguments as if they're a series of deductions, these linear things: P; if P then Q; therefore Q. Q if and only if R; therefore R. And we do this in a sequence. For a lot of people, it's better to take an argument that someone gives you in a newspaper, take the conclusion, and negate it, so say it's false. Then put all of their assertions back into this cluster and ask: does this cluster look consistent? If the cluster looks consistent, they can actually say, "Hey, the argument is not valid, because I could have all of your premises and not your conclusion." And if the cluster does look inconsistent, then ask: what's the least likely thing in this cluster? If the thing they pull out is not the negated conclusion, they've got a counterargument: that's the premise they disagree with.
It's not a big thing, it's just a way to re-conceptualize an argument. I find that when you do that with people, some of them just have that way of thinking, and so appealing to it rather than to the sequential way of thinking can actually really help them.
I'm actually a bit more of an optimist here. I don't think that we necessarily teach people just by teaching them logic; I think we expose them to a bunch of arguments again and again and again, and give them some tools to be able to analyze arguments, and that can actually work. This is just a hypothesis and could be totally false, but it feels like people can actually make progress on this, and it's not just something that people are born with.
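Amanda's "cloud" method has a direct formal analogue: an argument is valid exactly when its premises plus the negated conclusion form an inconsistent set. Here is a minimal sketch that brute-forces truth assignments for a toy propositional argument; the example argument and variable names are just illustrative:

```python
from itertools import product

def consistent(formulas, variables):
    """True if some truth assignment satisfies every formula in the cluster."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return True
    return False

# Toy argument: P; if P then Q; therefore Q.
premises = [
    lambda v: v["P"],                      # P
    lambda v: (not v["P"]) or v["Q"],      # if P then Q
]
negated_conclusion = lambda v: not v["Q"]  # say the conclusion is false

# An inconsistent cluster means the original argument is valid.
print(consistent(premises + [negated_conclusion], ["P", "Q"]))  # False -> valid
```

If the cluster had come out consistent, the satisfying assignment would be a counterexample: a way for all the premises to be true while the conclusion is false, which is exactly the "cloud" diagnosis that the argument is invalid.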
Robert Wiblin: Our final question: I guess you're leaving something like 25, possibly 26, years of formal education. Have you learned any tips that people can use to be more successful within the cocoon of school and university?
Amanda Askell: Something that people find strange, and I'm never sure whether to admit to this: I didn't have to take many exams when I was younger or when I was an undergraduate, but when I did, I found them kind of terrifying. So one thing I would do is, say I knew the exam would have 10 questions and I'd have to answer three. I would try to figure out the six questions that were probably going to come up, and then I would write full essay answers to those questions and just try to memorize those essays. I have quite a-
Robert Wiblin: Word for word?
Amanda Askell: Word for word. I have a really good text memory. I have a very poor memory in many respects, but I can often remember things I've written and read. I remember at least one exam, which will remain nameless, where I basically just wrote out essays that I had memorized before I went in, because I just didn't want to feel nervous in the exam. The questions I thought would come up came up, and I just wrote out my memorized answers. So with exams, you maybe don't always need to go in there and demonstrate your ability on the spot. Maybe you can just memorize a bunch of text that you've already written and then go in and write it out.
Robert Wiblin: It’s an incredible combination of mental ability and inability.
Amanda Askell: Yeah, at least knowing your weaknesses. You’re just like, “I don’t act well when I’m nervous. Things go badly. I’m going to be nervous in this exam. So, I’m going to utilize this other ability which is this memorization and then I’m just going to use that to help me through this process.”
Robert Wiblin: Did you do well in the exam?
Amanda Askell: I did pretty well in the exam. I think I was happy with the result of this method.
Robert Wiblin: I guess that’s a great suggestion there for any listeners who are very stupid, but have an incredible memory.
Amanda Askell: I don't think "very stupid" is the correct description there! "Gets bad exam nerves" is maybe more accurate.
Robert Wiblin: My guest today has been Amanda Askell. Thanks for coming on the show Amanda.
Amanda Askell: Thanks for having me.
Robert Wiblin: Just a reminder to check in on those articles about being a congressional staffer, how often you should just give up on things, whether you can guess which psychology studies have true findings, and whether it’s important to play to your comparative advantage.
The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining, talk to you next week.