Conversation with Paul Christiano on Cause Prioritization Research


Participants

  • Paul Christiano: Computer science PhD student at UC Berkeley
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute

Summary

This is a verbatim email conversation from 26 March 2014. Paul is a proponent of cause prioritization research. Here he explains why he supports prioritization research and makes some suggestions about how to do it.

Note: Paul is Katja’s boyfriend, so take his inclusion here as a relevant expert with a grain of salt.


Katja: How promising do you think cause prioritization is generally? Why?

Paul: Defined very broadly (all research that helps us choose what general areas we should be looking into for the best philanthropic impact), I think it is a very strong contender for best thing to be doing at the moment. This judgment is based on optimism about how much money could potentially be directed by the kind of case for impact which we could conceivably construct, but also on the belief that there is a good chance that over the very long term the philanthropic community will be radically better informed and more impactful (think many times) than it currently is. If that’s the case, then it seems likely that a primary output of modern philanthropy is moving towards that point. This is not so much a story about quickly finding insights that let you find a particular opportunity that is twice as effective, and more a story of accumulating a body of expertise and information that has a very large payoff over the longer term. I think that (not coincidentally) one can also give shorter-term justifications for prioritization vs. direct spending, which I also find quite compelling but perhaps not quite as much so.

Katja: Why do you think not enough is done already?

Paul: You could mean what evidence do I have that not enough is done, or what explanation can I offer for why not enough has been done even if it really is a good thing. I’m going to answer the second.

I think a very small fraction of philanthropists are motivated by a flexible or broad desire to do the most good in the world. So there aren’t too many people who we would expect to do this kind of thing. As a general rule there seems to be relatively little investment in expensive infrastructure which is primarily useful to other people, and relatively little investment in speculative projects that will take a long time and don’t have a great chance of paying off. I do think we are seeing more of this kind of thing in general recently, due to the same kinds of broader cultural shifts that have allowed the EA movement to get traction.

Katja: How much better do you think the very best interventions are likely to be than our current best guesses?

Paul: This kind of question is hard to answer due to ambiguity about “very best.” I’m sure that in some sense there are very simple things you could do that are many orders of magnitude more cost-effective than the interventions we currently support. So it seems like this really needs to be a question about investigative effort vs. effectiveness. In the very long term, I would certainly not be surprised to discover that the most effective philanthropy in the future was ten or a hundred times more effective than contemporary philanthropy.

Katja: I believe you value methodological progress in this area highly. Is that true? What kind of methodological progress would be valuable?

Paul: There are a lot of ways you could go about figuring stuff out, and I expect most problems to be pretty hard without a long history of solving similar problems. Across fields, it seems like people get better at answering questions as they see what works and what doesn’t work to answer similar questions, they identify and improve the most effective approaches, and so on. This is stuff like, what questions do you ask to evaluate the attractiveness of a cause or intervention? Who do you talk to how much, and what kind of person do you hire to do how much thinking? How do you aggregate differing opinions, and what kind of provisional stance do you adopt to move forward in light of uncertainty? How confident an answer should you expect to get, and how should you prioritize spending time on simple issues vs. important issues? You could write down quite a lot of parameters which you can fiddle with as part of any effort to figure out “how promising is X?” and there are way more parameters that are harder to write down but inevitably come up if you actually sit down and try to do it. So there is a lot to figure out about how to go about the figuring out, and I would imagine that the primary impact of early efforts will be understanding what settings of those parameters are productive and accumulating expertise about how to attack the problem.

Katja: Why is it better to evaluate causes than interventions or charities?

Paul: I could give a number of different answers to this question; that is, I feel like a number of considerations point in this direction.

One is that evaluating charities typically requires a fairly deep understanding of the area in which they are working and the mechanism by which that charity will have an impact. That’s not the sort of thing you can build up in a month while you evaluate a charity; it seems to be the sort of thing that is expensive to acquire and is developed over the course of funding many particular interventions. So one obvious issue is that you have to make choices about where to acquire that expertise and do that further investigation prior to being really equipped to evaluate particular opportunities (though this isn’t to say that looking at particular opportunities shouldn’t be a part of understanding how promising a cause is).

Another is that there are just too many charities to do this analysis for more than a few of them, and the landscape is changing over time (this is also true for interventions, though to a lesser extent). If you want to contribute to a useful collective understanding, information about these broader issues is just more broadly and robustly applicable. If you are just aiming to find a good thing to give to now this is not so much an issue, but if you are aiming to become better and better at this over time, judgments about individual charities are not that useful in and of themselves. Of course, while making such judgments you may acquire useful info about the bigger picture or make useful methodological progress.

My views on this question (and on all of these questions) are largely based on a priori reasoning, which makes me very hesitant to speak authoritatively about them. But it’s worth noting that GiveWell has also reached the conclusion that a cause is a good level of analysis, at least at the outset of an investigation, and their conclusion is more closely tied to very practical considerations about what happens when you actually try to conduct these investigations.

Katja: Can you point to past cause prioritization research that was high value? How did it produce value?

Paul: Three examples, of very different character:

  1. GiveWell has done research evaluating charities and interventions that has clearly had an effect in improving individuals’ giving and improving the quality of discourse about related issues, and it has made relevant methodological progress. GiveWell Labs is now working on evaluating causes, and I think their current understanding has already somewhat improved the quality of discourse and had some positive expected impact on Good Ventures’ spending. The kind of story I am expecting is more long-term progress, so while I think good work will produce value along the way, I am very open to the possibility that most of the value is coming from gradual progress towards a more ambitious goal rather than improved spending this year.

  2. Many EAs have been influenced by arguments regarding astronomical waste, existential risk, shaping the character of future generations, and impacts of AI in particular. To the extent that we actually have aggregative utilitarian values I think these are hugely important considerations and that calling them to attention and achieving increased clarity on them has had a positive impact on decisions (e.g. stuff has been funded that is good and would not have been otherwise) and is an important step towards understanding what is going on. I think most of the positive impact of these lines of argument will wait until they have been clarified further and worked out more robustly.

  3. There is a lot of one-off stuff in economics and the social sciences more broadly that bears on questions about which causes are promising, even if it wasn’t directly motivated by them. Moreover, I think that if you were motivated by those questions and doing research in the social sciences or supporting research in the social sciences, you could zoom in on the most relevant ones. I’m thinking of economics that sheds light on the magnitude of the externalities from technological development, the impact of inequality, or determinants of good governance; or history that sheds light on the empirical relationship between war, technological development, economic development, population growth, moral attitudes, etc.; or so on. One could potentially lump in RCTs that shed light on the relationship between narrower interventions and more intermediate outcomes of interest. All of this stuff has a more nebulous impact, in that it causes the modern intellectual elite to have generally more sensible views about relevant questions.

Katja: If new philanthropists wanted to contribute to this area, do you have thoughts on what they should do?
(if they wanted to spend $10,000?)
(if they wanted to spend $1M?)

Paul: If it were possible to fund GiveWell Labs more narrowly, that would be an attractive opportunity, and GiveWell seems like an alright bet anyway. Their main virtue as compared to others in the space is that they are on a more straightforward trajectory, where they have an OK model already and can improve it marginally.

It seems like CEA has access to a good number of smart young people who are unusually keen on effectiveness per se; it seems pretty plausible to me that they will eventually be able to turn some of that into valuable research. I think they aren’t there yet (and haven’t really been trying) so this is a lot more speculative (but marginal dollars may be more needed). If it were possible to free up Nick Beckstead’s time with more dollars I would seriously consider that.

Katja: If you had money to spend on cause prioritization broadly, would it be better spent on prioritizing causes, more narrow research which informs prioritization (e.g. about long run effects of technological progress or effectiveness of bed nets), outreach, or something else? (e.g. other forms of synthesis, funding, doing good projects)

Paul: The most straightforwardly good-seeming thing to do at the moment is to bite off small questions relating to the relative promise of particular causes, and then do a solid job aggregating the empirical evidence and expert opinion to produce something that is pretty robustly useful. But there is also a lot of room for trying other things. Overall it seems like the most promising objective is building up the collective stock of knowledge that is robustly useful for making judgments between causes.

Katja: It is sometimes claimed that funders care very little about prioritization research, and so efforts are better spent on outreach than on research, which will be ignored. What do you think of this model?

Paul: I think that the number of people who might care is much larger than the number who currently do, and a primary bottleneck is that the product is not good enough. Between that and the fact that I’m quite confident there are at least a few million cause-agnostic dollars a year that seem sensitive to good arguments, I would be pretty comfortable contributing to cause prioritization. Outreach might be a better bet, but it’s certainly less certain, and my current best guess is that it’s less effective for reaching the most important people than building a more compelling product.