#105 – Alexander Berger on improving global health and wellbeing in clear and direct ways

The effective altruist research community tries to identify the highest impact things people can do to improve the world. Unsurprisingly, given the difficulty of such a massive and open-ended project, very different schools of thought have arisen about how to do the most good.

Today’s guest, Alexander Berger, leads Open Philanthropy’s ‘Global Health and Wellbeing’ programme, where he oversees around $175 million in grants each year, and ultimately aspires to disburse billions in the most impactful ways he and his team can identify.

This programme is the flagship effort representing one major effective altruist approach: try to improve the health and wellbeing of humans and animals that are alive today, in clearly identifiable ways, applying an especially analytical and empirical mindset.

The programme makes grants to tackle easily prevented illnesses among the world’s poorest people, offer cash to people living in extreme poverty, prevent cruelty to billions of farm animals, advance biomedical science, and improve criminal justice and immigration policy in the United States.

Open Philanthropy’s researchers rely on empirical information to guide their decisions where it’s available, and where it’s not, they aim to maximise expected benefits to recipients through careful analysis of the gains different projects would offer and their relative likelihoods of success.

Job opportunities at Open Philanthropy

Alexander’s Global Health and Wellbeing team is hiring two new Program Officers to oversee work to reduce air pollution in South Asia — which hugely damages the health of hundreds of millions — and to improve foreign aid policy in rich countries, so that it does more to help the world’s poorest people improve their circumstances. They’re also seeking new generalist researchers.

Learn more about these and other vacancies here.

Conflict of interest disclosure: 80,000 Hours and our parent organisation, the Centre for Effective Altruism, have received substantial funding from Open Philanthropy.

This ‘global health and wellbeing’ approach — sometimes referred to as ‘neartermism’ — contrasts with another big school of thought in effective altruism, known as ‘longtermism’, which aims to direct the long-term future of humanity and its descendants in a positive direction. Longtermism bets that while it’s harder to figure out how to benefit future generations than people alive today, the total number of people who might live in the future is far greater than the number alive today, and this gain in scale more than offsets that lower tractability.

The debate between these two very different theories of how to best improve the world has been one of the most significant within effective altruist research since its inception. Alexander first joined the influential charity evaluator GiveWell in 2011, and since then has conducted research alongside top thinkers on global health and wellbeing and longtermism alike, ultimately deciding to dedicate his efforts to improving the world today in identifiable ways.

In this conversation Alexander advocates for that choice, explaining the case in favour of adopting the ‘global health and wellbeing’ mindset, while going through the arguments for the longtermist approach that he finds most and least convincing.

Rob and Alexander also tackle:

  • Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger
  • Why it’s shockingly hard to find ways to give away large amounts of money that are more cost-effective than distributing anti-malaria bed nets
  • How much you gain from working with tight feedback loops
  • Open Philanthropy’s biggest wins
  • Why Open Philanthropy engages in ‘worldview diversification’ by having both a global health and wellbeing programme and a longtermist programme
  • Whether funding science and political advocacy is a good way to have more social impact
  • Whether our effects on future generations are predictable or unforeseeable
  • What problems the global health and wellbeing team works to solve and why
  • Opportunities to work at Open Philanthropy

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Highlights

What is neartermism, and what should we call it?

Alexander Berger: Sort of accidentally, we had taken to calling it neartermism just by contrast to longtermism. Originally it had been short-termism, and that was even more of a slur [laughs]. So we’ve gone from short to near, and we felt like that was a very marginal improvement, but we thought we could do better. And so we spent a long time going through a process of trying to brainstorm names and come up with a sense of what are we really trying to do? What do we think about the affirmative case for this? Not just what is it defined against. And we did a bunch of surveys of folks inside and outside of Open Phil, and we came away thinking that ‘global health and wellbeing’ was the best option. We also thought about this phrase ‘evident impact,’ which I noticed that you used in a tweet about the show, and I think in our survey that was like the third most popular.

Alexander Berger: I think there is something in the tendency that that term gets right, which is around the idea of feedback loops and evidence and improving over time versus just this broad, utilitarian feeling of global health and wellbeing. But I like that global health and wellbeing ends up being actually about what we are about, which is maximizing global health and wellbeing rather than maximizing feedback loops or maximizing concreteness, which is a nice positive thing, but not the thing that I see as actually core to the world or core to the project.

Rob Wiblin: I suppose neartermism wouldn’t be such an unreasonable name for, say, the moral philosophy position that it’s better to benefit people sooner. If you can help someone today versus someone in a hundred years’ time, it’s just better because it happens sooner rather than later. Or, potentially, I suppose it could be a not-unreasonable name if you think it’s more important to benefit people who are alive now, rather than future generations. But of course, there’s lots of other reasons why people work on things like GiveWell and reducing poverty and so on.

Alexander Berger: Yeah, and it seems to me that nobody… I think the philosophical position that it’s better to help people sooner rather than later does not seem to have very many defenders. And I certainly wouldn’t want to sign up for that position. I think there’s probably some lay appeal to it. I think that part of the concern with neartermism is that it seems to be more about population ethics, or your view on temporal discounting, which is very much not how we think about it.

Arguments for working on global health and wellbeing

Alexander Berger: I think arguments for global health and wellbeing are first and foremost about the actual opportunities for what you can do. So, I think you can just actually go out and save a ton of lives. You can change destructive, harmful public policies so that people can flourish more. You can do so in a way that allows you to get feedback along the way, so that you can improve and don’t just have one shot to get it right. And at the end, you can plausibly look back and say, “Look, the world went differently than it would have counterfactually if I didn’t do this.” I think that is pretty awesome, and pretty compelling.

Alexander Berger: I [also] think we just don’t have good answers on longtermism. The longtermist team at Open Phil is significantly underspending its budget because they don’t know where to put the money.

Alexander Berger: When I think about the recommended interventions or practices for longtermists, I feel like they either quickly become pretty anodyne, or it’s quite hard to make the case that they are robustly good. And so I think if somebody is really happy to take those kinds of risks, really loves the philosophy, is really excited about being on the cutting edge, longtermism could be a great career path for them. But if you’re more like, “I want to do something good with my career, and I’m not excited about something that might look like overthinking it,” I think it’s pretty likely that longtermism is not going to be the right fit or path for you.

Cluelessness

Rob Wiblin: Yeah, it’s interesting that on both the most targeted longtermist work and on the broader work people talk about this term ‘cluelessness.’ Basically because the world is so unpredictable, it’s really hard to tell the long-term effects of your actions. If you’re focused on the very long-term future, then it’s just plausible that it’s almost impossible to find something that is robustly positive, as you say. On almost anything that you can actually try to do, some people would argue that you could tell a similarly plausible story about how it’s going to be harmful as how it’s going to be helpful. We were talking about how you can do the same thing with the broader work — like improving judgment, maybe that’s bad because that would just lead people to be more aggressive and more likely to go to war. I suppose if you put that hat on, where you think it’s just impossible to predict the effect of your actions hundreds of thousands of years in the future, where do you think that leads?

Alexander Berger: I think cluelessness cuts against basically everything, and just leaves you very confused. One place where I encountered it is, I mentioned earlier this idea that if you lived 10,000 years ago and you saved your neighbor’s life, most of the impact that you might have had there — in a counterfactual utilitarian sense, at least in expectation — might have run through basically just speeding up the next 10,000 years of human history such that today maybe a hundred more people are alive because you basically sped up history by one-millionth of a year or something.

Alexander Berger: That impact is potentially much larger than the immediate impact of saving your neighbor’s life. When that happens with very boring colloquial-type things… By the way, that argument is from a blog post by Carl Shulman.
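The back-of-the-envelope arithmetic behind that Shulman-style estimate can be sketched as follows. All numbers here are illustrative assumptions chosen to match the rough figures mentioned in the conversation, not calculations from the episode or from Open Phil:

```python
# Toy version of the "speeding up history" estimate: saving one life
# 10,000 years ago is assumed to advance all subsequent history by a
# tiny fixed amount, and the extra people alive today scale with that
# advance times the current rate of population growth.

speedup_years = 1e-6  # assumed: history advanced by one-millionth of a year
population_growth_per_year = 80e6  # assumed: ~80 million net new people per year

extra_people_today = speedup_years * population_growth_per_year
print(f"Extra people alive today: ~{extra_people_today:.0f}")
# → Extra people alive today: ~80
```

Under these assumed inputs the sketch lands in the same ballpark as the “maybe a hundred more people” figure quoted above; the point is just that the answer is dominated by the speed-up term rather than the directly saved life.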

Alexander Berger: The impact that you actually would want to care about, maybe, from my utilitarian ethics, is this vastly disconnected future quantity that I think you would have had no way to predict. I think it makes you want to just say wow, this is all really complicated and I should bring a lot of uncertainty and modesty to it. I don’t think that should make us abandon the project, but it does make me look around and say everything here is just more complicated than it seems, and I want to be a little bit more sympathetic to people who are skeptical of totalism and making big concentrated bets, and are maybe not even that interested in altruism. Or they’re just like, I’m just doing my thing.

Alexander Berger: Again, I think it would be better if people were more altruistic. I don’t think that’s the right conclusion, but really genuinely feeling the cluelessness does make me feel more like, wow, the world’s really rich and complicated and I don’t understand it at all. I don’t know that I can make terribly compelling demands on people to do things that are outside of their everyday sphere of life.

Examples of valuable feedback loops

Alexander Berger: I think a really central one is just being able to see what works and then do more of it, which is a funny low-hanging fruit. But I think often, in other categories where you don’t even know what intermediate metrics you’re aiming for, you don’t have that benefit. So for instance, the amount of resources flowing into cage-free campaigns in farm animal welfare has, I think, well over 10x-ed because they were working. And it was like, oh okay, we have found a strategy or tactic that works, and we can scale. I think that accounts for a very material portion of the whole farm animal welfare movement’s impact over the past decade. But if you were somehow unable to observe your first victories, you wouldn’t have done it. So I think that there’s something about literally knowing if something is making progress. That’s a really, really important one.

Alexander Berger: Also, on the other side, being able to notice if the bets aren’t paying off. So, we have a program that’s focused on U.S. criminal justice reform. We don’t necessarily do calculations for every individual grant. We make the big bets on Chloe Cockburn, who leads that program. But if after five years the U.S. prison population was growing, that would raise questions for us, even though we don’t observe the counterfactual. Given our cost-effectiveness bar, and the level of reduced incarceration we would need to be hitting to make this pencil compared to other opportunities for us, being able to observe the state of the world and ask, “Is the state of the world consistent with what it would need to be in order for these investments to be paying off?” is an important benefit that you can get on the neartermist global health and wellbeing side that you can’t necessarily get in the longtermist work, I don’t think.

Alexander Berger: Another one is just really boring stuff, but you can run randomized controlled trials that tell you, okay, the new generation of insecticide-treated bed nets is 20% more effective, because resistance to the old insecticide had been reducing their effectiveness by 20%. You wouldn’t have necessarily known that if you couldn’t collect the data and improve. So none of those are necessarily order-of-magnitude kinds of things, but I do think if you think about the compounding benefits of all of those, and the ways in which basically the longtermist project of trying something and maybe having very little feedback for a very long time is quite unusual relative to normal human affairs… It wouldn’t shock me if the expected value impact of having no feedback loops is a lot bigger than you might naively think. That’s not to say that longtermists have no feedback loops though, they’ll see, are we making any intellectual progress? Are we able to hire people? There are things along the way, so I don’t think it’s a total empty space.

Rob Wiblin: Yeah, longtermist projects are pretty varied in how much feedback they get. I mean, I suppose people doing really concrete safety work focused on existing ML systems, trying to get them to follow instructions better and not have these failure modes in a sense, they get quite aggressive feedback when they see, have we actually fixed this technical problem that we’ve got?

Alexander Berger: Yeah and I think that work is awesome. My colleague Ajeya has written about how she’s hoping that we can find some more folks who want to do that practical applied work with current systems and fund more of it. Again, this is just a heuristic or a bias of mine, but I’m definitely a lot more excited to bet on tactics that we’ve been able to use and have worked before, relative to models where it’s like, we have to get it right the first time or humanity is doomed. I’m like, “Well, I think we’re pretty doomed if that’s how it’s going to be.”

GiveWell’s top charities are (increasingly) hard to beat

Rob Wiblin: Open Phil has been making grants to reliable, proven GiveWell charities for a while. Things like the Against Malaria Foundation, which distributes bed nets. But it’s been hoping to maybe find things that are better than that by using science and politics and maybe other methods to get leverage, and so it’s been exploring these new approaches, trying to find things that might win out over helping the world’s poorest people. And you’d been doing that by working on scientific research and policy change in the United States, but the leverage that you’d gotten from those potentially superior approaches was something like ten to 1,000, probably closer to ten than 1,000. And that wasn’t enough to offset the 100x leverage that you get from transferring money from one of the world’s richest countries to the world’s poorest people. Is that right?

Alexander Berger: Yeah. I think that’s a great summary.

Rob Wiblin: Okay. That raises the question to me, if you were able to get even 10x leverage using science and policy by trying to help Americans, by like, improving U.S. economic policy, or doing scientific research that would help Americans, shouldn’t you then be able to blow Against Malaria Foundation out of the water by applying those same methods, like science and policy, in the developing world, to also help the world’s poorest people?

Alexander Berger: Let me give two reactions. One is I take that to be the conclusion of that post. I think the argument at the end of the post was like, “We’re hiring. We think we should be able to find better causes. Come help us.” And we did, in fact, hire a few people. And they have been doing a bunch of work on this over the last few years to try to find better causes. And I do think, in principle, being able to combine these sources of leverage to… I think of it as multiplying 100x, you should be able to get something that I think is better than the AMF-type GiveWell margin.

Alexander Berger: But I don’t think it blows it out of the water by any means. So this prefigures the conclusion in some ways from some of our recent work. I think we think the GiveWell top charities, like AMF, are actually ten times better than the GiveDirectly-type cash transfer model of just moving resources to the poorest people in the world. That already gives you the 10x multiplier on the GiveWell side, and so then we need to go find something that is a multiplier on top of that. I actually think that’s quite a bit harder to do, because that’s a much more specialized, targeted intervention relative to the relatively broad, generic, just give cash to the world’s poorest people, which is a little bit easier to get leverage on.

Alexander Berger: I do think we should be optimistic. I think we should expect science and advocacy causes that are aimed towards the world’s poor to be able to compete with the 10x multiplier of cost effectiveness and evidence that GiveWell gets from AMF to GiveDirectly. But I’m uncertain to skeptical after a few years of work on this that we’re going to be able to blow it out of the water. And so I think about it as, it gets you, with a lot of work and a lot of strategic effort, into the ballpark. And so we have a couple of these new causes that I could talk about where we think we’re in the ballpark of the GiveWell top charities, but we haven’t found anything yet that feels like it’s super scalable and, in expectation, ten times better than the Against Malaria Foundation. We’re working hard to find stuff that’s in the ballpark.
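The multiplier logic in this exchange can be made concrete with a toy calculation. The specific numbers below are illustrative assumptions drawn from the rough figures mentioned in the conversation, not Open Phil estimates:

```python
# Toy comparison of philanthropic "leverage" multipliers, all expressed
# relative to ordinary spending in a rich country (= 1x).

cash_transfer_multiplier = 100      # assumed: rich-to-poor transfer gap, ~100x
givewell_vs_cash_multiplier = 10    # assumed: AMF-style top charities vs. cash
us_science_policy_leverage = 10     # assumed: "closer to ten than 1,000"

# GiveDirectly-style cash: ~100x. GiveWell top charities stack the 10x
# evidence multiplier on top of the transfer gain: ~1,000x.
givewell_bar = cash_transfer_multiplier * givewell_vs_cash_multiplier

# Science/policy aimed at rich-country recipients only gets its own
# leverage term, so it falls short of even the plain cash baseline.
print(cash_transfer_multiplier, givewell_bar, us_science_policy_leverage)
# → 100 1000 10

# To match the GiveWell bar, science/policy aimed at the world's poor
# would need a leverage multiplier (on top of the 100x transfer gain)
# of at least:
print(givewell_bar / cash_transfer_multiplier)
# → 10.0
```

This is why 10x leverage from U.S. science and policy work doesn’t beat bed nets, and why work aimed at the world’s poor only needs to sustain a further ~10x multiplier to reach the ballpark of the GiveWell top charities.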

Why it should be legal to sell your kidney

Alexander Berger: Around the time when I donated, I actually wrote an op-ed in The New York Times arguing that we should have a government compensation system for people who want to donate. Because there’s just an addressable shortage, where you can donate and live a totally good life. It’s not very risky. This is fine. And like, there’s a government system for allocating kidneys to people who need them the most. And it would actually literally save the government money, because people are mostly on dialysis, which is really expensive and it’s very painful and you die sooner. And so giving them a transplant, it saves money, it extends life. And just not enough people sign up to do it voluntarily. And so we can make it worth their while, similarly to how we pay cops and firefighters to take risks.

Alexander Berger: I don’t feel like this is some crazy idea. But I think that the reason it doesn’t happen is actually opposition from people who are worried about coercion and have a sense of bodily integrity as something inviolable.

Alexander Berger: That makes them feel like there’s something really bad here. Honestly, I think the Catholic Church is actually one of the most important forces globally against, in any country, allowing people to be compensated for donation. It’s very interesting to me, because I just do not share that intuition. I mean, if you think about it as treating people as a means to an end, I could imagine it. Like if you thought it was a super exploitative system, where the donors were treated really badly, I could get myself in the headspace.

Rob Wiblin: But then it just seems like you could patch it by raising the amount that they’re paid and treating people better. So banning it wouldn’t be the solution.

Alexander Berger: Yeah, treat people better. One of the things I said in my op-ed, I think, was like look. We should pay people and treat them as good people. A little bit like paid surrogacy. I think people have bioethical qualms about it sometimes, but by and large, people think of surrogates as simultaneously motivated by the money and good people doing a good thing. And I think we should aspire to have a similar system with state payments for people who donate kidneys. Where it’s like, you did a nice altruistic thing, and it paid for college or whatever.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe anywhere you get podcasts.

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.