#21 – The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.
By Robert Wiblin and Keiran Harris · Published February 27th, 2018
The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of two donors willing to take risky bets on new ideas.
Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of Open Philanthropy, which gives away over $100 million per year – and he’s hungry for big wins.
As he learned, in the 1940s, poverty reduction overseas was not a big priority for many funders. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world – thereby massively increasing food production in those countries.
Similarly in the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katharine McCormick.
In both cases, it was philanthropists rather than governments that led the way.
The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves – but to seize that opportunity they have to hire the best researchers, think long-term and be willing to fail most of the time.
Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then Open Philanthropy, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism.
We’ve recorded this episode now because Open Philanthropy is hiring for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, three specialist researchers into potential risks from advanced artificial intelligence, as well as a Director of Operations, an Operations Associate and a General Counsel.
But the conversation goes well beyond specifics about these jobs. We also discuss:
- How did they pick the problems they focus on, and how will they change over time?
- What would Holden do differently if he were starting Open Phil again today?
- What can we learn from the history of philanthropy?
- What makes a good Program Officer.
- The importance of not letting hype get ahead of the science in an emerging field.
- The importance of honest feedback for philanthropists, and the difficulty of getting it.
- How do they decide what’s above the bar to fund, and when it’s better to hold onto the money?
- How philanthropic funding can most influence politics.
- What Holden would say to a new billionaire who wanted to give away most of their wealth.
- Why Open Phil is building a research field around the safe development of artificial intelligence.
- Why they invested in OpenAI.
- Academia’s faulty approach to answering practical questions.
- What kinds of people do and don’t thrive in Open Phil’s culture.
- What potential utopias do people most want, according to opinion polls?
Keiran Harris helped produce today’s episode.
Highlights
And so there’s this kind of saying, this kind of joke that once you become a philanthropist, you never again tell a bad joke. Because everyone’s gonna laugh at your jokes whether they’re funny or not. Because everyone wants to be on your good side. And I think that can be a very toxic environment. I mean I personally am a person who really prizes openness, honesty, direct feedback. I really value it. I really value people who criticize me. But a lot of people that I interact with don’t initially know that about me, or maybe just never believe it about me.
And so if someone is worried to criticize me, I may unintentionally just be doing the wrong thing, and never learn about it. And so I think one of the worst qualities a Program Officer can have is being someone who people won’t tell the truth to.
Something you can do is you can just fund people to develop ideas, new policy ideas, that may be kind of a new way of thinking about an issue. That is different from, for example, funding elections.
You can also fund grassroots advocacy. You can fund people who are organizing around a common population or a common topic, like formerly incarcerated persons. And you can just support these people to organize and to work on issues they’re passionate about and see where that goes. You can also fund sort of think tanks that try to kind of broker agreements or try to take the new ideas that are out there and try to make them more practical.
So I think there’s a whole bunch of different things philanthropists can do, and a lot of the time, the longer term and higher risk, in some ways, the bigger impact you can have.
There doesn’t seem to be a lot of interest today in people spelling out what a really good future would look like, what a really good future would look like in the long run. And, I think it’s just interesting. I don’t know if it’s always been this way, but it seems like not a very lively topic, these days, and it just makes me curious. …
And so, I tried to write different Utopias that would appeal to different political orientations. I had this theory it might break down that way. So, I wrote one that was trying to sound very libertarian, and it was like, all of our freedom, anyone can buy anything, sell anything, do anything. And then, I wrote another about how this very wise and just government tries to take care of everyone, I’m gonna ask people to rate that. And then, I had some things that were supposed to appeal to conservatives, as well. And then, it just turned out that the freedom ones just did the best, and the government one just did the worst, even with this very left-leaning population.
Articles, books, and other media discussed in the show
- Job opportunities at Open Phil
- Casebook for The Foundation: A Great American Secret
- Open Phil’s case studies in the history of philanthropy
- The Rise of the Conservative Legal Movement: The Battle for Control of the Law
- Update on Cause Prioritization at Open Philanthropy
- Our Grantmaking So Far: Approach and Process
- Hits-based giving by Holden Karnofsky
- Three Key Issues I’ve Changed My Mind About
- The role of philanthropic funding in politics
- Radical Empathy
- Thoughts on the Singularity Institute (SI)
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.
My guest today, Holden Karnofsky, is one of the most significant figures in the development of effective altruism. He has been applying his substantial intellectual energy to finding great giving opportunities since 2007, when he founded GiveWell with Elie Hassenfeld.
This has given him more hands-on experience of how to actually assess very varied giving opportunities, get around the huge uncertainties involved, and build a thriving research team, than anyone else I know.
As a result he has ended up as the primary advisor to Good Ventures, a cost-effectiveness focussed foundation which expects to give away billions of dollars over its lifetime.
I should also say that Good Ventures is 80,000 Hours’ largest funder, though I don’t think that’s changed anything about this interview.
We’ve recorded this episode now because Open Philanthropy is hiring for a large number of roles, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up; three specialist researchers into potential risks from advanced artificial intelligence; as well as a Director of Operations, an Operations Associate and a General Counsel.
I’ll put up a link to their jobs page. If you’d like to get into global priorities research or foundation grantmaking, this is your moment to shine and actually put in an application.
This interview is likely to be of broad interest to most regular subscribers. However, if you have no interest in working at Open Phil, you can probably skip our discussion of their office culture and the jobs they’re trying to fill towards the end.
But don’t miss the discussion at the end about Holden’s public opinion survey of different utopias.
Without further ado, I bring you Holden Karnofsky.
Robert Wiblin: Today I’m speaking with Holden Karnofsky. Holden was a co-founder of the charity evaluator GiveWell, and is now the Executive Director of Open Philanthropy. He graduated from Harvard in 2003 with a degree in social studies and spent the next several years in the hedge fund industry before founding GiveWell in 2007. Over the last four years, he has gradually moved to working full time at Open Philanthropy, which is a collaboration with the foundation Good Ventures to find the highest impact grant opportunities. Thanks for coming on the podcast, Holden.
Holden Karnofsky: Thanks for having me.
Robert Wiblin: We plan to talk about some kind of thorny methodological questions that Open Philanthropy faces, the kinds of people you’re looking to hire at the moment, because you do have a few vacancies, and how listeners can potentially prepare themselves to get a job working with you. But first off, while I think a lot of the audience will have some familiarity with GiveWell and Open Philanthropy, maybe start by telling the story of what these organizations do and how they developed.
Holden Karnofsky: Sure. I’m currently the Executive Director of Open Philanthropy, and I am a co-founder of, though I no longer work at, GiveWell, and they’re very related organizations, so maybe the easiest way for me to talk about what they do is just kind of go through the story of how GiveWell started and how that led to Open Philanthropy. GiveWell started in 2007 when Elie Hassenfeld and I worked at a hedge fund, and we wanted to give to charity, and we wanted to sort of get the best deal we could. We wanted to help the most people with the least money.
We found that we had a lot of trouble figuring out how to do this. We tried using existing charity rating systems. We tried talking to foundations, who largely didn’t tell us much. We tried talking to charities, who a lot of the time were kind of hostile and didn’t appreciate the inquiries. And at a certain point we kind of came to the conclusion that we felt there was not really a knowledge source out there that could help people like us figure out, between all the different things charities do, between, let’s say, if you’re trying to help people in Africa, providing clean water versus providing sanitation services versus providing bed nets to protect from malaria, which one could help the most people for the least money? And we were having trouble finding that, and we found ourselves very interested in it, and sort of more interested in it than our day jobs.
So we left the hedge fund, we raised startup funds from our former co-workers, and we started GiveWell. Today, GiveWell publishes research online that’s very detailed, very thorough, and sort of looks for charities that you can have confidence in. They’re cost-effective, they help a lot of people for a little money, they’re very evidence-based. The evidence has been reviewed incredibly thoroughly, and GiveWell also makes sure that there’s room for more funding, so each additional dollar you give will help more people.
GiveWell currently tracks the money that it moves to top charities, and it’s around $100 million a year going to the charities that it recommends, on the basis of its recommendations. A few years after starting GiveWell, we met Cari Tuna and Dustin Moskovitz, and they were facing kind of a similar challenge to what Elie and I had originally faced, but very different in that they were looking to give away their money as well as possible, but instead of giving away a few thousand dollars a year like Elie and me, they were looking to give away billions of dollars over the course of their lifetimes.
So in some ways there was a lot of similarity between the challenge they were facing and the challenge we had faced. They wanted to give away money, they wanted to help the most people possible for the least money, and they were having a lot of trouble finding any guidance on how to do this, any research, any intellectual debate of any kind. The difference is that I think the task they were set up to do is fundamentally different from the task GiveWell is set up to do, and so we launched something that at the time was called GiveWell Labs and has since kind of morphed into Open Philanthropy, that is about trying to help people like that do the most good with the money that they’re giving away.
There are some really important differences. In some ways the two organizations have opposite philosophies, so GiveWell tends to look for things that are really proven, where the whole case can be really spelled out online. Open Philanthropy tends to look for things that are incredibly risky, often bold, often work on very long time horizons, and often things that we feel no other funder is in a position to do and that it would take a great deal of discussion and expertise and trust to understand the case for. And so they have kind of fundamentally different philosophies and fundamentally different audiences.
GiveWell now is, I think, functioning better than it ever has without me, and I am the Executive Director of Open Philanthropy, so what I currently do is I spend my time trying to build an organization and an operation that is going to be able to give away billions of dollars as well as possible. Our current giving is between $100 and $200 million a year, so we’re kind of at an early-ish phase in our process.
Robert Wiblin: If some listeners have only just heard about this idea of effective philanthropy, why should they care? Do you think it’s a particularly important issue that people should be focused on?
Holden Karnofsky: Yeah, I think it’s very important. I mean, I think some of the interesting things about giving, whether it’s individuals giving to charity or large-scale donors giving to philanthropy, which is my current focus, I think one is that it’s just clearly an area where you can make a difference, where you can actually change the world. And I think when we look for the best things, the things that help people the most for the least money, I’ve been quite surprised personally by just how good they are and how much good you can do with how little money.
For example, GiveWell estimates that you can sort of avert an untimely death of an infant for every few thousand dollars spent, and Open Philanthropy kind of aspires, while taking more risk and working on longer time horizons, to do even better than that. So I think there are great opportunities to do good, and I also think it’s a very neglected topic. The funny thing is that there’s a lot of debate in our society about how the government should run, what the government should do. There’s even debate about what corporations should do, but there’s very little debate about how to do good charity, what foundations should do, what philanthropists should do.
It’s not considered an intellectual topic generally, and when people think about philanthropy, they kind of think of just a lot of warm fuzzy feelings, not a lot of scrutiny. They think about people putting their name on buildings, funding hospitals, and I think that’s really too bad, because actually I think when you look at the track record of philanthropy, it’s had some really enormous impacts. I think it’s changed the world for the better and for the worse and sort of been behind some of the biggest stories of the last century. For example, both the pill, the common oral contraceptive, and the Green Revolution, which is the set of developments that arguably led to several major countries developing and a billion people avoiding starvation, both of those had a really significant role for philanthropy, arguably were kind of primarily backed by philanthropy.
So philanthropy really can change the world, and in many ways I think it presents a given person with a greater opportunity to change the world than some of the topics that get more attention. And so I think the fact that it is so neglected and that it’s generally not considered an intellectual topic creates a huge opportunity to do what other people won’t and to have an outsized impact on the world.
Robert Wiblin: Initially GiveWell looked at a couple of different problems that people could donate to solve, including poverty in the United States, poverty overseas and, I guess, education in the US, and maybe some others, but it ended up basically focusing on addressing poverty in the developing world, I guess especially extreme poverty and major health problems. Is that right?
Holden Karnofsky: Yeah, that’s right.
Robert Wiblin: What kind of problems does Open Phil now focus on?
Holden Karnofsky: Open Philanthropy has looked for causes that are what we call important, neglected and tractable, and so we have kind of a broad set of causes, and especially because when we were getting started, we wanted to try a few different things and get a feel for a few different kinds of philanthropy. One thing that we don’t currently work on is global health and development, and the reason for that is that we think GiveWell’s top charities are very strong options if you’re looking to directly help the global poor. We aren’t sure. Maybe if we put in a lot of work and a lot of research, we would find things that are higher risk and better, but it’s not at all obvious that we would, and we feel that GiveWell presents an outstanding option and something that’s very hard to beat in that domain.
And so what Open Philanthropy has tried to do is look for other ways, other schools of thought on how you might have a lot of impact. The basic story here is, if you start from the place of I want to help an inordinate number of people or do an inordinate amount of good with my money, what are some ways that that might happen? The sort of GiveWell approach or the GiveWell philosophy largely comes down to the idea that by sending your money to the poorest parts of the world, to the people who need it most, you can have more impact than by sending it, let’s say, to your local community.
Open Philanthropy works with some different theories. We work on US policy, with the theory being that as Americans, as people embedded in the US, we do have some understanding and some networks of the US policy landscape, and there is a decent historical track record where sometimes relatively small expenditures by a philanthropist can have a big impact on how governments make decisions, can help governments do better work. And so you can get a big, in some sense, leverage there. You can get a big multiplier.
Another thing that we’re very interested in is scientific research, so that’s another case where, by spending a relatively small amount, you can be part of developing a new innovation that then becomes shared for free, infinitely and globally, and you can have an outsized impact that way. And then a final category for us is global catastrophic risks, where we feel that the more interconnected the world becomes, the more it becomes the case that sort of a worst case scenario could have a really outsized global impact, and there’s no particular actor that really has the incentive to care about that. Governments do not have the incentive, corporations do not have the incentive to worry about really low-likelihood, super duper worst case outcomes.
So those are some of the areas we work in, and specifically the causes that we work in have come from a mix. We spent a lot of time, we spent really a couple years, just trying to pick causes and doing research on what’s important, neglected, and tractable, and we picked our causes through a mix of what our research found and also, frankly, through our hiring. One of our philosophies is that a lot of how much good you do has to do with who you hire, whether you have the right person for the job. A lot of our philosophy is about finding great people and empowering them to be creative and make individual decisions and have autonomy.
And so a lot of the causes we work in have been shaped by whether we’ve been able to find the right person to work in them. So through that we’ve come to a relatively small set of major focus areas, and then another set of causes that I won’t go into here. The major focus areas we have at this moment, we have a big investment in criminal justice reform. We feel that the US over-incarcerates people really badly. We feel that thorough research has made us conclude that this is not helping basically anyone. This does not provide a public safety benefit, that we could reduce incarceration, reduce the horrible cost of prison, both human and financial, and really maintain or improve public safety.
And we work in that cause partly too because we think there’s an opportunity to win. We think it’s politically tractable. So out of all the causes of comparable importance, many of them you’re never going to get anything done unless you can get something done at the national level, but criminal justice reform is a state and local issue, and we think it also is sometimes less of a partisan issue. A lot of times people on the left and the right are both interested in reducing prison populations, because that is cutting government, that is saving money, and that is often helping the least privileged people. So that’s a major cause that we work in.
Another major cause we work in is farm animal welfare. We believe that one of the ongoing horrors of modern civilization is the way that animals are treated on factory farms, just incredible numbers of animals treated incredibly poorly on factory farms. We believe that we’ve seen opportunities that are relatively cost-effective where our funding has helped contribute to a movement of getting animal welfare to be a higher priority for fast food companies, for grocers, and we believe that this has led to incredible amounts of just reduction in animal suffering, improvement in farm animal welfare, for relatively small amounts of money and good cost effectiveness. So that’s a major area.
And then two other major areas are global catastrophic risks, so we work on biosecurity and pandemic preparedness. If you were to tell me that somehow the human race was going to be wiped out in the next 50 years and ask me to guess how, one of my top guesses would be a major pandemic, especially if synthetic biology develops and the kinds of pandemics that are possible become even worse, and the kind of accidents that can happen become even worse. So we have a program that’s really focused on trying to identify the worst case pandemic risks and prepare the global system for them so they become less threatening.
And then another global kind of catastrophic risk we’re interested in is potential risks from advanced artificial intelligence, which I think is something you’ve probably covered before in your podcast-
Robert Wiblin: Had a couple episodes about that.
Holden Karnofsky: … and happy to get into more detail on it. Okay, cool, and happy to get more into that in a bit. Finally, we do a fair amount of work to support the effective altruism community itself, including 80,000 Hours. And then we have a whole bunch of other work. I mean, we do climate change, and we work on some other US policy issues.
Finally, another major area for us is scientific research funding. That’s where we kind of look for moonshot science that we can support, for example, trying to speed up the use of gene drive technology to eradicate malaria.
Robert Wiblin: So on the global catastrophic risks and science research sides of things, we’ve had an episode with Nick Beckstead, who works at Open Philanthropy, where we talked quite a lot about those two. And on the factory farming, we have a very popular episode with Lewis Bollard, where we spent about three hours talking about all of the different angles on that. So if you’re interested in hearing about those two causes in particular, then we have other episodes for you to check out that I’ll stick up links to. And I’ll also mention that one of the last things we’re going to talk about is a whole lot of vacancies that are coming up at Open Phil, so this interview could be quite long, but if you’re interested in working with Holden, then stick around or maybe just skip to the end to hear that section about what kinds of people they’re looking for and how you can apply to work at Open Phil.
Let’s just dig in deeper and find out a bit more about how Open Phil actually works. How much are you hoping to dispense in grants over the next 10 years?
Holden Karnofsky: Well, over the next 10 years, we’re not sure yet. I mean, what we know is that Cari and Dustin are looking to give away the vast majority of their wealth within their lifetimes. We also believe that we want philanthropy to become a more intellectual topic than it is, and we have kind of aspirations that if we do a good job, if we have useful insights on how to do great philanthropy and help a lot of people, that we will influence other major philanthropists as well, and we’re already starting to see small amounts of that.
So over the long run, I know where we’re headed. Over the next 10 years, I think it’s really TBD, and I think we want to just take things as they come. At this stage, we’re giving away between $100 and $200 million a year, and our priority right now is really to learn from what we’re doing, to assess our own impact over time, to build better intellectual frameworks for cause prioritization and deciding how much money goes into which cause and which budget. And so I think we have a lot of work to do just improving the giving we’re already doing, and I think it’ll be better to wait until we have a really strong sense of what we’re doing and what we’re about before we ramp up much more from here. So the next 10 years specifically, I think that just will depend on how we’re evolving and what opportunities we see.
Robert Wiblin: How do you make specific grant decisions? What’s the process by which you end up dispensing money?
Holden Karnofsky: The basic process, the basic story or the kind of pieces of philanthropy as I see them, the job of a philanthropist, is first to pick causes, pick focus areas, pick which issues you’re going to work on. So that would be like, for example, saying we’re going to work on criminal justice reform, we’re going to work on farm animal welfare, and that’s something that, as I mentioned, and I think somewhat different from many other foundations, that was something that we put a couple years into by itself. The next thing that a philanthropist needs to do is basically build the right team. And so those are two things that I think are very core to Open Philanthropy, and two things that make us what we are, and to the extent we’re doing a good job or a bad job, it comes down to which causes we’ve picked and which people we’ve hired.
When it comes to the grant-making process, what happens is we generally have a program officer who is the point person for their cause. For criminal justice reform, it’s Chloe Cockburn. For farm animal welfare, it’s Lewis Bollard. And that person really leads the way. One of the things we’ve tried to do at Open Philanthropy is maximize the extent to which this can be a very autonomy-friendly, creativity-friendly environment. We don’t slap any rubric or requirements on what kind of grants we’re going to do, and we try to minimize the number of veto points, the number of decision-makers. So it’s kind of one person out there talking to everyone, doing everything they can to educate themselves, and bringing the ideas to us. And then once a program officer wants to make a grant, we have a process, which you can link to, we’ve written about it.
Basically, they complete an internal write-up, which asks them to answer a bunch of questions like what is special about this grant, how will we know how it’s going, when will we know how it’s going, what predictions? We ask people to make sort of quantified probabilistic predictions about grants. What are the reservations? What’s the best case against this grant? That person writes up their case and then it gets reviewed by me and by Cari Tuna, who’s the President of Open Philanthropy, and that leads to the approval. And then we get to the part of the grant process where we need to basically get the payment made and get all the terms hammered out. So that’s kind of the general outline.
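As a purely illustrative aside, here is a minimal sketch in Python of the kind of structured write-up Holden describes – a few standard questions plus quantified, time-bound predictions that can be checked later. The field names, example grantee, and numbers are assumptions made up for illustration, not Open Phil’s actual template or tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Prediction:
    claim: str            # e.g. "pilot campaign launched within 12 months"
    probability: float    # the program officer's stated probability, 0 to 1
    resolve_by: str       # when we expect to know how it went

@dataclass
class GrantWriteup:
    grantee: str
    amount_usd: int
    case_for_grant: str        # what is special about this grant
    main_reservations: str     # the best case against the grant
    predictions: List[Prediction] = field(default_factory=list)

# Hypothetical example, for illustration only.
writeup = GrantWriteup(
    grantee="Example Advocacy Org",
    amount_usd=500_000,
    case_for_grant="Fills a gap that no other funder appears willing to cover.",
    main_reservations="Theory of change is speculative; team is unproven.",
    predictions=[Prediction("Pilot campaign launched", 0.7, "mid-2019")],
)

# The write-up would then be reviewed by the grant approvers before payment.
print(writeup.grantee, writeup.amount_usd, writeup.predictions[0].probability)
```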
Robert Wiblin: Let’s look at this another way. Let’s imagine that I was a billionaire and I was looking to give away quite a lot of my wealth, but I didn’t really know where to start. What would you suggest that I do?
Holden Karnofsky: I would say that when you’re starting a philanthropy, the kind of first order questions are what areas you’re going to work on, what causes, and then also who you’re going to hire, what kind of staff you’re going to build, because you’re not going to be able to make all the decisions yourself when you’re giving away that kind of money. I feel like really the most important decision you’re going to make is who is making the decisions and who is making the grant proposals and how are they doing that?
What I would urge someone to do if they were a billionaire and they were just getting started, if I had one piece of advice, it would be to work really hard at those two decisions and take them slowly. I think that a lot of times when you go around asking for advice in philanthropy … and I think this kind of reflects that philanthropy is not considered such an intellectual thing … a lot of experienced foundations will tell you, you should really just start with the causes that you’re personally passionate about. You should take the things you’re personally interested in. If homelessness in your home town, for whatever reason, if that’s what strikes you, then that’s your cause, and that’s what you’re going to work on.
And then from there, you should maybe hire people that you already know and already trust, and then start figuring out what your processes are going to be. Open Philanthropy does … it’s not how every foundation thinks about it, but it’s common advice … and Open Philanthropy does actually sort of take the opposite view there. We think that picking a cause, if you’re going to work on homelessness in your home town, versus maybe criminal justice on a national level, versus maybe potential risks from advanced AI, that sort of makes maybe like a huge amount of the difference, maybe almost approximately all of the difference, in how much good you’re ultimately going to accomplish.
And furthermore, some causes are kind of naturally popular, because they’re just naturally appealing. Cities that have a lot of wealthy people are going to be cities that have a lot of philanthropy going into them. Causes that are kind of easy to understand and immediately emotionally resonant are going to have a lot of money going into them. And so oftentimes it’s the causes that take more work, more thought, more analysis to see the value of that may actually be your best chance to do a huge amount of good.
As a billionaire starting off, I would say your first job is to pick causes and probably then to pick people, and my biggest advice is to do it carefully and to take your time. I think I understand the advice to pick causes quickly, because it could be very daunting and frustrating to not have causes, and it’s much easier to get stuff done and to have a clear framework when you do have causes. And certainly we spent a couple years trying to pick causes, and during that time it was just a strange situation. It kind of felt like we were doing not much, but in retrospect I’m glad we put in all that time and maybe even wish we’d put in more.
Robert Wiblin: So how did you go about picking those causes?
Holden Karnofsky: When we were originally picking our causes, we basically had these three criteria: importance, neglectedness, and tractability. Importance means we want to work on a cause where a kind of win that we could imagine, or an impact that we could imagine, would be really huge and would benefit a lot of persons and would benefit them a lot. Then there’s neglectedness. So on the flip side of this, a lot of the most important causes already have a lot of money in them. A lot of what we did is we tried to figure out where would we fit in and what would we be able to do differently?
And then there’s tractability. So all else equal, we’d rather work in a cause where it looks more realistic to get a win on a realistic timeframe. And the way that we literally did this is we did these sort of shallow investigations, then medium investigations. We had a large list of causes. To give you an example, we kind of wrote down all of the global catastrophic risks that we had heard about or thought about, things that could really derail civilization, everything from asteroids to climate change to pandemics to geomagnetic storms. And then for each one we kind of had a couple conversations with experts, we read a couple papers, and we got an initial sense, how likely is this to cause a global catastrophe, what are some of the arguments, who else works on it, where is the space for us, and what could we do to reduce the risk?
We had this kind of not super-quantitative but definitely systematic process, and we’ve got the spreadsheets on our website, where we rated things by importance, neglectedness, and tractability, and took the things that stood out on those criteria. And then we started looking for people to hire in them, and then the hiring process somewhat further determined what causes we really got into.
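As a rough illustration of how ratings like that can be combined, here is a minimal sketch in Python. The causes, the 1–5 scores, and the simple unweighted sum are made-up assumptions for illustration only; they are not Open Phil’s actual data or scoring method.

```python
# Hypothetical importance/neglectedness/tractability (ITN) ratings, each on a
# 1-5 scale. These numbers are invented purely for illustration.
causes = {
    "pandemic preparedness":   {"importance": 5, "neglectedness": 4, "tractability": 3},
    "criminal justice reform": {"importance": 3, "neglectedness": 3, "tractability": 4},
    "asteroid detection":      {"importance": 4, "neglectedness": 2, "tractability": 2},
}

def itn_score(ratings):
    """Combine the three criteria into one rough priority score (unweighted sum)."""
    return ratings["importance"] + ratings["neglectedness"] + ratings["tractability"]

# Sort causes from most to least promising under this toy scoring rule.
for cause, ratings in sorted(causes.items(), key=lambda item: -itn_score(item[1])):
    print(f"{cause}: {itn_score(ratings)}")
```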
Robert Wiblin: I know that when you chose the particular problems that you did, you kind of committed to stick with those for quite a number of years, because you thought you had to spend some time to really develop expertise to even know what to do. Do you think that you got the right answers in the first place? Are there any things that you would do differently if you were doing it again today?
Holden Karnofsky: I think we have learned a lot since then, and I think we’ve had a lot of updates since then, so for example, I’ve written publicly about a series of major opinion changes that I had that all kind of coincided, in a blog post called “Three Key Issues I’ve Changed My Mind About”, where I currently have a higher estimate of the importance of potential risks from advanced AI specifically than I used to, and that also kind of made me update in the direction of a lot of the causes that some of the most dedicated effective altruists have kind of done a lot of their own research on and recommend. I think I moved in the direction of a lot of those causes.
I certainly think we’ve updated our thinking, and I’m glad we got moving in a reasonable period of time. I’m glad we got some experience. I’m glad we got to try different things. And I think the causes right now are really excellent. The ones that we’ve scaled up, we scaled them up over a period of years, and so we were able to see them as we were doing it. And so I think they’re excellent causes, but I don’t think that that intellectual journey is over, and I think in some ways we’re at the beginning of it. Sure, we picked some initial causes and we made some initial grants, and we learned some things about how to give, but our next big mission is trying to figure out which causes, which focus areas are going to grow the most.
And as we ramp up giving, what the relative sizes of the budget should be, what the priorities should be, how much money goes into each thing, and I think you could see our effective priorities, they may shift over time as we continue to tackle that question, which I think is a very thorny, somewhat daunting question. I could even imagine it taking decades to really work out, but of course, we’re going to try and make pragmatic amounts of progress, to make pragmatic amounts of increases in our giving.
Robert Wiblin: Do you expect to still be here in a few decades? I don’t mean you, but will Open Phil and Good Ventures still exist in a few decades, or is the plan to gradually wind them down?
Holden Karnofsky: Cari and Dustin want to give away the vast majority of their wealth within their lifetimes. That’s what they’re looking for. I could imagine there being a case to do it faster than that. You know, it depends on how things play out and how we think today’s opportunities compare to tomorrow. I think it’s very unlikely that it will be slower. In other words, I don’t think they have interest in leaving behind an endowment.
Open Philanthropy is different. I see Open Philanthropy as sort of an intellectual hub for effective philanthropy, almost could serve some of the role of a think tank, although it also operates as a funder. And so I think that if at some future date, Cari and Dustin have spent down all their capital, I’m guessing and I’m sort of hoping, I guess, that if it’s doing a good job, that Open Philanthropy is still around, helping other philanthropists make the most of their money.
So I see Open Philanthropy as more of an intellectual institution that has no particular reason to go away at any particular time, as long as it’s doing good work. Obviously, it shouldn’t continue to exist if it’s not, whereas Cari and Dustin, they have a fortune and they’re looking to give it away in a certain period.
Robert Wiblin: Could you see Open Philanthropy expanding the number of cause areas that it works in two- or threefold over coming decades, or is it likely to be that to add one you have to take one out?
Holden Karnofsky: Oh, I think there’s a really good chance that we’re going to increase the number of focus areas we work in. Just as we increase our giving, we could do it by increasing the budgets of current areas or going into new ones, and I imagine it’ll be a combination.
Robert Wiblin: What are kind of the key themes that tie together all of the focus areas that you’ve chosen and the focus areas that you almost chose? Why is it that these causes stand out in particular?
Holden Karnofsky: We went out there to choose causes based on importance, neglect, and tractability, so that’s the kind of direct answer, and we didn’t really optimize for anything other than that. But I would say that having chosen a bunch of these causes, I think I have noticed a couple of broad buckets that most of them seem to fall into. There are kind of two theories of how philanthropists might really be onto something big today that isn’t getting enough attention. They can have a really outsized impact. One theme, and we have a blog post about this idea, is this idea of what we call “radical empathy”, which is this idea that a lot of the worst kind of behavior in the past, when you look backward and you feel really bad about certain things like just attitudes toward women and minorities and slavery and things like that, a lot of it could be characterized as just having too small a circle of concern and just saying, “We’re going to be nice to people who are sort of in our tribe, in our club, in our circle. But then there’s this whole other set of people who are not counting as people, and we don’t think they have rights, and we think they’re just different, and we don’t care about them, and we treat them as like objects or means to an end.”
I think if you could go back in time and do the best philanthropy, a lot of it would be trying to always be working with a broader circle, always be trying to help the people who weren’t considered people, but would later be considered people. So for example, working on abolitionism or early feminism or things like that. And a lot of our philanthropy does seem to fall into this category of we’re trying to help some population that many of the wealthy people today who have the power to be helpful just don’t care about or don’t consider people in some sense, or just aren’t really weighing very heavily, are highly marginalized.
And so criminal justice reform is certainly an example of this. I mean, people affected by our over-incarceration disproportionately are sort of low income or disproportionately minority, and in general, I think also people think about offenders, or people think about incarcerated persons and think, “I don’t care about them. I don’t care about their rights. Why should I care about their suffering?” And so that is kind of a cause where we’re trying to help a population that I think is quite marginalized, and that makes the dollar go further.
You could say the same thing about global poverty. The GiveWell top charities are kind of a similar situation where a lot of people believe charity begins at home. Of course, some countries are much richer than others, so the home of the rich people gets a lot of money, gets a lot of charity, and the home of the not-so-rich people doesn’t. You just have a lot of people in America who believe that Americans always come first, and they don’t care about people in Africa, and so you get this opportunity to do extra good by helping an extra-marginalized population.
And then some of the other work we do is going a little bit further in that direction, to the point where it gets legitimately debatable, in my opinion. The farm animal welfare work, I think a lot of people … I mean, I think the reason we’re seeing such amazing opportunities in farm animal welfare and such sort of high leverage places to get these really quick wins that affect a lot of animals is because most people just don’t care about farm animals. They say, “Chicken? Well, that’s dinner, that’s not someone I care about. That’s not someone with rights.” We believe that it’s possible we’ll all look back in 100 years and say, “That was one of the best things you could be doing, is helping these creatures that we now realize we should care about, but at the time we didn’t.”
That’s definitely a common theme that runs through a lot of what we do, and then the other theme is … it’s very related to long-termism, and it’s this idea of kind of X factors for the long-term future. This idea that there could be dramatic societal transformations that kind of affect everyone all the way into the future. An example of this is like if you had gone back 300 years and you were trying to do charity or philanthropy just before the Industrial Revolution, I think the Industrial Revolution ended up being an incredibly important thing that happened that had incredibly large impacts on standard of living, on poverty, on everything else. I think in retrospect it probably would have been a better idea to be thinking about, if you were in the middle of that or just before it, how it was going to play out and how to make it go well rather than poorly, instead of, for example, alms. Instead of, for example, giving money to low-income people to reduce suffering in the immediate situation.
And so when we look at global catastrophic risks, and to some extent breakthrough science as well, we’re looking at ways that we can just affect an enormous number of persons by kind of taking these things that could be these high leverage moments that could affect the whole future. And so our interest in AI, our interest in pandemics, our interest in science, a lot of it pertains to that.
Robert Wiblin: I know you’ve been working on a series of articles about the history of philanthropy, looking at really big wins that philanthropists have had in the past, and I guess potentially some failure stories as well. Is there anything you’ve learned about that in terms of which focus areas tend to be successful and which ones tend not to?
Holden Karnofsky: Yeah, I think we’ve learned a ton from the history of philanthropy, and we have a whole bunch of blog posts about it, and we’ve started a little bit of a grant-making program where we actually fund or contract with historians. What we initially did is we just took this casebook. This “Casebook for The Foundation: A Great American Secret” is, I believe, what the book is called, and it’s got these two-page sort of vignettes of a hundred different philanthropic successes. I kind of read through them and looked for patterns and made a blog post about that, and then what we started doing is taking some of the more interesting-seeming ones and asking, and paying historians to go and really check them out and say, “What really went down here? Was philanthropy really that effective and really that important?”
And then there are some others we’ve been able to learn about independently, just by reading books and whatnot. Yeah, we’ve learned a ton. One of the philosophies that I think history of philanthropy has pushed us toward is what we call “hits-based giving”, and that’s the idea … again, it’s a very different school of thought from GiveWell … the idea of hits-based giving is a little bit similar to venture capital, which is that rather than trying to exclusively fund things that you think will probably work, what you do is you fund a whole bunch of things that individually, each one, might have, let’s say, a 90% chance of failing and a 10% chance of being a huge hit. And then out of 10, you might get nine miserable failures, one huge hit, and then the huge hit is so big that it justifies your whole portfolio.
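As a back-of-the-envelope illustration of that hits-based logic, here is a small sketch in Python. The grant cost, hit probability, and value of a hit are invented numbers for illustration, not figures from Open Phil.

```python
# Toy illustration of hits-based giving: each grant usually fails, but the
# expected value can still exceed the cost if the rare hit is big enough.
cost_per_grant = 1_000_000    # hypothetical cost of each grant, in dollars
p_hit = 0.10                  # chance that any single grant is a huge hit
value_of_hit = 50_000_000     # hypothetical value (in dollar terms) of one hit

expected_value = p_hit * value_of_hit
print(f"Expected value per grant: ${expected_value:,.0f}")
print(f"Worth funding even though ~90% fail: {expected_value > cost_per_grant}")
```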
I was definitely pushed toward this vision of giving by reading these case studies, because even though the case studies are cherry-picked, when you see some of the examples I saw, like the Green Revolution, I think one of the most important humanitarian developments in all of history, certainly in the last century, and looks like it was largely a philanthropy story. When you look at the pill, also just incredibly revolutionary and important, and largely the result of a feminist philanthropist. You know, you just say, “Boy, I could fail a lot of times. If I got one of those hits, I would still be feeling pretty good about my philanthropy.”
I think when I read through this stuff, I was initially surprised at how successful things had been, just how big some of the big successes were, and that’s not to say there’s no failures, because I think there’s a lot of failures. So that was one of the first things we took away, also a lot of the interest in scientific research, a lot of the interest in policy and advocacy. Those are things I think philanthropy really had a track record of doing well in. More recently Luke Muehlhauser did a study of how philanthropy has contributed to the growth of academic fields. And one of the things I learnt from it is that many times throughout history, let’s say there’s a topic that just for whatever reason isn’t really being discussed much in academia. It’s not something professors work on. It’s not something students work on. Professors are unlikely to switch into it, because they kinda already have their specialty. Students are unlikely to switch into it, because there are no professors in there. And then philanthropy comes along – one example is the field of geriatrics.
But philanthropy comes along and says, “We think this should be a major academic field.” And through funding kind of all parts of the pipeline, professors, students, conferences, you name it, we believe they’ve really been a big part of causing these fields to grow. And so we’ve tried to look at what they did and learn from it. ‘Cause we look at the situation today and we say, “Boy, we wish there was more of a field,” for example, “around the AI alignment problem.” There’s a lot of academic researchers working on machine learning and making AI more effective. But in terms of making AI more reliable, more robust, safe, in line with human intentions, we wish there was more of a field there. So I think it’s been a major source of learning for us. I’ve really enjoyed the history of philanthropy case studies that the historians we’ve worked with have written, and just they give me an enriched picture of what has come before and what we can learn from.
Robert Wiblin: Do you just wanna flesh out those stories of the Green Revolution and the invention of the pill? ‘Cause I had no idea that they were funded by a philanthropist. And you might expect that I might know. So-
Holden Karnofsky: Yeah. Yeah.
Robert Wiblin: … perhaps [crosstalk 00:32:32] have no idea.
Holden Karnofsky: Yeah, sure. On the Green Revolution, I believe this began with the Rockefeller Foundation essentially funding a team of agricultural scientists, among them Norman Borlaug, who would later win the Nobel Peace Prize. And the idea is that they were looking to develop better crops for the developing world. So kind of U.S. agriculture or whatever developed world agriculture was good at kind of breeding and optimizing crops for the climates they were in. And they wanted to provide kind of a similar service for poorer countries. And what happened was, they developed these crops that were incredibly productive and incredibly sort of just economically fruitful to work with. And it originally was in Mexico.
But one of the things that I believe happened was that this eventually scaled up, and they started optimizing them for a whole bunch of different countries and climates. And India at the time this happened was kind of in the midst of a famine. And I believe they went from being a wheat importer to being a wheat exporter. And then there was just this massive growth that was kicked off, beginning with the agricultural sector and then leading to nationwide growth in several countries that are now held up as some of the big examples of countries coming out of poverty in the 20th century. And so it’s hard … I mean, there have been estimates that a billion deaths from starvation were prevented by these kind of new and improved crops, and all the developments that came from that.
And it’s really hard to think of something that’s been more important for human flourishing. There are things you could say, but this really started with a foundation. And I think it’s interesting, because at the time I don’t think that was the kinda thing that a government funder would necessarily do. So this comes back to my example of how philanthropy can play a really special role in the world. I think governments often, they have a lot of resources, but they may be kind of bureaucratic, and they may be kind of fighting yesterday’s battles in some sense. Doing the things that are already socially accepted. And I think global poverty reduction was not as big a worldwide priority then. And so philanthropy kinda led the way.
The pill too. Katharine McCormick was a feminist philanthropist. And the feminist Margaret Sanger came to her with this idea that she knew a scientist who had been working on studies with rabbits to see if he could get them to basically control the menstrual cycle, and control fertility. And Sanger believed this could be a huge breakthrough for feminism, just to give women more options, and let them live their lives the way they wanted without having all the biological constraints they had at the time. And again, not the kinda thing government was really excited about. In fact I believe that they … Initially they couldn’t advertise this as a birth control. I think they advertised it as something else, and then they put a warning label on it saying, “May prevent fertility.” I think they’re actually [crosstalk 00:35:07]-
Robert Wiblin: … prevents period pain. Or something like that.
Holden Karnofsky: Yeah. Exactly. Yeah. And I think they were required to put the label on there. But that label was effectively their advertising, the warning label. So it’s another example of something that it wasn’t really a major society-wide priority. It wasn’t really a government priority. It was at the time of something new and edgy, and different. And I think again, it was like you look at what happened. And there was this neglected research on rabbits and fertility. And it got funded by just a private philanthropist, not a huge amount of money. And boy, did it change the world.
And so when you look forward, you gotta ask, I mean, what are the things that just, they’re not a worldwide priority today, but if the world becomes sorta better off and wiser, and more cosmopolitan, maybe they could be in the future. So the way that animals are being treated on factory farms. This is not a giant hugely popular cause. There’s not a lot of government funding trying to do anything about this. And so this is another case where maybe starting with private philanthropy, we start to turn the tide. And so yeah, those are kind of some examples.
There are philanthropic examples that are much more, I would say, debatable in their impact. So there’s a book called The Rise of The Conservative Legal Movement, by Steven Teles, where it’s argued that conservatives did a really outstanding job of philanthropy in terms of effectiveness. And they really changed the public dialogue in the U.S., kind of for good, or at least for many decades. Which explains in some ways some of the strange patterns of intellectual activity in the U.S. And some people think this is amazing, and some people think it’s terrible. But it certainly was a big deal. And so I think if you’re looking for a lot of impact for your money, trying to think about how some of these hits came about is pretty fruitful.
Robert Wiblin: Have you learned any other lessons that you wanna point out from the history of philanthropy?
Holden Karnofsky: Yeah. I mean from the field building exercise, when we talk about failures I mean, we focus more on successes, ’cause we’re hits based. And we’re trying to understand, there’s gonna be a lot more failures than successes. And there’s a lot more ways to fail than there are ways to succeed. So we tried to understand the successes. But the field building was pretty interesting too, because there are a couple examples of fields where it looks like someone tried to build a field in a way that actually stunted the growth of the field.
So nanotechnology, and I believe cryonics, might’ve been held up as examples of this in Luke’s report, where, by coming into the field and creating a lot of hype and media coverage without really building strong connections to the scientific community, people kind of made certain topics taboo and illegitimate. And took things that could’ve had a decent scientific foundation, and made them just like impossible to work on in academia. That’s a great example of the mistake we really don’t wanna make with AI safety and alignment research. So this was something I was really glad to know about.
Robert Wiblin: Yeah. I did another episode with Professor Inglesby who mentioned that if you come into an area with an interest in one particular area of a field and saying like, “This is much more important than everything else in the field,” you can potentially really alienate everyone else. Because you’re basically saying, “I’m gonna try and cannibalize all of your people and all of your grant funding.” And so even if you do think that the subset of a field that you’re particularly interested in is more important, maybe you shouldn’t announce that, and shouldn’t suggest any hostility to others, because you turn people who otherwise might be supporters or at least neutral into adversaries. Is that something that resonates with you?
Holden Karnofsky: Yeah, it certainly sounds possible. I mean it’s very hard to make all these generalizations, I think. But I can certainly see it happening. And I think one of the things that makes it worse is if you’re letting the media and the hype get ahead of the science. I think that does seem to be, intuitively and from these couple of examples, like a good way to antagonize scientists. And ultimately the scientists, they’re the ones who determine who’s getting tenure there. They’re the ones who determine what’s a good career. In my opinion, pitting the media against the scientists is not really a desirable situation.
Robert Wiblin: Okay. So that’s something about choosing causes, and I guess ways of tackling those problems. Let’s talk about hiring staff. How do you go about hiring?
Holden Karnofsky: Well, one of the things that we feel is that interviews are incredibly unreliable. I think there’s research on this; it’s also been our experience. What we wanna do is try and simulate working with someone to the greatest extent possible. And so a lot of times when we’re hiring, we try and do these little work trials where someone will do an assignment that has some value to us. It’s often not the most important thing on our plate, ’cause we don’t wanna be reliant on the work trial to get it done. But something that has some value to us, and we can kind of simulate working together in a collaborative, realistic style. And we have a couple blog posts about how we hired our first program officers. I think that’s one of the most important decisions, like I mentioned, that a foundation makes. Especially the way we operate.
I mean, I mentioned that our grant making process is really dependent on having one person who’s very deep in the field, a big time expert, who knows what they’re doing, and is creative and can propose great things. And so hiring the wrong person would be a big mistake there. So we did a pretty intensive process to hire our program officers, and a lot of what we looked for is people who could communicate in a very systematic way, so that we understood what arguments they were making and why they wanted to fund what they would fund.
We also looked for people who are incredibly well connected and well-respected in the field. Because we knew that their greatest source of input was gonna be from other people in the field, other experts. And a lot of the key was gonna be to get those people to open up to them, despite the fact that a lot of those would want funding. And finally we looked for people who are very broad. So when we interviewed people for the program officer role, we spoke to some people who kind of, they had been doing one thing and they knew how to do it. Whether that’s grassroots advocacy, or grass-tops advocacy for example. They knew their thing, but they weren’t kind of thinking about all the different pieces of their field, and all the different things you could do to push their cause forward, and how they fit together. So we looked for breadth as well.
Robert Wiblin: Yeah. How much autonomy do program officers have? Do you usually approve the kinda grants that they suggest? Or is it a bit more competition for influence?
Holden Karnofsky: Sure. So Open Philanthropy’s grant making philosophy, again, is in line with hits based giving, in line with trying to take big risks and do things that could be amazingly good even if they might fail a lot of the time. One of the things that we wanna do is minimize the number of veto points, minimize the number of decision makers, and have the person who knows the field best and has the most expertise really be in the lead of what they’re doing. That said, we don’t wanna be too absolutist about it, because we don’t have perfect confidence in our hiring decisions, and also philanthropy, like I said, is not necessarily a very intellectually developed field. So most of the people we’ve hired did not previously work in philanthropy. They have a lot to learn.
And so we also need to kind of hold ourselves accountable and understand where the money’s going. And so one of the compromises we found is what we call the 50/40/10 rule. So we take a program officer and we look at their portfolio, which is all the grants they’ve made. And one of the things we ask them for is that we’d like 50% of the grants by dollars to be classified as good, and by “good,” I mean that the decision makers, the grant approvers, which are myself and Cari Tuna, have kind of affirmatively been convinced of the case for the grant. That if you took, for example, our grant to Alliance for Safety and Justice, and you started arguing with me about it, I feel like I could reasonably defend the grant. You would sort of own it in a sense.
But then another 40% of the portfolio, bringing us to 90% total, is okay if we mark it as only okay. And what “okay” means is, we can see where the person’s coming from. We can see how, if we knew more, we might be convinced it’s a good grant. We don’t necessarily buy all the way into it. And then the other 10% of the portfolio is what we call discretionary, where there’s actually a very stripped down process. A program officer can just send a very short email, and unless we see a major red flag or a downside risk, it goes through in 24 hours if we don’t respond. And we don’t have to have any buy-in to the grant.
And so the idea there is that it’s trying to strike a balance. So we look at our portfolio, and half of it we kind of understand; it’s been argued to us and it’s been justified using our somewhat thorough internal writeups. And then the other half of it is more like an opportunity for the program officer to take the things that would’ve been harder to convince us on, because they took more context and more expertise, and not put in as much sweat, and just get them done anyway. And so that’s where we’ve really tried to empower the program officers.
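To make the arithmetic of the 50/40/10 rule concrete, here is a minimal sketch in Python of how a portfolio could be checked against those thresholds. The grant names, amounts, and labels are hypothetical illustrations, not Open Philanthropy’s actual data or internal tooling; it just restates the rule Holden describes above.

```python
# Minimal sketch of the 50/40/10 rule described above.
# Grant amounts and names are hypothetical; this is not Open Phil's real tooling.

portfolio = [
    {"grantee": "Org A", "dollars": 2_000_000, "rating": "good"},           # approvers affirmatively convinced
    {"grantee": "Org B", "dollars": 1_500_000, "rating": "okay"},           # approvers see where the PO is coming from
    {"grantee": "Org C", "dollars": 300_000,   "rating": "discretionary"},  # stripped-down 24-hour process
]

def check_50_40_10(grants):
    total = sum(g["dollars"] for g in grants)
    share = lambda rating: sum(g["dollars"] for g in grants if g["rating"] == rating) / total

    good, okay, disc = share("good"), share("okay"), share("discretionary")
    return {
        "good":          (round(good, 3), good >= 0.50),                 # at least 50% fully bought into
        "good_or_okay":  (round(good + okay, 3), good + okay >= 0.90),   # at least 90% good or okay
        "discretionary": (round(disc, 3), disc <= 0.10),                 # at most 10% on the PO's own call
    }

print(check_50_40_10(portfolio))
```

On these made-up numbers the portfolio passes all three checks; the point is only that the rule caps how much of a portfolio the central approvers have not personally bought into.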
Robert Wiblin: So in the context of still giving advice to a hypothetical billionaire who’s starting a foundation. What are the biggest likely mistakes that they might make in the hiring? How would they end up hiring someone who was a real mistake?
Holden Karnofsky: I think there are two of the biggest mistakes that get made in hiring… I mean, these both relate to the things I’m saying about how we look for program officers. But you’re probably not hiring someone who’s done philanthropy before. And if you are, you’re probably hiring someone who’s done it in a much more constrained environment, at a foundation that kind of had some pre-declared area of focus and knew exactly what it wanted. So the person you’re hiring is probably not experienced in exactly the thing that you want them to do.
And I think one danger is that they’re just gonna keep doing what they used to do, keep doing what they’re used to doing. And that’s why we think it’s really important to look for breadth. And I still worry about this even with the people we have, ’cause I think it’s just very easy. Let’s say that you’re working on farm animal welfare, and you work on vegan promotion, convincing people to go vegan. Or you work on corporate campaigns, which is convincing corporations to treat animals differently. Those are very different activities. And a lot of the time, the people working on the two things don’t have a ton of interaction, and they don’t always have a lot of common intellectual ground.
And you have to ask yourself: is your program officer funding only one of them because that’s what they know and that’s what they like? Or are they funding only one of them because they made a calculated decision? Or are they funding both of them because they’re trying to compromise with everyone and be on everyone’s good side? Or are they funding both of them for good reasons? So having people who can really look at all the different aspects of a field is very important.
And then the other thing that I actually worry the most about with our program officers is a really challenging dynamic in philanthropy, and I think this makes it a bit different from other kinds of areas that might be considered reference classes for it. You’re out there trying to make intellectual decisions and decide what to do, but practically everyone you’re getting advice from, and feedback from, and thoughts from is someone who either now, or maybe in the future, is hoping they’ll get money from you. And so they don’t wanna be on your bad side.
And so there’s this kind of saying, this kind of joke that once you become a philanthropist, you never again tell a bad joke. Because everyone’s gonna laugh at your jokes whether they’re funny or not. Because everyone wants to be on your good side. And I think that can be a very toxic environment. I mean I personally am a person who really prizes openness, honesty, direct feedback. I really value it. I really value people who criticize me. But a lot of people that I interact with don’t initially know that about me, or maybe just never believe it about me.
And so if someone is worried to criticize me, I may unintentionally just be doing the wrong thing, and never learn about it. And so I think one of the worst qualities a program officer can have is being someone who people won’t tell the truth to. Being someone who doesn’t take criticism well. Being someone who’s overly pushy, overly aggressive, and is always using the fact that they have some kind of power and are able to give money to always be the person who’s getting praised and getting complimented, and laying things out the way they want them to be, and being very bad at doing this active listening and encouraging people to share thoughts with them.
And I think it’s a huge challenge. I mean, it’s somewhat related to the management challenge of how you get people to share their honest thoughts with you when they might fear criticism. And we at least want program officers who worry about it a lot, and who form close enough relationships that they’re able to hear the truth from many people. And if they don’t, I wouldn’t be very optimistic about the philanthropy there.
Robert Wiblin: Given that the program officers, they’re trying to get quite a lot of money out the door. Potentially tens of millions each year. And they might wanna make grants to a dozen, maybe more organizations. And they don’t wanna say, “Yes,” to everyone. ‘Cause then they’re not adding that much value. So they must look into potentially dozens of organizations each year. And they have to follow up on the previous grants. I mean how do they find the time to manage all of this stuff and actually still be like somewhat thorough?
Holden Karnofsky: Sure. How do program officers find the time? I mean, they do work really hard. It’s a job with huge opportunities to have a lot of impact, and it’s also a very challenging job. We’re also, I think, in the process of staffing up a bit to take some of the load off of them for grant renewals and grant check-ins. So some of our program areas already have more than one person, where we’ll have an associate reporting to the program officer and helping them out. And for some of them, I think we’re gonna get there in the future.
But you know, the other thing I’d say is that a lot of the ideal way to be a program officer is to be very network dependent. And so one of the things that we encourage people internally to do is to spend a ton of time networking, talking to people. We definitely encourage them to go to dinner with people, and expense it and all that stuff. Because, let’s say you work on farm animal welfare. If you know everyone else who works on that topic, I mean, you don’t have time to be friends with everyone, but if you know all the people who are doing the best work and have the most thoughts, and you have good relationships with them, I think you can learn a lot just by talking and brainstorming.
And then a lot of what’s gonna happen if you do that well, in my opinion, is you’re gonna hear a lot of the scuttlebutt about what the different organizations are good at, what they’re not good at, who’s the best at each thing, what the biggest needs in the field are. And so that can form a lot of the basis for how you ultimately decide what to prioritize, what to choose, whom to fund. Of course, one of the things that makes this challenging for Open Philanthropy, and very different from GiveWell, is that a lot of the most valuable information here is exactly the information that people will not tell you unless they trust you.
It’s information about who’s good and who’s not so good at what they do, and what the biggest needs in the field are. And sometimes also information about how to deal with adversaries. So when you’re doing political advocacy, you often have people who are against you. And so the information people will only tell the people they trust is the key information behind assessing a grant. And a lot of times it comes from a lot of interpersonal stuff: getting to know people, building trust, forming opinions of people. That is not only delicate information, it’s information that even if you wanted to explain it all, you couldn’t.
And so that’s why Open Philanthropy really has this emphasis on: we don’t only wanna fund that which we can explain in writing. That’s why we have the 50/40/10 rule. We don’t wanna only fund that which Holden buys into. And that’s also why we don’t have the same approach to public communications that GiveWell does. We don’t try to explain all the reasoning behind each grant. And it has its pros and cons. I think this frees us up to do really creative, high-risk, great things. And I think it also makes our work in some ways less satisfying, less thorough, less easy to take apart into its component pieces than GiveWell’s.
Robert Wiblin: Is there a tension between wanting to be friends with these people and get their honest opinions, and perhaps a bit of gossip out of them as well, and potentially also having to kinda crack the whip with the people you’re giving grants to about their performance? I mean, how much do you follow up, and pay close attention to potential mistakes that they’re making?
Holden Karnofsky: Yeah. I think it’s a huge tension. And I think this is one of these fundamental challenges of philanthropy that I wish I had more to say about what to do about. It continues to be an ongoing topic for us. And there are certain things that we’re experimenting with today to try and do a better job, just evaluating ourselves, and seeing: are we getting the best information from everyone? And also, do we have good relationships with everyone? I think one place that I kind of analogize it to is management.
So I think management has some of the same challenges. You’re trying to help someone, you wanna know how they’re doing, you want the truth from them. You’re also responsible for evaluating them. That has some relationships to a philanthropist-grantee relationship, but there’s also some important differences. So it’s just an example of something that I think is kind of this open question in philanthropy. I can’t point you to a lot of great things to read about it. But I think it’s a really important topic. And it’s one of the things that we want to develop a better view on and share that view with other philanthropists.
Robert Wiblin: So you said that one of your goals is to make philanthropy as a whole a bit more of an intellectual exercise. And I guess you’ve been writing up some pretty long blog posts about your process for deciding what problems to work on, and how you decide how to split resources between different problems. I’ll stick up links to those. Is that the main reason why you’re writing those things up in such great detail, to see if you can change the discussion among other foundations?
Holden Karnofsky: Well, one of the reasons why we’re writing them up in such great detail is because it helps us get better feedback, and it helps us get clear about our own thoughts. So when a decision’s important enough, I believe that it is good to just write it down in a way that you’re not using a bunch of esoteric inside language that might be burying implicit assumptions in there, language that makes sense to you and the people you talk to, but not to others. I think just the process of trying to make it clear, and trying to make it for a more general audience, somewhat raises new questions and clarifies the thinking.
And then I also think we’ve gotten good feedback on some of these thorny questions we work on by sending them to people, especially in the effective altruism community, for comment. But yes, another thing we’re trying to do is document a lot of the thinking behind the tough intellectual decisions we’re making. And at some future date, I’m hoping that that content… we may have to clean it up, we may have to present it differently, ’cause currently it’s almost academic in tone. It’s just very dry. So I don’t think we’re in a place right now where this stuff is getting into the press and all kinds of philanthropists are reading it and changing what they do.
But we do send it to people we have individual contact with. And I could imagine that in the future, it’ll become sort of the intellectual backbone of some more presentable presentation of what we end up settling on as the right intellectual framework for philanthropy. And we’re definitely not there yet.
Robert Wiblin: Let’s talk about one of those more difficult decisions that you have to make as a-
Holden Karnofsky: Yeah.
Robert Wiblin: … foundation. Which is how you split the total kind of endowment between different focus areas. What is the process that you use for doing that?
Holden Karnofsky: Sure. So this is something we’ve struggled with a huge amount. One of the questions we struggle with that has to do with this is, when we’re looking at a grant, and we’re trying to say, “Should we make this grant or not?” One way to frame that question, once you’ve put in the time to investigate a grant, is: would the money do more good under this grant, or under the last dollar that we will otherwise spend? In other words, if you imagine that you’re gonna make a $100,000 grant, you can either make the grant or not. And if you don’t, then you have $100,000 extra. And in some sense that’s gonna come out of your pot of money at the end.
And so we call this the last dollar question. And we try to think about each grant compared to the last dollar we’re gonna spend. And this becomes a very challenging problem, because initially the way we wanted to think about it, is we wanted to just kind of say, “Let’s estimate. Let’s sort of have a model of how much good per dollar the last dollar does. And let’s take each grant and have a model of how much good per dollar that grant does. And then when a grant looks more cost-effective than the last dollar, we’ll do it.”
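As a rough illustration of that “last dollar” test, here is a minimal Python sketch. The cost-effectiveness figures are made up purely for illustration; the only point is the comparison rule Holden describes, where a grant is funded only if it beats the estimated value of the last dollar otherwise spent.

```python
# Sketch of the "last dollar" test described above: fund a grant only if its
# estimated good per dollar beats the last dollar you would otherwise spend.
# All numbers are hypothetical illustrations.

last_dollar_good_per_dollar = 0.8   # hypothetical estimate for the marginal alternative use of funds

def should_fund(grant_good_per_dollar, threshold=last_dollar_good_per_dollar):
    """Return True if the grant is estimated to beat the last-dollar benchmark."""
    return grant_good_per_dollar > threshold

candidate_grants = {"Grant X": 1.5, "Grant Y": 0.5}  # hypothetical good-per-dollar estimates
print({name: should_fund(g) for name, g in candidate_grants.items()})
# -> {'Grant X': True, 'Grant Y': False}
```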
And that brings us into this domain of making these kind of quantitative estimates of how much good you’re doing with a grant. And I think there are a lot of problems with trying to make these quantitative estimates. Some of them are very well-known. But one that has been, I think, a little bit unexpectedly thorny for us is what we call the worldview split problem.
And so the way to think about that is, let’s say I’m deciding between a grant to distribute bed nets and prevent malaria, and a grant to fund cage-free campaigns that will help chickens. And let’s normalize and just pretend that “help” is always the same amount. We could help a person for $1,000, or we can help a chicken for $1, so we could help 1,000 chickens for $1,000. Which of those is more cost-effective? And I think it really comes down to this incredibly mind-bending question, which is: do you think chickens can feel pain? Do you think they can have good lives? Do you think they count as sentient beings? Do you think they have rights?
How do you think about the experiences of chickens and how much you value the lives of chickens compared to persons? And I think maybe the initial answer here is, “Well, I don’t care about chickens, I care about people.” But I think the more you think about it, the more this becomes a kind of a mind-bending question. Because if you care about people more than chickens, on what basis is that? Is it that you value sophisticated behaviors? Is it that you value people in your community? And why is that? And which of these things might apply to chickens?
And the problem is that, let’s say you end up deciding you value chickens 10% as much as humans: that would tell you that the chicken grant is much better. Let’s say that you don’t value chickens at all: that would tell you the human grant is much better. And so when you’re trying to make these estimates of how much good you’re gonna do, or how many persons you’re gonna help per dollar, a lot of it comes down to a small number of really tough, mind-bending philosophical questions where, if you go one way, it says you should put all your money into farm animal welfare. If you go another, it says you should put all your money into global poverty. And maybe if you go a third way, it says, and I know you’ve had another podcast on this, that you should put all your money into sort of long-termism. Because there may be a lot of chickens in the world, but there are even more persons in the future than there are of any kind in the world today.
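To see how sharply the comparison hinges on that one judgment call, here is a tiny illustration using the figures above ($1,000 to help a person, $1 to help a chicken); the moral weights are arbitrary example values, not numbers Open Philanthropy endorses.

```python
# Illustration of the worldview-split problem using the figures above:
# $1,000 to help one person vs. $1 to help one chicken. The moral weight
# on chickens (relative to humans) is the mind-bending free parameter.

cost_per_person_helped = 1000.0
cost_per_chicken_helped = 1.0

def good_per_dollar(moral_weight_of_chicken):
    human_grant = 1.0 / cost_per_person_helped                         # persons helped per dollar
    chicken_grant = moral_weight_of_chicken / cost_per_chicken_helped  # person-equivalents per dollar
    return human_grant, chicken_grant

for w in [0.0, 0.0001, 0.1]:
    human, chicken = good_per_dollar(w)
    better = "chicken grant" if chicken > human else "human grant"
    print(f"weight={w}: human={human:.4f}, chicken={chicken:.4f} -> {better}")
# weight 0.0    -> human grant wins (chickens don't count at all)
# weight 0.0001 -> human grant still wins
# weight 0.1    -> chicken grant wins by a factor of 100
```

The same dollars look 100 times better or strictly worse depending on a single parameter that no amount of data will pin down, which is exactly the difficulty Holden describes next.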
And so that is a really tough one for us, and for a variety of reasons, which we’ve written up in some detail on the web, we haven’t been comfortable just picking our sort of mid-probabilities and rolling with them, because we haven’t been comfortable with the idea that all of the money would just go into one sort of worldview, or one sort of school of thought of giving. And one of the reasons we feel that way is, we do believe there’s a big gap in the philanthropy world, and we believe that we have special opportunities to help on a whole bunch of different fronts. So we think we have special opportunities to help reduce global catastrophic risks. We think we have special opportunities to help animals. And we think we have special opportunities to help people.
And so to leave really outstanding opportunities to help, and to change the way people think about things, on the table, in order to put all your money into just one kind of giving, has not been something that sits right with us. And that’s something that we’ve written about. So this question is just very much in progress. And we have this blog post laying out how we’re now trying this approach where we think about our giving on a whole bunch of different levels at once.
So we think about each grant and how good it is according to its own standards. So we think about how much a farm animal welfare grant helps chickens, how much a global poverty grant helps humans, and how much a global catastrophic risk reduction grant reduces global catastrophic risks. And then we also have to have an almost separate model of how much money we wanna be putting behind the worldview that says global catastrophic risks are most important, how much total we wanna be putting behind the worldview that says humans are most important, and how much behind other worldviews.
And so I think we have a long way to go on this, and we have a lot of work to do, both philosophical and empirical. One of the reasons that we’re on a hiring push right now is that this could be a decades-long endeavor. And there is no right answer. So the idea is not that there’s gonna be one set of numbers that is in some sense correct, but just to come to very well-considered positions on this stuff, where we feel we’ve considered all the pros and cons, and we’re making the best decisions with this very large amount of money that we can possibly make under deep reflection and while being highly informed. That’s just a ton of work. And I think it’s gonna be a very long project for us. And we’ve just kinda gotten started on it.
Robert Wiblin: So I’ll put up a link to your latest blog post from last month about this topic of how you try to split the money between the different focus areas. One thing that occurred to me when reading it is that you kinda lay out three archetypal focus areas: one is human welfare today, one is animal welfare today, and another is the long term future, or humans around in the future. Do you worry that the split could end up being influenced by a kind of arbitrary way in which you’re categorizing these things?
So you could divide them up differently. You could have humans alive today, animals alive today, and then humans alive in the 22nd century, humans alive in the 23rd century, humans alive in the 24th century. Or you could split up humans alive today by country or something like that. And it’s not entirely clear that there are three clusters here, or that that’s the natural way of cutting it up. And perhaps, by defining three different things, you are suggesting that, well, the [inaudible 00:58:14] you start adjusting from is one third for each of them. Do you see my concern?
Holden Karnofsky: Oh, sure. Yeah. No, I definitely think so. I mean I think that one of the challenges of this work is that it’s, we’re trying to kind of list different worldviews, and then we’re trying to come up with some sort of fair way to distribute money between them. But we’re aware that the concept of worldview is a very fuzzy idea. And it could be defined in a zillion different ways. And I mean I can tell you how we tried to tackle it so far. I mean first off, there’s questions we struggle with where it feels like there’s just like part of me wants one answer, and part of me wants another answer. And it’s just an intuitive thing. That it’s just, I don’t feel like I struggle a lot with how I value the 25th versus the 26th century. I think I probably should either value both of them a lot, or just like …
Or, for one reason or another, many of them epistemological rather than philosophical, discount those very far centuries that I have a very hard time understanding or predicting. And same with animals, where I feel like there’s kinda one side of my brain that wants to use a certain methodology for deciding how much weight to give to chickens, and how much weight to give to cows and all that stuff, and it’ll use that methodology.
And then there’s another side of my brain that says, “That methodology is silly,” and we shouldn’t do radically unconventional things based on that kind of very weird suspect methodology of pulling numbers out of thin air, on how we value chickens versus humans. So some of this is just purely intuitive. And a lot of philanthropy ultimately comes down to that. I mean in the end as a philanthropist, you’re trying to make the world better. There’s no objective definition of better. All we can do is we can become as well-informed and as reflective, and as introspective as we can be. But there’s still gonna just be these intuitions about what the fundamental judgment calls are, and what we value. And that’s how we’re gonna cluster things.
Another way in which we decide how to set up these worldviews is just practical. So some of why I wouldn’t be very happy with all of the money going to animal welfare is that animal welfare is a relatively small set of causes today, and there are a lot of idiosyncrasies that all those causes have in common. And so if we went all-in on animal welfare, our giving would become very idiosyncratic in certain ways. And in some ways you could think of that as muddying the experiment of doing effective philanthropy and trying to help effective philanthropy catch on.
And so the bottom line is, I don’t have a clean philosophical way of seeing exactly what all the questions are, and exactly how much money should go where. But what I can do is say: there’s a point of view that says we should optimize our giving just for the long-term future and for reducing global catastrophic risks. And I notice that if we put all the money into that view, we would have certain problems, both practical and intuitive or philosophical. And so maybe we want some sort of split there, and then I can look at a similar split with the animal stuff.
So it’s a mix of practical things and just very intuitive things. But yes, we don’t believe that we’ve got any objective answers to how to define “good,” or how much good is being done per dollar. But we do believe what we can do is be very informed and very reflective about these things, consider all the arguments that are out there, consider them systematically, and put our reasoning out so that others can critique it. And we think that’s a big step forward relative to the default way of doing philanthropy, which is really just to go with what, before any reflection, before any investigation, feels interesting and passion-compatible to you. So we think it’s a big improvement. But we can’t turn this into a real science. There’s no way.
Robert Wiblin: When it comes to deciding the splits, do you find in your mind that you start with a kind of an even split, and then move from there? Or do you start from, “We should give a 100% of it to the most cost-effective one.” And then kind of adjust down from there based on considerations that suggest that you should split more evenly?
Holden Karnofsky: Well, sort of the question here is what cost-effectiveness means. So certainly, in general, if we could agree on a definition of cost-effectiveness, I would always wanna give to the most cost-effective thing. The question is more about methodology. The question is more about, do you feel that the right way to estimate cost-effectiveness is to, for example, write down your best guess at the moral weight of each animal and then run the numbers, or do you feel that it’s to allow certain more holistic intuitions about what to do to enter into your thinking? So yeah, I think the question is about what cost-effectiveness is. I’m definitely inclined to do the most cost-effective thing if I can figure out what feels cost-effective to me. And cost-effective just means good accomplished per dollar, so it’s entangled with the idea of what’s good, and that’s the intuitive idea.
That said, I guess the intuition does sort of start in a 50/50 place, although we’ve tried to list the reasons that it shouldn’t be 50/50. So one of the concepts we talk about in our post is, let’s say you’re deciding between the long termist view, that you should optimize for the long term future, and the near termist view, that you should optimize for the kind of impact you’ll be able to see in your lifetime. You can ask yourself one question, which is: if I reflected on this an inordinate amount, and if I were possibly way more self-aware and intelligent than I actually am, what do I think I would conclude, and what’s the probability that I would end up deciding that long termism is correct, or that near termism is correct? So that might not give you a 50/50 split. That might give you like an 80/20 or a 70/30.
And then there are these other things we’ve put in there too, like: you should take into account that perhaps even if one worldview is less likely, there’s more value to be had if it’s correct, or there are more outstanding giving opportunities. There’s a lot we’re putting in there, and certainly, we’re not just gonna end up going with an even split, but I think there is some intuition to go there by default, though we try to counteract that by listing counter-considerations.
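As a rough sketch of the thought experiment Holden just described, here is one way the arithmetic could go in Python. The reflective probabilities and the “opportunity” multipliers are hypothetical numbers invented for illustration, not Open Philanthropy’s actual figures or method.

```python
# Rough sketch of the reflective-probability thought experiment above.
# Probabilities and stake multipliers are hypothetical illustrations only.

worldviews = {
    # p_endorse: probability you'd endorse this view on ideal reflection.
    # opportunity: rough multiplier for how much outstanding giving there is if it's right.
    "long-termist": {"p_endorse": 0.7, "opportunity": 1.5},
    "near-termist": {"p_endorse": 0.3, "opportunity": 1.0},
}

def starting_split(views):
    weights = {name: v["p_endorse"] * v["opportunity"] for name, v in views.items()}
    total = sum(weights.values())
    return {name: round(w / total, 3) for name, w in weights.items()}

print(starting_split(worldviews))
# -> {'long-termist': 0.778, 'near-termist': 0.222}: a starting point that is
# neither 50/50 nor winner-take-all, to be adjusted by further considerations.
```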
Robert Wiblin: If you think that there’s really sharp declining returns in some of these kind of boutique focus areas, like perhaps artificial intelligence or factory farming, then that could suggest that in fact it doesn’t really matter exactly what kind of split you get between them. As long as you’re giving some to those focus areas, then that’s where most of the [inaudible 01:03:46] is gonna get done. So is it possible that maybe you shouldn’t spend too much time fussing about the details, if you’re gonna kind of split it somewhat evenly anyway?
Holden Karnofsky: I think in some ways that was the attitude we took at the very beginning when we were picking causes. And I think that attitude probably made more sense when we were early, and we wanted to get experience and we wanted to do things; we didn’t wanna hold everything up for the perfect intellectual framework. And we’re certainly not holding things up; we’re giving substantial amounts. But at this time, I think it is true that we can fund the best opportunities we see today in AI and in biosecurity without getting that close to tapping out the whole amount of capital that’s available. That is certainly true, but there’s a question of what kind of opportunities will exist tomorrow. So one of the things that we do as a philanthropist is field building, and we try to fund in a way that there’ll be more things to fund in the future than there are in the present. So that’s one complication: there may actually be a lot of things to spend on at some future date.
And then another thing is, one of the things that I find kind of mind bending is that there’s probably something you can do for every worldview. It may not be as good as the best opportunities, but it’s still something. And so if you believe, as you’ve covered in other podcasts, that there’s so much value in the long term future, you might believe that relatively, intuitively low-impact interventions, like, let’s say, directly funding surveillance labs in the developing world to prevent pandemics, are in some mathematical sense doing more good in expectation, and helping more persons, than even very effective science that’s wiping out very terrible diseases. And so what do you do with that observation, when you could do something that really feels intuitively outstanding and impressive and helps a lot of people, versus something that feels intuitively kind of low leverage, but according to some philosophical assumptions could actually be much better? That’s one of the things that we’re struggling with at this time.
Robert Wiblin: So what kind of wins have you had so far? Are there any things that you’re particularly proud of?
Holden Karnofsky: Yeah, for sure. It’s early days; it’s only been a couple years that we’ve really been giving at scale, and in general, I think of the timeframe for philanthropy as being somewhere between like five and twenty years to really hope to see impact. So we’re not at a stage where I would necessarily be, let’s say, demanding to see some of our hits yet. But I do think we’re starting to see the early hints of things working out, in kind of this [inaudible 01:06:01] framework. I think the area where that’s been most true is farm animal welfare. That’s been a really interesting case, because I think it’s an incredibly neglected cause. You probably talked to Lewis about this, but there are these animals being treated incredibly terribly, and they could be treated significantly better for very low costs on the part of the fast food companies and the grocers. So going from caged chickens, battery cages, to cage-free chickens is a very low cost thing. It’s like a few cents per dozen eggs or something like that. And the corporations just don’t do it because no one cares, and no one is even bringing it up.
And so, when you fund these corporate campaigns, you can get big impact. We came in when there were already a couple wins; there was already some momentum. But we do believe that we were a part of helping to accelerate the corporate campaigns. And fairly shortly after we put in a large amount of money, we saw this wave of pledges that swept all the fast food companies and all the grocers in the U.S. So hopefully a couple decades from now, you won’t even be able to get eggs that aren’t cage-free in the U.S. So the impact there, in terms of the number of animals helped per dollar spent, already looks pretty inordinate. And it’s hard to say exactly what our impact was, because the momentum was already there. But I do think of that as sort of an early hit.
And meanwhile, you know, we’re also funding farm animal work in other countries, where things are less far along, because we also wanna seed the ground for those future wins. And we wanna be there for the tougher earlier wins too.
You know, in criminal justice reform, we have seen some effects as well. We picked that cause partly because we thought we could get wins there; it’s another one where it’s been on a slightly shorter timeframe. There was a big bipartisan bill in Illinois that we think will have a really big impact on incarceration in Illinois. That’s something that we believe is partly attributable to our funding, and we’re probably going to check it out through our history of philanthropy project at some point, just to kind of check ourselves, and better ourselves.
And in some of our other causes, well, the world hasn’t been wiped out by a pandemic, so that’s good. I mean, for some of our global catastrophic risk reduction causes, it’s much harder to point to impact; the nature of the cause is less amenable to that. I think that is some reason that it’s good for that not to be the only kind of cause that we’re in. And we have to [inaudible 01:08:09] more by intermediate stuff. So with AI, we’re happy if we see more people doing AI safety research than used to be doing it, especially if they’re doing it under our funding, which is the case at this time. But it’s an intermediate thing.
Robert Wiblin: What are some other approaches that different foundations take that you think are interesting or that you respect or have some successes in their history?
Holden Karnofsky: Yeah, for sure. We have a certain philosophy of giving, and we’ve kinda tried to emulate a lot of the parts we liked of other foundations. But there’s a lot of really different schools of thought out there that promise to be very effective. So our kind of philosophy is work really hard to pick the causes, work really hard to pick the people, and from there, try to minimize decision makers and emphasize autonomy for the program officers, and kind of have each thing be done by the person who knows the field best, kind of having a lot of freedom. And I think we also take the same attitude toward our grantees, where by default, the best kind of giving we can do is to find someone who’s already great, already quite aligned with what we’re trying to do, and just support them with very few strings attached.
I think you do see some other models that are very interesting. So the Bill and Melinda Gates Foundation, I think, has had a huge amount of impact, and I find a lot of what they do really impressive. And a lot of it has been, in many ways, a bit more, in some sense, top down. So I would say they’ve come up with what they want major decision makers, such as governments, to do, such as put more money into highly cost-effective things like vaccines. And they’ve just gone out and really gone lobbying for it, or soft lobbying, making the case for it. And I think their work has been a bit more prescriptive: they have the thing they want to happen and they try to push it to happen. They do everything, and they’re very big. But I think a lot of their big wins have looked more like that, and less like, let’s say, our cage-free work, where we saw something was already going on and we just tried to pour gasoline on it, in a sense.
The Sandler Foundation is different in a different direction, and we’ve written about them on our blog. One of the things they do differently from us is that they’re just very opportunistic. We try to develop these focus areas that we become very intense about, and we have people who are just obsessed with them and experts in them. The Sandler Foundation is more like: they have a whole bunch of things that they would fund if the right person or the right implementation came along. So they funded the Center for American Progress, they funded ProPublica, and they were kind of startup funders of both of these, and of the Center for Responsible Lending. And a lot of it is, there’s a whole bunch of things they could do at any given time, and they’re not all united by one cause, and they’re just waiting for the right team to come along. And they operate on a very small staff, just a few people.
So that’s interesting. Then there are foundations that are more kind of operating foundations, foundations that are doing the work themselves, doing the advocacy themselves. The Kaiser Family Foundation, I think, was originally maybe set up as a grantmaker and essentially became a think tank. And I think all those foundations have had really interesting impact.
You know, and then a final one, the Howard Hughes Medical Institute. They’re a science funder, and I think they’ve got a different philosophy from most of what we do, although we do some things like this. Rather than trying to identify where you’re gonna see the most impact in terms of lives improved and kind of back-chain from there, a lot of what they do is they just believe in basic science. They believe in breakthrough fundamental science, they believe in better understanding the basic mechanisms of biology, and they have this investigator program that’s set up to identify outstanding scientists and give them a certain amount of funding, which I think tends to be pretty similar from scientist to scientist. And we do have some work on breakthrough fundamental science, but they have just an incredibly prestigious program. They’ve been early to fund a lot of Nobel laureates, and they really have specialized their entire organization around basic science, which is a little bit of a different frame from most of what we do, although we have some of that going on too.
Robert Wiblin: How do you expect that you’ll be able to change policy, and has that changed since the 2016 election?
Holden Karnofsky: Sure. So I think the way we like to think of it is that we try to empower organizations that can have a productive effect on the policy conversation. There’s a whole bunch of different ways to do that. We have an old blog post called, I think, The Role of Philanthropic Funding in Politics. And something you can do is just fund people to develop ideas, new policy ideas, maybe a new way of thinking about an issue. That is different from, for example, funding elections.
You can also fund grassroots advocacy. You can fund people who are organizing around a common population or a common topic, like formerly incarcerated persons, and just support these people to organize and to work on issues they’re passionate about, and see where that goes. You can also fund think tanks that try to broker agreements, or try to take the new ideas that are out there and make them more practical.
So I think there’s a whole bunch of different things philanthropists can do, and a lot of the time, the longer term and higher risk your approach, in some ways the bigger impact you can have. So this book I mentioned before, The Rise of the Conservative Legal Movement, really argues that by focusing more on the upstream intellectual conversation, certain funders had a lot of influence on politics that they couldn’t have had if they’d just tried to jump in on an election and start funding candidates, where the candidates already know what they think and what they believe, and you can kind of pick from a very small set of options.
And so I think there’s a whole bunch of ways to fund politics, and what we’ve tried to do is when we pick an issue, we try to pick someone who can see all sides of the issue, and see how all the pieces fit together, and it’s really different for every issue.
Robert Wiblin: Do you think there’s good evidence that trying to change the intellectual milieu, or the ideas that people in the political scene are talking about is more effective than just funding candidates at the point of an election?
Holden Karnofsky: Well, I think it’s one of these things that’s just inherently incredibly hard to measure. To trace the impact of something like the conservative legal movement, over the course of decades, on the overall intellectual conversation: you not only can’t do it with randomized controlled trials, there’s not much you can do in terms of quantitative analysis, period. You kinda just have to be a historian, an ethnographer… And so I think it’s hard to say, and a lot of what we try to do is, rather than getting everything down to a formula, we try to just be as well informed and as reflective as we can be, and work with the people who are as well informed and as reflective as we can find. And so, when it comes to choosing when we’re going for the long term, the big win, versus when we’re going for the short term, the tractable thing, a lot of that comes down to: once we have a cause, we try to pick the person who we think can think about that question better than we can, and then we follow their lead. And that’s the program officer.
Robert Wiblin: What would you say to a potential philanthropist who wanted to do a lot of good with their money, but their one condition was that it had to be something that Open Phil wouldn’t fund, or Good Ventures wouldn’t fund?
Holden Karnofsky: Sure. So, I mean, I’m kind of a strange person to ask about this, but I do have some answers nonetheless. First off, there are things we just don’t fully fund for various reasons. We don’t fully fund GiveWell top charities; that’s for reasons that have been laid out in great detail in the cause prioritization post. There are organizations where we don’t wanna be too much of their funding. We put out an annual post on suggestions for individual donors, so that’s a possibility. And then the other thing is, there are the EA Funds, some of which are run by people who work at Open Philanthropy. And those exist specifically to give people who work here the opportunity to fund things that Open Philanthropy can’t or won’t do. And there is some stuff in that category. So occasionally we’ll be looking at a giving opportunity, and we’ll say that it poses too much of a communications or a PR risk, or there’s some other reason, or we don’t have full agreement among the people who need to sign off on grants; we try to minimize the number of veto points, but sometimes we still won’t have agreement.
So an example of something that I am somewhat interested in as a philanthropist is the idea of experimenting with individual re-granting: just taking someone who I think it would be interesting to give a chance to try their own philanthropy, and seeing if they would do it differently from me. We might have enough in common in terms of values, worldview, and goals that if we granted a bunch of money to them to re-grant, I could feel comfortable that the money would not be used terribly, that the person would work really hard to optimize it, and they might find something much better than what we could do. And if they couldn’t, they might give it back. So that’s something that we’ve talked about experimenting with, but there are some logistical obstacles, and there’s also some internal… I think there’s a higher level of excitement from me than from some of the other people, such as Cari and Dustin, for this idea, and so we aren’t necessarily experimenting with it at the level or pace that I would if it was my money.
And I think that’s a pretty interesting idea: just look around yourself, because there’s someone I know who really might want a shot at being a philanthropist. Who knows what they would come up with? And they have a lot in common with my values. That’s something that I’d be pretty interested in, and that I’ve sometimes encouraged people to try out.
Robert Wiblin: And one of the big benefits there is that they can just use kinda the unique knowledge that they have that’s very hard to communicate.
Holden Karnofsky: Exactly.
Robert Wiblin: Yeah.
Holden Karnofsky: Yeah, exactly. And one way of putting it is, let’s say, just to use some made up numbers, you had a billion dollars and you gave a million to someone who you wanted to see what they would do. The idea would be that their first million dollars is better than your last million, because, in some sense, their ratio of human capital to money spent is gonna be extra high for that money, and so they might put extra time into finding things that you don’t have time to chase down, ’cause you can only run so big of an organization without starting to run into organizational issues that we don’t necessarily want to contend with yet.
So I think that’s part of the thinking. And also part of the thinking, and this is a theme running through Open Philanthropy, is that when we’re doing hits-based giving, we don’t expect every good idea to be communicable. We think sometimes you have deep context, you have relationships, you have expertise, you’ve thought about stuff for hundreds or thousands of hours. And sometimes it’s just better to give someone a shot, as long as you’re controlling the downside risks.
Robert Wiblin: Hmm. I guess if you had billions of dollars to invest in a for-profit vehicle, you probably wouldn’t found a conglomerate and then decide what businesses to run as this enormous conglomerate. You would give it out to lots of different entrepreneurs who could start businesses.
Holden Karnofsky: Right. You would invest in a bunch of different folks, which is what people do.
Robert Wiblin: Okay, let’s shift gears for a minute and talk more about artificial intelligence safety, which I think is one of the focus areas that you’re most involved in, right?
Holden Karnofsky: Yep.
Robert Wiblin: What are the main categories of work on AI that you’re involved in funding or supporting?
Holden Karnofsky: Sure. We think potential risks from advanced AI are a really great philanthropic opportunity in a sense, because we think it’s very important and very neglected; we’re a little bit less sure of the tractability, but there you are. So some of the things that we do in this area… First off, just to set the stage a bit. I don’t know if you’ve had previous or future podcasts on this, but-
Robert Wiblin: We’ve had a few, but maybe not everyone’s heard them.
Holden Karnofsky: Okay. Okay, cool. So to set the stage, we have a view that it should certainly be possible, in principle, to develop AI that can be incredibly transformative. And when we use the term transformative AI, we’re talking about AI that can cause a change in the world that might be roughly comparable to what we saw in the industrial revolution or the agricultural revolution. So sometimes when people talk about AI, they make fun of the idea of a singularity, this dramatic change in the world that sounds kind of eschatological. We’re not necessarily looking at what one might call a singularity, but we might be looking at a level of world transformation that does have historical precedent and is incredibly dramatic.
And so the industrial revolution is something where the world changed incredibly fast, and it’s almost unrecognizable afterwards compared to beforehand. And we think AI could bring about that kind of change. The reason for that is that we think when those giant transformations happen, a lot of the time the reason is dramatic changes in technology, dramatic changes in what’s possible and how the world works. And that’s a lot of why humans have had the outsize impact on the planet that we have, why we’ve driven a lot of other species extinct despite the fact that we’re quite unimpressive compared to a lot of those animals, physically speaking. And so, when we think about AI, we think about the things that humans do in order to develop new technologies and transform the planet according to their wishes, for better or worse. It seems definitely possible in principle to us that the same things could be done faster and more effectively if they could be implemented in a computer that was carrying out, in some sense, the same information processing, the same inference, the same science, but with a great deal more computing power.
And the other thing is that when we look at the details of the situation, and this is something I don’t think we’ll have time to get into in detail here, but when we look at what our technical advisors think, and our read of the situation, we think it’s actually somewhat likely this could happen fairly soon. So the whole field of AI, of computer science, is only a few decades old, only something like maybe 60 years old, and we can imagine, maybe not likely, but definitely possible, something like at least a 10% probability that sometime in the next 20 years we could reach a threshold where AIs are able to do incredibly transformative things. They don’t necessarily need to be able to do everything that humans can do, but if you had an AI that, for example, could do science, could read the existing scientific papers, propose experiments, and greatly accelerate science, that could be transformative. And we do think that is somewhat likely, not necessarily likely in an absolute sense, but in the sense of 10% or more.
And you know, 10% or more in the next 20 years, a lot of people would look at that and say, who cares. But as hits-based philanthropists, we work on very long timeframes, and we’re happy to work on things that may be less than 50% likely to have an impact, but if they do have an impact, it’ll be huge.
So when we look at this situation, where the world could change in this very dramatic and global and sort of irreversible way, and it could happen in the next 20 years, then we start to think to ourselves: well, if there’s something we could do to help the world be more prepared for that transition, so that it happens in a way that’s safer, that is more conducive to human empowerment, then we could imagine this becoming a win kind of the size of some of the wins I talked about before, with the green revolution and whatnot. And that fits very well into the hits-based giving framework.
So then the question becomes, what could we do if this is going to happen? And again, we think it’s only a decent chance. Is there anything a philanthropist could do to make it more likely this happens in a good way instead of a bad way? And so then that gets to our picture of what the AI risks are. One thing about AI is that if you have this AI that is helping you do science at an accelerated rate, helping you develop new technologies, in some ways that is a source of power. And that could be a source of concentration of power, potentially, depending on the exact details of how AI develops and what order it occurs in. You could imagine something happening like with AlphaGo, where something went from not being able to beat any professional humans to being able to essentially crush all professional humans in the span of a few months.
You could imagine that if you had an equivalent of that for doing science, you could have a situation somewhat analogous to the beginning of the cold war, where there were just a couple of countries that were ahead of everyone else on a certain branch of science, and that gave them sort of inordinate, concentrated power, and it was very scary. And you can imagine this happening with AI. This could be one of those technologies where developing it faster than others causes power, for a time, to concentrate and become imbalanced. So that’s one risk that we see. And we refer to misuse risk as the risk that, let’s say, a state uses advanced technologies that come from advanced AI for ends that we think are not conducive to human flourishing. So that’s misuse risk.
And then, the other risk we see from AI is what we call accident risk. If the AI itself is given a cost function or an objective function that is not very well designed, or just a little bit carelessly designed, or just isn't perfectly capturing what we really hope to be optimizing for, I think we could end up with, essentially for the first time ever, an intelligence that is able to do really important and really broad-scope things that humans can't, and that also wants something that's opposed to human flourishing. This idea has been spelled out at some length in the book Superintelligence by Nick Bostrom, so I won't go too much into it. But we think it's an area where we can imagine technical research really making it easier to build an AI that we can be confident is going to help humans accomplish the things they're trying to accomplish, instead of pursuing some degenerate, jacked-up objective function and maybe causing a lot of damage.
So that's kind of how we see the risks. Broadly, and I can go into more detail on this, but broadly I would describe our core intervention as field building. Even a 20-year time period, which I think is a little bit aggressive, and we're only saying there's at least a 10% probability of it, even that is a really hard time period to make plans about. This is a technology that doesn't exist yet. And so we should not be confident in our ability to know the future. We shouldn't be saying that AI will be designed in very particular ways and doing very particular things. That's not what our attitude is about. But we do believe that if this happens, the world would be a lot better off if there's already a large, robust, excellent field of experts who have spent their careers thinking very deeply about what could go wrong with AI, and what we could do to prevent it.
And so, for this accident risk that Nick Bostrom writes about, if we develop extremely powerful AI in 20 years, and there's a field of maybe hundreds or thousands of people who have been thinking about the different ways AI can go wrong and how to build a robust, safe and aligned AI, we think we'll be a lot better off than under the status quo, where there's a very small, fringe sub-community that thinks about it. And similarly for some of the misuse risk and the geopolitical challenges: we think if people have been thinking about what kind of imbalance of power could be created, how it compares to the situation with nukes, how it's different, the world will be better off than if it just catches everyone by surprise and everyone's scrambling to improvise a way to navigate this situation peacefully.
And so, our intervention is field building, and as I mentioned earlier, field building is something we think philanthropy has a track record of doing. So in a nutshell, that’s what we’re trying to do. We want there to be more people who have made it their life’s calling and their life’s career to think about how things might go if there was a very sudden transition to a very powerful AI. And that’s our kind of bet on how to help.
Robert Wiblin: So what kind of things could you fund, or what developments could there be in the whole AI space in general that you think would make the biggest difference in the next couple of years?
Holden Karnofsky: Sure. I'm gonna divide it up again into the technical front and the more geopolitical or strategy side. So on the technical front, the thing that I've said we wanna see is a major academic field of people who are basically doing technical research that reduces the odds of a really bad accident from AI. And there are some examples of work that's been done on this. One challenge that you could imagine leading to a very bad accident with AI is that with current reinforcement learning systems, deep reinforcement learning systems, you kind of have to hard-code what their objective is. So you take something and say, for example, you're gonna play this Go game over and over again, and whenever you end the game with more stones or controlled spaces than the opponent, that's good, and whenever you end with fewer, that's bad. And if you define good and bad in this very clean, tight, algorithmic way, then from there, the AI is able to very cleverly and intelligently get that outcome.
So it knows what outcome it's trying to get, and it becomes very good at getting it in very creative ways that look a lot like human creativity and intuition. You could imagine this becoming a problem if in the future AIs are very powerful and very broad in scope. You might have a situation where a very well defined objective, like maximize the amount of money in this bank account, is something that AIs can find very clever and very creative, and maybe also illegal, and maybe damaging and bad, ways of achieving, whereas if you try to give the AI a goal like, "Hey, can you please stop other AIs from mucking things up?", that's a poorly defined objective, and you aren't able to give it the same kind of pattern of learning from reward and punishment and learning how to optimize. And so there is this very nascent field that one might call reward learning, which is trying to transmit these fuzzy, poorly defined human ideas of what we want to AI, so that the AIs are optimizing for things that we ourselves don't know how to describe, but we know them when we see them.
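To make the contrast concrete, here is a minimal sketch, with hypothetical function and class names, of the difference between a "clean, tight, algorithmic" objective and a goal we only know how to recognize informally:

```python
# A "clean, tight, algorithmic" objective of the kind described above, for a
# Go-like game. The final_position object and its score() method are hypothetical.
def game_reward(final_position, player, opponent):
    """+1 if the player ends with more stones/territory than the opponent, -1 otherwise.
    Easy to state exactly, so a reinforcement learner can optimize it directly."""
    return 1.0 if final_position.score(player) > final_position.score(opponent) else -1.0

# By contrast, a goal like "stop other AIs from mucking things up" has no
# obvious scoring rule to write down; that gap is what reward learning targets.
def fuzzy_goal_reward(world_state):
    raise NotImplementedError("no clean, algorithmic definition available")
```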
There's a paper that was a collaboration between OpenAI and DeepMind on this, Learning from Human Preferences, where they had humans essentially say, "I know it when I see it." They looked at the behavior an AI was doing, and they manually trained the AI to do things, like a backflip, that they knew how to recognize but didn't know how to algorithmically describe. And there are other areas of reward learning, like inverse reinforcement learning, which is inferring an agent's reward function from its behavior.
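Here is a minimal sketch of the general idea behind learning a reward model from pairwise human comparisons, in the spirit of that line of work. This is not the paper's implementation; the features, simulated comparisons and hyperparameters below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reward model: r(x) = w . x over hand-made trajectory features.
true_w = rng.normal(size=4)     # hidden "human taste", used only to simulate labels
w = np.zeros(4)                 # the reward model we will learn

def reward(x, weights):
    return x @ weights

# Simulated human comparisons: pref = 1 means trajectory a was preferred over b.
def make_pair():
    a, b = rng.normal(size=4), rng.normal(size=4)
    return a, b, 1 if reward(a, true_w) > reward(b, true_w) else 0

pairs = [make_pair() for _ in range(200)]

# Fit w by gradient ascent on the Bradley-Terry log-likelihood:
# P(a preferred over b) = sigmoid(r(a) - r(b)).
lr = 0.5
for _ in range(500):
    grad = np.zeros_like(w)
    for a, b, pref in pairs:
        p = 1.0 / (1.0 + np.exp(-(reward(a, w) - reward(b, w))))
        grad += (pref - p) * (a - b)
    w += lr * grad / len(pairs)

# The learned w now scores new trajectories, standing in for the fuzzy
# "I know it when I see it" judgment the human comparisons provided.
print("correlation with hidden taste:", np.corrcoef(w, true_w)[0, 1])
```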
Another thing that might cause a bad accident is that a lot of AIs are very good at making decisions as long as what they're seeing is similar, in some sense, to what they've seen before. They were trained on a certain set of inputs, and that's how they learned. Let's say they looked at a bunch of images that had cats and dogs in them, and that's how they learned how to tell apart cats from dogs. Then if you show them completely new images that come from a different distribution, maybe they have some new Instagram filter on them, they might behave in extremely strange ways.
And so there's this idea of adversarial examples, where you can design images that look for all the world like a dog, but will be classified by an AI, with perfect confidence, as an ostrich. And that's the kind of thing you can imagine being dangerous: something that is very smart and able to come up with clever, creative, high-technology ways to do things, but that breaks in this way where it does completely the wrong thing because it's in an unfamiliar situation. Learning how to deal with adversarial examples and build AIs that can't be screwed with in that way would be really nice. So one of the things I'd like to see in the next two or three years is more top AI researchers spending their lives on something like the adversarial examples problem, or the reward learning problem.
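A toy illustration of the adversarial-example phenomenon against a simple linear classifier; the weights and the "image" are random numbers, and real attacks such as FGSM do the analogous thing to deep image classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=100), 0.0            # toy classifier: label "dog" if w.x + b > 0

x = rng.normal(size=100)
x = x if w @ x + b > 0 else -x              # ensure x starts out classified "dog"

eps = 0.25
x_adv = x - eps * np.sign(w)                # tiny per-pixel step against the score

print("original score :", w @ x + b)        # positive -> "dog"
print("perturbed score:", w @ x_adv + b)    # typically negative -> misclassified
print("max pixel change:", np.abs(x_adv - x).max())  # bounded by eps
```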
And one of the things we're hoping to do is fund professors to work on problems like these. We also have an AI fellows program, where we're choosing winners as we speak, so that more junior students are able to say, "Hey, I can make a whole career out of this. Why don't I start now?" So I think that's a major area where things could be a lot better, and one of the roles we're hiring for is directly related to that. And then the other area is the other thing I talked about: the misuse and geopolitical risks. And that's where I think it could simultaneously seem kind of silly, but also be very important, for people with a lot of knowledge of international relations, politics and geopolitics to game things out. What happens if there's a sudden increase in technological development capabilities because of AI, and it doesn't come to all the countries at the same time, in the same way? What is the result of that gonna be, and are there agreements or preparations that we can make in advance so that it's a less destabilizing situation?
So again, that's where we are actively looking for people who we think can devote a career to gaming that out, thinking about what the implications might be. And we are already seeing the growth of people who are policy analysts on other AI risks, for example the risk of unemployment. For us, the highest stakes are in the things I described, the things where it seems most important to get ahead of the curve and not respond to them as they're happening. So finding people who are gonna devote their lives to these things is very high on our list of things we wanna see happening.
Robert Wiblin: So you mentioned two different potential hires there, someone focused on technical work and someone focused more on policy and international relations. Could you go into a bit more detail of exactly the kind of people that you’re looking to hire there in case someone who’s listening might be suitable?
Holden Karnofsky: Yeah, for sure. On the technical research side, one of the challenges we have is that there are a bunch of people working on technical agendas relevant to AI safety and AI accidents. There's the Machine Intelligence Research Institute doing their research. There's the Future of Humanity Institute doing theirs. There's a nonprofit called Ought, which we're working on a grant to, that does something else. There are various academic labs that we've funded. There's the Center for Human-Compatible AI at Berkeley. It's a growing field, and the thing there isn't right now is anyone making it their full-time job to understand the pros and cons, the strengths and weaknesses, of all these different lines of research, adversarial examples too, plus the work going on at the labs: OpenAI, DeepMind, Google Brain, et cetera. So you have a lot of researchers who are working on their thing, their own technical path. They might be somewhat aware of other people's work on the side, but they're also very busy doing their own research.
And the thing that we would love is someone whose entire job is to understand, at a technical level: what is the technical problem being solved? What are some of the algorithms that have been come up with to solve it? What are they doing well? What are they not doing well? How could this lead to an AI that is robustly aligned, and that we have nothing to worry about in terms of accidents? How could this fail to lead to that, and what else might we need? I think if that person existed, a, they would be really useful to AI researchers trying to figure out what to focus on, because they would know which lines of research look especially neglected or especially promising, and which ones seem like a good fit for different people based on their skills.
I think this person could also advise us really usefully, to help us determine where our biggest priorities should be. What kind of researchers should we be looking for? Should we be spending more time looking for people with a security background, with a math background, with an ML background? They could really advise us, because we have a lot of different ways we can do things.
Our current AI technical safety team really has its hands full just designing funding mechanisms, so ways to provide fellowships and grants to academic researchers and ways to source the researchers and make sure we fund the people who are interested in this work. Someone who could really specialize in just digging in on these technical agendas and helping us understand which ones are most important to support, and which lines of research are most promising to get more feedback on and to provide more support for, that would be really excellent.
Robert Wiblin: Is anyone really qualified to answer those questions, or is it just a matter of no one really knowing, but some people having better guesses than others?
Holden Karnofsky: Yeah, I think it's definitely the latter. That's a common theme in Open Philanthropy's work: we're doing philanthropy, so there aren't conclusive answers to any of these questions. A driving idea of Open Philanthropy is that if you take something really seriously and you spend your life trying to understand all the considerations, and you're making a considered judgment, that's probably better than if you have thought about something for 10 minutes, or an hour, or maybe even a week.
Yeah, I mean I don't think we're ever going to have the answer. Science especially is hard to predict and hard to evaluate. It would be really great if there was just someone who lived and breathed this stuff. If they did, and they seemed to have reasonably good judgment and answered questions well, I would definitely trust their opinions on who is doing especially exciting research that needs to be amplified. I would trust their opinions more than my own, and frankly, I would trust them more than the people doing this today, who are very bright. I think someone who has spent that kind of time on this stuff, I would put more weight on their view. I think that would be to the benefit of our funding and researchers' career choices and other things.
Robert Wiblin: I heard on the grapevine that you're also considering hiring someone who would become a world expert on when we should expect AI to be able to do different things, so looking at timelines for the development of artificial intelligence capabilities. Is that right?
Holden Karnofsky: Yeah. That's absolutely right. That's another topic where I look at the situation. Open Philanthropy has our view that transformative AI is at least 10% likely some time in the next 20 years. That view is based on conversations with our technical advisors and on some amount of internal analysis. I mean, I don't think we've done a wonderful job on it. I don't think that we have considered all the different arguments and counterarguments. I don't really think, frankly, that anyone else has either, because the task of trying to estimate when AI will be able to do what is just extremely different from the task of doing AI research itself.
I think this is … One of my issues with the current dialogue around AI is people say, “Well I think AI is really far off. Or I think transformative AI is coming really soon.” What are these statements based on? I think one of the best things anyone has come up with, to base them on, is surveys of AI researchers. You send them out a form, you say, “When do you think we’ll have artificial general intelligence or human level or whatever your preferred term for it is?” Then, you average the answers.
I think the problem is what you’re doing is you’re going to people who spend their entire lives trying to get today’s state-of-the-art algorithms on today’s state-of-the-art hardware to do the most interesting and breakthrough thing possible today in some particular subfield. That just doesn’t have a heck of a lot to do with the task of estimating when AI will be able to do what. I think when we have talked about the latter topic, we’ve gotten into a lot of conversations about what are we up against here?
I mean, if we want an AI to be able to do certain things better than a human can, it becomes relevant to look at how much computing power today's top AIs have compared to today's humans, and estimate what the brain is doing, in computing terms. What can we expect from the future, in terms of when we will have enough AI hardware to run something similar to a mouse, or a monkey, or a human? These are questions that just don't come up if you're trying to build an image classifier or a reinforcement learner that finds novel ways of beating video games or classifying images.
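A back-of-the-envelope sketch of the kind of comparison being described; every number below is an illustrative assumption, not an estimate from the conversation or from Open Philanthropy:

```python
import math

# All figures are placeholder assumptions for illustration only.
synapses = 1e14                 # assumed synapses in a human brain
signals_per_sec = 1e2           # assumed average signalling rate per synapse
flop_per_signal = 1             # assume ~1 FLOP per synaptic event
brain_flops = synapses * signals_per_sec * flop_per_signal   # ~1e16 FLOP/s

hardware_flops_today = 1e15     # assumed FLOP/s available to a large AI project today
doubling_time_years = 2.0       # assumed doubling time for available compute

years_to_parity = max(0.0, doubling_time_years * math.log2(brain_flops / hardware_flops_today))
print(f"brain estimate: {brain_flops:.1e} FLOP/s")
print(f"years to hardware parity under these assumptions: {years_to_parity:.1f}")
```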
I think it's one of those things where it's a fundamentally different field. It's a different discipline. I don't believe there's anyone right now who has made it their entire life's calling, and their life's work, to understand the different arguments about when AI will be able to do what. I don't think that's something we can have definite answers on. But if someone is thinking really hard about all the different facets of the problem, and all the different arguments and counterarguments, I would put more weight on their view than on my own. I would put more weight on their view than on most others'. I don't think there's anyone like that right now. That's someone we would love to hire.
Robert Wiblin: Yeah. I was a little bit surprised when I heard you were going to hire someone for this. It seemed like it might just be a problem where we'd already done what we could, and it was just fundamentally uncertain. It sounds like you think there are lines of research that haven't been done. We've surveyed people who have some relevant expertise, but not really the most relevant expertise. Someone could just become the world expert on this, if they spent a few years on it.
Holden Karnofsky: Yeah. That’s roughly how I feel. I think there’s these questions, like the ones I was mentioning, about how you translate what the brain is doing into computing power. They just haven’t been studied very much. It’s not anyone’s job to study them. A lot of the best analysis I’m aware of today is just really informal stuff done by people in their spare time.
Robert Wiblin: Yeah [crosstalk 01:38:38].
Holden Karnofsky: Yeah. I think … What did you say?
Robert Wiblin: AI Impacts is one organization that’s done some of this, yeah.
Holden Karnofsky: Yeah. That’s a very small organization. We support them.
Robert Wiblin: It’s like one or two people, I think.
Holden Karnofsky: Yeah, exactly. I think this is an incredibly underdeveloped field relative to machine learning itself. I think there's extra work to be done. There are lots of arguments that haven't been fully reckoned with. I do believe that. I think a decent counterpoint to this is, "Look. You can do all the work you want to try and understand the brain. It is not possible to predict the future 20 years out. No one's ever done it well." That's an argument someone could make. I think it's a decent argument.
I think there's also a counterpoint to that. One, I think people do somewhat overestimate how futile it is to predict the future. We have an ongoing project on this: we have a contractor working right now on looking back at a bunch of 10-, 20- and 30-year predictions that were made decades ago, and scoring them according to whether they came true or false. We'll see how that goes.
The other thing I'd say is that even if you can't know the future, it seems possible that you can be well calibrated about the future. Look at, for example, Slate Star Codex. Every year, that blog puts up a whole bunch of probabilities about things the blogger is not an expert on and doesn't necessarily know a whole lot about. There will be a probability that things improve in the Middle East, a probability that someone wins a presidential election. This person doesn't necessarily know what's going to happen. What they do know is they have some knowledge about their own state of knowledge, and some knowledge about their own state of ignorance.
What they’re not able to do is accurately predict what’s going to happen. What they are able to do is make predictions such that when they say something is 90% likely, they’re wrong about 10% of the time. When they say something is 80% likely, they’re wrong about 20% of the time. When they say something is 50% likely, they’re wrong about half the time. They know how likely they are to be wrong, which is different from knowing what’s going to happen.
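A minimal sketch of what checking calibration looks like in practice: bucket predictions by the stated probability and compare with the fraction that actually came true. The prediction data here is invented purely for illustration.

```python
from collections import defaultdict

# Made-up (stated probability, outcome) pairs for illustration only.
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.8, True),
               (0.8, True), (0.8, False), (0.5, True), (0.5, False)]

buckets = defaultdict(list)
for stated_p, came_true in predictions:
    buckets[stated_p].append(came_true)

# A well-calibrated forecaster's hit rate in each bucket tracks the stated probability.
for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {stated_p:.0%} -> came true {hit_rate:.0%} of the time (n={len(outcomes)})")
```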
One of the things that I'd look for in this timelines person is a deep familiarity with the science of forecasting, which is something that we're very interested in and try to incorporate into our grantmaking. A deep familiarity with that, and an understanding of what is realistic for them to say 20 years out. If someone said, "I've got it. Transformative AI is coming in the year 2031, on February 12th," I would just say, "That's ridiculous. I don't care how much you know about the brain. That's not something someone can know."
If someone hands me a probability distribution and I understand that what they’re making is partly a statement about their own uncertainty, but their own uncertainty is a more thoughtful uncertainty than mine, because they’ve contended with these questions more, that’s someone whose opinion I would take pretty seriously. I think that would be a big step forward compared to anything we have right now.
Robert Wiblin: That was three positions. One looking at research agendas within technical AI research. Another looking at research agendas within policy, international relations and strategy. A third looking at developing timelines: how likely it is that AI will have particular capabilities at particular dates.
Holden Karnofsky: Yeah.
Robert Wiblin: Are those jobs going to be advertised in the next couple of weeks or months?
Holden Karnofsky: I think they’ll probably be up when this podcast goes out.
Robert Wiblin: Okay.
Holden Karnofsky: I think they’re actually going up tomorrow.
Robert Wiblin: Excellent, okay.
Holden Karnofsky: Yeah, yeah.
Robert Wiblin: I’ll stick up links to that.
Holden Karnofsky: Cool, yeah.
Robert Wiblin: If you’re interested in any of those roles, then take a look.
Holden Karnofsky: The other thing I would add is that we put out these roles because if we see someone who is really ready to go in one of them, we're going to be very excited, and we can hire them for sure. But partly we posted the roles to give people a sense of what you could grow into if you become a generalist at Open Phil. If we don't find someone who is ready to take a role on today, it might be that someone who joins as a generalist research analyst at Open Phil could be ready for that kind of role in a couple of years.
So that's another thing: if these roles sound interesting to you and they sound like something you'd love to do, but you don't feel like you're necessarily the best qualified person for them today, what I think you should definitely do is apply for our generalist research analyst role, which I imagine we'll talk about later. It's just an example of the kind of thing you could end up growing into if you're a good fit for it.
Robert Wiblin: Six months ago or a year ago, you joined the board of the machine learning research nonprofit OpenAI, right?
Holden Karnofsky: That’s right.
Robert Wiblin: Do you feel you’ve made a useful contribution there since you joined?
Holden Karnofsky: My general feeling is with OpenAI, first, let me just talk about what that grant is.
Robert Wiblin: Sure.
Holden Karnofsky: What we’re trying to do with it.
Robert Wiblin: Yeah.
Holden Karnofsky: OpenAI is, essentially, a nonprofit AI lab. We've made a bunch of grants to safety organizations, to the Machine Intelligence Research Institute, the Future of Humanity Institute. These are organizations dedicated to working on AI safety. OpenAI is something different. OpenAI is both working on AI safety and working on advancing the state of the art in AI research. I think that comes with a different profile. It's a significantly more expensive undertaking, and there are pros and cons in terms of the ability to contribute to safety.
An organization like that is, in many ways, better positioned to do things about certain elements of AI safety than safety-focused organizations. For example, one of the things I said we really want to do is field building. We want there to be a field of people who do AI safety research. One of the challenges we run into is that we need people to see that there are careers in that field.
One of the best things that could happen for AI safety is if it became common belief and common knowledge that there are great career tracks and great jobs available. Some of the most desirable jobs, just generically, are jobs at a lab like OpenAI, DeepMind or Google Brain, a lab right in the heart of the field, right at the cutting edge of capabilities. To me, groups like OpenAI, DeepMind and Brain have outsized influence on how safety is perceived among ML researchers, and how likely ML researchers are to see it as a legitimate career, as a place where they can really have a good career trajectory, do great work and be surrounded by great people. That's one way in which a place like OpenAI is in a special position.
Also, when you think about some of the strategic and geopolitical challenges I mentioned, I do think that if and when, rather I should say if, if we’re in a situation where transformative AI looks like it’s going to be developed quite soon, I’ve talked about some of the issues that raises geopolitically and some of the balance of power issues. I think it’s quite likely that the labs that are advancing the state-of-the-art are going to be seen as the experts, are going to have the people who are in highest demand to consult, to weigh in on how these situations should be handled.
I think there, again, there are certain ways in which OpenAI is in a special position to affect how that kind of thing happens. I also think that, as a lab that works on the state of the art, they face considerations that a safety-focused organization does not, such as when to be open. When they have research that could advance the state of the field and could be a big public good, but that could also potentially, at some future date, I wouldn't say today, be dangerous in the wrong hands, what kind of research should be shared and what shouldn't be?
I think OpenAI is, in general, I think these industry labs are just incredibly important. I think what they do and what they do for safety just sets the tone for how all AI researchers, and how all people, think about safety and what they do about it. I think in many cases, industry labs, unlike academic labs, are places where there’s not much opportunity for Open Phil and our funding to make much difference.
OpenAI is an exception. We jumped at the opportunity because they’re kind of industry conceptually. They’re working on the state-of-the-art. They’re very heavy on large scale experiments and stuff like that, but they’re also nonprofit. We jumped at the opportunity to become closely involved with them in a way that we felt we might play a role in them putting more of an emphasis on best practices, from the perspective of reducing AI risks.
What we did is we made a major grant to OpenAI. Open Phil holds a board seat, which is currently filled by me, but the seat is held by the organization. The idea, as we put it in our grant writeup, which is available, is that we want to have the opportunity to make our case to OpenAI about how to do the best things for reducing AI risks. Then, to the extent that OpenAI is doing the best it can, we want to support it. We think that a lab that is setting a good example and doing the right things for safety is a positive impact on the world.
When I talk about the impact that's had since then, it hasn't been that long, but I want to be clear that what I don't want to do, because I don't think it's appropriate, is try to disentangle what OpenAI has done from what I have done as a board member. As a board member, I'm part of the organization, and I don't think it's particularly appropriate to talk about, let's say, every internal conversation that's been had. Instead, the question that we are going to ask at renewal, which is a couple of years off, but that we're always asking in the interim, is: is OpenAI on track to become an organization that's really focused on safety, and really doing everything it can to maximize its contribution to improving AI safety?
Basically, if the answer to that question is yes, we're going to be happy, whether that is attributable or partly attributable to us or not; we don't care. We think if they're a good influence, we're going to be happy to support them and continue the relationship. If the answer is no, then we're not. So, without going into any particular detail, I would say that I feel optimistic about this at the moment. I think OpenAI leadership is genuinely, thoroughly and passionately committed to safety. I also think they have had plenty of disagreements with us about exactly what that means. I think we've had a lot of fruitful conversations. I do feel optimistic that we're going to get to a good place.
Robert Wiblin: As recently as 2012, you wrote a blog post called Thoughts on the Singularity Institute where you explained why you didn’t think that risks from artificial intelligence were as serious as some other people in the Bay Area were making out. I guess, now, you’re one of the most important players in the entire field. I guess the fact that you changed your mind makes me somewhat more confident that we’ve reached the right conclusions. You certainly weren’t biased in favor of reaching this view to begin with. How do you feel about having basically done a, well, it seems like a 180? Is that how you would perceive it? Or do you think it’s more of a subtle change of opinion?
Holden Karnofsky: I wouldn't describe it as subtle. I don't know if I'd go all the way to a 180. I think if you look at the post that I wrote, it was pretty focused on a particular organization. I think I did try to limit my claims in scope, although obviously people took a certain tone from it. I tried to limit my claims to saying that the things being said about the nature of this risk didn't make sense to me, and so I didn't recommend supporting this organization, which is different from saying, "There's no AI risk."
That said, yeah, I've changed my mind. I would not call it a subtle change. I would say I've changed my mind quite a bit, certainly about how I see the nature of the alignment problem. I tried to spell this out in the post Three Key Issues I've Changed My Mind About. At that time, it didn't make a ton of sense to me. It's not that intuitive, generally, that you could have an AI that is very powerful and very smart in some ways, but that is also pursuing a really destructive goal, and that there's nothing people can do to stop it. There are various objections you could make that say, "It shouldn't be that hard."
It's not that it's guaranteed. I basically am a believer, and always have been, in what is called the orthogonality thesis: that you definitely could build something very smart with terrible values. But there are certain arguments that maybe it shouldn't be that hard to do it right, that maybe it shouldn't be that hard to get a computer to do what you want it to do, roughly speaking. At the time, as a complete non-expert, those counterarguments, the arguments that it shouldn't be that hard, made sense to me. But that didn't really matter on its own; what mattered more was that I just didn't feel that the top people in the ML world were on the side of MIRI, or were really paying attention to them.
I thought that was a bad sign. I believed that there was probably a really good counterargument out there somewhere, even if it wasn't the exact one I had in my mind, for why it was going to be much easier to avoid this kind of alignment problem than MIRI was saying. That is something where I think I was just wrong. I think some of the things I've been talking about apply here: I mentioned it's a very different thing to work on AI all day versus to predict when AI will be able to do what.
I think it’s also very different to work on AI all day and to think about future potential risks from AI alignment. I think those are different things. I think that I have changed my mind about just how to interpret the lack of dialogue around that, and the lack of endorsement there. I think, in some ways, there’s still a lack of dialogue. I don’t think a ton has necessarily changed, but I’ve definitely changed my mind how to interpret it. I spelled that out a fair amount.
I mean I look back at that post, and I still think some of the specific things MIRI was saying that I was criticizing them for, I haven’t necessarily come around to their side of those specific debates. I think on the general issue, I think they were really early pointing to what is now being called the Alignment Problem. I think that’s one of the most important problems to be thinking about in the world today, and they get a lot of credit for being early to it. I definitely changed my mind on that.
Robert Wiblin: This leads into the next topic I was going to bring up, which is that Open Phil doesn't have the resources to look into every debate that comes up that's relevant to the grants you're making or the problems you might work on. To some extent, you have to rely on expert judgment, because you just can't reinvent the wheel every time. At the same time, sometimes you probably have to deviate from expert judgment, otherwise the things you do are going to be very boring. You're just going to be working in areas that are very crowded, because you're just following a consensus. How do you thread that needle of deciding when to trust the experts and when not to, and when to make a contrarian bet?
Holden Karnofsky: Sure, yeah. The question of how to weigh expert consensus and all the rest of the considerations is definitely one that we struggle with. I mean I think, in general, Open Phil wants to be set up to do unconventional, risky things that go against consensus. A lot of the biggest hits, I think, are going to be when maybe a small set of experts, who are looking at something in an unusual way, have an insight. A lot of times just because of the way the world is, a lot of people will be resistant to that and it won’t necessarily go over that well.
It might be several decades that people actually decide the original contrarians were right. We want to be able to make bets like that. That is how we’ve tried to set ourselves up. As far as Open Phil’s philosophy goes, the right time to be contrarian is when the expert who is making the most sense to you is a contrarian. I think a bad time to be contrarian is when all the experts say X, and it seems to you like not X. All the experts are saying X, and you don’t know … And it’s like every single one of them knows more than you do.
You don't know anyone who knows all the things they do and agrees with you. That's a bad time to be contrarian. And I think, with an incorrect interpretation of who is an expert in what, that was one of the [inaudible 01:53:07] I was using when I wrote the critical things about MIRI, then called the Singularity Institute. What I was seeing was that I didn't see any intersection in the Venn diagram between deep familiarity with AI systems and endorsement of the kind of issues that were being raised.
A ton of what I do personally, in my job, is hiring and managing; I consider that my main job. It's all about deciding whom to trust and for what, what everyone's area of expertise is and what they're good at. I certainly don't have time to understand everything myself. I basically don't understand anything these days as well as I'd like to, or even close.
What I see as my goal, and the thing that I try to optimize for, is to hire people who are just obsessed with their fields, really sharp, as deep in their field and as knowledgeable about it as it's reasonable to be, and familiar with all the people who are experts on subtopics within their fields. Then, when someone who is as well positioned to understand the situation as anyone else in the universe is making a contrarian point, and their contrarian point is making sense to us, and it seems really important, and we have a decent story about why their point might be rejected even if true, that's a time when we're happy to take a contrarian bet.
Robert Wiblin: Do you worry that it’s too easy to come up with stories explaining why people are rejecting views? You can just say, “Oh well. People are conservative, they don’t like to change their view. They’re wedded to the existing thing, because they’ve already endorsed it.”
Holden Karnofsky: Yeah.
Robert Wiblin: Yeah. People with odd views always have some story about how they’re persecuted and everyone else is laboring under an illusion.
Holden Karnofsky: Yeah. I mean, I think if the only way you're checking your views is asking, "Do I have a story about what the other side is doing wrong?", that's just never enough, and I think that's a really dangerous way to reason. I think of it as just one little piece. I generally favor also trying to build a pretty sophisticated, and somewhat specific, model of what the world's current institutions are set up to do well and what they're not set up to do well. If you don't have a pretty detailed model, you can always look at anything and just say, "Well, institutions are messed up. Of course this very important thing isn't happening."
In fact, a lot of important things in the world do happen. A lot of causes that are important and wonky and counterintuitive and difficult to follow still get a lot of attention, because there are a lot of institutions in the world that are meant to be intellectual, analytical and rigorous, and to work on things that may be counterintuitive but that are important.
So I try, informally, and not as formally as I'd like to, to have some kind of model. When I hear a new grant idea, I ask, "What institution does it seem like this would most likely fit into? Is this something an academic can do? Is this something a scientist can do? Is this something a think tank can do? Is this something a government can do, and why aren't they doing it?" I'm trying to refine my ideas of what the different institutions tend to do and what they don't.
Robert Wiblin: What things do you think you’ve learned, over the last 11 years of doing this kind of research, about in what situations you can trust expert consensus and in what cases you should think there’s a substantial chance that it’s quite mistaken?
Holden Karnofsky: Sure. I mean, I think it's hard to generalize about this, and sometimes I wish I would write out my model more explicitly. I thought it was cool that Eliezer Yudkowsky did that in his book Inadequate Equilibria. One thing that I especially look for, when we're doing philanthropy, is the role of academia and what academia is able to do. You can look at corporations and understand their incentives. You can look at governments and sort of understand their incentives. You can look at think tanks, and a lot of them are aimed directly at governments, in a sense, so you can sort of understand what's going on there.
Academia is the default home for people who really spend all their time thinking about things that are intellectual and could be important to the world, but where there's no client saying, "I need this now for this reason. I'm making you do it." A lot of the time, when someone says someone should, let's say, work on AI alignment, or work on AI strategy, or, for example, evaluate the evidence base for bed nets and deworming, which is what GiveWell does, my first question, when it's not obvious where else it fits, is: would this fit into academia?
This is something where my opinions have evolved a lot. I used to have this very simplified view: "Academia. That's this giant set of universities, a whole ton of very smart intellectuals who between them can do everything. There are a zillion fields. There's a literature on everything, as has been written on Marginal Revolution, all that sort of thing." I really never knew when to expect that something was going to be neglected and when it wasn't. It takes a giant literature review to figure out which is which.
I would say I've definitely evolved on that. Today, when I think about what academia does, I think it is really set up to push the frontier of knowledge, especially in the harder sciences. The vast majority of what is going on in academia is people trying to do something novel, interesting, clever, creative, different, new, provocative, that pushes the boundaries of knowledge forward in a new way. That's obviously a really important and great thing, and I'm incredibly glad we have institutions to do it.
But I think there are a whole bunch of other activities that are intellectual, that are challenging, that take a lot of intellectual work, that are incredibly important, and that are not that. They have nowhere else to live. No one else can do them. My eyes especially light up when I see an opportunity like that: an intellectual topic that is really important to the world, but where the work isn't advancing the frontier of knowledge. It's more about figuring something out in a pragmatic way that is going to inform what decision makers should do, where there's also no one decision maker asking for it, as would be the case with government or corporations.
To give examples of this: GiveWell is the first place where I might have initially expected that development economics was going to tell us what the best charities are, or at least tell us what the best interventions are. Tell us, of bed nets, deworming, cash transfers, agricultural extension programs, education improvement programs, which ones are helping the most people for the least money. There's really very little work on this in academia.
A lot of times, there will be one study that tries to estimate the impact of deworming, but very few or no attempts to really replicate it. It's much more valuable to academics to have a new insight, to show something new about the world, than to try and nail something down. This really got brought home to me recently when we were doing our criminal justice reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.
David Roodman, who is basically the person I consider the gold standard of a critical evidence reviewer, someone who can really dig into a complicated literature and come up with the answers, did what I think was a really wonderful and fascinating paper, which is up on our website. He looked for all the studies on the relationship between incarceration and crime: if you cut incarceration, do you expect crime to rise, to fall, to stay the same? He picked them apart. And in about half of the best, most prestigious studies, he found fatal flaws when he tried to replicate them or redo their conclusions.
When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, and came out with a conclusion that was different from what you naively would have thought: his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It's been rejected from a large number of journals by now, starting with the most prestigious ones and then going to the less prestigious ones.
Robert Wiblin: Why is that?
Holden Karnofsky: Because his paper, I think, is incredibly well done and incredibly important, but in some kind of academic taste sense, there's nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker or philanthropist perspective, all very interesting stuff, but did we really find a new method for asserting causality? Did we really find a new insight about how the mind of a …
Robert Wiblin: Criminal.
Holden Karnofsky: A perpetrator works. No. We didn't advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that's a common theme. Our academic institutions were set up a while ago, at a time when it seemed like the most valuable thing to do was just to search for the next big insight.
These days, they've been around for a while. We've got a lot of insights sitting around, and a lot of studies. I think a lot of the time what we need to do is take the information that's already available, take the studies that already exist, and synthesize them critically and say, "What does this mean for what we should do, where we should give money, what policy should be?"
I don't think there's any home in academia to do that, and I think that creates a lot of the gaps. This also applies to AI timelines, where there's nothing particularly innovative, groundbreaking, frontier-advancing, creative or clever about the question. It's just a question that matters: when can we expect transformative AI, and with what probability? It matters, but it's not a work of frontier-advancing intellectual creativity to try to answer it.
A very common theme in a lot of the work we advance is instead of pushing the frontiers of knowledge, take knowledge that’s already out there. Pull it together, critique it, synthesize it and decide what that means for what we should do. Especially, I think, there’s also very little in the way of institutions that are trying to anticipate big intellectual breakthroughs down the road, such as AI, such as other technologies that could change the world. Think about how they could make the world better or worse, and what we can do to prepare for them.
I think historically, when academia was set up, we were in a world where it was really hard to predict what the next scientific breakthrough was going to be. It was really hard to predict how it would affect the world, but it usually turned out pretty well. I think for various reasons the scientific landscape may be changing now. In some ways, there are arguments it's getting easier to see where things are headed. We know more about science. We know more about the ground rules. We know more about what cannot be done. We know more about what probably, eventually, can be done.
I think it's somewhat of a happy coincidence, so far, that most breakthroughs have been good. Saying, "I see a breakthrough on the horizon. Is that good or bad? How can we prepare for it?", that's another thing academia is really not set up to do. Academia is set up to get the breakthrough. So that is a question I ask myself a lot: here's an intellectual activity, why can't it be done in academia? These days, my answer is that if it's primarily of interest to a very cosmopolitan philanthropist trying to help the whole future, and there's no one client, and it's not frontier-advancing, then it's pretty plausible to me that no one is doing it. We would love to change that, at least somewhat, by funding what we think is the most important work.
Robert Wiblin: Something that doesn't quite fit with that is that you do see a lot of practical psychology and nutrition papers that are trying to answer questions that the public has. Usually done very poorly, and you can't really trust the answers, but it's things like, you know, "Does chocolate prevent cancer?", some small-sample paper like that. That seems like it's not pushing forward methodology, it's just doing an application. How does that fit into this model?
Holden Karnofsky: Well, I mean, first up, it’s a generalization. So, I’m not gonna say it’s everything. But, I will also say, that stuff is very low prestige.
And I think, first off, that work is not the hot thing to work on, and correlated with that, you see a lot of work that's not very well funded, not very well executed, not very well done, and doesn't tell you very much. The vast majority of nutrition studies out there, and you can look at even a sample report Luke Muehlhauser did for us on carbs and obesity, these studies are such that if someone had gone after them a little harder, with the energy and the funding that we go after some of the fundamental stuff with, they could have been a lot more informative.
And then, the other thing that I think you will see even less of is good critical evidence reviews. So, you're right, you'll see a study, you know, "Does chocolate cause some disease?", or whatever, and sometimes that study will use established methods, and it's just another data point. But the part about taking what's out there and synthesizing it all, and saying, "There are a thousand studies; here are the ones that are worth looking at, here are their strengths, here are their weaknesses," that you see much less of.
There are literature reviews, but I don't think they're a very prestigious thing to do, and I don't think they're done super well. And so, for example, with some of the stuff GiveWell does, they have to reinvent a lot of this, and they have to do a lot of the critical evidence reviews themselves, because they're not already out there. And the same with David.
Robert Wiblin: Okay. Let's move on to talking about some of the job opportunities that are coming up at Open Phil over the next few weeks. I saw that you just put vacancies on your website for a whole lot of generalist research analysts. Tell us a bit about that. It seems like you're hiring quite a lot more people than you normally do at any one point in time.
Holden Karnofsky: Yeah. So, we're going on a bit of a hiring push. Basically, the story is we have a few research analysts now that mostly, well not all, but mostly, started as GiveWell interns or people who worked at GiveWell when we were still more closely integrated with GiveWell. And the thing we've never done as Open Philanthropy is really hire in research analysts and intensively mentor them and train them the way that GiveWell does.
And for a while, that was because we didn't feel we had the mentorship and management capacity, and we were more focused on other things. We were trying to get our grant money up to a high level, which it's at now. I mean, it's gonna grow more in the future, but it's at a high level. And we were trying to do a whole bunch of other stuff. Late last year, a bunch of us were looking at the situation, and we said, A: people who have been research analysts are just creating huge amounts of value for Open Phil. And some of them, I mean not a lot, there's only a few in total, but some of them are becoming specialists, in the sense that they've been generalists doing analyses, but now we've got one who's focused more on AI strategy, and one who's focused more on biosecurity, for the time being.
These are people who could end up filling some of the specialized roles, looking at particular aspects of particular problems, that are hardest to fill by hiring from outside. And as those people specialize, we don't have shrinking generalist needs, we have growing generalist needs, because we've got this giant, daunting project of deciding how much money goes into each cause, which we talked about, and how we prioritize between different causes.
And we think there's gonna be just a ton of work to do there. Some of it philosophical, like these questions of how to think about long-termism and how to weigh it. Most of it, I would say, empirical: trying to imagine what it would look like to put money behind different worldviews, what causes that would put us in, what the cost-effectiveness of that would be, and what the downsides and upsides of being in those causes would be. So there's a lot of empirical work there.
Plus, as our portfolio matures, we're gonna wanna do more work looking back and evaluating our own impact. So there's gonna be a ton of work to do there. So we said: we have a ton of work to do. We have fewer people in this role doing it than we used to, because some of them are now specializing in needs we especially have in high-priority causes. These people can do a huge amount of good. And we're a little bit more of a mature organization now, a little bit more ready to take on the challenge of bringing in a whole bunch of people, intensively training and mentoring them, and trying to get them to the point where they can contribute greatly as research analysts.
So, we decided to go on this big push, and we're hoping to hire a bunch. And I'm just really, really excited about this search, because the research analyst of today is the core contributor of the future.
There are a lot of roles at Open Phil that just take so much Open Phil context. A lot of them are on special causes or sub-topics that we have an especially distinctive take on, so you have to be really familiar with the way we think, really familiar with Open Phil, familiar with our ways of communicating, familiar with our ways of being productive. And then there are also our managerial roles. I think it's roles like these where it's only because we hired a research analyst two years ago, three years ago, five years ago, that we have someone today who's able to take on this incredibly high-leverage, core contributor role.
And so, I'm excited about this search because I think we have an opportunity to really invest heavily in the long-term future of Open Phil, and hire people who are gonna have interesting jobs and things to contribute now, but who are also gonna be the people positioned to become leaders of the org in the long run. And they'll be positioned to do a lot of other things too, because I think we do train people in some generalizable skills, such as rigorously and critically evaluating things, writing up your reasoning so that others can follow it and so that it's calibrated, and working with maximal productivity and thoroughness from the start.
Robert Wiblin: So, how many do you think you might hire upfront? And do you expect to keep everyone? It seems like you might just double the size of the organization, basically overnight.
Holden Karnofsky: I don’t think we’re gonna double the size of the organization overnight. So, I think we’re about 20 people now. When I look at our medium-term management capacity, my best guess is we’ll probably end up with three to six research analysts who are here full-time, long-term. We’ll see. I mean, if there’s more outstanding candidates, or fewer, we can always change that number.
Robert Wiblin: Yeah. So, how much good do you think someone can do in this role? How much money are we talking about moving, and how strong are the grant opportunities on the margin?
Holden Karnofsky: Sure. I mean, it's very hard to generalize, and very hard to do anything like quantify it, because careers are very dynamic, and because many of the most successful careers take so many unexpected twists and turns. To estimate the good someone's gonna do in their career, I often encourage people instead to look at all the options that seem like they're within the margin of error of being highest impact, and then put a lot of weight on what it seems like they would be really great at, excited to do, and able to stick with long enough to become world-class at. So I don't wanna make too precise a statement.
But, I can say that Open Phil, I think, is a special organization, especially for people who are interested in effective altruism. I don’t think there’s any funder of remotely comparable size, or any advisor to comparably sized funders, that has this value of doing the most good possible, and pursues it in the very analytical, determined way that we do. And so, if you’re passionate about effective altruism, or you’re passionate about some of the specific causes, like the long-termist causes, or farm animal welfare, or potential risks from advanced AI, I think this is one of the places where you can have the most impact. It’s hard for me to come up with anything that clearly beats it.
And, I mean, look at the situation. There’s currently about 20 people on staff, and we’re giving between 100 and 200 million dollars a year, staff is gonna grow, giving is gonna grow, but the overall feeling is there’s a lot going on per person, and there’s a lot of opportunities.
And, I also think this is a role and this is an organization where we really believe in investing in people, and getting them to the point where they’re able to do exciting things with a lot of autonomy. And, that is the track that a lot of our current research analysts have taken. So, yeah, I think this is an opportunity to really have a lot of leverage, to influence a lot of resources, and to build a lot of skills, which I think in some ways is just as important, or more important.
Robert Wiblin: So, I guess doing all of this training with new staff now can potentially delay grants a bit, but you think over the three to five year time-scale, that’s gonna pay off well, ’cause you’ll have new managers and really good research analysts, and avoid other bottlenecks you’d hit later on?
Holden Karnofsky: That’s right. I mean, it delays something. I mean, I don’t actually expect our grant flow to go down, because we … the way the organization currently is, it’s mostly program officers making the grants, and then a lot of the generalists are more working on things like cause-selection and prioritization. So, something will slow down, right? We’ll have to trade something to do all this investment in mentorship and management, but I definitely think it’ll pay off. And, I think we can spare that today, just taking the long view.
Robert Wiblin: So, Open Phil has a somewhat distinctive culture. What kind of people do you find thrive in that, and what kind of people don’t enjoy it and end up leaving after a while?
Holden Karnofsky: Sure. So, I would describe Open Phil’s culture as focused, first and foremost, on truth-seeking. I think we definitely want an environment where everyone is able to be comfortable and be supported in their work, and all that. But, sometimes disagreement is uncomfortable, and in the end, if we have to choose, we’re always gonna do the thing that gets us closer to finding the best answer and doing the best thing.
And so, there’s definitely a lot, at Open Phil, there’s a lot of the phenomenon of people kind of having critical discussions with each other, where you might say something and someone else will say why they disagree with you. And, one of the major values that we really try to inculcate at Open Phil is people thinking for themselves, speaking up when they don’t agree with something, and that includes pushing back on their manager, which is a bit distinctive … a bit of an unusual thing to see in an organization, and a bit of a hard thing for us to promote.
And so, we don’t always do as well as I’d like, but, in general, my model is when I’m managing someone, I’m asking them to do things and they see a lot that I don’t see as they’re doing those things, and they are able to … they’re gonna have a lot of insights that I don’t have. They’re going to be right about a lot of things where I’m wrong, and in a lot of organizations, there’s this kind of mentality that you should go along to get along, you shouldn’t mess with your alliances, when someone tells you to do something you should just do what they said, you shouldn’t argue with them. And, we really try and get people out of that mindset.
We think disagreement is good, feedback is good, pushing back on your manager is good, it leads to better results, and we try and create an environment where people can really focus on getting to the truth and treat criticism as an opportunity to improve, and have this kind of constant hunger to be better at everything.
So, that’s the distinctive thing about the culture. Some other distinctive things about it: I think we’re very into calibration and transparency of reasoning. So, a general pattern at Open Phil is that we really like it when someone makes a suggestion or a recommendation. It’s really great if they also, at the same time, say the best reason not to take the recommendation. That’s especially something with grants, where we’re very upfront about it. We say the grant write-up needs to include a section on why it would be good not to make this grant, and what the best counter-argument would be. And, I think we do train this.
So, we don’t expect people to be necessarily amazing at it, coming in. But, this is an organization where we all try and be clear with each other at all times. This is what I think, this is what my reasons are, this is how confident I am, this is what reservations I have. And so, again, it’s all in that service of truth-seeking.
And, the final thing that I would say is somewhat noticeable about the Open Phil culture is this pro-change, pro-improvement all the time mindset. And, obviously, it sounds nice when I say it that way, but it can be quite disorienting that we don’t necessarily put value on stability or predictability in and of itself, or put much value on it. I mean, we think it has value, but there are gonna be times … everyone here has been through the process of putting a ton of work into something, and then at some point just realizing, “You know what? This isn’t that valuable,” and we’ll drop it. And, we’re not gonna do anything with it, we’re not gonna finish it, we’re not gonna publish it. That happens.
And, I think this is a place where we’re always trying to get to the best thing, not get held back by sunk costs, or by any kind of corporate politics, or any kind of agreeableness heuristic. And, I think for some people that’s disorienting, and it can feel like it’s not the best for everyone. But, I think for people who really are incredibly passionate about getting to the truth and doing the most good, and whatever it takes to get there, I think this is a really great place to be, and a pretty special place.
Robert Wiblin: How much does it pay? I mean, it’s not that cheap living in San Francisco. It’s notoriously quite pricey.
Holden Karnofsky: Sure. I mean, pay is a function … we consider both someone’s role and their relevant experience. So, there’s not a general answer to that, and I’m not gonna name salaries; we don’t do that on the web, and I’m not gonna do it in a podcast. In general, our philosophy is that we try to make pay, in some sense, a non-issue, in that we try to pay such that people who are passionate about our work are never gonna have to turn us down for money reasons. They’re gonna be able to live at a good standard of living, and it’s gonna be competitive with, for example, other nonprofit jobs they might take.
On the other hand, we don’t wanna pay people the way that hedge funds pay people. We don’t want people coming here for the money, because one of the main qualifications for being great at Open Phil is truly buying into the mission, truly understanding it, truly having a passion for it. And, we don’t want this to be the place where people come and people stay just because they want a paycheck. That would put a burden on our evaluation process, that [inaudible 02:17:28] we don’t think we’d be that well equipped to handle.
So, I would say that this is not a job … you shouldn’t take this job if your main priority in life is money. Everyone here could make more if they went somewhere else, to a for-profit. But, I also don’t think money’s gonna be a reason not to take it, if you’re excited about the work.
Robert Wiblin: Maybe, could you describe a bit what the work is actually like, on a day-to-day basis?
Holden Karnofsky: Well, it obviously varies by the role. But, some of the main things that I anticipate some of the new hires doing is they come in … and I’ll focus on the research analysts, ’cause it’s kind of the most generalist one, for now … I think there’s cause-prioritization, and then, related to cause-prioritization, there’s kind of literature reviews.
So, this would be … we want to know, let’s say, how good cage-free systems are for chickens, as opposed to battery-cage systems, which is something we’ve put out a write-up on. Or, we want to know how much money we could spend on a certain cause, and how cost-effective that would be, how much good we would expect it to do per dollar. And, these are matters of just empirical investigation, finding the literature, finding the best papers, being critical, finding the weaknesses in the papers, finding the reservations, and then creating a write-up where everyone can see what conclusion you’re reaching and why, and what the major weaknesses in it are.
Another core function of Open Philanthropy is grant investigation. So, that would be … you know, there’s a grant we’re thinking about making, you talk to the organization, you ask them the questions that we need to ask them, and you try and write up the case for and against the grant. So, a lot of it tends to be pretty analytical work. It tends to be desk work.
I think one of the weaknesses of this job is it isn’t always the most satisfying from the perspective of having a lot of camaraderie, because we do a lot of different projects that are all kind of independent of each other, ’cause we’re kind of money heavy, and have a lot of money per time, in some sense. And so, we’re often trying to have not too many people working on exactly … on one given thing. So, I think that can be a downside, although I think, possibly, the new research analysts, especially if a lot of them are working on cause-prioritization, may have more interaction and collaboration and discussion.
But, generally, a lot of work where we’re trying to reach a conclusion, we’re trying to do it in a very analytical, thorough way, where you’re balancing efficiency and thoroughness, and you’re coming out with a product that people can understand why you’re saying what you’re saying. It could be a great intellectual development process for people who wanna be able to do that kind of thing. I think we have experience with it, and are ready to train people for it. And, obviously, it’s not for everyone.
Robert Wiblin: What kind of career capital are people building, and what kind of career progression is there, both if they stay with Open Phil and if they potentially leave after a couple of years?
Holden Karnofsky: Sure. So, in terms of career capital, I mean, I think right off the bat, we’re gonna be investing in people. And I think we’re gonna be … some of the main skills we’re gonna be teaching are how to critically assess evidence, how to find the weaknesses in it, how to reach a conclusion when there’s a lot of confusing information and a lot of people trying to sell different conclusions in different ways. How to cut to the bottom of it, so you’re not just reading abstracts, you’re not just listening to what experts think, but you’re looking at what their evidence is, and how they’re coming to that conclusion. And, maybe you’re also using a bit of best practices from the science of forecasting.
So, I think people are gonna be learning to do that. They’re gonna be learning to communicate in an efficient, thorough, calibrated way. They’re gonna be learning to gather information efficiently and to get projects done. And so, I think, right there, those are really good general skills that are really useful for a lot of the things that need to be done, and for a lot of the things effective altruists wanna do.
And then, over time, I think there’s all kinds of ways people can grow. So, we really try here to always have everyone doing things that stretch them, things that challenge them. Lots of people have moved up quite rapidly and quite dramatically here. Over the long run, the upside within the organization is pretty unlimited. I mean, you could end up running the place. It’s just a matter of what you end up being good at and what fits you.
Certainly, some of the longer-term roles, you could end up as a program officer or a grant investigator. So, some cause, or some sub-cause, that we are having trouble hiring for from the outside, or that we think is just especially important, or something else where you’re a really good fit for it, you might end up being in the program officer role, where you’re responsible for tens of millions of dollars a year, or maybe more, of grant-making to accomplish some objective. And, you’re really the point person on that, and you’re really running that show, definitely subject to oversight under the 50/40/10 rule that I described.
It could lead to other things too. So, I mentioned the AI roles, spending your life on timelines and when AI can do what, spending your life on AI strategy and handling the geopolitics, spending your life on AI technical agendas, and what the most promising ones are. I think those all have strong elements of this Open Phil research analyst role. I mean, being able to sort through a lot of claims and a lot of confusing information, make sense of it all, and then write down your reasons and have those reasons be vetted, and do it all efficiently, that’s what you need to do.
And, I think those are some of the most important roles out there. We’re trying to hire for them, but I can also imagine people specializing in AI strategy at other places: think tanks, FHI. I can imagine people specializing in AI timelines, and AI technical evaluation at other places. So, there’s all kinds of places this could lead, and I think it’s a great place to come to develop a bunch of skills and be part of a very high-impact organization.
Robert Wiblin: For these roles, are you expecting to hire recent graduates, or people who have already been in the workforce for a couple of years?
Holden Karnofsky: Really, both. So, we don’t have a really strong take on what career-stage is appropriate here. We’re gonna see who applies. I would basically encourage anyone. If this sounds exciting, you should apply.
Robert Wiblin: So, that’s the generalist research analysts. But, there were a couple of other roles you were hiring for, which are a Grants Associate, an Operations Associate, and a General Counsel. Do you just wanna describe those ones as well?
Holden Karnofsky: Yeah, for sure. So, that’s another thing: as we’re positioning ourselves so that at some point in the future we give more, I think we need to have … I think we’ve had a lot of improvement on operations recently, which I’m very excited about. We need to keep that improvement going. And so, the General Counsel would be … that would be an attorney, and that would be someone who’s trying to, basically, help us just more efficiently get grants out the door by catching all the legal issues with them before we do it.
And with this, and with some of the other roles, speed is impact. And so, a while ago we made a grant to try and get gene drives, a technology that might eradicate malaria, developed faster. We put in something like 20 million dollars to try and speed up the day when malaria would be eradicated. And, every day we speed up the eradication of malaria matters. I think it’s something like a thousand lives per day, a thousand untimely deaths prevented per day, if we can speed up that eradication. And, you know, this grant for gene drives took months, because there were a lot of complications with universities and with, you know, various issues. And so, speed is impact. A General Counsel can help us speed things up, a Grants Management Associate can help us speed things up, and they can also just give our grantees a better experience and empower them more.
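[Editor’s note: as a rough sanity check, and assuming global malaria mortality on the order of 400,000 deaths per year around the time of recording, the “thousand lives per day” figure is in the right ballpark:

$$ \frac{\sim 400{,}000\ \text{deaths/year}}{365\ \text{days/year}} \approx 1{,}100\ \text{deaths/day} $$

So each day the eradication timeline moves forward would correspond to very roughly a thousand untimely deaths averted.]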
And so, one of the things we wanna be is we wanna be an organization that not only makes great grant decisions, but is a great partner. And, when we support someone, we’re making life easy for them, we’re getting them the money quickly, we’re getting them the money in a way that works for them. And then, they’re able to go out and do great things. And so, those are just examples of what we’re trying to strengthen and make our operations much more robust.
And also, one of the ones you didn’t mention: we’re also looking for a Director of Operations. So, it’s kind of an organization-wide push: we wanna be able to do more things operationally, we wanna be able to make faster, more supportive grants, we wanna do a better job constantly assessing ourselves and understanding what the grantee experience of us is. And, there’s a whole bunch of other needs that we have that I won’t go into here, but they could speed our impact as an organization.
Robert Wiblin: Cool. So, I guess we’re kind of done describing the vacancies that you’ve got right now. Do you wanna give a final push to encourage anyone who’s listening to actually fill out the form and apply for the job, rather than just listen to this and then never think about it again?
Holden Karnofsky: Well, yeah, you should all apply! Definitely, I think Open Philanthropy is one of the most exciting organizations out there. Just the amount that we’re able to give, and then the way in which we’re able to give it, just very unconstrained by anything except doing the most good. And, if that excites you, I mean, I’m just telling you we have big needs.
All of these roles can make a big difference to the organization, as well as being a source of personal development. So, if you’re excited about this organization, and you’re even curious about whether you might be a fit, definitely apply. And, I definitely consider this one of the highest-impact things I can think of for someone to do, and I’m hoping this job-search goes well, because I think this is the future of Open Philanthropy.
Robert Wiblin: Alright. So, as a final topic, I heard a couple of months ago that you went to an academic conference about Utopias. Is that right? Was this for work, or just more for fun?
Holden Karnofsky: This was for fun. This was just to try something different. So, I’ve just been idly curious about whether there’s much good writing out there on the topic of Utopia. And, generally, I’m familiar with some of the literature, and it’s kind of an interesting topic to me, because it’s very hard to describe a Utopia that sounds appealing. And, I think it somewhat relates to some of the discussions about long-term-ism, when you think about how good the future could be.
Sometimes, the conversation gets kind of awkward, ’cause you’re like, “Well the future could be so good,” but then as soon as you start to really concretely visualize it, it’s hard to describe a very specific world that won’t bother … at least bother a lot of the people you’re talking to, or that people won’t have objections to. And, in fact, I have the impression people used to write more Utopias and now they write more Dystopias.
And so, I’ve just been curious about this, and I was googling around, and trying to think, “Does anyone write about this topic, and why it’s difficult?” And, the only thing I was able to find was this Society for Utopian Studies that was having its annual conference.
So, I really wanted to go there and check it out. And, to go, I had to submit a paper. So, what I did was I kind of took a bunch of classic literary Utopias, and I ran them through basically Mechanical Turk surveys using some of Spencer Greenberg’s technologies to ask people on the internet just, “Here’s a utopia. How does it sound? Does it sound good? Does it sound bad?” And this, to my knowledge, is the first time that someone has tried to empirically analyze how people feel about different descriptions of an ideal world, knowing going in that none of them were gonna do that great, and really, none of them did. And, I’m curious about why that is, and find it interesting.
And so, I put this together and then I went to the conference. And, I thought the whole thing was a very interesting experience. It turns out that the conference was mostly focused on literary criticism, using a Utopian lens on literature, which wasn’t totally obvious to me before I showed up. And, I think somewhat … it further drives home that there doesn’t seem to be a lot of interest today in people spelling out what a really good future would look like, what a really good future would look like in the long run. And, I think it’s just interesting. I don’t know if it’s always been this way, but it seems like not a very lively topic, these days, and it just makes me curious.
Robert Wiblin: Did the academics at this conference find your presence somewhat amusing, given that you don’t know anything in particular about literature? I don’t imagine … you’re not an academic, and you were bringing them a survey from the Internet about Utopias.
Holden Karnofsky: Yeah, you pretty much hit the nail on the head. I think some of them thought, “Boy, this is really different. This is really interesting.” And some of them were like, “What are you doing here?” So, yeah, that kind of happened, for sure.
Robert Wiblin: So, you said none of the Utopias performed particularly well. What were the general trends? Were there any Utopias that people disliked less than others?
Holden Karnofsky: Yeah. I was actually pretty surprised by how it all went down, because, first off, I did some analysis on the survey-takers, to see their political affiliations. And, they were very heavily skewed to the left. And so, it was … I think Hillary Clinton would’ve won an election with this population by 50 or 60 points, or something like that.
And so, I tried to write different Utopias that would appeal to different political orientations. I had this theory it might break down that way. So, I wrote one that was trying to sound very libertarian, and it was all about freedom: anyone can buy anything, sell anything, do anything. And then, I wrote another one about how this very wise and just government takes care of everyone, and asked people to read that. And then, I had some things that were supposed to appeal to conservatives, as well. And then, it just turned out that the freedom ones just did the best, and the government one just did the worst, even with this very left-leaning population.
And, I think that’s somewhat … I didn’t really understand why that had happened. It could have been a function of the way I wrote it, although I don’t really think it was that. But, I think I got a better inkling of it at the conference, when I was just talking to people about their opinions, and I think there’s a feeling that a lot of people have that any world that is described too specifically feels totalitarian.
And, I think that’s why, a lot of times, it’s so hard to describe a good future world: when you say, “Well, this is what people do all day,” people say, “Well, what if I wanna do something else?” And so, really emphasizing that theme of freedom seemed to do well, and that’s something that I’m interested in, because I actually think it is worth talking about what we would like a future world to look like. I think we could have a lot of really fruitful debates that look a little bit more long-term, and that think a little bigger, than debates about exactly what the marginal tax rate should be today. But it is challenging to have those debates when, like, just the mere act of describing something makes it sound kind of top-down and centralized.
And then, you can describe Utopia in more abstract terms. You’re like, “Well, what if we just have all the resources we want, and we do whatever we want?” But then that’s not very emotionally compelling, and people don’t really know how to picture it, and don’t know what you’re talking about. So, it’s kind of an interesting challenge to find a way to have coherent conversations about our vision for the world that don’t sound totalitarian and over-controlling. And, that was something that I did learn from the survey.
Also, I mean, this is less surprising, but none of the Utopias from literature, or the ones I made up, scored all that well. But, I had a different section of the survey that just asked about Utopian traits, so just stuff like, “No-one goes hungry,” which doesn’t describe the whole world, it just says, “Well, in an ideal world, no-one would go hungry.” And, actually, a lot of those scored really well.
So, there’s no description of a world that got a lot of excitement or agreement. But then, stuff like, “There’s no disease,” “There’s no hunger,” “Everyone lives to age X, healthily,” even for pretty high numbers, like age 100 or 1,000, was getting pretty good agreement. And so, that was something, too. It was, like, you can get people to agree on some basic stuff, but it’s just hard to describe the whole thing without offending someone.
Robert Wiblin: Didn’t you ask, “Would it be good if people were sleeping with many other partners?” And, people didn’t like that. And then, they also didn’t like it if they were only sleeping with the one partner? Which, I guess only leaves one remaining option of sleeping with nobody, which you didn’t ask about. But, there’s perhaps a bit of contradiction?
Holden Karnofsky: Yeah. I don’t know that it’s a straight-up contradiction. I think it’s more just showing that you can’t get consensus. So, I think some people liked one, some people liked the other. But, I actually did add a third option. So, I had one that was something like, yeah, “People have many lovers,” another that was like, “People are monogamous, and faithful to one lover,” and then, I had a third option that’s, “People have the choice of which one to be.” And then, I used randomization to see which one people saw, and none of them scored that well. So, it’s like, none of those three scored that well, including the choice. And so, I think that was just illustrating to me that it’s just-
Robert Wiblin: Well, there really is only compulsory celibacy as the remaining option!
Holden Karnofsky: Right! Yeah, I didn’t think to test that one. I don’t think it would have polled very well!
Robert Wiblin: Probably not!
Holden Karnofsky: So, I mean, I think it’s more just like the act of describing it, just seems too specific and it always seems to offend someone. And, I think it’s kind of an interesting obstacle to having conversations about the future.
Robert Wiblin: Yeah. I mean, it seems like in literature, Utopias are almost necessarily gonna have a negative edge to them. ‘Cause you can’t write a book about a world where just everyone is rich and happy and there’s no conflict. That would be incredibly boring.
Do you think that’s part of the thing that, like, people are primed to think that Utopias must be disturbing, ’cause that’s how they’re always presented in fiction?
Holden Karnofsky: That’s more of a modern thing. I think there are definitely literary Utopias. I mean, there are definitely books, like Looking Backward by Edward Bellamy, that are just trying to … it’s like, the plot is this guy, I forget, I think there’s some time-travel involved, and he’s walking around this world and everyone’s just explaining how wonderful it is.
Robert Wiblin: Okay. And, they don’t like kill everyone at 40, or something like that?
Holden Karnofsky: Yeah, no. There’s nothing to it. And, it doesn’t sound, today, like the kind of novel that would sell very well. It was very successful at the time. People were starting societies that were based on the ideals of this novel. So, it’s not like there’s no precedent for Utopian literature that is successful and that people like. I mean, it does sound kind of boring to me, but it’s not really inherent to the situation. So, I don’t really know what it is.
Robert Wiblin: Do you think it’s useful for more people to do these kinds of surveys? Should we trust the general public to determine what kind of Utopia we march forward into?
Holden Karnofsky: Well, I certainly wasn’t trying to take a vote. I was more just trying to understand how people think about things-
Robert Wiblin: Let’s not include ISIS in the vote, when we do that!
Holden Karnofsky: Yeah. Yeah, I mean, I will say I would rather people applied to be research analysts, but, you know, it could be kind of interesting, yeah.
Robert Wiblin: My guest today has been Holden Karnofsky. Thanks for coming on the show, Holden.
Holden Karnofsky: Cool. Thanks a lot, Rob. Good talking to you.
Robert Wiblin: I hope you enjoyed that episode! There were a number of articles that we talked about which I’ve linked to in the associated blog post.
If you’re considering applying to Open Philanthropy you may want to read our problem profile on Global Priorities Research, which I’ll add a link to.
For those thinking about the AI focussed roles, I’ll link to our profile on positively shaping the development of artificial intelligence.
And for those interested in the operations roles, we discuss these in our career review of working in effective altruism-focussed organisations, which I’ll link to as well.
If you know someone who would be suitable for one of these positions, it would be great if you could forward them the show.
And if you enjoyed this episode you should also check out episode 10 with Open Phil program officer Nick Beckstead, episode 8 with Open Phil program officer Lewis Bollard, and episode 4 with a past employee of Open Phil, Howie Lempel.
Thanks for joining – talk to you next week.
About the show
The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.
The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].