#110 – Holden Karnofsky on building aptitudes and kicking ass
By Robert Wiblin and Keiran Harris · Published August 26th, 2021
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 Holden's current impressions on career choice for longtermists [00:02:34]
- 3.3 Aptitude-first vs. career path-first approaches [00:08:46]
- 3.4 How to tell if you're on track [00:16:24]
- 3.5 Just try to kick ass in whatever [00:26:00]
- 3.6 When not to take the thing you're excited about [00:36:54]
- 3.7 Ways to be helpful to longtermism outside of careers [00:41:36]
- 3.8 Things 80,000 Hours might be doing wrong [00:44:31]
- 3.9 The state of longtermism [00:51:50]
- 3.10 Money pits [01:02:10]
- 3.11 Broad longtermism [01:06:56]
- 3.12 Cause X [01:21:33]
- 3.13 Open Philanthropy [01:24:23]
- 3.14 COVID and the biorisk portfolio [01:35:09]
- 3.15 Has the world gotten better? [01:51:16]
- 3.16 Historical events that deserve more attention [01:55:11]
- 3.17 Applied epistemology [02:10:55]
- 3.18 What Holden has learned from COVID [02:20:55]
- 3.19 What Holden has gotten wrong recently [02:32:59]
- 3.20 Having a kid [02:39:50]
- 3.21 Rob's outro [02:44:50]
- 4 Learn more
- 5 Related episodes
Holden Karnofsky helped create two of the most influential organisations in the effective philanthropy world. So when he outlines a different perspective on career advice than the one we present at 80,000 Hours — we take it seriously.
Holden disagrees with us on a few specifics, but it’s more than that: he prefers a different vibe when making career choices, especially early in one’s career.
While he might ultimately recommend similar jobs to those we recommend at 80,000 Hours, the reasons are often different.
At 80,000 Hours we often talk about ‘paths’ to working on what we currently think of as the most pressing problems in the world. That’s partially because people seem to prefer the most concrete advice possible.
But Holden thinks a problem with that kind of advice is that it’s hard to take actions based on it if your job options don’t match well with your plan, and it’s hard to get a reliable signal about whether you’re making the right choices.
How can you know you’ve chosen the right cause? How can you know the future job you’re aiming for will still be helpful to that cause? And what if you can’t get a job in this area at all?
Holden prefers to focus on ‘aptitudes’ that you can build in all sorts of different roles and cause areas, which can later be applied more directly.
Even if the current role or path doesn’t work out, or your career goes in wacky directions you’d never anticipated (like so many successful careers do), or you change your whole worldview — you’ll still have access to this aptitude.
So instead of trying to become a project manager at an effective altruism organisation, maybe you should just become great at project management. Instead of trying to become a researcher at a top AI lab, maybe you should just become great at digesting hard problems.
Who knows where these skills will end up being useful down the road?
Holden doesn’t think you should spend much time worrying about whether you’re having an impact in the first few years of your career — instead you should just focus on learning to kick ass at something, knowing that most of your impact is going to come decades into your career.
He thinks as long as you’ve gotten good at something, there will usually be a lot of ways that you can contribute to solving the biggest problems.
But that still leaves you needing to figure out which aptitude to focus on.
Holden suggests a couple of rules of thumb:
- “Do what you’ll succeed at”
- “Take your intuitions and feelings seriously”
80,000 Hours does recommend thinking about these types of things under the banner of career capital, but Holden’s version puts the development of these skills at the centre of your plan.
But Holden’s most important point, perhaps, is this:
Be very careful about following career advice at all.
He points out that a career is such a personal thing that it’s very easy for the advice-giver to be oblivious to important factors having to do with your personality and unique situation.
He thinks it’s pretty hard for anyone to really have justified empirical beliefs about career choice, and that you should be very hesitant to make a radically different decision than you would have otherwise based on what some person (or website!) tells you to do.
Instead, he hopes conversations like these serve as a way of prompting discussion and raising points that you can apply your own personal judgment to.
That’s why in the end he thinks people should look at their career decisions through his aptitude lens, the ‘80,000 Hours lens’, and ideally several other frameworks as well. Because any one perspective risks missing something important.
Holden and Rob also cover:
- When not to do the thing you’re excited about
- Ways to be helpful to longtermism outside of careers
- ‘Money pits’ — cost-effective things that could absorb a lot of funding
- Why finding a new cause area might be overrated
- COVID and the biorisk portfolio
- Whether the world has gotten better over thousands of years
- Historical events that deserve more attention
- Upcoming topics on Cold Takes
- What Holden’s gotten wrong recently
- And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Highlights
Aptitude-first vs. career path-first approaches
Holden Karnofsky: It’s somewhat of a fuzzy distinction. Some of the aptitudes I list basically just are paths, like academia. But I think a career path tends to be based around a particular endpoint. And a lot of times that endpoint is cause-specific, or even organization-specific. So it’ll say, I want to work at an organization like this, working on a problem like that, in this kind of role. And it’s just like, it’s a very specific target to be aiming for. And I think most people are just wrong about where their career is going to be in 10 years. Their best guess is just wrong. I mean, it depends a little bit. There are some career paths where you really… They’re very well-defined. They’re scaffolded. You have to get on them early. People give you a lot of help figuring out how you’re progressing.
Holden Karnofsky: So the distinction kind of dissolves there, but I think there’s other careers where it’s very hard to tell where you’re going to end up, but it’s very tractable to tell whether you’re good at the kind of thing you’re doing and figure that will land you somewhere good. And if it doesn’t land you in direct work it’ll land you somewhere good anyway, because there’s things you can do as long as you’re good at what you do. So I would say that’s the distinction.
Holden Karnofsky: And I would say to imagine the distinction, just imagine two project managers, and one of them is saying, “I’m hoping I will pick up skills here that will allow me to work at an effective altruist organization on project management.” And the other person is saying, “I don’t know what my plan is. I’m just trying to get good at project management. And I’m trying to see if I’m good at it, and I’m really focused on that.” And then you go 20 years later and it’s like, well, it turns out that all the effective altruist organizations are fine, they don’t need project management, but like there’s some other weird job that didn’t exist at the time that we were having this conversation, and it’s not project management, but it takes a lot of the same skills. You needed to be a good manager. And it’s not even at an EA organization. It’s like, you’re at a non-EA AI lab, but by being a valuable employee, you’re getting a voice in some of the key issues or something like that. It’s just like something you didn’t think about before.
Holden Karnofsky: And so you imagine these two people and it’s like, the first person was focused on… One of the people, I don’t remember which one I said first, was focused on the right question, which is, “Am I good at this, or should I switch to another aptitude?” And another person was like, “I have this plan,” and their plan didn’t work out.
Specific aptitudes
Holden Karnofsky: Yeah, so there’s a cluster of aptitudes called ‘organization running, building, and boosting.’ And that includes things like doing management of people and helping organizations set their goals, hit their targets. It can include operations jobs. It can include business operations jobs. It can include, if you stretch it a little bit, communications jobs, a lot of things. And it’s like, if you come into an organization and your thing is you’re a project manager who keeps people on track and you’re a good personnel manager, that’s an aptitude you’re building, and that aptitude is going to stay with you.
Holden Karnofsky: If you’re doing that at a tech company, but you’re really good at it and you get better at it, and then you go later to an AI lab, you’re still going to be good at it. And that’s going to be one of the skills that you bring to the AI lab. And that’s a case where you were able to build something useful without having immediately a job in the cause you wanted to be working in that then did transfer. So, that’s an example of an aptitude. I talk about various research aptitudes, where you try to digest hard problems. I talk about a communications aptitude where you try to communicate important ideas to big audiences. I go into a bunch of different things.
Rob Wiblin: Just to complete the list of aptitudes that you mentioned, I think the last few ones were software and engineering aptitude, quite a transferable skill. Then there was information security. So it’s like computer security, helping people keep secrets. I guess that one’s beginning to approach a little bit like a job or career. And then there’s academia. So, that’s like a broad class of roles.
Holden Karnofsky: Some of them are broad classes of aptitudes that have a bunch of stuff in common. A lot of the point of the post was to be like, how do I take my guess at what I could be great at? And how do I start learning whether it was a good guess, and start changing my mind about it, which I think is often a better framework to be learning than some of these other frameworks.
Holden Karnofsky: And then I think another aptitude that I just skipped over is political and bureaucratic aptitudes. So like, if you’re in the government and you’re climbing through the ranks, that’s the thing that you can be good at or bad at. You can learn if you’re good or bad at it. And if you’re good at it, you’re going to be good at it. And if you change your mind about what cause to work on, you’re still going to have those skills, and they’re going to help you get into whatever cause you wanted. I also talk about entrepreneurs. I talk about community-building aptitudes, like trying to help people discover important ideas and form a community around them. That’s a thing that you can try out. You can see if you’re good at it. If you’re good at it, you’re going to keep getting better at it, et cetera.
How to tell if you're on track
Holden Karnofsky: I think some of these other frameworks are great, but I think you can end up lost on, how am I refining my picture of what I should be doing and where I should be? So it’s like, I wanted to work in AI, I managed to get this job in an AI organization. What do I do now? What am I learning about what kind of job I should be in? The aptitudes framework is a way of saying, look, if you’re succeeding at the job you’re in, then you are gaining positive information about your fit for that aptitude, not necessarily for that cause. And if you’re failing at the job you’re in, you’re gaining negative information about your fit for that aptitude, not necessarily that cause or that path.
Holden Karnofsky: And so, yeah. I mean like, so you gave the example of, I want to be in government. And it’s like, well, yeah, if you go into government and you have some peers or some close connections, they can probably, after a year, they can tell you how you’re doing. They can say, hey, you’re doing great. You’re moving up. You’re getting promoted at a good rate. People here think you’re good. This is a good fit for you. Or they can tell you, you’re kind of stalling out. And people don’t like this about you and that about you, or the system doesn’t like this or that about you. And then you can start thinking to yourself, okay, maybe I want to try something else. Maybe government’s not for me.
Holden Karnofsky: So it’s this framework where it’s not too hard to see if you’re succeeding or failing. A lot of people who aren’t necessarily all the way in your headspace and don’t have all the same weird views you have can just tell you if you’re succeeding or failing. And you can tell if you’re getting promoted, and it also matters if you’re enjoying yourself and if you’re finding it sustainable. These are all actual predictors of whether this is going to be something that you keep getting better at and end up really good at.
Just try to kick ass in whatever
Holden Karnofsky: The job market is really unfair, and especially the market for high-impact effective altruist jobs is really unfair. And the people who are incredible at something are so much more in demand than the people who are merely great at it. And the people who are great are so much more in demand than people who are good. And the people who are good are so much more in demand than the people who are okay. And so it’s just a huge, really important source of variation. And so then it’s like, can you know enough about what cause or path you want to be on, that the variance in that, the predictable variance in that, beats the variance in how good you’re going to be? And I’m like, usually not. Or usually at least they’re quite competitive, and you need to pay a lot of attention to how good you’re going to be, because there’s a lot of different things you could do to help with the most important century, and a lot of them are hard to predict today.
Holden Karnofsky: But a robust thing is that whatever you’re doing, you’re doing it so much better if you’re better at it. So, that’s part of it. Also, people who are good at things, people who kick ass, they get all these other random benefits. So one thing that happens is people who kick ass become connected to other people who kick ass. And that’s really, really valuable. It’s just a really big deal.
Holden Karnofsky: You look at these AI companies and they want boards, and they want the people on their boards to be impressive. And it’s like, those people, a lot of those people are not AI people necessarily, or they weren’t for most of their careers. They kicked ass at something. That’s how they got there. And you know, a lot of my aptitude-agnostic stuff is about this too, where I’m like, let’s say that you missed, and you picked a skill that turned out it’s never going to get you a direct-work EA job, but you kicked a bunch of ass, and you know a bunch of other people who kick ass. So now you have this opportunity to affect people you know, and get them to do a lot of good. And they are not the people you would know if you hadn’t kicked ass.
Ways to be helpful to longtermism outside of careers
Holden Karnofsky: The basic vision is just, you’ve got a set of ideas you care about, and you’re good at something. You’re a role model. People look up to you, people are interested in the ideas you’re interested in, and they listen to you, they take you seriously. You show up to community events with other people who care about these issues. You make those events better. Now more people come to the events, more people are enjoying the events. More people are kind of getting into the ideas, taking them seriously. You’re also helping improve the events. You’re saying these events aren’t working for me, this community isn’t working for me, and I’m this great role model and I’m this kind of person you should want to make happy.
Holden Karnofsky: And so you’re doing all that stuff. And I think that’s a fair amount of impact. I think that just being a person who is connected to other successful people at different levels of success and just kind of living the good life and having an influence on others by your presence… So that’s a lot of it. And then I think there’s another thing that I think is a large chunk of the impact you can have in any job, which is kind of a weird meta thing, but it’s being ready to switch, which I think is actually super valuable. It’s super hard. And so if what you are is you’re a person doing a job that’s not direct work on longtermism, there’s two different versions of you.
Holden Karnofsky: There’s a version of you where you’re comfortable, you’re happy. And someone comes to you one day and they’re like, “You know what? You thought that your work was not relevant to longtermism, but it turns out it is, but you have to come work for this weird organization, take a giant pay cut, maybe move, get a lot less of all the stuff that you enjoy day to day. Will you please do it?” And there’s a version of you that says, “Oh, I can’t do that. I’m completely comfortable.” And there’s a version of you that’s like, “I’m in. Done. I’m ready.” I think it’s really hard to be the second kind of that person. And I think if you’re a successful person in some non-longtermist-relevant job, you could be thinking about what it would take for you to be that person. You probably need financial savings. You probably need just a certain kind of psychological readiness that you probably need to cultivate deliberately. You probably need to be connected enough to the community of longtermists that you wouldn’t completely be at sea if you entered into it one day.
Holden Karnofsky: And so it’s hard. It’s a lot of work. It’s not easy. And it’s super valuable in expectation, because you’re really good at something. And we don’t know whether that something is going to be useful. So there’s a chance it is, and the usual expected value calculation applies. And so I think if you’re managing to pull off both those things, that you’re kind of… You’re respected by people around you, you’re a positive influence and force and someone who’s looked up to, and you’re ready to switch at any moment, I’m like, “Gosh, your expected impact is really good. I’m comparing you now to someone who, they’ve got that job that everyone on the EA Forum is always going gaga over, and they’re barely hanging on to it.” And I’m like, yeah, I think the first person. I think the first person has a higher expected impact.
The state of longtermism
Holden Karnofsky: I think about this a lot, because my job is to help get money out to help support long-term goals, and especially because recently I’ve focused more exclusively on longtermism. And I mean, I think it’s an interesting state we’re in. I think the community has had some success growing, and there’s a lot of great people in it. I think there’s a lot of really cool ideas that have come from the community, but I think we have a real challenge, which is we don’t have a long list of things that we want to tangibly change about today’s world, or that we’re highly confident should change, or that we’re highly confident that we want people to do. And that can make things pretty tough.
Holden Karnofsky: And I think it reminds me a bit… It’s like an analogy to a company or an organization where when you’re starting off, you don’t really know what you’re doing, and it wouldn’t really be easy to explain to someone else what to do in order to reliably and significantly help the organization accomplish its mission. You don’t even really know what the mission is. And when you’re starting off like that, that’s like the wrong time to be raising tons of money, hiring tons of people… That’s the kind of thing I’ve struggled with a lot at GiveWell and Open Philanthropy: when is the right time to hire people and scale up? If you do it too early, it’d be very painful, because you have people who want to help, but you don’t really have a great sense of how they can help. And you’re trying to guide them, but unless they just happen to think a lot like you, then it’s really hard to get there.
Holden Karnofsky: And then eventually what happens is hopefully you experiment, you think, you try things, and you figure out what you’re about. And I guess there’s a bit of an analogy to the product/market fit idea. Although this is a bit different, because we’re talking about nonprofits and then figuring out what you’re trying to do.
Holden Karnofsky: But that’s an analogy in my head, and it makes me think that longtermism is in a bit of a weird position right now. We’re in this early phase. We still lack a lot of clarity about what we are trying to do, and that makes it hard to operate at the scale we could to push out money the way that we hopefully later will.
Why Holden thinks 'cause X' is a bit overrated
Holden Karnofsky: The EA community is very focused on a few causes right now. And maybe what we should be focused on is finding another cause we haven’t thought of yet that’s an even bigger deal than all the causes we’re thinking about now. And the argument goes, well, if no one had thought about this existential risk in AI stuff, then thinking of it would be by far the best thing you could do. And so maybe that’s true again now. And so I get that argument and I certainly think it could be right.
Holden Karnofsky: And I don’t think the right amount of investment in this is zero. I also think we should just look at the situation. You’re kind of pulling causes out of an urn or something, and you’re seeing how good they are and you’re thinking about how much more investment in finding new causes is worth it. And it’s like, if the first three causes you pull out are all giving you the opportunity to let’s say benefit 10% of the current global population, if you do things right, then you might think maybe there’s a way to do a lot better than this. And then you pull out a cause that’s like, well, this century we’re going to figure out what kind of civilization is going to tile the entire galaxy. And it’s like, okay, well I think that drops the value of urn investment down a bit.
Rob Wiblin: …What more could you want? Where else is there to go?
Holden Karnofsky: Exactly, where else are you going to go? And it’s also neglected. So you’ve got importance and neglectedness off the charts. You’ve got a tractability problem.
Holden Karnofsky: But that’s exactly why, I mean the kind of person who would be good at finding cause X who finds these crazy things no one thought of… Well, there’s plenty of crazy things no one thought of that could be relevant to how that particular cause goes. There’s so much room to be creative and find unknown unknowns about what kind of considerations could actually matter for how this potential transition to a galaxy-wide civilization plays out and what kinds of actions could affect it and how they could affect it. There’s all kinds of creative open-ended work to do. So I think it’s better to invest in finding unexpected insights about how to help with this cause that we’re all looking at that looks pretty damn big. I’m more excited about that than just looking for another cause. It’s not that I have some proof that there’s no way another cause is better, but I think that investment is a better place to look.
Articles, books, and other media discussed in the show
Holden’s writing
- My current impressions on career choice for longtermists
- Cold Takes
- Most Important Century series
- Expert Philanthropy vs. Broad Philanthropy
Open positions at 80,000 Hours
Open positions at Open Phil
- Open Phil’s Technology Policy Fellowship
- Communications Officer
- Program Officer: South Asian Air Quality
80,000 Hours Podcast episodes
- Phil Trammell on how becoming a ‘patient philanthropist’ could allow you to do far more good
- Alexander Berger on improving global health and wellbeing in clear and direct ways
- Ajeya Cotra on worldview diversification and how big the future could be
- Max Roser on building the world’s first great source of COVID-19 data at Our World in Data
Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and why Aristotle is overrated. I’m Rob Wiblin, Head of Research at 80,000 Hours.
For many of you Holden Karnofsky won’t need any introduction, because he’s a — or maybe the — driving force behind GiveWell and Open Philanthropy.
He was also the guest in the episode right before this one, because this is the second half of a recent conversation we had. The first half was about why Holden thinks there’s a surprisingly strong case that we may be living in the most important century in the history of humanity, and perhaps even the universe itself.
Before you skip back to that one, we split this episode in such a way that you don’t really have to listen to part one before part two, though you can if you like.
Holden’s professional goal is to positively shape the long-term trajectory of humanity, and after ten years trying to figure out how to do the most good through philanthropic grants, he has a lot of thoughts on how that can and can’t be done.
To that end he recently wrote an article titled “My current impressions on career choice for longtermists.” Career choice for longtermists? That sounds like 80,000 Hours’ thing.
Holden’s advice and our advice are similar in a lot of ways, but Holden prefers a fairly different emphasis or framing of career choice, and we wanted to make sure you all knew about this alternative to our own perspective.
After that we discuss the overall state of longtermism as a set of ideas that try to move the world in the right direction. In particular, the need to find projects that are both helpful and able to absorb hundreds of millions (or billions) of dollars. Holden also explains why he’s pessimistic about our chances of finding a so-called ‘cause X’: something we’re not even aware of today, but that could be one of the most important moral problems of our time.
The final section is the most fun and general interest, as we cover questions from the audience as well as a bunch of random topics Holden has been writing about. Those include:
- Has the world really gotten better over the last few thousand years?
- What historical events deserve much more attention
- What we can learn about the world from COVID
- What Holden has gotten wrong recently
- And more
As I mentioned last time, Holden works at Open Philanthropy, which is 80,000 Hours’ largest funder.
If you’d like to share any feedback on this or other episodes, our email is [email protected].
Without further ado, here’s Holden Karnofsky.
Holden’s current impressions on career choice for longtermists [00:02:34]
Rob Wiblin: Alright. Let’s push on and talk about another thing that you wrote recently, My current impressions on career choice for longtermists, where you’re trying to bring together a bunch of experiences you’ve had over the last 14 years, I suppose, thinking about how to have a large impact. And then in more recent years, thinking especially about the most important issues for longtermists and seeing the careers of people around you, how they’ve gone well and how they’ve gone badly. What are some key, high-level points that you make in the blog post?
Holden Karnofsky: Probably the biggest key high-level point is just a different framework for career choice than I think effective altruists normally talk about. And it’s especially focused on early-career people who don’t really know what they’re good at yet, don’t really know what they’re about yet. And it’s kind of trying to address this question. Well, I’m a longtermist, let’s just simplify and say, I want to help with the most important century. I believe the hypothesis, I want to help. I’m really early in my career. What do I do? And I think the normal answer is kind of like, you should find a way to get a job that is going to set you up to be in this place in 20 years, in this kind of job in 20 years. Or it’s, you should get a job that is working on this cause, or the cause of AI.
Holden Karnofsky: And if you can’t do that, then the cause of biosecurity, and if you can’t do that, then the cause of EA community building, or whatever. It’s not necessarily in that order. And I have a bit of an issue with that kind of advice. I think it’s good advice. I think different lenses are good. So, with career choice, I think you want to just consider several frameworks. I’m not trying to say one is the only one you should use, but I think a problem with that kind of advice is it’s like, it often is hard to process and hard to take actions based on it, and hard to get a reliable signal about whether you’re making the right choices. So a lot of times it’s like, well I would love to work at an EA organization making plans about the most important century, but I wasn’t offered any jobs like that, now what? And it’s just like, now you have to start making guesses that if I go into this job it’ll lead there, and that stuff is very hard to predict.
Holden Karnofsky: It’s very hard to predict what kind of job you’re going to end up in, or what kind of thing is going to work out. What kind of cause you’re going to… People will switch causes during their career all the time. And so I just think a thing that you can do with that early career stage that is more predictive is you can ask, “What aptitude can I build, and become really good at? And that aptitude is going to stick with me throughout my career, no matter what cause I go into, and it’s going to be something that I’m building every year, even if I switch causes, even if I switch worldviews, even if things go differently than I thought, even if this job doesn’t work out, I’m building this aptitude.”
Rob Wiblin: What are a few of the specific aptitudes that you describe as an alternative way of framing what you’re trying to achieve out of a career?
Holden Karnofsky: Yeah, so there’s a cluster of aptitudes called ‘organization running, building, and boosting.’ And that includes things like doing management of people and helping organizations set their goals, hit their targets. It can include operations jobs. It can include business operations jobs. It can include, if you stretch it a little bit, communications jobs, a lot of things. And it’s like, if you come into an organization and your thing is you’re a project manager who keeps people on track and you’re a good personnel manager, that’s an aptitude you’re building, and that aptitude is going to stay with you.
Holden Karnofsky: If you’re doing that at a tech company, but you’re really good at it and you get better at it, and then you go later to an AI lab, you’re still going to be good at it. And that’s going to be one of the skills that you bring to the AI lab. And that’s a case where you were able to build something useful without having immediately a job in the cause you wanted to be working in that then did transfer. So, that’s an example of an aptitude. I talk about various research aptitudes, where you try to digest hard problems. I talk about a communications aptitude where you try to communicate important ideas to big audiences. I go into a bunch of different things.
Rob Wiblin: So, 80,000 Hours over the years has talked quite a lot about career capital, which we define very broadly as anything that puts you in a better position to get a job that has impact in the future. But it obviously includes things like skills, things like the people you know, the credibility you have, even just having money in the bank so that you can potentially change jobs without having to stress too much. Is this kind of a similar concept to building career capital in your early career, or does it maybe have a different emphasis?
Holden Karnofsky: Well, if you build an aptitude, then you’re building career capital. Aptitude is a kind of career capital. I think the thing that I’m emphasizing in the post is that when you’re at this early stage in your career, and you’re looking for a helpful question to ask, you’re looking for a, “Do I want to do A, B, or C?” And when A, B, and C are different causes, I think that doesn’t always have very clear or reliable implications for what that means for what kind of job you’re working in and how you know if you’re on track. Whereas if A, B, and C are being a project manager, being a researcher, being a communicator, then it’s like, well, if I want to do this, then I should take this kind of job. I can almost certainly find some kind of job like that.
Holden Karnofsky: And then here’s how I’ll know if I’m getting better at it. And then if I switch, then I’ll do that kind of job. And that’s how… So, it’s a question you can ask. Which aptitude, which kind of career capital do I want to build? And you can ask the question, you can take a guess. You can probably try out the guess, and you could probably get information on whether the guess is a good guess, and you could do all that earlier in your career in a reliable way, versus trying to gather information on what cause should I be in. Which can be done differently, but you don’t always necessarily learn a lot about that from your day-to-day work in your job.
Rob Wiblin: Just to complete the list of aptitudes that you mentioned, I think the last few ones were software and engineering aptitude, quite a transferable skill. Then there was information security. So it’s like computer security, helping people keep secrets. I guess that one’s beginning to approach a little bit like a job or career. And then there’s academia. So, that’s like a broad class of roles.
Holden Karnofsky: Some of them are broad classes of aptitudes that have a bunch of stuff in common. A lot of the point of the post was to be like, how do I take my guess at what I could be great at? And how do I start learning whether it was a good guess, and start changing my mind about it, which I think is often a better framework to be learning than some of these other frameworks.
Holden Karnofsky: And then I think another aptitude that I just skipped over is political and bureaucratic aptitudes. So like, if you’re in the government and you’re climbing through the ranks, that’s the thing that you can be good at or bad at. You can learn if you’re good or bad at it. And if you’re good at it, you’re going to be good at it. And if you change your mind about what cause to work on, you’re still going to have those skills, and they’re going to help you get into whatever cause you wanted. I also talk about entrepreneurs. I talk about community-building aptitudes, like trying to help people discover important ideas and form a community around them. That’s a thing that you can try out. You can see if you’re good at it. If you’re good at it, you’re going to keep getting better at it, et cetera.
Aptitude-first vs. career path-first approaches [00:08:46]
Rob Wiblin: So, I maybe find this distinction between the aptitude-first and career path-first thing pretty blurry at times.
Rob Wiblin: So if you’re trying to develop the ‘organization building, running, and boosting’ aptitude, and you go into a project management role, and then you’re developing project management skills, and maybe the measure of success in that is, do other people think that you’re really flourishing and killing it in that job. So from one point of view, that’s aptitude building. I guess it also seems quite a bit close to almost a career path, or at least the beginnings of a career in a particular kind of role. Is there any way of sharpening the distinction in my mind and the listeners’ minds?
Holden Karnofsky: I mean, it’s somewhat of a fuzzy distinction. Some of the aptitudes I list basically just are paths, like academia. But I think a career path tends to be based around a particular endpoint. And a lot of times that endpoint is sometimes cause-specific, or even organization-specific. So it’ll say, I want to work at an organization like this, working on a problem like that, in this kind of role. And it’s just like, it’s a very specific target to be aiming for. And I think most people are just wrong about where their career is going to be in 10 years. Their best guess is just wrong. I mean, it depends a little bit. There are some career paths where you really… They’re very well-defined. They’re scaffolded. You have to get on them early. People give you a lot of help figuring out how you’re progressing.
Holden Karnofsky: So the distinction kind of dissolves there, but I think there’s other careers where it’s very hard to tell where you’re going to end up, but it’s very tractable to tell whether you’re good at the kind of thing you’re doing and figure that will land you somewhere good. And if it doesn’t land you in direct work it’ll land you somewhere good anyway, because there’s things you can do as long as you’re good at what you do. So I would say that’s the distinction.
Holden Karnofsky: And I would say to imagine the distinction, just imagine two project managers, and one of them is saying, “I’m hoping I will pick up skills here that will allow me to work at an effective altruist organization on project management.” And the other person is saying, “I don’t know what my plan is. I’m just trying to get good at project management. And I’m trying to see if I’m good at it, and I’m really focused on that.” And then you go 20 years later and it’s like, well, it turns out that all the effective altruist organizations are fine, they don’t need project management, but like there’s some other weird job that didn’t exist at the time that we were having this conversation, and it’s not project management, but it takes a lot of the same skills. You needed to be a good manager. And it’s not even at an EA organization. It’s like, you’re at a non-EA AI lab, but by being a valuable employee, you’re getting a voice in some of the key issues or something like that. It’s just like something you didn’t think about before.
Holden Karnofsky: And so you imagine these two people and it’s like, the first person was focused on… One of the people, I don’t remember which one I said first, was focused on the right question, which is, “Am I good at this, or should I switch to another aptitude?” And another person was like, “I have this plan,” and their plan didn’t work out.
Rob Wiblin: Just an observation is that our impression or experience over the years is that most users of 80,000 Hours seem to prefer the most concrete advice possible, like “Go and get this specific job.” And then failing that, they want to be told what career path to get on. Something that’s very clear. And it’s almost to a slightly comical degree, an impractical degree, because obviously we can’t be directive to that many specific people and guide them in any way that’s sensible. It would be irresponsible to be as concrete as that. I guess, arguably the aptitude framework is taking things to an even slightly more abstract level. Because it’s like, neither a job nor a career, but an even broader class of careers. I guess it could be that your approach is the right one. But I wonder whether this might make it a bit more difficult to get uptake, because it almost requires a little bit too much work on the part of the user.
Holden Karnofsky: Maybe. I have occasional career coaching calls with people where I just say…. People just call me up and they’re like, can I talk to you about a career choice I’m making? It might be different. I may be talking to a different kind of person at a different kind of phase, because a lot of times they’re choosing at that time that they’re talking to me, but sometimes not. So maybe they’ll lead off with something like, well Holden, I want to become this kind of thing in 10 years. And I’ll be like, “Let’s not talk about that. What are the jobs you’re considering?” And then I’ll say, “What are the things that you’ve done in the past that you’ve been good at and that you’ve liked?”
Holden Karnofsky: And then I’ll say, “What would you be doing day-to-day in this job and that job, do you think you’d be good at that? Do you think you would like that?” And then I’ll kind of say, “Well it sounds like this job over here would really match with the kind of thing that you like doing, that you’re good at doing, that you’ve done well in the past. This job would match less well with that, but you’ve got a theory that you should take it anyway, because you calculated out the utilons and you can have more utilons per year of impact that way. And so you should just go with the first one.” And that’s often what I say. And a lot of times I’ve just given people permission to do what they wanted to do anyway.
Holden Karnofsky: But I do think it’s usually the right thing. It doesn’t always have that ending. So, other times I’ll say, “Well, you know what? It sounds like you really like to do this kind of work. Have you thought about a job in this general category?” And they’ll be like, “Oh no, because there aren’t any jobs like that at EA orgs.” And I’m like, “No, but it would be good to develop that aptitude if you’re going to be really good at it. And here’s how it could come in handy later.” And then we’ll talk about where they might apply and what they might look into, and maybe I’ll make a referral. So it’s a different… I think it’s a different style. I mean, I don’t know. I’ve listened to a few 80K coaching calls, and I think it’s a different style. I think maybe a blend or a combination could be interesting.
Rob Wiblin: So you start out focusing primarily on trying to figure out what is the person’s greatest strength or their aptitude or the area in which they—
Holden Karnofsky: …or just a hypothesis. It’s actually very hard to know. I mean, part of my thinking is like, a lot of the good careers I’ve seen, they have all these chaotic changes of direction and it’s like, people just don’t end up where they thought they would end up, and it’s all very unpredictable. And so, I’m trying to help people gather real information that makes them make these changes more quickly and more effectively, rather than build things around a plan of where they’re going to end up, or even talk about impact per year. Because your impact per year early on is just a rounding error, usually, compared to the impact after you’re really good at something.
Rob Wiblin: So you’re trying to suss out the kinds of skills that someone might kick ass at now, or might become extremely good at in the future. And then I guess you can almost broaden your view to think, well, what’s all of the kinds of things that can make use of this aptitude? Because people often do have an excessively narrow focus. That’s one thing I’ve noticed almost always when I’m talking to people about their career, is that they’ve usually narrowed in and become very invested in potentially too few options early on.
Holden Karnofsky: I think that’s somewhat true. I think that’s right. A thing that has bugged me is people who are a year out of college and they’re just like, where am I going to do the most good this year? And I’m just like, that’s a weird framework. Like, there’s way more variation in how much good you’re going to do in 10 years after you’ve built up a lot of career capital.
Rob Wiblin: Yeah, it’s very interesting, that phenomenon, because I feel we’ve been pushing against that pretty hard from the very beginning, with this idea of career capital. But I mean, maybe a bit of something that’s going on there is a bit of hybrid motivations, where people both want to have impact because of its intrinsic value, but also because they want to actually have a job in which they feel fulfilled and feel like they’re having impact. That’s also important to them. And the idea of delaying that for 10 or 20 years, maybe it’s a little bit hard to swallow.
Holden Karnofsky: Certainly. It can definitely be hard to do a job where you feel like everything you do is not having a positive impact at all. But I think, I don’t know, to me the most healthy attitude early in the career at a job is, I’m learning. That’s the most healthy attitude. And I don’t think that’s the healthiest attitude every year in your career. But that was certainly… I mean, I don’t know… I certainly did this. It was a little weirder because there was no effective altruism when I was starting my career. But I was just like, oh, this company, they make predictions about macroeconomic events. Like that sounds… I would love to be able to do that. I didn’t know why I wanted to be able to do that. I just wanted to be able to do that. And then I did learn things that were really useful for starting an organization that answers weird, hairy questions that require a bunch of poorly-scoped research. And I applied that skill to something that I thought was important.
Rob Wiblin: So I definitely recommend that people who are interested in thinking about career choice and want to have more impact — and I suppose especially career choice to have more impact, if you’re focused on longtermism — people in that camp should definitely go read it. It packs a lot of information into a pretty short package. It’s mercifully brief, but it does include many of the most important things that I think 80,000 Hours and the effective altruism community and you have learned over the last 10 years.
How to tell if you’re on track [00:16:24]
Rob Wiblin: One thing that I particularly liked about the post is that it’s focused on guideposts for telling whether you’re on track in developing a given aptitude, and so whether you should stick with it and double down. For example, how does one tell if one is on track in political and bureaucratic aptitudes?
Holden Karnofsky: Yeah, sure. So that’s an example. So, I think some of these other frameworks are great, but I think you can end up lost on, how am I refining my picture of what I should be doing and where I should be? So it’s like, I wanted to work in AI, I managed to get this job in an AI organization. What do I do now? What am I learning about what kind of job I should be in? The aptitudes framework is a way of saying, look, if you’re succeeding at the job you’re in, then you are gaining positive information about your fit for that aptitude, not necessarily for that cause. And if you’re failing at the job you’re in, you’re gaining negative information about your fit for that aptitude, not necessarily that cause or that path.
Holden Karnofsky: And so, yeah. I mean like, so you gave the example of, I want to be in government. And it’s like, well, yeah, if you go into government and you have some peers or some close connections, they can probably, after a year, they can tell you how you’re doing. They can say, hey, you’re doing great. You’re moving up. You’re getting promoted at a good rate. People here think you’re good. This is a good fit for you. Or they can tell you, you’re kind of stalling out. And people don’t like this about you and that about you, or the system doesn’t like this or that about you. And then you can start thinking to yourself, okay, maybe I want to try something else. Maybe government’s not for me.
Holden Karnofsky: So it’s this framework where it’s not too hard to see if you’re succeeding or failing. A lot of people who aren’t necessarily all the way in your headspace and don’t have all the same weird views you have can just tell you if you’re succeeding or failing. And you can tell if you’re getting promoted, and it also matters if you’re enjoying yourself and if you’re finding it sustainable. These are all actual predictors of whether this is going to be something that you keep getting better at and end up really good at.
Rob Wiblin: So one of the reasons that you prefer this kind of aptitude-first way of approaching your career, especially early on, is that you think it’s easier to tell whether you are likely to be able to develop a particular aptitude, and maybe also whether you’re succeeding at developing that aptitude. Yeah, do you want to explain why you think that?
Holden Karnofsky: Yeah. I mean, I’m not really sure why I think that, to be honest. I think it’s just… A thing that I noticed is, when I imagined myself as an early-career person using a very cause or path-based framework, I noticed myself having a tough time learning year to year about what I should be doing differently, and knowing where I should go next and what I should do next, and feeling like a lot of the stuff that I think I’m learning is actually very brittle. So on a path-based thing, you might be doing well at year one of your 20-year plan to get to a certain job, but it just doesn’t really give you very reliable information about how year 15 is going to go. And when I think about the aptitude stuff, I just think, okay, if you’re a project manager and you’re getting promoted and a lot of raises and people think you’re great, and you’re getting a lot of positive feedback, then probably you’re going to keep becoming a better project manager.
Holden Karnofsky: That’s not a wild prediction that could break in some brittle way for no reason. That’s probably true. You’re gaining real evidence about it. So I’m not totally sure why I think that, I just think that. And I think in many ways, it’s just not that hard to tell if you’re a fit for the aptitude. And so, it’s a kind of learning that I’m trying to get people to pay attention to, and do that learning as they’re also learning about what kind of cause they think is most important, which is something you can also do. But I think that’s more maybe something you’re learning in your spare time.
Rob Wiblin: So I suppose there’s different lenses that you can take when you’re trying to plan your career. And I think in our process we kind of recommend that you spend a little bit of time thinking through all of these different lenses so you don’t miss something important. But you’re suggesting taking this aptitude-first approach. Other lenses you might put on are like, well, which problem do I want to work on? Maybe that’s something that I should try to decide early on. Another one might be, what kind of career path do I want to be on? What sorts of specific jobs should I be going for? Yeah. Are there any other reasons why you prefer the aptitude-first approach?
Holden Karnofsky: The only thing I would add is that another thing I was trying to do with this post is also… A very important part of it is the aptitude-agnostic part of the post. So I also talk about how — especially in longtermism — we’ve got these weird problems we’re working on. No one’s really sure what to do. There’s a small number of jobs that are really directly on the most important thing, and not everyone is going to end up in a job that is directly working on the problem they care about most. And that’s okay.
Holden Karnofsky: And I think there’s a lot of opportunities for people to do a lot of good in whatever job they’re in, and using whatever aptitude they have. And so that was another thing that I tried to put in this post: to say that one way to think of your career is, instead of saying “My goal is to work in the most important AI job and if I fail, I fail,” to say, “My goal is to build this aptitude, and if I do, there’s a high probabilistic chance that that aptitude will be useful for the problems I care about. And if it’s not, then here’s what I’m going to do instead.” Because as long as you’ve gotten good at something, there’s actually a lot of ways that you can help with whatever problem you care about most.
Holden Karnofsky: And I think those are the two biggest things I’m going for, is just trying to turn people onto this framework of making hypotheses and learning about them that I think can be very informative and helpful in choosing your career. And then also trying to lay out this backstop of, you don’t need to be do-or-die for getting the job you want. If you get really good at something, you’re going to do some good.
Rob Wiblin: A sort of assumption in the aptitude-first approach is that if you develop some really strong expertise, then you’re likely to be able to switch the problem or industry that you’re working on or a part of in the future. And I guess in my mind, deep experience with problems like artificial intelligence or biosecurity or international relations and so on, having worked in that and knowing the people and being familiar with the information that people in that organization or industry are familiar with, are important filters to entry. And so to some extent, they are aptitudes of their own. Or at least it’s not so easy to just always carry over aptitudes between very different problem areas. Is that something you agree with, or maybe am I misunderstanding the advice?
Holden Karnofsky: I think it’s only true for some aptitudes. So I think if your job is policy advisor, then you’re going to have to be building up expertise in what area you’re advising in probably. Although I would think super early in a career, it’s actually just totally fine to see if you’re doing well on a policy advisor track on some random thing, and then later start building it up. But you’ll need to build it up for a while for sure. If you’re a policy advisor on AI, then at some point you will need to build up AI expertise. There’s other jobs where it’s just way less true.
Holden Karnofsky: It’s just like, I do think this example of being in government… I think there are a lot of roles where you don’t need to have been in this area your whole career. You can move around if you’ve been in the very general area. So it’s like, if you’ve been working on science and technology, you don’t necessarily have to have been working on AI. I also think there are a lot of areas where you can just get up to speed really damn quickly. AI actually kind of seems to be one of them. A lot of the best AI scientists I know, they were biologists or physicists and they caught up frighteningly quickly to become great AI scientists.
Holden Karnofsky: And they were probably just building a bunch of scientific habits of mind, some of which are maybe actively diversifying and complementary to the normal ML field, and might actually make them better. So I think it varies. I think there’s jobs where you need to be building up expertise pretty early, there’s jobs where you need to build the expertise at some point, but you could do it pretty quickly, and then there’s jobs where you just don’t need to build up expertise because you’re a manager. And you’re good at that, and you don’t need to be an expert in something else. I think it just varies. And I tried to organize it that way. So I think I just fully own that some aptitudes are also paths. They are the same thing, because the path is well defined. So if you want to be a professor of economics, there’s nothing fancy here. Go get an economics PhD. It’s all the same steps. My framework has nothing to add. And I just own that. Some aptitudes are just paths. Some aptitudes are skills you can build that open up a lot of different possibilities.
Rob Wiblin: So having listened to the previous episode on the world’s most important century, I could imagine some listeners might think, “Holden sure seems to think that there’s some very specific technologies, or very specific issues, that could determine how the century and how the whole future goes.” That is a slightly strange juxtaposition with the idea that you shouldn’t worry too much early on in your career about what you are building expertise in, or what specific problem you’re learning about and building career capital in. How do you reconcile those?
Holden Karnofsky: It’s only one framework, and I would definitely say that if you have two job offers and they’re both exciting and one of them is in AI and one of them is not, and you agree with me about the most important century, take the AI one. No problem. And they don’t have to be equally exciting. I just said they’re both exciting, right? So, I’m not saying this is a totally dominant factor or something, but I would also say if you don’t have… Or if your only opportunity to work in AI is some job that isn’t very exciting to you and isn’t going to really build anything and that you’re not really into, and you’re not going to be working with great people, and not going to be leveling up…
Holden Karnofsky: And then you have some other job where you could really learn a lot, grow a lot, build a great network, build career capital… I would say take the second one. I absolutely would. And a lot of the people I know now who I’m most excited about the work they’re doing in AI did not start in AI, and should not have started in AI. So I mean, that would certainly include me. I’ve spent a lot of my career trying to figure out how to build an organization to answer a hairy research question. And that has been really useful for this most important century hypothesis stuff. And I think if I had been age 23 working in AI, I wouldn’t even have been in deep learning. So that would have sucked. And I would have been working on these AI frameworks that are just not even that useful, and it really would have been bad.
Holden Karnofsky: So my wife, Daniela, she worked in tech, and she built up all these personnel management skills, and now she works in AI. I think if she had been in AI from the beginning, I just don’t think that would have gone nearly as well. I actually think there weren’t really fusions of tech companies and AI companies the way there are today. And I think tech companies are a much better place to pick up the skills that are now needed in that kind of role.
Just try to kick ass in whatever [00:26:00]
Rob Wiblin: So in prepping for this interview I went back and looked at some stuff that you’d said and written over the years about career choice. And I suppose the common thread that people have probably picked up on already is the idea that you should just flourish in whatever, or just try to kick ass in whatever, early on.
Holden Karnofsky: That’s right.
Rob Wiblin: And that’s a strategy in itself. The tone in favor of that was even sharper, I think, five or 10 years ago. What have you observed that makes you believe that that’s a good approach? Is it primarily based on empirical experience or something else?
Holden Karnofsky: I mean, it’s pretty hard to really have justified empirical beliefs about career choice. Career choice is not even a well-posed question, because we’re like, what should people do with their career? It’s like, alright, well who? And if I start talking to an individual, well, they have so much information that I don’t — about themselves, about their networks, about their feelings. And I think a job is so all-consuming that your feelings about it are going to be a major factor. And how it feels to work there every day… I’m not saying it’s the only thing that matters, I’m not saying do what you love. I’m just saying, it’s really important. And I don’t have the information about it that I could have if I’m talking to you.
Holden Karnofsky: So there’s really no version of the career choice question that is the same kind of question as, what are AI timelines? There’s no version of that question that we can really pinpoint our disagreements down to Bayesian probabilities and just hash it out. It’s just like, we’re either talking in generalizations, or we’re talking to specific people, where they’ve got 80% of the information and we’re trying to do some value added… More like they have 99% of the information, and we’re trying to do some value added. So, I mean, I don’t know what my views are based on. I look around myself at people who I think are doing great things, and I ask how they got there, and that’s most of it. So yeah, I absolutely think that when you kick ass… The job market is really unfair, and especially the market for high-impact effective altruist jobs is really unfair.
Holden Karnofsky: And the people who are incredible at something are so much more in demand than the people who are merely great at it. And the people who are great are so much more in demand than people who are good. And the people who are good are so much more in demand than the people who are okay. And so it’s just a huge, really important source of variation. And so then it’s like, can you know enough about what cause or path you want to be on, that the variance in that, the predictable variance in that, beats the variance in how good you’re going to be? And I’m like, usually not. Or usually at least they’re quite competitive, and you need to pay a lot of attention to how good you’re going to be, because there’s a lot of different things you could do to help with the most important century, and a lot of them are hard to predict today.
Holden Karnofsky: But a robust thing is that whatever you’re doing, you’re doing it so much better if you’re better at it. So, that’s part of it. Also, people who are good at things, people who kick ass, they get all these other random benefits. So one thing that happens is people who kick ass become connected to other people who kick ass. And that’s really, really valuable. It’s just a really big deal.
Holden Karnofsky: You look at these AI companies and they want boards, and they want the people on their boards to be impressive. And it’s like, those people, a lot of those people are not AI people necessarily, or they weren’t for most of their careers. They kicked ass at something. That’s how they got there. And you know, a lot of my aptitude-agnostic stuff is about this too, where I’m like, let’s say that you missed, and you picked a skill that turned out it’s never going to get you a direct-work EA job, but you kicked a bunch of ass, and you know a bunch of other people who kick ass. So now you have this opportunity to affect people you know, and get them to do a lot of good. And they are not the people you would know if you hadn’t kicked ass.
Rob Wiblin: It makes a lot of sense to me that if you can somehow become known as truly exceptional, just the best person at doing something, even in a relatively narrow or not especially important domain, then you do get these peculiar benefits from that. One way I worry that this advice could go wrong for some people is if there’s nothing they can find that they’re going to be truly exceptional at. They’re in a position to be good at something that really matters, but not to be exceptional at anything in particular, and they could then think, well, I should just double down on the thing that I happen to be best at. And then they’ll get the worst of both worlds, where they’ve neither focused on something that’s itself important, nor do they get these… What do you call it… Increasing returns to being ever better, in a positional sense. Do you worry about that?
Holden Karnofsky: A little bit. I mean, I think what I’m saying scales down okay. So, I don’t know this that well, I haven’t built a model or anything, but I’m kind of like, being amazing is better than being great, being great is better than being good, being good is better than being okay. I think all the things I’m saying just scale down okay as you talk about the different layers of that. So, I think if you’re good instead of okay at what you do, you’re going to know other people who are good instead of okay at what they do. And I think that gives you opportunities to… I don’t know, just be a person that they trust. And that matters more than if everyone you know is okay at what they do.
Holden Karnofsky: It gives you just like… Again, it affects what circles you’re moving in, what random opportunities you have to meet people who are doing stuff that is cool. The biggest thing is it’s the same argument about… It’s hard to know what kind of job is going to be relevant when it’s most important, and when you’re at the peak of your career, but whatever it is, being good at it will be a lot better than being okay at it. You’ll have a lot more impact at it that way in my opinion. So that’s a lot of it. And then I do want to be clear that I’m only presenting this as a major important axis of variation, not as the sole exclusive thing. And what I do in my post is I list aptitudes that all seem important. And they all seem ballpark important.
Holden Karnofsky: I’m not saying there are no differences, but I’m saying most of the aptitudes I listed, you could pick any random two, and I’d probably rather have a good person at A than an okay person at B, in terms of how much impact I’d bet on them to have. And it’s those specific aptitudes. It’s those ones. So I’m not saying, whatever it is you can be good at, just go do that, because I think there are plenty of aptitudes that are way less likely to lead anywhere good.
Rob Wiblin: Do you have any specific stories of people who’ve taken this just flourish-in-whatever-early-on approach and it’s worked out well for them?
Holden Karnofsky: I kind of feel like this is everyone I know, although some of the distinctions dissolve. So the obvious person I look at is myself; I describe myself a little bit. I’ve definitely taken that approach my whole life. My wife, Daniela, was kind of the classic case of a long time in the tech world. A long time, actually, trying different things and figuring out what she liked, which I think was also really valuable. And then realizing that she could be kind of a high-paced organization-scaling manager in the tech world, and now does that in AI. A lot of the EAs now who are, whatever, on boards or in government. So it’s some of that.
Holden Karnofsky: And then it’s like, okay, there’s Open Philanthropy employees. And it’s like, how do you want to classify that? Because okay, they took what they thought was a high-impact job, and so you could say, well, that was the path approach, not the aptitude approach. But actually, whenever I’m talking to someone about whether to take the job at Open Philanthropy, I always try to switch them into the aptitudes thing. So they’re always like, “Well, I can have impact at Open Phil, but you guys have a lot of good people, or I could go here and they would have someone really bad instead, and I’ve done the math, and blah, blah.” And I’m just like, “Look, I’m just going to ask you to change the whole frame here. I think Open Phil looks quite good on impact. I’m going to ask you to ignore that. And I’m just going to say, go somewhere where you’re going to be surrounded by great people, working on stuff that you have energy for and are learning a lot about, and you’re kind of leveling up and becoming a more impressive human being. Just make the decision on those criteria.”
Holden Karnofsky: And in some ways I just put Open Phil at a disadvantage because we’re such an impact-focused organization that maybe you’ll be like, oh, there’s a random tech company that’s better, and I’m just giving them that. I’m just saying, “Go. Go take the tech company job, if that’s how it is.”
Holden Karnofsky: So it just dissolves. So I kind of want to say all the good cases I know kind of feel like… not literally all, but a lot of them feel like they follow the aptitudes, but in a lot of cases the distinctions dissolve.
Rob Wiblin: It’s possible that I have… I’m not sure whether bias is quite the right word, but I have a peculiar selection effect on the people that I know and the people that I meet. Many of them, including me, took jobs in order to have a large impact very early on, because we decided to kind of go and start things up to try to build this thing called effective altruism. So I can think of myself and quite a lot of other people I know who have done very well, even though they decided to go into some role that they thought was especially impactful early on. I suppose, at the same time, I can’t think of any of them who did that where they weren’t also thinking, “I’m going to flourish in this role, and this really uses my aptitudes.” So I suppose there was just a happy coincidence of these two things, both recommending the same course of action.
Holden Karnofsky: I think the happy coincidence… It’s not a huge coincidence. I mean, one piece of advice I give to people is, go to small things that are getting bigger. That’s just a great way to build career capital. I tell people to do things they’re excited about and motivated by. I was just on fire to start GiveWell. I was like, I can’t believe this thing doesn’t exist. And I was going to be so much more productive and excited and motivated doing GiveWell than I was going to be doing something else. And that’s what made it an easy call. So these things do dissolve, because effective altruism is kind of a new thing that I think is getting bigger.
Holden Karnofsky: So I think if you’re into it and you share the values, you’re actually one of a kind of small number of people who are into it. And so it actually is… It becomes quite appealing to go and do effective altruism stuff. And there could be a lot of opportunities. So it dissolves in a lot of cases, but then there’s a lot of cases where it doesn’t dissolve. And I think a lot of the people just follow my advice, whether or not I give it to them. And a lot of what I’m trying to do is kind of clarify the situation—
Rob Wiblin: Make them comfortable. Make them happy about it.
Holden Karnofsky: Yeah, clarify the situation, yeah, make people happy about it. Get rid of some of the self-resentment and guilt, but also just speed everything up, make everyone more aware of what they’re doing. I think there are people who just…they just go through wasted cycles of, “Here’s a job I like, here’s a job that I’ve calculated has a high impact. I’m taking the latter. Ooh, this sucks. I can’t really do this. I’m going to make myself. I’m going to force myself. Oh God, I couldn’t do that. I burned out. That was really unpleasant. Well now I’m just going to do something I can do.” And I would like to skip that.
Holden Karnofsky: And I think just people can be faster at figuring out where they’re going to thrive if they’re paying attention to where they’re going to thrive, and if that’s one of the first questions they’re asking. So that is a lot of the advice. So a lot of times I’m just giving people permission to do what they want, in some sense, but I think that’s valuable.
Rob Wiblin: I think with 80,000 Hours, early on we talked a lot about career capital, because we noticed that a lot of people, as we were talking about earlier, would think about, how can I have the biggest impact right now, like me having just graduated from my degree and being 22. And that can often be a big mistake, because you need to think longer term, about building up aptitudes and accumulating career capital. But then we started to find some people who were in a position very early on to get exactly the kind of job or path that they ultimately wanted to be on. They could already, say, hypothetically, get a job as an AI safety researcher. But then they were sometimes reading our articles and thinking, “No, I’m going to go and do consulting, or do some other unrelated thing, just for the purposes of building up career capital.” And so we started saying things like, if you can already jump into the exact thing that you think is the right thing for you in the long term, you should go do that.
Holden Karnofsky: I generally agree with that. Although I would say, I mean, if someone is like, “Man, I could do AI or statistics, and I just fucking love statistics and I’ve got so many ideas I’m excited to pursue,” I’d be like, “Do statistics. For all you know, it’s going to make you a better AI researcher than the other path anyway.”
When not to take the thing you’re excited about [00:36:54]
Rob Wiblin: So both of us are very keen on the idea that people should not take jobs that they feel unexcited about, where they don’t think they’re going to feel like they’re thriving, where they’re not going to get flow during the course of the work. That’s just a devastating disadvantage to be at. Maybe it’s worth thinking about what the limiting principle is here. What’s a case where a role would be too unimportant, or too distant from things that can have impact, such that someone shouldn’t take it even if they thought they would flourish in it, or it was the kind of thing they were most motivated by?
Holden Karnofsky: A lot of my feeling is a weighted average. So a lot of my feeling is just, we should just put weight on each of these things. And I think the aptitudes probably just… They could use more explicit attention in EA. The framework would just be nicer and better if they were a good chunk of the picture. So it’s like, in the limit, if someone is like, “Well, I can be a basketball player or an AI researcher, but as an AI researcher, I will barely complete my PhD, and as a basketball player, I will win the MVP award in the NBA,” I would be like, “Okay, basketball player it is.” But I mean, in almost all cases, I would tell someone to go be an AI researcher rather than a basketball player if they want to have a lot of impact.
Holden Karnofsky: There are jobs that everyone wants, just everyone wants, and you will be treated very badly if you go into them, and you will have a really unpleasant… Or it could be pleasant if you get really lucky. But your career is going to have a lot of luck in it, and there’s a high probability that it’s just going to be a lot of frustration, because you are so expendable and you are so replaceable. If what you want to be is a certain kind of entertainer, unless you’re just totally on fire and nothing I can say will dissuade you, you should probably not go that way. The supply and demand is not favorable. Go somewhere where the supply and demand is better, such as EA, which is a very promising set of ideas where it’s not quite the same way, where there isn’t an infinite number of these people around.
Rob Wiblin: There is this funny circumstance that if you can combine an understanding of and interest in longtermism with artistic ability, then I think there actually are quite a lot of roles there.
Holden Karnofsky: Yeah, maybe, okay. So then maybe you should. But then you’re going to be a weird artist who’s supported by EA funders, and it’s like you don’t have to play by the normal rules. But if you find yourself spending five years bringing someone coffee in the hopes that you might get a shot at an audition for something, you’ve picked a job where the supply and demand is not in your favor. I wanted to be a novelist for a ton of my life, so that would be one of those jobs, yeah.
Rob Wiblin: I suppose now you’re writing blog posts, so slightly related.
Holden Karnofsky: Same difference. Yeah.
Rob Wiblin: What is your vision for the amount of time that someone would spend developing an aptitude before they changed their mentality to, “Now I want to use this aptitude to have an impact”? Or potentially, I guess, it could go all through their career, and then they just find that the thing they’re already in suits their aptitude, and there’s a way to have impact there.
Holden Karnofsky: I generally say to people, “Do you feel like you’re still learning? Do you feel like you’re still growing? And do you feel like you want to be in this for the long haul, this aptitude?” And so I think at the point where someone says, “I’m very good at what I do, and this is the kind of thing that I am going to keep doing, I’m happy to keep doing, and I’m not dramatically growing anymore, or I’m not dramatically growing where I am,” then I’m basically switching my framework with them, and I probably sound a lot more like you guys. Then I’m just like, “You should work on AI. Let’s find a way for you to work on AI.” And if I can’t find a way for them to work on AI, then I say, “Well, here’s the aptitude-agnostic vision, and I’ve got you in my Rolodex.” I think Rolodex just means Google Sheet now, but whatever.
Rob Wiblin: We were saying earlier that people can find it hard to go into these roles where they’re just building their aptitude but they don’t feel like they’re having any impact, because it’s just demoralizing to feel like your work isn’t already meaningful. I suppose, how much weight should we put on the fact that it could badly affect people’s mental health, or perhaps just being disconnected from any way of actually doing good could cause them to potentially lose interest in doing good, and they just become interested in making money or developing professional skills and so on?
Holden Karnofsky: Well, there’s different things here. I often tell people to do what they’re excited to do or what they think they can be great at, which are often connected and related. Again, not literally whatever they’re excited to do, but to put weight on that. So I’ve been telling people to do that, and for a lot of people, including me, the calculated impact is going to have a big effect on that. And if it does, then it does. So I’m not saying to ignore impact. For me, I have trouble getting out of bed for a job if there’s some other job that I think could have more impact. It’s going to be very hard to stop me from wanting to switch. And so that’s fine. I should consider that.
Holden Karnofsky: It’s when someone is more excited about A, but they want to go do B because it intellectually has more impact. That’s where I’m more resistant. So anyway, for people who are having trouble getting excited about work because they just don’t feel like it’s high impact, I mean, yeah, I would say they should be looking for a job where they are more excited to come into work. And I think that’s just all there is to do about that.
Ways to be helpful to longtermism outside of careers [00:41:36]
Rob Wiblin: In the blog post you mentioned a couple of ways that people can be helpful to longtermism outside of careers as such. Can you mention a couple of those? I thought a few of them were quite cool.
Holden Karnofsky: I mean, I list a bunch of stuff. I have a bunch of bullet points. But I would say the basic vision is just, you’ve got a set of ideas you care about, and you’re good at something. You’re a role model. People look up to you, people are interested in the ideas you’re interested in, and they listen to you, they take you seriously. You show up to community events with other people who care about these issues. You make those events better. Now more people come to the events, more people are enjoying the events. More people are kind of getting into the ideas, taking them seriously. You’re also helping improve the events. You’re saying these events aren’t working for me, this community isn’t working for me, and I’m this great role model and I’m this kind of person you should want to make happy.
Holden Karnofsky: And so you’re doing all that stuff. And I think that’s a fair amount of impact. I think that just being a person who is connected to other successful people at different levels of success and just kind of living the good life and having an influence on others by your presence… So that’s a lot of it. And then I think there’s another thing that I think is a large chunk of the impact you can have in any job, which is kind of a weird meta thing, but it’s being ready to switch, which I think is actually super valuable. It’s super hard. And so if what you are is you’re a person doing a job that’s not direct work on longtermism, there’s two different versions of you.
Holden Karnofsky: There’s a version of you where you’re comfortable, you’re happy. And someone comes to you one day and they’re like, “You know what? You thought that your work was not relevant to longtermism, but it turns out it is, but you have to come work for this weird organization, take a giant pay cut, maybe move, get a lot less of all the stuff that you enjoy day to day. Will you please do it?” And there’s a version of you that says, “Oh, I can’t do that. I’m completely comfortable.” And there’s a version of you that’s like, “I’m in. Done. I’m ready.” I think it’s really hard to be the second kind of that person. And I think if you’re a successful person in some non-longtermist-relevant job, you could be thinking about what it would take for you to be that person. You probably need financial savings. You probably need just a certain kind of psychological readiness that you probably need to cultivate deliberately. You probably need to be connected enough to the community of longtermists that you wouldn’t completely be at sea if you entered into it one day.
Holden Karnofsky: And so it’s hard. It’s a lot of work. It’s not easy. And it’s super valuable in expectation, because you’re really good at something. And we don’t know whether that something is going to be useful. So there’s a chance it is, and the usual expected value calculation applies. And so I think if you’re managing to pull off both those things, that you’re kind of… You’re respected by people around you, you’re a positive influence and force and someone who’s looked up to, and you’re ready to switch at any moment, I’m like, “Gosh, your expected impact is really good. I’m comparing you now to someone who, they’ve got that job that everyone on the EA Forum is always going gaga over, and they’re barely hanging on to it.” And I’m like, yeah, I think the first person. I think the first person has a higher expected impact.
Things 80,000 Hours might be doing wrong [00:44:31]
Rob Wiblin: I guess we’ve talked about a couple of ways that your advice differs in tone or emphasis from 80,000 Hours, but are there any other ones that are worth highlighting?
Holden Karnofsky: I don’t really think so. I mean, I do look at your website occasionally. I would let you know if there’s specific stuff I disagree with. I don’t think there is specific stuff where I’m like, “This is wrong.” I think there are vibes and frameworks and methods of analysis and angles that I think it would benefit the career advice to contain more of them. And I think I’ve gone through those okay.
Rob Wiblin: When I read the blog post, I was like, I agree with basically all of this. I was like, it’s going to be a little bit hard to find that many disagreements to really bring up. From one point of view that’s good, because we’ve reached a consensus a little bit. From the other point of view, maybe, are we just blind to all of these ways that we could be completely wrong? It’s slightly suspicious that there’s so much convergence on the same ideas.
Holden Karnofsky: Well, I think it’s something weird about career choice, the topic. I mean, it’s just one of these topics where things never get all the way down to the most concrete level, where we’re putting probabilities on things. And so I’m making generalizations and I’m like, “Well, don’t do this too much, and don’t do that too much, and weigh this against that.” And you’re kind of saying, “Yeah, you should.” And a lot of times people feel like they agree when they’re talking about, “Don’t do too much of this, don’t do too much of that. Consider this, consider this, consider that.” So I think that’s okay, but I—
Rob Wiblin: Unless you have a specific case, perhaps you might—
Holden Karnofsky: But I really do think there’s an actual gain to be had from talking in this way, and from using this framework, and communicating this vibe. And I think that is a real thing. So I think it’s like, if there’s a disagreement, it’s not that there’s something you wrote that I think is wrong, or something I wrote that you think is wrong. There’s a framework that I put out there that I think it would be good if it was part of the 80,000 Hours vibe, and I don’t think you get that when you come to the website right now.
Rob Wiblin: At the risk of being slightly navel-gazey, what would be your best guesses about what 80,000 Hours is doing wrong, or at least suboptimally?
Holden Karnofsky: Sure. So I mentioned the career advice thing. I mean the career advice from 80,000 Hours, it feels a little off to me because it’s kind of… It feels like it’s just not emphasizing this lens. And it’s not about things you’re saying that I disagree with, it’s about, you’re guiding people to what to consider, what to pay attention to, what to think about. And I think it would feel more intuitive to me if it was more guiding people to think about the kind of stuff in the aptitudes post. But I also think in general, criticizing organizations is a little bit of a weird thing, because what organizations exist for is to take something and do it really well, and do it better than any other organization can.
Holden Karnofsky: And I think that I am not usually that excited to look at an organization and say, “Well, they did this, which was a mistake, or they did that, which was embarrassing, or they did these two things, which were inconsistent.” I’m just kind of like, I don’t know. A lot of the best, most successful companies made so many mistakes. They did so much embarrassing, comical, ridiculous, stupid stuff early on. They almost went bankrupt. There are all these great stories. And that doesn’t matter. I mean, it does matter some, but what matters is that they did one thing really well. And then a lot of the mistakes and challenges they were having worked out.
Holden Karnofsky: So I don’t want to overstate this. I think that there are mistakes organizations make that are really important, that are really damaging, that really need to be reckoned with and apologized for. But I don’t have any of those for 80,000 Hours, at least not in the recent past that I can think of. But I think once you get past that as an outsider talking about an organization, I often just don’t have anything to say. I’m often just like, look, if you guys did something kind of silly because you were focused on something else, then I’m not really sure it was wrong to do this silly thing. And I think we should expect organizations to just screw up a lot of things. And the real question is, how much good have they done, and have you guys done everything you can? And that’s just a hard question for me to answer from the outside. I’m just like, gosh, I don’t know if 80,000 Hours is as good as it could be.
Rob Wiblin: I hope it’s not. I hope we can be better.
Holden Karnofsky: Good point. So then it’s like, how accessible are the ways to be better? The one thing that I do wonder about a little is I feel like 80,000 Hours brands and self-conceptualizes as a career advice organization, but I often feel like when I ask what 80,000 Hours is awesome at, it’s more like communications, it’s more like taking important ideas and communicating them accessibly and getting eyeballs on them. And so that does make me wonder, if you were more self-conceptualized in that way, would you be more focused on that? Would you be even better at it? I don’t know.
Rob Wiblin: Yeah, that’s an interesting one. I think maybe four years ago or so we just consciously realized that we had our hands really full merely keeping up with all of these ideas that were coming out of longtermism and effective altruism, digesting them, deciding whether they were good ideas, and then trying to explain them to a much broader audience that isn’t deeply ensconced in it. That was just a full-time job; relative to the number of staff we had, there was so much to do there. And I think that in part led to this podcast, for example, where we were like, how can we just shovel out as much of this amazing, amazing thing that’s going on out there? And that is more of a communications role than a research role.
Rob Wiblin: There’s a research aspect, in that you have to decide a bit which of the ideas are worth amplifying, and which ones you’ll maybe leave on the shelf. I guess I hope at some point we’ll have succeeded in popularizing and explaining well all of the ideas that we have, and then perhaps we’ll need to come up with some more original stuff. But it does still seem like maybe the research community around these ideas is getting bigger. So there is always new stuff coming out that we still have to communicate. And then kind of contextualizing it, like what would this imply for careers, is maybe a step that other groups don’t take.
Holden Karnofsky: Yeah. I mean, I might go the other way. I might say, gosh, look at the aptitudes. You guys are really good at this. You guys are like, you kind of found what I call your wheelhouse, right? You can take these weird ideas, you can explain them, you can get eyeballs on them. You’re really good at it. You know what you’re doing. You’ve built up all this. You, Rob, I mean, you know how to play the social media game, you know how to do it. You know all this stuff about it that I don’t know. And so maybe I’m thinking, well, don’t have this hope that you’ll finish, and then you’ll get back to career advice or something, or that you’ll be more original. Just get better and better and better. Maybe you guys could be clearer. Maybe you guys could be more viral. That’s something that I think would be maybe interesting to think about. I don’t know. I mean, it’s always so silly for someone who doesn’t work at an organization to advise it, in my opinion. I mean, it could give you stuff to think about, but the ratio of how much time you’ve spent thinking about your work to how much time I have is not putting me in a good position here.
Rob Wiblin: It’s a very interesting idea, just double down on the stuff that’s going well. I do agree about the criticizing organizations thing. The number of times that I’ve read people online say, “I can’t believe that 80,000 Hours didn’t consider doing X”… I think 100% of the time we have considered doing X. But I must admit, even having that experience, that does not stop me from then saying about some other organization, “I can’t believe they didn’t think to do this. I can’t believe this company designed this product this way.”
Holden Karnofsky: I think it’s almost backwards. It’s like there are organizations that are moving very carefully and deliberately and checking every box, and there are organizations that are kind of like this giant clown car that’s racing down the track with all this junk flying off it, and it’s just a mess. And it’s all very silly, but it’s moving faster than the other one. And depending on what you’re doing, that’s often what matters more. And that’s why I think it’s tricky to judge organizations.
Rob Wiblin: If people want to learn more about your views on career advice, I definitely recommend checking out the post.
The state of longtermism [00:51:50]
Rob Wiblin: Alright. Let’s broaden our view on longtermism a bit away from career choice and think about how longtermism is going, as a community of people, and as an intellectual community that’s trying to have a positive impact on the world as a whole. What do you think is the state of longtermism, so to speak, as a group of folks who are trying to improve the long-term prospects of humanity?
Holden Karnofsky: I think about this a lot, because my job is to help get money out to support long-term goals, and especially because recently I’ve focused more exclusively on longtermism. And I think it’s an interesting state we’re in. The community has had some success growing, and there’s a lot of great people in it. I think there’s a lot of really cool ideas that have come from the community, but I think we have a real challenge, which is we don’t have a long list of things that we want to tangibly change about today’s world, or that we’re highly confident should change, or that we’re highly confident that we want people to do. And that can make things pretty tough.
Holden Karnofsky: And it reminds me a bit of an analogy to a company or an organization where, when you’re starting off, you don’t really know what you’re doing, and it wouldn’t be easy to explain to someone else what to do in order to reliably and significantly help the organization accomplish its mission. You don’t even really know what the mission is. And when you’re starting off like that, that’s the wrong time to be raising tons of money and hiring tons of people. That’s the kind of thing that I’ve struggled with a lot at GiveWell and Open Philanthropy: when is the right time to hire people to scale up? If you do it too early, it can be very painful, because you have people who want to help, but you don’t really have a great sense of how they can help. And you’re trying to guide them, but unless they just happen to think a lot like you, it’s really hard to get there.
Holden Karnofsky: And then eventually what happens is hopefully you experiment, you think, you try things, and you figure out what you’re about. And I guess there’s a bit of an analogy to the product/market fit idea. Although this is a bit different, because we’re talking about nonprofits and then figuring out what you’re trying to do.
Rob Wiblin: Yeah.
Holden Karnofsky: But that’s an analogy in my head, and it makes me think that longtermism is in a bit of a weird position right now. We’re in this early phase. We still lack a lot of clarity about what we are trying to do, and that makes it hard to operate at the scale we could to push out money the way that we hopefully later will.
Rob Wiblin: When Open Philanthropy got interested in longtermism, people, including me, were thinking: well, suddenly there’s a lot more funding chasing opportunities that haven’t really increased year to year, but this is a disequilibrium that we don’t expect to last that long. More people will get involved, more projects will start up, and we’ll be able to absorb more funding. And eventually, we’ll have a more reasonable balance between funding and staff, or at least people who are able to lead projects. But it seems like more people have raised more money. Some of the companies that people with longtermist inclinations are running have been very successful. And in fact, it seems like maybe things have become even more unbalanced over the last five years, in terms of the number of people who’d be interested in funding really flourishing longtermist projects relative to the number of them that existed. Do you share that perception?
Holden Karnofsky: Yeah, I definitely share that perception. Turns out longtermists seem to be really good at making money. I think we may be there for a while. I mean, it’s so hard to say though, because it’s not that hard to imagine that if some of this stuff about AI pans out, you get a future world where just a ridiculous, ridiculous percentage of the world economy is going into basically compute. That could be in the run-up to transformative AI, as people are trying to get there, or it could be afterward, as people are trying to use very powerful AI systems. So, in a sense, one way I’d put it is there’s a lot of money right now relative to people who can evaluate where to put the money, and relative to people who can use the money to do things that are reliably good, and believed by the funders to be good. And there is that imbalance. And then what happens once you automate a lot of that stuff?
Rob Wiblin: Interesting way of putting it. To what extent do you think it’s a bottleneck not only that there’s maybe not tons of really experienced people trying to launch projects of a longtermist flavor, but also just that there aren’t a lot of people who can vet those projects and fully determine how promising they are in the scheme of things?
Holden Karnofsky: I think it’s a huge imbalance right now. I mean, there’s a lot of money and there’s just… It’s a pretty small community in the scheme of things. So I think we’re low on everything. We’re low on people who can run great projects that can make a big robustly good difference for longtermism, and we’re low on people who can identify those people.
Rob Wiblin: What does this situation imply for someone who’s sympathetic to longtermism and is, say, 18, and just really early in their career and life? Does it suggest anything about how they ought to plan for the longer term?
Holden Karnofsky: I think it does. I mean, I think earning to give looks a lot worse than it used to, especially for a longtermist, but I don’t think it looks worthless. And I think one of the nice things about longtermism having more money… I mean, the world where Open Philanthropy was planning on being more than half of all the longtermist money there was ever going to be was an unhealthy dynamic in many ways. And I spent a lot of time and a lot of stress just trying to think about how we could behave in that world, and how we could take some of the decision-making power off of us without causing other problems. And it was really tough on me just to have trouble interacting with people in the community in any way, shape, or form, because of the funding power dynamic. Which obviously is still there, but I think there’s a lot of benefits to having more diversified funding sources, to having people who can search out their own opportunities. And like I said, things can always change again. But I think earning to give looks a lot worse than it used to.
Rob Wiblin: I suppose for people who want to earn to give, maybe there’s a stronger fit in taking the worldview diversification position and funding some of the more global health and wellbeing work, even if you weren’t super inclined toward that. Maybe there’s some trade where people who have additional funding can direct it towards that, and then some people who are on the fence in terms of where they want to go with their career should maybe move over into longtermism a little bit more.
Holden Karnofsky: That’s definitely possible. I’m not totally sure, because I do think anytime you take your money and you just invest it at a pretty good rate… It’s like… I think this is a thing that has been pointed out by others. Just that we have this intuition that if you’re not spending money, that you’re falling behind someone who is. But actually you’re not. If someone else is working and you’re doing nothing, then they’re getting ahead. If someone else is spending and you’re saving, maybe you’re getting ahead. Or at least it’s not clear.
Holden Karnofsky: I’m not sure that super longtermists should be donating to global health and wellbeing. Especially because I think the money in that is going to go up a lot too. But it’s certainly possible. And personally, I mean, I tend to donate that way, but that’s more because I’m so focused on longtermism in a professional capacity that I’m just trying to diversify myself a little bit. And a lot of that is about how I feel about myself as a person, because the stakes of my personal donations are really low in the scheme of things.
Rob Wiblin: I guess we got a sneak peek of this earlier, but this is obviously a key operating issue for Open Phil, that you’d like to find more amazing grants, but you feel like you’re finding it harder to scale up giving as much as you’d like. What’s the approach to fixing that?
Holden Karnofsky: I mean, right now we’re in the midst of conversations with the Open Philanthropy longtermist team about how we self-conceive. What’s our comparative advantage? Because historically, the obvious thing has been like, well, we help deploy capital. And I think that’s still a lot of it. I think actually a lot of how Open Philanthropy is starting to think of ourselves is like, sure, there’s a lot of obvious shovel-ready opportunities. There’s a lot of well-known longtermist organizations that could use money. And even for them, I think we have some value to add, because we can pay a lot of attention. We can help do the evaluation, send a signal to other funders, and give feedback as an engaged funder, which I think is a good thing for an organization to have. But I mean, that leaves us with plenty of person-hours left over.
Holden Karnofsky: And then I think a ton of our grantmaking has actually been active, in the sense that I don’t think we would have been able to do it by just being known as people with money and waiting for people to come to us and ask for money. So some of that is about going into a space like biosecurity and pandemic preparedness and getting to know people who are not longtermists, who would never know to come to us, but who are doing relevant work on biosafety or whatever. Some of it is about just determining that certain things are really important, or would be really great, and spreading the word and looking for someone working on them. And a lot of times we run into someone who did want to do that, but we wouldn’t have run into them if we hadn’t pretty specifically said we’re looking to fund X and Y. And some of it is like funding academics to do AI safety research.
Holden Karnofsky: So there’s a lot of active grantmaking we’ve done. It’s a good chunk of what we’ve done. And so, that is definitely a part of it. But, I also think every organization should be flexible based on what their comparative advantage is, right? 80,000 Hours launched and branded as a career research and advice organization, but a lot of what you guys have ended up doing is this promotion of important ideas and explaining of important ideas that I think is really great. Open Philanthropy, we launched and branded as a capital allocator, but a lot of what we end up doing, and a lot of what our staff are remarkable for, is the more generic thing of like, here’s a big, hairy question. It’s a well-posed question. It’s not us thinking of things no one’s ever thought of, but it’s a big, hairy question. We need an answer that is as rigorous as we can make it in the time we’ve got. May not be as rigorous as one would imagine or one would want it to be.
Holden Karnofsky: And just sorting through these hairy open-ended questions in a way that leaves us with answers. And so where to give is one of those questions, but we’ve also done a lot of this stuff about, when is transformative AI going to be developed? What is the right probability distribution there? What are the premises that support a concern about misalignment risk? And I think there may be a lot more of that in our future too.
Rob Wiblin: In terms of what this situation implies for someone who’s 18… To me, it kind of suggests that it would be really valuable to try to, over the next decade or possibly two, become the kind of person who someone would trust with substantial amounts of money to implement a project that’s related to AI or biosecurity or whatever other longtermist priority. So, it fits with your views on how it’s really important to build up an aptitude for, say, managing people and starting projects and being able to deal with funders.
Rob Wiblin: And even if you, say at 20, 23, or 25, aren’t able to receive a significant grant to get a project off the ground, perhaps that shouldn’t be so demoralizing, because that is just a pretty tall order, especially when you’re early in your career. So it’s about the long-term trajectory. And likely, if you can get to a place where you’re kicking ass at that sort of thing, then probably funding will be available to you.
Holden Karnofsky: I definitely think that if you just go kick ass at something, and then you have a sensible plan that you need funding to make happen, I think that’s going to be a pretty good bet to be making.
Money pits [01:02:10]
Rob Wiblin: Let’s talk a bit about whether it might be possible to find any sinkholes…
Holden Karnofsky: We call them ‘money pits.’
Rob Wiblin: Money pits that help to further longtermist goals. I suppose a hypothetical example might be, say, climate change might not be the top longtermist priority, but it’s plausible, and we could just absorb so much money into, say, green energy, even just deploying solar panels, for example, to reduce climate change. Now that probably wouldn’t be the most effective thing to spend the money on at all, by our lights. However, it does have a lot of potential to absorb money in a way that plausibly influences the future trajectory of humanity. Do you have any hope that we might be able to find such money pits that maybe look plausibly cost effective?
Holden Karnofsky: I think we certainly could find decent money pits if we wanted to. I think climate change, as you say, is definitely one of them. I think you could also spend huge amounts… If you wanted to just start buying biosurveillance or medical countermeasure platforms, R&D for biorisk… I mean, you could start spending enormous, enormous amounts of money on that if you wanted to. And then there’s the points I raised about AI and the question of… At some point, this distinction between capital and labor dissolves. I mean, that’s a lot of what I argue in the Most Important Century: how crazy things get when the distinction between labor and capital dissolves. And so then capital becomes no longer a bottleneck. So those are pretty good candidates. And of course, there’s the other AI one, which is just that in the run-up you may want to be doing very expensive, compute-heavy research. I don’t know, maybe that’s a case where the distinction dissolves, maybe it’s not.
Holden Karnofsky: So I think those money pits exist. I think the thing is like, there’s Open Phil, I mean, we’ve had a lot of conversations like, do we want that to be our thing? Do we want our thing to be finding the money pits? And probably at some point we do, but I also think that maybe for now, what we should be doing is answering fundamental questions, getting better best guesses on fundamental questions about things like the size of alignment risk, which we haven’t really gotten very far on yet. And just a whole bunch of strategic questions about what kinds of things would actually make the world better. Because if we can reach more clarity on that, then I think we’ll be in a much better position to identify the best money pits and the best individual projects to be funding, and also to be encouraging people to do… I mean, I think at a certain point, instead of just writing the checks, we do have some role in saying, “Here’s what we’d like to fund,” and I think that will shape potentially what people do, especially if it’s based on good research.
Rob Wiblin: So one option is to try to get more people who are able to and interested in running the boutique projects that you think are especially impactful. Another one would be to try to find something that maybe isn’t quite as good as that, but does have a lot more ability to absorb tons of funding. And then I guess a third option which you’re saying is just like, do research and hold on to the money and wait for it to multiply, because it’s invested in some way. How do you think about that third option of just holding off?
Holden Karnofsky: I mean, I think it’s fine for the time being. My model is like… So I don’t know, and we’re figuring this out at Open Philanthropy, but I think it would be very non-crazy to just be saying, we’re figuring out fundamental questions that are helping determine what projects are most worthwhile. We’re also funding things that we think can lead to exponential growth in grantees over time. So community-building stuff, field-building stuff, just like we’ll spend amounts of money that may not be very large, but they’re essentially getting a return. They’re creating a field, they’re causing there to be more giving opportunities later. And we’ll do that for now.
Holden Karnofsky: And we’re not in a rush to get the money out the door to find a money pit, because what’s the rush? Because if we find a money pit in 30 years, assuming that’s not too late for everything, then we can do it then, and we’ll have more money to do it with. So that is a perspective that seems okay to me. That is a distinct perspective from the patient longtermist / hold-onto-the-capital-essentially-forever thing. I don’t endorse that. I’m more just like, well, I don’t see a ton to do right now [in longtermism — editor’s clarification]. And you know, it’s okay if we decide to shovel it all out in 20 years instead of right now.
Rob Wiblin: So, yeah, not only do you want investment returns, but also hopefully you have accumulated wisdom and probably a larger research team and a lot more… You will have fleshed out this worldview and critiqued it ever more, which seems like it could produce pretty substantial returns.
Holden Karnofsky: Yeah. So I think 20 years from now it’d be a lot better than now. I think it’s very unlikely that 100 years from now or more is a good idea.
Rob Wiblin: Yeah, I guess a downside that we haven’t ever mentioned of the holding off thing is just, what if history passes you by, and the most important moments in human history do just happen before you managed to spend most of the money? That would be somewhat regrettable.
Holden Karnofsky: It would be very regrettable. It’d be a risk. You have to weigh it. It’s like, do I want to dump all this into biosecurity, or do I want to take some chance that I just end up holding this and have nothing to do with it, and some chance that I’m actually dumping into AI at a crucial time? And it’s like, you have to run the expected value calculation. The very rough feeling in my head is actually just the second one is better. And so if you end up stuck and you didn’t spend the money, well, you could feel bad. But I think the expected value was the right call.
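[Editor’s note: To make the kind of expected value comparison Holden gestures at here concrete, below is a minimal sketch with entirely hypothetical numbers. The probabilities, multipliers, and impact values are illustrative assumptions, not figures from Holden or Open Philanthropy.]

```python
# Hypothetical illustration of the spend-now vs. hold-and-wait tradeoff.
# All numbers are made up for the sake of the example.

# Option A: spend everything on biosecurity now, for a known, moderate benefit.
spend_now_value = 1.0  # impact, in arbitrary units

# Option B: hold the money. With some probability a crucial AI moment arrives
# and a larger (invested) pot can be deployed at higher leverage; otherwise
# the money is never usefully spent (value 0).
p_crucial_moment = 0.4       # chance a high-leverage opportunity materializes
growth_multiplier = 2.0      # how much the invested pot grows while waiting
value_if_deployed = 3.0      # impact per original unit if deployed at that moment

hold_expected_value = p_crucial_moment * growth_multiplier * value_if_deployed
# = 0.4 * 2.0 * 3.0 = 2.4 expected impact units, versus 1.0 for spending now

print(f"Spend now:       {spend_now_value:.1f}")
print(f"Hold (expected): {hold_expected_value:.1f}")
```

[With these made-up numbers, holding wins in expectation even though there is a 60% chance the money is never usefully deployed, which is the “you could feel bad, but the expected value was the right call” point.]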
Broad longtermism [01:06:56]
Rob Wiblin: I guess another way that you might be able to try to find ways to spend the money really productively would be to find, I suppose, indicators in the world as it is now that are correlated with humanity’s future prospects. We’ve talked about this in various podcasts over the years, and sometimes it goes under the banner of ‘broad longtermism.’ So the idea would be, it’s hard to influence the world through specific projects where you talk to specific people, because just history is so unpredictable, plans like that tend to go awry.
Rob Wiblin: Instead, what we should do is try to make the world richer or wiser or more capable of doing new science when necessary. Try to improve capabilities or broad aggregates, like equality, that we think are signposts or guideposts to a positive future. They may not necessarily be incredibly valuable themselves, but nonetheless, in some way in future, they will allow humanity to deal with the challenges of the future better. What do you think of that overall approach?
Holden Karnofsky: I think it can be good. I mean, I’m pretty sympathetic to… I can’t remember if Ajeya said this, and I think Alexander did, but it’s just super non-obvious that we’ve come up with anything in that category that’s better than the global health and wellbeing stuff. So we make the world wiser. What does that mean? Wiser for what? Who’s wiser? What are they wiser at? And which of those things are really scalable general things? And of those things, how sure are we that that’s better than just making people richer or happier, or have longer, better lifespans? So I don’t think we’ve gotten there. I mean, I think we imaginably could get there.
Holden Karnofsky: The framework that I’m most into right now on a personal level is, hey, it could be the most important century, let’s premise everything on that. And so, if we’re going to improve decision making, the reason we’re improving decision making is because we want the most important century to go well. So we’re interested in decisions made by people making technology policy about technology. It’s not just some totally generalized, we want everyone to make better decisions about everything. And I’m most into that framework. And I think some of the best stuff to do for the most important century could be in that category, but it’s best if it’s reasoned through that lens. What is this going to do for our handling of this transition?
Holden Karnofsky: Then the framework I’m next most into is the global health and wellbeing framework, which is like, “This is all too weird. We’re getting too fancy. This is too much. Let’s help people in common-sense, recognizable ways.” And I do prefer both of those to this, I don’t know, this in-between middle ground that says, “We’re willing to throw out a lot of common sense and a lot of recognizable good because we have this bold vision for the very long run, but we’re ignoring this extremely important thing that just dominates the calculation of the long run because it’s too out there,” or something. And I’m just like, that’s a weird… I get it for people who somehow trust philosophy a lot more than they trust empirics or something. People who are happy to be contrarian and bold and different and make a bet on a weird philosophical view versus a weird empirical view. But I’m very much the opposite of that. I think that’s probably one that I would rank third among the frameworks, although I think it’s a perfectly fine thing to do.
Rob Wiblin: In the previous episode with your colleague Alexander Berger, he threw some pretty serious shade on broad longtermism. Just to briefly recap, his view was something like: people come up with these aggregates, like how good is our government at making decisions? Then he’s like, “Is that a real thing? Is that a real metric? How would you measure that? How would you know if you’re improving it?” And also, people want to say these aggregates that we’re changing should be expected to have some particularly important influence on the long-run future. But the people who say this never really flesh out that story to make it credible and vivid, explaining why that would especially be the case. Or why it would be the case more than just making people richer and healthier in general, and then hoping that improves society in a broad way. So he’s like, “And at the same time, you pay this big cost of doing something that’s a lot harder to measure and a lot harder to tell whether you’re succeeding.”
Holden Karnofsky: Harder to measure. I think there’s just more risk of deluding yourself, being in a weird made-up headspace, building castles in the air — which we have with the most important century too; I just think the upside of getting it right is bigger. So yeah, I’m pretty sympathetic to that. Like I said, I don’t think I’ve heard us get there yet with broad aggregates that are better than just all the stuff that global health and wellbeing works on. We might get there. But I will also say that once you’re out of Louis Armstrong’s classic, accessible jazz and into the avant-garde* (out of the accessible stuff and into the weird stuff), I think it’s time to look at the most important century. I think that dominates the situation and is the right thing to think about. And so the middle ground to me is not that appealing, but I understand how in some headspaces it could be.
[Editor’s note: This is a reference to our previous episode with Holden on The Most Important Century]
Rob Wiblin: I’m not sure how problematic it is, or how pessimistic we should be, that we haven’t yet reached any consensus on aggregate metrics or guideposts that we could target that we think are strongly correlated with a positive future for humanity. Because as far as I know, I just don’t know many people who’ve dedicated actual years to thinking about this. And it seems like it’s among the most difficult questions. So before you despair about it, I think you would want to put quite a few smart people on it for quite a bit of time, and then see what they could come up with. It just seems like a research space that’s a bit empty.
Holden Karnofsky: Yeah. Like I said, I don’t think it’s hopeless. I think we haven’t seen it yet. And that’s not to say that there’s no way to come up with a meaningful aggregate, but that project still doesn’t seem as promising to me as just being like, “Why don’t we work backwards from this incredibly important thing that can transform everything that’s happening soon by galactic timeline standards?” I’ll also say, I do see a very high-level reason for pessimism on that, which is just: you look at different academic fields and what they’re producing, and a common factor is that when you have a lot of data and a lot of different independent trials, you’re able to learn a fair amount. And then when you have a field like macroeconomics, where there’s a certain number of business cycles we have good data on, and we’re just going back over them, and back over them, and back over them, and overfitting, and overfitting…
Holden Karnofsky: It’s very hard to get very far with that. So one of the things that I’m going to write about on my blog is this question: has the world gotten better over time? And when I imagined myself trying to come up with broad aggregates that are going to make the future go better, I started thinking about, well, historically, what’s made things go better? And you’re working with this tiny, tiny sample. You have very little data from anything before a few hundred years ago. And you have, I don’t know, three eras or something: the foraging era, the early agriculture era, and the post-Industrial Revolution era. Obviously you could carve it up more finely than that if you wanted, but in terms of what you’re able to say about how good life was, you’re not working with a lot there. And so I don’t know how you do it.
Rob Wiblin: I guess you’re not empirically forced to believe any particular story about what exactly was the underlying factor that was driving progress. So you have to rely on intuition, or you have to put a lot of weight on the intuitive plausibility of the story. Someone says, “Yeah, it was 18th century fashion that caused the Industrial Revolution.” And you’re like, “No, I don’t think it was that.” But if someone says, “Oh, it was the politics of England at the time, and the Netherlands,” then you’d be like, “Well, maybe, I don’t know.”
Holden Karnofsky: There are so many things it could have been. And then it’s like, what did that mean? Was the Industrial Revolution even good, if what we’re talking about is the next 10 billion years? You don’t have a lot of data to work with; I don’t know where you get it.
Rob Wiblin: I guess maybe there’s an intermediate approach in between the most important century (focusing on these specific AI stories) and the broad one, which is something like what you were alluding to: we think that something as narrow as the quality of policy analysis on the governance of science and technology is going to be really important. So what we want to do is build a really high-quality… attract brilliant people into the field of policy analysis on science and technology, and then be happy when there are lots of think tanks producing reports that really wow us. That’s broader than funding someone to do specific AI safety research, but it’s much less broad than just trying to increase GDP.
Holden Karnofsky: I’m into that stuff. We do a lot of it at Open Philanthropy. We have grantees that are really aiming exactly that way. And I mean, one interesting thing is like, if I were to try and go down this project of super broad longtermism and find things that could make the world better, what I might do is I actually might just pretend or decide that we’ve only got five years or 10 years to transformative AI, and just be like, what are we going to do? And then five years later decide we have five years again, and five years later, decide we have five years again, and then at least I could start to track patterns in what changed about my best guesses about what the world needs. Which is a dicey thing to look for patterns in, but at least you’re imagining these different ways the future could play out that are all immediate enough to give you some actual, tangible things to talk about and argue about, and then you can come back and see what you’ve learned.
Rob Wiblin: What about the proxy of not having utter buffoons as political leaders? Is that potentially important over the next five years?
Holden Karnofsky: Seems super important. It seems like, man, just right up there with ways to help the most important century go better. But that leaves the question of who the buffoons are. I think any listener is going to have their own views on that.
Rob Wiblin: Are there any other proxies that are in that in-between, between broad and narrow? Like, how impressive is the science and technology policy analysis community?
Holden Karnofsky: There’s a lot of stuff that I think is pretty good in that zone. A big one that I think about is just who is in government, especially in science and technology policy-related areas. Are those people who care about all of humanity, who are aware of what some of the biggest considerations are, and who can think about them in a way that’s, I don’t know, calm and reasoned, trying to get the best outcome for everyone? And not falling into memes like “the U.S. has to show strength or else people will think we’re weak,” and a bunch of other stuff that I think just doesn’t make a lot of sense, but can be common in certain communities?
Holden Karnofsky: So that’s something I’m just generally very interested in, and it’s been a focus for us. There’s also international relations: it’s probably better if there’s just more cooperation and coordination internationally. When I think about the most important century and how it’s going to go, that seems good. It would also be a good candidate for a really long-run thing, but for the most important century, it looks really good. Yeah. I mean, I could probably go on like this for a while. And the buffoons thing: if you’re someone who’s just like, “I need to do something, and I’m not an AI person, and I can’t do any of the specific things you guys are talking about, but I just need to do something to help with the most important century”… well, have your opinion on who the buffoons are, because as a voter and as a person in the world, that’s something anyone can participate in.
Rob Wiblin: From the most important century worldview, isn’t it a little bit crazy that, I think 40% or 50% of the world’s semiconductor manufacturing capacity happens to be concentrated on the island of Taiwan, which also happens to be the most probable flashpoint for a war between the largest two superpowers on this planet? I feel like in a science fiction novel, it would be a little bit on the nose if someone wrote that in.
Holden Karnofsky: Yeah. It’s wild. I’m totally with you.
Rob Wiblin: Okay. Cool.
Holden Karnofsky: I don’t know if I have anything to add. I mean, gosh, that’s a tough one. And that is definitely something to be thinking about.
Rob Wiblin: Do you think that the United States might be able to, over the next 10 or 20 years, build a substantial domestic semiconductor manufacturing capacity?
Holden Karnofsky: I don’t see why not, if that was a goal of policymakers. I mean, I don’t think it appears to be right now, and that’s an interesting thing. I don’t know if it’s something that I’m hoping will happen. I mean, this is the whole thing about strategic clarity and the most important century. It certainly seems to matter a lot. I don’t know which direction it matters.
Rob Wiblin: A regular listener wrote in and was curious to know where Open Phil currently stands on its policy of not funding an individual organization too much, or not being too large a share of their total funding, because I think in the past you kind of had a rule of thumb that you were nervous about being the source of more than 50% of the revenue of a nonprofit. And this kind of meant that there was a niche where people who were earning to give could kind of effectively provide the other 50% that Open Phil was not willing to provide. What’s the status of that whole situation?
Holden Karnofsky: Well, it’s always just been a nervousness thing. I mean, I’ve seen all kinds of weird stuff on the internet. Games of telephone are intense: the way people can get one idea of what your policy is from hearing something from someone. So I’ve seen some weird stuff about it, like “Open Phil refuses to ever be more than 50%, no matter what. And this is becoming this huge bottleneck, and for every dollar you put in, it’s another dollar…” It’s like, what? No, we’re just nervous about it. We are more than 50% for a lot of EA organizations. I think it is good to not just have one funder; that’s an unhealthy dynamic. And I do think there is some kind of multiplier for people donating to organizations. There absolutely is, and that’s good. And you should donate to EA organizations if you want that multiplier. I don’t think the multiplier’s one-to-one, but I think there’s something there. I don’t know what other questions you have on that, but it’s a consideration.
Rob Wiblin: I mean, I think it totally makes sense that you’re reluctant to start approaching the 100% mark, where an organization is completely dependent on you and they’ve formed no other relationships with potential backup supporters. They don’t have to think about the opinions of anyone other than a few people at Open Phil. That doesn’t seem super healthy.
Holden Karnofsky: Well, not only do they… I mean, it’s a lack of accountability but it’s also a lack of freedom. I think it’s an unhealthy relationship. They’re worried that if they ever piss us off, they could lose it and they haven’t built another fundraising base. They don’t know what would happen next, and that makes our relationship really not good. So it’s not preferred. It doesn’t mean we can never do it. We’re 95% sometimes.
Rob Wiblin: Yeah, it does seem like organizations should, in almost any circumstance, reject becoming so dependent on a single funder that the funder is not only a supporter but is effectively managing them, or that you’re going to be so nervous about their opinions that you just have to treat them as though they were a line manager. Because you know so much more about the situation than the funder probably does; otherwise they would be running the organization. But accepting that, you’re willing to fund more than 50% of an organization’s budget in principle?
Holden Karnofsky: Yeah.
Rob Wiblin: But you get more and more reluctant as they’re approaching 100%. That does mean that there is a space there for people to be providing the gap between what you’re willing to supply and 100%. So maybe that’s potentially good news for people who wanted to take the earning to give route and were focused on longtermist organizations.
Holden Karnofsky: Yeah, and I think the reason it’s good news is the thing I said before, which is that it is good for there not to just be one dominant funder. So when you’re donating to EA organizations, you’re helping them have a more diversified funding base, you’re helping them not be only accountable to one group, and we want that to happen. And we do these fair-share calculations sometimes. So we’ll kind of estimate how much longtermist money is out there that would be kind of eligible to support a certain organization, and then we’ll pay our share based on how much of that we are. And so often that’s more like two thirds, or has been more like two thirds than 50%. Going forward it might fall a bunch. So I mean, that’s the concept. And I would say it kind of collapses into the earlier reason I gave why earning to give can be beneficial.
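[Editor’s note: A rough sketch, in Python, of the fair-share arithmetic Holden describes. The budget and funding-pool figures below are invented for illustration; only the proportional-share logic comes from the conversation.]

```python
# Fair-share funding sketch: estimate the total pool of longtermist
# money eligible to support an organization, then fund in proportion
# to Open Phil's share of that pool. All figures are hypothetical.

org_budget = 3_000_000            # what the organization needs this year
open_phil_eligible = 200_000_000  # Open Phil money eligible for this space
other_eligible = 100_000_000      # other eligible longtermist funding

fair_share = open_phil_eligible / (open_phil_eligible + other_eligible)
open_phil_grant = fair_share * org_budget

print(f"Open Phil's fair share: {fair_share:.0%}")          # 67% here
print(f"Open Phil grant:        ${open_phil_grant:,.0f}")   # $2,000,000
print(f"Gap for other donors:   ${org_budget - open_phil_grant:,.0f}")
```

With these invented figures the share comes out to roughly two-thirds, matching the “more like two thirds than 50%” figure mentioned above, and the remaining gap is the niche for individual donors.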
Cause X [01:21:33]
Rob Wiblin: Yeah, another listener wrote in and wanted to ask you, “Are there any problem areas that you think the effective altruism community is neglecting? Or, alternatively, overrating?”
Holden Karnofsky: We’re not a huge community, so I would be upset if we were all in on one cause, but I don’t think we need to diversify a lot more than we are, personally. I think cause X is a little bit overrated. Not that I think it’s a horrible idea. Not that I think we’ll never find cause X.
Rob Wiblin: We should probably just be clear what ’cause X’ is.
Holden Karnofsky: Oh yeah, yeah. Let’s talk about what cause X… Do you want to explain it?
Rob Wiblin: ‘Cause X’ is this term for the idea that there could well be another problem in the world that we haven’t identified, or maybe nobody has identified as an important priority. So it could be like some kind of moral catastrophe that, because we haven’t yet reached a sufficient level of moral understanding or maturity, we haven’t yet realized is going on. And that working on that could be substantially better than all of the things that we’re currently aware of, which are necessarily less neglected than that.
Holden Karnofsky: Yeah, exactly. Or another way I might put it is: the EA community is very focused on a few causes right now. And maybe what we should be focused on is finding another cause we haven’t thought of yet that’s an even bigger deal than all the causes we’re thinking about now. And the argument goes, well, if no one had thought about this existential risk and AI stuff, then thinking of it would be by far the best thing you could do. And so maybe that’s true again now. And so I get that argument and I certainly think it could be right.
Holden Karnofsky: And I don’t think the right amount of investment in this is zero. I also think we should just look at the situation. You’re kind of pulling causes out of an urn or something, and you’re seeing how good they are and you’re thinking about how much more investment in finding new causes is worth it. And it’s like, if the first three causes you pull out are all giving you the opportunity to let’s say benefit 10% of the current global population, if you do things right, then you might think maybe there’s a way to do a lot better than this. And then you pull out a cause that’s like, well, this century we’re going to figure out what kind of civilization is going to tile the entire galaxy. And it’s like, okay, well I think that drops the value of urn investment down a bit.
Rob Wiblin: …What more could you want? Where else is there to go?
Holden Karnofsky: Exactly, where else are you going to go? And it’s also neglected. So you’ve got importance and neglectedness off the charts. You’ve got a tractability problem.
Holden Karnofsky: But that’s exactly why. I mean, the kind of person who would be good at finding cause X, who finds these crazy things no one thought of… well, there are plenty of crazy things no one has thought of that could be relevant to how that particular cause goes. There’s so much room to be creative and find unknown unknowns about what kinds of considerations could actually matter for how this potential transition to a galaxy-wide civilization plays out, what kinds of actions could affect it, and how they could affect it. There’s all kinds of creative, open-ended work to do. So I think it’s better to invest in finding unexpected insights about how to help with this cause that we’re all looking at, which looks pretty damn big. I’m more excited about that than just looking for another cause. It’s not that I have some proof that there’s no way another cause is better, but I think that investment is a better place to look.
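[Editor’s note: One way to make the urn intuition above concrete is a toy simulation: once the best cause you’ve drawn is already enormous, the expected gain from drawing again collapses. The distributions below are invented purely for illustration.]

```python
import random

random.seed(0)

def draw_cause():
    # Invented distribution: 95% ordinary causes, 5% enormous ones
    # (think "shape the civilization that settles the galaxy").
    if random.random() < 0.95:
        return random.uniform(0, 10)
    return random.uniform(500, 1000)

def expected_gain(best_so_far, trials=100_000):
    # Average improvement over the best cause found so far.
    total = sum(max(draw_cause() - best_so_far, 0) for _ in range(trials))
    return total / trials

for best in (8, 50, 700):
    print(f"best cause so far = {best:>3}: "
          f"expected gain from one more draw ~ {expected_gain(best):.1f}")
# The gain from further exploration barely changes while your best cause
# is ordinary, but drops sharply once you've already drawn a huge one.
```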
Open Philanthropy [01:24:23]
Rob Wiblin: Alright. Let’s move on and talk about Open Philanthropy for a bit. First off the bat, what sort of job opportunities are there in the longtermist umbrella of Open Phil, maybe at the moment and potentially over the next year?
Holden Karnofsky: So I think 80,000 Hours often does a good job of getting the word out about what kinds of jobs are open at a given point. The Open Phil longtermist team is not really prioritizing hiring right now. That doesn’t mean we’re not doing any of it, but I do feel like, as I’ve said before, we have a kind of lack of strategic clarity. I think we need to stay small and improvisational, and hire pretty carefully and pretty conservatively.
Holden Karnofsky: I do think there are AI labs that you can potentially go do safety work at: Anthropic, OpenAI, DeepMind (conflict of interest disclosure: my wife Daniela is president of Anthropic). There’s also a program that I think will be up by the time this podcast is up. It’s going to be a technology policy fellowship, for people who basically want to work in U.S. policy areas on high-priority emerging technologies, especially AI and biotechnology. So if that sounds like you, if you want to work on policy for these kinds of things in the U.S., I would definitely encourage applying to that.
Holden Karnofsky: And then I’ll also add that the global health and wellbeing team, on the other hand, is definitely hiring. And I think those are just phenomenal jobs doing phenomenally important work that are looking to deploy a ton of capital into stuff that helps improve quality of life for people and animals. I think Alexander talked about it too, but I really want to plug that because that’s the search we’re doing.
Rob Wiblin: So the Program Officer: South Asian Air Quality was one. So working on, I guess, particulate pollution in India. Then there was someone to work on policy around foreign aid, a lead on grantmaking in that area. And then I think there were a couple of generalist research roles and potentially there was a vision that those people would work on problem prioritization and potentially trying to find other problem areas that the global health and wellbeing team would want to move into.
Holden Karnofsky: And also do some grantmaking. One of the generalist roles is like a generalist grantmaker role, which I think is pretty cool. [Editor’s note: These roles are now closed.]
Rob Wiblin: Okay, we’ll stick up a link to those if they’re still open when this episode comes out. What’s a grant that you’ve been involved with since we last spoke in 2017 that you’re especially proud of, or maybe has really exceeded expectations?
Holden Karnofsky: I’ll name a couple of them. One of them is the Center for Security and Emerging Technology (CSET). That is a think tank started by Jason Matheny, who’s now in government, and they do high-quality analysis to inform policymakers on important technologies. That was a group that we were really excited to fund because we think Jason is great, and we’ve been really happy with how things have gone. They’ve done analysis we think is really good on topics like what U.S. semiconductor export control policy should be; we’ve found it to be of much higher quality than other analysis on similar topics. They’ve helped debunk what I think were some paranoid views in the national security community about China. And they’ve hired a number of people who I think are very dedicated to helping the long-term future go well, who understand what the stakes are and what some of the major risk factors are better than a lot of others in the national security community. That includes some people who have recently gone into government and are now helping with policy related to AI and semiconductors and so on.
Holden Karnofsky: And the final thing that’s really exciting is that Jason has gone into government, but we believe the new director, Dewey Murdick, is really good, and CSET is going to stay strong. There’s nothing more exciting for me than seeing something get off the ground and then watching it build its own momentum and become this kind of force on its own. So we think CSET is going to keep giving good data-driven advice on technology policy, and serving as a great place for people to learn about these issues and end up in a position to make policy on them better. It’s one of the top places I’d recommend working if you’re interested in helping the long-run future go better. So that’s CSET, and that was a major grant, at over $50 million.
Rob Wiblin: CSET seems like they’re really kicking ass, so congrats to those guys. And it’s great they’ve managed to have a handover at the top that’s gone well, because that can be really hard. What’s another one?
Holden Karnofsky: And the congrats is mostly to CSET. We’re really glad we funded it, but as always with Open Philanthropy, I think of us as getting a little slice of the impact equity; most of the work and most of the credit is theirs. Another one: during the COVID-19 pandemic, we were all kind of just amazed that there were no human challenge trials going on. What I mean by this is that they were testing these vaccines by just vaccinating people and then waiting for them to get COVID. And especially before we hit the winter, with the lockdowns, COVID rates were actually quite low in a lot of the places they were running these tests, so some of those trials can take forever. And we were all kind of like, why aren’t they taking volunteers?
Holden Karnofsky: And the story was that it’s unethical to take volunteers, because we didn’t have a cure. And we were thinking it’s unethical to let a pandemic go on a couple of extra months and give extra time for nastier variants to emerge. And there’s this group 1Day Sooner that was essentially a collection of people who wanted to volunteer in human challenge trials. And they were doing advocacy on behalf of people who wanted to be volunteers and wanted to take this kind of risk. I think that was a really cool thing: a different angle that had a chance to really change the debate about human challenge trials and speed the end of the pandemic. And it was funny, because they wanted money, and we had one person working on our COVID grantmaking who was just swamped, and then we had Alexander, who was about to go out on parental leave.
Holden Karnofsky: And I couldn’t handle the idea that we wouldn’t do this. And so I was the grant investigator, which was…
Rob Wiblin: You got back in the trenches.
Holden Karnofsky: I did. I almost never do a grant write-up, but I occasionally like to do one. I think it’s good for me to follow the process and notice if the process is screwy in some way (dogfooding). So it wasn’t a terrible thing, but it was an unusual thing for me to just go ahead and do a grant write-up and submit it for approval. I mean, I think they changed the conversation about human challenge trials, and I think they played a major role in the fact that human challenge trials were eventually approved. But the timing hasn’t worked out, partly because case rates went up a lot and partly because the vaccines were really effective.

Holden Karnofsky: Between those two things, the normal trials went really fast. But gosh, that was about $1 million that I just feel great about having spent, because there’s a bunch of nearby alternative universes where we might’ve shaved a month or more off the pandemic. And in addition to all the lives and costs saved, that could have cut down on some of the variants we’re looking at now. So gosh, I liked the grant.
Rob Wiblin: I imagine they have some research policy papers discussing the benefits you can get from human challenge trials. From one point of view, it doesn’t seem like it’s made a huge contribution to speeding the end of the COVID pandemic now, but creating a precedent and a system by which we would do human challenge trials in future, when it’s called for, seems incredibly valuable. Because, for example, we could be running a human challenge trial right now to figure out whether the Pfizer and Moderna vaccines work basically as well if you give people half as large a dose. And there’s pretty good observational evidence, or at least some strong clues, suggesting that even if you were to give people a substantially smaller dose of the vaccine, it works almost exactly as well.
Rob Wiblin: In which case we would have doubled the supply of mRNA vaccines. And you could do that through a traditional trial; at current case rates, which are really quite high in the U.K., you could maybe get an answer quite quickly. But you would be able to do it even faster using a human challenge trial.
Holden Karnofsky: That’s right.
Rob Wiblin: So anyway, there’s lots of more specific questions you can get about like, how should we do the rollout? What’s the optimal spacing? Which vaccines work better against what variants? And so in that way, human challenge trials are still very relevant. And they could definitely be relevant, like incredibly valuable the next time we have a completely new pandemic.
Holden Karnofsky: I agree with all that. In fact we might end up funding more to look to the future. It’s a great group, and I remain really excited about them. I think we may well end up feeling that that grant was a success, but even if it wasn’t, I’m glad we made it.
Rob Wiblin: Well, this was a broad COVID intervention that worked out in an unanticipated way, where we’ve been targeting a proxy for the future: our ability to deliberately infect volunteers with diseases during pandemics.
Holden Karnofsky: I do think sometimes the best way to do a broad intervention is to just do a specific intervention for the thing you think is the most likely, and then just keep doing that and kind of find the patterns.
Rob Wiblin: So, those kinds of projects that go really well, that you’re proud of and excited by, like CSET, like 1Day Sooner…are they often close calls, where you’re not sure ahead of time whether you want to fund them? Or do they often stand out ex ante before you’ve made the grant?
Holden Karnofsky: There’s definitely a correlation between how easy a call something was and how good it was. I mean, CSET was… We were very excited from the beginning and it was a high priority from the beginning. It was a big bet, but we were not on the fence about CSET. But 1Day Sooner might be kind of an example, in the sense that we definitely thought it was awesome and we definitely thought it was the right kind of thing to do, but we also have priorities and we need to focus on what we’re focused on, and we didn’t want to get derailed too much. And in that sense, the fact that I was doing the grant investigation is kind of an indicator, because it would have been, I think, reasonable for me to just say, “This is cool, but no,” and then I don’t know if we would have found the staff time or not.
Holden Karnofsky: So, I’d also say there are different phases of grants. We try to have a much looser trigger finger with smaller, earlier-stage grants, and there are plenty of grants we give out like that. Through our scholarship programs (we have a whole bunch of them: a generalist longtermist one, a biological risk one, and the technology policy one I mentioned), a lot of times we are giving away relatively small amounts of money, maybe tens of thousands or hundreds of thousands of dollars, to people that we think are cool but where we haven’t thought it through in an incredible amount of detail. It’s the bigger grants where we generally need to really feel great about it, and I think those are less likely to be borderline.
Rob Wiblin: I wonder whether Open Phil needs to have an exasperation-based philanthropy project, where it’s things like with 1Day Sooner or other cases where a member of the team is just looking out at the world and is just like so frustrated, so angry, so irritated that humanity or a country is getting something so incredibly wrong that they’re just like, I have to make a grant. Maybe that could be a useful outlet that then allows you to be more rational about the other grants.
Holden Karnofsky: Possibly. I mean, I think it was interesting with COVID because COVID hit, and I was very disoriented. On one hand I was like I see all these seeming opportunities, but on the other hand everyone is paying attention to this. And we just didn’t know, when we looked back, if we would feel that it was the best or the worst place to get involved. And I think as it turns out, it was actually… I think everything was moving so fast that there were opportunities to help in ways that were not going to get done otherwise. So I think it was… Probably if I were going back, we would have put a little bit more time, a little bit more money into it. Even though maybe that should have been on the global health and wellbeing side, because I don’t think that the longtermist implications were super clear.
COVID and the biorisk portfolio [01:35:09]
Rob Wiblin: Has COVID given you any other important updates on how the biorisk portfolio is panning out?
Holden Karnofsky: I would say COVID has not given us huge updates on the biorisk portfolio, because the biorisk portfolio is focused on even worse cases than COVID. So even when COVID hit, a lot of the response grantmaking was not led by our biosecurity team; it was led by the science team or by someone else. So in some ways, no. I think we’re in the midst of trying to understand… We did support a bunch of generalized biosecurity infrastructure for years leading up to the pandemic. And in theory, that’s the kind of broad intervention that you would hope really helped.
Holden Karnofsky: And I think we’re still trying to nail that down a little bit and understand: did that help? It’s hard; it sometimes takes a long time to figure out whether someone actually made an impact and made a difference. So there’s some work we do that’s very targeted at the worst case, and there’s some work we do where we’re just this supporting leg of the biosecurity infrastructure, for groups like the Center for Health Security. And I think we’re kind of TBD on that.
Rob Wiblin: Has COVID maybe changed any of the opportunities that are available for things to fund? Because I’m imagining that there’s more people interested in entering this space. So maybe they’re looking for funding for new and exciting projects.
Holden Karnofsky: There’s a lot more interest in biorisk than there used to be, and people are thinking bigger about how bad it could be. So those are things that could be positive. There has definitely been some effort among our grantees and at Open Phil to think about this: at some point there are going to be a bunch of attempts in the U.S. government to do things to prevent the next COVID pandemic, to be better prepared. Can we try to make sure that people have good advice on how to prepare not only for the next coronavirus, or the next thing that’s exactly like COVID (which I think is maybe the default you would expect), but also for things that could be much, much worse, and to do things that are very general, that could prevent a lot of bad things?
Holden Karnofsky: So like, more widespread biosurveillance, such as metagenomic sequencing, where you make it kind of a regular practice to just look for unfamiliar genetic sequences in people. Or biohardening, where you just generally try to have more areas or more equipment that could make us immune to whatever the heck pathogen is floating around. And these things I think could make us more robust to things that are worse than COVID, instead of just everything being about how do we prevent the next COVID. So that’s something we’ve been thinking about.
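[Editor’s note: A toy illustration, in Python, of the “look for unfamiliar genetic sequences” idea behind metagenomic biosurveillance. Real pipelines match sequencing reads against enormous reference databases; this sketch just checks k-mer overlap with a tiny made-up reference, and every sequence in it is invented.]

```python
def kmers(seq, k=4):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical "known" genetic material (stand-in for the human genome
# plus catalogued microbes).
REFERENCE = kmers("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")

def looks_unfamiliar(read, threshold=0.5):
    """Flag a read if most of its k-mers are absent from the reference."""
    ks = kmers(read)
    novel = sum(1 for kmer in ks if kmer not in REFERENCE)
    return novel / len(ks) > threshold

samples = [
    "ATGGCCATTGTAATGGGCCG",  # substring of the reference: familiar
    "TTTCACGACGTTGTAAAACG",  # mostly novel k-mers: flag for follow-up
]
for read in samples:
    print(read, "-> FLAG" if looks_unfamiliar(read) else "-> ok")
```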
Rob Wiblin: Let’s get through a couple of questions from the audience. Someone wrote in and asked: if Holden could found an EA organization again today, one that would take as central a position in the effective altruist ecosystem as GiveWell or Open Phil, what would it work on and what might it do?
Holden Karnofsky: So, I think I’m generally interested in tackling questions that are important, but kind of fuzzy and hard to get started on and hard to get a rigorous-feeling answer to. And Open Phil has already been putting more time into that than maybe an average grantmaker would, with things like AI timelines. If I just cloned myself now and picked out one distinctive thing that the new organization would focus on that Open Phil doesn’t really do much of, it would be what Luke and I call the ‘AI deployment problem.’ So, there’s the AI alignment problem, which is: how do you design an AI that, even though it’s smarter than you, even though it’s more capable than you, even though it could do a lot of things and think of a lot of things that you can’t, is still helping you instead of going out and doing whatever its own crazy thing is that you hadn’t meant it to do at all?
Holden Karnofsky: That’s the AI alignment problem, and that’s a technical problem. And the AI deployment problem is more of a sociopolitical problem, which is like, you’re a lab or a government, and you’re on the cusp of an AI that could be really powerful, could lead to really good or really bad things happening, and there’s a lot you don’t know. You’re sitting in a state of uncertainty, you’re not sure how safe it is. You’re not sure how much safer you can make it. You’re not sure about a lot of things. And it’s like, what do you do? Like you have this system. Do you put it out as fast as you can, because there’s probably someone else behind you who’s going to be less careful than you and put it out in a worse way?
Holden Karnofsky: Do you hold onto it, because you’re not sure if you can make it more safe and more aligned? If you are putting it out in the world, how do you do that? Who do you talk to? If you want to partner with government, how do you approach the government, and what do you want them to do with it? How do you want this all to play out? There are a lot of different versions of this, and I think the only real way to think about this problem is to go through a large number of scenarios, each of which is way too specific to ever actually be realistic, and think about how you would want an AI lab or a government to behave: what you would want them to do with their maybe-aligned, maybe-not-aligned AI in that world.
Holden Karnofsky: Or maybe you could go through scenarios where it’s definitely aligned, or scenarios where it might be aligned but probably isn’t. That’s something that Luke Muehlhauser and I have been talking about for years, and we’ve tried to get people to think about it and work on it, and we really have not been able to. I think it’s just one of these hairy, scary problems. You feel kind of crazy when you’re thinking about it, because you’re writing down all these future scenarios, and they’re all too specific to be realistic. So, it’s a psychologically hard problem to think about and work on. But if I had a clone of myself, the clone would do this all day, and would eventually get to the point where this problem that started off being just too much and too weird had become a set of closed-ended questions and a set of methodologies. Then that clone could start hiring people and training them to help with it in a way that was more tractable.
Holden Karnofsky: And so, that’s something I would do, and if someone else was going to do that, I’d be really happy about it.
Rob Wiblin: That sounds exactly right. Is it possible to get kind of a broad sense of how your time is allocated these days, between looking into specific problems or organizations and managing people and having a broad vision and things like that?
Holden Karnofsky: I’m at a point of transition right now, where the way I use my time is changing a ton. I had been working a lot on the reorganization, getting to the state where we have two co-CEOs and two divisions of Open Philanthropy, so that I can focus on longtermism and looking forward. First off, by the time this goes up, I’ll be on parental leave. So I’m going to have a good chance to reflect and really think about what I want to be doing with my time, because a lot of responsibilities I used to have, I won’t have anymore. And then looking forward, I’m not sure, but one thing I’ve considered is taking a year or so where I spend more than half my time just thinking about either the AI alignment problem or the AI deployment problem (probably more likely the alignment problem, actually, because it’s so fundamental when you think about how to make the most important century go well and what kind of community we’re trying to build).
Holden Karnofsky: Questions like: how serious is the AI alignment problem, and how hard is it going to be to solve? What kind of person is going to be best at solving it? What kind of research is going to be most promising, and how are we going to find people to do that research? Those are so important that I almost feel like spending a year trying to solve it myself (not that I would solve it, but the experience of trying) could put me in a much better position to lead a team that is trying to prioritize these very hard-to-prioritize interventions and decide what the best kind of active grantmaking to push is. So, that’s something I’ve been thinking about. It’s only a possibility.
Rob Wiblin: Another listener wrote in to say that even though Open Phil isn’t trying to do this, the opinions of Open Phil staff surely have a pretty big influence over what projects longtermist-oriented organizations decide to take on, because they might be interested in applying for Open Phil funding in future, or at least keeping the funding that they have. There are a couple of things one could ask about this; I suppose the first one is: to what extent is that a good or bad thing? Presumably it has some pros and some cons.
Holden Karnofsky: Yeah, pros and cons. I think it’s a fundamentally healthy part of the ecosystem that organizations have to raise money, and their funders do care about some stuff, but it’s like, you have to have good taste, and it’s very dangerous. You can easily unwittingly micromanage grantees who know more about their work than you do. And all of our grantees know more about their work than we do. So, we need to be very selective and very careful in what we insist on, what we express an opinion on, and what we’re thinking to ourselves, but express no opinion on. Because I think sometimes there is no way to express an opinion to a grantee without influencing their behavior; sometimes you just need to shut up. So, I think that’s just a tough balance, and I think we try to build an organization that is full of people who are really bright and who have thought through a lot of the relevant issues and have seen a lot and can approach those trade-offs with wisdom.
Holden Karnofsky: And I could give some principles about what kinds of things we would generally insist on versus nudge versus just keep to ourselves. But the main answer is that it’s hard. I don’t wish we just lived in a world where these conversations didn’t happen at all, but it is hard, and it’s easy to screw up.
Rob Wiblin: So, it sounds like you potentially don’t share all of your opinions when you think an opinion is too tentative: you think, if we say something, then they might just reorient the whole damn thing around just a guess that I have. Do you take any other steps to avoid potentially over-influencing grantees when you have much less local knowledge about their work than they do?
Holden Karnofsky: All grantee feedback is seen as a high-stakes thing. So, there’s always a discussion about, like, is it better to share this feedback or not? And what do we know about this grantee? Do we think they’re going to take this and run with it and put too much weight on it, or do we think they’re going to… And then we talk about how to phrase it in a way that makes it clear that it’s just a suggestion, and makes it clear they can ignore it, or shows a lot of uncertainty on our part. So, we do try to do that. I mean, it’s just a matter of being clear about the strength of our opinions. And at the same time we will insist on stuff sometimes.
Holden Karnofsky: I mean, I think a general thing that we are very uncompromising about is that when we fund someone, we know we’re funding people; we know we’re making a bet on people. And so, we need to understand when we fund someone: who’s in charge? Who’s in charge of how this money gets spent, who are they, and how do we feel about them? And a thing we will insist on is that whoever we’re supporting is the one in charge of how the money gets spent. That often leads to some hardcore negotiations, because a lot of times we’ll support, let’s say, someone at a university. And the university will want to say, “Well, this is our money. We’re ultimately in charge of this.” We’ll say, “No, it’s not.” And we’ll have things like key person provisions that say, “Well, we’re supporting this organization, but if such-and-such a person leaves, we are out of our obligations.”
Holden Karnofsky: “We don’t owe any more money, and we’re renegotiating. And we’ll renegotiate on the basis of who the new person is.” So, I think once we’ve picked a person or a team that we’re betting on, we’re much more deferential, and we’re trying not to micromanage, because the whole bet is: who are we betting on? But we’ve got to make sure we’re betting on who we think we’re betting on. And so, questions like governance and authority can be areas where we draw very hard lines. And that’s something we have learned from experience.
Rob Wiblin: That makes a lot of sense when you have a big organization where there’s a particular team or person or project that you’re very excited by, but a lot of the rest of it you don’t have any particular enthusiasm about, and you could imagine the money being siphoned away towards something else. I could imagine that those conversations could get a little bit awkward or a little bit testy at times. Is that about right?
Holden Karnofsky: We have tough hardball negotiations sometimes. And that’s a tough thing about the kind of grantmaking we’re doing: I can’t give you hard-and-fast rules. We see a lot of things; we have to form judgments. Our involvement ranges from, “Here’s your money, we’re deliberately not telling you any opinions, we don’t want to interfere, go have fun,” to, “We need a contract and we need the lawyers all over this stuff.”
Rob Wiblin: That’s interesting. I guess for those negotiations you have to kind of be willing to walk away and say, no, if this large organization, a university, say, isn’t willing to give this principal investigator on this project enough freedom to actually use the resources that we’re trying to direct to them, then we are going to walk away and maybe we’ll fund them at a different organization, or through some different vehicle, because we have the ability to do that.
Holden Karnofsky: That can be on the table, yeah. One of the things that I think is hard to do, but worth it, is to have the people who are doing the grant investigations, the program officers, be well connected in the relevant communities and have good relationships with grantees. And the better relationship you have, the more you can have frank conversations and share your ideas without it turning into this toxic dynamic where everything you say is interpreted as an order. So, a lot of the questions about how much to share our opinions with someone, how much to nudge, how much to criticize… A lot of that is: what is our relationship with this person or with this team, and do we feel that they can have a healthy conversation with us? And if we don’t feel that, then we need to understand that whatever we tell them, they’re going to take as kind of a demand. And sometimes that’s okay, but we need to think about it.
Holden Karnofsky: Another area where we can be kind of pushy is cause focus. Sometimes there’ll be a person who’s interested in all aspects of an issue, and we’re interested in one aspect of it. A very important example would be animals, where you might work with someone who’s interested in both endangered species and farm animal welfare. And for us, by the numbers, the farm animal welfare is what matters. So that’s another place where we can be pretty pushy about making sure the focus stays on the issue we care about.
Holden Karnofsky: Historically, I think we’ve been less successful with that. When I look back and think about what we want to be doing, what kind of advice we want to be giving, and what kind of things we want to be negotiating for, I think it’s really, really important to make sure that the right person is there, that they’re really in charge, and that they’re empowered, and we’ll go as hardcore as we need to on that. Beyond that, everything should just be a friendly discussion with the person. The better relationship we have, the more we can share our opinions, but it’s got to be up to them. Further efforts to manage their activities… I’m not excited about them.
Rob Wiblin: I suppose to take the position of these broader organizations that have a broader, like more plural set of goals, to some extent, you’re coming to them and saying, we want to create this child within your organization. And we, like, in a sense, want to have more control over it than you do, or we want to put someone in there who’s going to be able to manage it and be resistant to—
Holden Karnofsky: Well, sometimes the child’s already there, and we just want to fund it. I mean, sometimes there’s a farm animal welfare team within a general animal organization. We just say, we want to fund these people. But then we have to make sure that those are the people we’re funding.
Rob Wiblin: Does the attitude ever come up that, well, we’ve created this broader ecosystem or this broader organization that now is fostering this thing you’re very excited by, so it seems reasonable that some amount of resources should be kicked back to the broader project that then created this thing that you’re excited by? So maybe there’s an 80-20 split or a 90-10 split.
Holden Karnofsky: Sure. And that’s part of the negotiation. I mean, we want to be fair and we want to pay our share. I think we also are looking at the general financial state of the organization, and sometimes it just doesn’t seem necessary or good to be paying a lot of overhead, but we do absolutely want to pay our share to the larger umbrella. But we also want to make sure that our main bet is the bet that we want it to be, which is on a particular team.
Rob Wiblin: Do I remember reading or hearing that the Gates Foundation had recently come to the conclusion that too much of their grant money was being skimmed off by universities for just general revenue? I think they may have said we’re not willing to pay more than 10%. 10% is the maximum. If you want more than that, sorry, we’re just going to go fund these people somewhere else?
Holden Karnofsky: Well, we do have that exact policy when it comes to universities, and having one clear policy that’s public and consistent, and that you never compromise on, is important for having that policy actually be enforced. A university is a good example. This is a common debate in the philanthropy world: someone will say, “Well, you really should just give general operating support, not project support, because with project support it’s like you’re micromanaging, and it’s going to screw over these organizations. You really want to be supporting leadership, not trying to run it yourself.” And I’ll say, I agree with all the principles, I agree with all the basic ideas there, but actually we want to support particular leadership, and we need to make sure we’re supporting the leadership we think we’re supporting.
Holden Karnofsky: And a lot of times that is exactly project support, but what’s important is that the project support says, “This is for these people to use. This money is for these people to do whatever they want with.” It’s not saying, “This is for these people to do these activities.” That’s a distinction that I think can get a little confused in the philanthropy world. And then this other topic comes up where people say, “Well, you’re going to bankrupt the poor umbrella organization. It’s all projects, and they’re not going to be able to keep the lights on.” And I’m like, “What if the umbrella organization is Harvard University?” Come on. With most of the university grants we make… I would like to just minimize the overhead given to these incredibly well-endowed universities.
Rob Wiblin: Their $100 billion endowment has fallen on hard times. You can help a fund manager for just $1 billion a day.
Holden Karnofsky: Exactly.
Has the world gotten better? [01:51:16]
Rob Wiblin: Alright. Let’s push on from talking about Open Phil to a section I’ve titled ‘Grab bag question section.’ This is kind of a whole lot of different fun stuff that I wanted to bring up, but I couldn’t really build into any cohesive series of questions.
Rob Wiblin: What other interesting things are you writing about at the moment, or might you publish blog posts about on Cold Takes?
Holden Karnofsky: I mean, it’s called Cold Takes because I tend to just write stuff way, way, way, way, way in advance. I don’t want deadlines on blogging, because I have too much professional responsibility and I don’t want blogging to be competing with that. So there’s a lot of stuff that I’ve basically drafted, and I just need to clean it up and get it out there. One major topic I’m going to write about is this question of: has life gotten better over the course of history? And maybe listeners are thinking that’s already been done: there’s a book on it, there’s all this stuff, there’s Our World in Data.
Holden Karnofsky: The problem with the existing content on whether life has gotten better, I would summarize as the x-axis. People will put up a chart that says, “Oh, people are getting more anxious,” and the chart goes back to like 2006. People will put up a chart that says, “Hey, lifespans are increasing,” and the chart goes back to like 1800. It just—
Rob Wiblin: It’s all over the place.
Holden Karnofsky: I think it is very hard to get one unified picture of whether life has gotten better, over what time periods, and how that fits into the broad scope of history. The way I would put it is, the book Enlightenment Now is mostly about the period after the Industrial Revolution, after the Enlightenment. So, that’s the last 200 or 300 years, and that’s basically where all the data is. If you go to Our World in Data, almost every chart goes back, at most, 200 or 300 years, often less than 100 years. That’s one phase where you can talk about whether life’s gotten better. Then there’s the phase before that, when there’s almost no data. Then you can go back from there to the beginning of human civilization, and try to make some inferences about whether life has gotten better.
Holden Karnofsky: And then there’s the phase before that, which is millions of years: what people would call the ‘foraging era,’ the pre-agriculture era. I have seen and heard and Googled hypotheses that actually that was the best era: people were moving around in small bands, they were very egalitarian, they treated each other really well, there was no hierarchy, and actually they were healthier than you would think. And this is stuff that I’ve heard that I think is somewhat true, but probably overstated. My overall take is that the modern world is probably better than the foraging world, but it’s a thing you could debate. And the foraging world may well have been better than the world that came between that world and this one.
Holden Karnofsky: So, those are the different eras. There were millions of years of foraging (or maybe it was foraging; we don’t really know). Then there were thousands of years post-agriculture, and then a couple hundred years post-Industrial Revolution. I think it’s time to disentangle this, put it in one place, and say: what do we know about whether the world got better in each of those different eras? I’m still working out exactly how to get all the pieces together, but that’s something I’ve been working on.
Rob Wiblin: Do you have any tentative conclusions, or is it too early to say?
Holden Karnofsky: I definitely do have tentative conclusions. If I were to draw the chart, it would be kind of a flat, wavy, uncertain line for the millions of years of pre-agriculture, then it would go down a bit post-agriculture and stay flat for a long time, and then it would rocket up after the Industrial Revolution. And I would agree with the Enlightenment Now hypothesis there: the modern world is the best it’s been so far. But it’s a more complicated story than “gosh, things just get better all the time and technology makes things better all the time,” because it’s a phase. It’s a temporary phase; we just took off on this rocket ship. So I think it’s a little less conducive to the “let’s not plan things out too much, let’s just make more technology” view, and a little more conducive to the “crap, human history is a mess, it’s chaos, and we should think about what’s coming next” view.
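[Editor’s note: For readers who want to see the shape Holden describes, here is a purely schematic sketch in Python; the dates and quality-of-life values are invented stand-ins, not data.]

```python
import matplotlib.pyplot as plt

# Schematic version of the curve described above: flat and uncertain
# through the foraging era, a dip after agriculture, then a sharp rise
# after the Industrial Revolution. All values are invented.
years =   [-300_000, -10_000, -9_000, 1750, 1850, 2025]
quality = [     5.0,     5.0,    4.0,  4.2,  6.0,  9.5]

fig, ax = plt.subplots()
ax.plot(years, quality)
ax.set_xscale("symlog")  # compress the long pre-modern stretch
ax.set_xlabel("Year (schematic)")
ax.set_ylabel("Quality of life (schematic)")
ax.set_title("Stylized 'has life gotten better?' curve")
plt.show()
```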
Historical events that deserve more attention [01:55:11]
Rob Wiblin: Are there any particularly interesting things you’ve learned about history that people don’t appreciate, that are worth sharing?
Holden Karnofsky: Sure. Another thing that’s very related to the series is that years ago I did this somewhat crazy project where I just wrote down a summary of human history. Which just sounds like this ridiculous thing to do. You could call it arrogant, or whatever. But it’s not that I actually think that I know what happened, or that I’m a history expert. I’m not.
Holden Karnofsky: It was that I was just trying to learn, and I was trying to educate myself and say, for me, the best way of learning is to write down what I think. And then fill it in and look for high-benefit, low-cost ways to correct it and learn more.
Holden Karnofsky: And so I just took all these different categories. I took all the different sciences and I took things like gender relations, gender equality, rights for LGBTQ people, and just all the things that I thought would be good to understand if they had gotten better or worse or what had happened to them.
Holden Karnofsky: And I made this big matrix where I listed all these things that could be changing, and I listed all these periods. And I was really just spending hours and hours and hours and hours just Googling instead of reading history books. I read history books too. And then I ended up having this 15-page, like, “Well, here’s what’s happened. Here’s the summary.” And then I have this big spreadsheet.
Holden Karnofsky: So, in the process of doing that, I picked up a lot of intuitions. It was a personal-time project, but it started to help me take a lot of this AI stuff more seriously, because it was giving me the sense of just how wild our history is, and how much more has happened in the last couple hundred years than everything before it, in many senses, on many axes. And just how anything can change.
Holden Karnofsky: But at the same time, I don’t think history is a list of random events. I think there are these eras, and that some are much more significant than others. We were talking about this before the podcast, but I did notice some things that I thought were really cool historical people or events that don’t get a ton of attention. I could give a few of them if you want.
Rob Wiblin: Yeah. Hit us.
Holden Karnofsky: Deng Xiaoping took over in China in the 1970s after Mao died. I feel like different leaders would have done different things, and he chose to go down the road of economic reforms that kicked off decades of unprecedented growth and poverty reduction.
Holden Karnofsky: I mean, that could be the most poverty reduction any individual has ever been responsible for. Especially if you look at what would have happened if he had somehow just not been around. He might be the person who’s had the most positive impact ever, to date. So I thought that was interesting.
Rob Wiblin: He was sent off to prison camps I think more than once when he fell out of favor with Mao, so it’s not that hard to imagine that he might not have been in the political scene.
Holden Karnofsky: Yeah, exactly.
Rob Wiblin: Or indeed may not have been on Earth, by that stage.
Holden Karnofsky: There’s probably a nearby world where just, things didn’t work out for Deng, they worked out for someone else, and everyone was so… Not everyone, but a lot of people were so much worse off. We don’t hear his name a lot, but gosh, I mean, what a person who made a big difference.
Rob Wiblin: I read some of a biography of Deng last year, and one thing I think might be worth noting — because I’ve heard this thing of like, Deng Xiaoping changed a lot of stuff — is that I think he was chosen in part because a much broader coalition, a much broader group of people who were influential within the Communist Party and within Chinese society as a whole, was fed up with Mao.
Holden Karnofsky: Right.
Rob Wiblin: Or they were fed up with what had been happening the last few decades. And so, they promoted him and moved to make him China’s paramount leader. So I suppose as with all of these things, it’s like, Deng was enabled by the fact that there was a 55/45 split within the Communist Party on whether they should modernize. Go lighter on the communism for a bit.
Holden Karnofsky: Maybe eight out of 10 people they would have picked would have done exactly what he did. And maybe two out of 10 would just have been like, “Well, I’m in charge now. Screw them.”
Rob Wiblin: Yeah.
Holden Karnofsky: Paul Ehrlich, not the author of The Population Bomb, but a person who was around at the turn of the 20th century, was a chemist who, as far as I can tell, really just invented the whole framework that we use for drug development today.
Holden Karnofsky: There was this known thing that when you put a clothing dye into a sample, under a microscope, it would stain some things and not others. And that made it easier to see. And he thought of the application to drugs, where he said, “Well, wait, you can have a chemical that binds to certain things and not others. Maybe you could have a toxin and you attach it to something that binds only to the things you’re trying to kill, the pathogen or whatever.”
Holden Karnofsky: And that basic concept is… I mean, he created a cure for syphilis, but that basic concept, that is basically what a drug is now. I mean, that’s how we think about it. I think that’s just pretty cool. He’s a pretty well-known guy, but not an incredibly well-known guy.
Rob Wiblin: So, you’re saying that before this time, well, they must’ve known that poisons existed, but maybe they didn’t have this idea that, “Oh, you can just keep playing with chemicals, keep trying lots of different chemicals until you find one that binds to and happens to be toxic to the very specific tiny bacteria that you want to get rid of.”
Holden Karnofsky: Well, the idea of delivering the toxin to specific things in the body, instead of just using a toxin to kill the person, I think was probably mostly his idea. And now of course, a lot of these drugs, I mean, they’re not toxins, they’re just blocking things, but it’s still—
Rob Wiblin: A really important concept.
Holden Karnofsky: I mean the whole idea of, you’re targeting a particular molecule, you’re trying to bind to that thing, by putting it into the body. That was not what people were doing with poisons. People were like, “This thing will screw you up. It’ll kill you. I don’t care how.”
Rob Wiblin: Yeah. Okay. What’s another one?
Holden Karnofsky: I would be surprised if you’ve never Facebooked about this, but I don’t even know if I’m pronouncing it right. Porphyry was an ancient Greek. There’s a lot of famous ancient Greeks, we celebrate them a lot, but this person was an advocate of vegetarianism on spiritual and ethical grounds. They wrote a treatise called On Abstinence From Animal Food. Still gets cited in vegetarian literature. Personally, I mean, maybe I think that person is more impressive than Aristotle. I’m not an Aristotle expert, but that was someone I was glad to learn about, who I had never heard of.
Rob Wiblin: I’d obviously heard that Pythagoras was famously an early vegetarian, and I think he probably advocated vegetarianism as well, but I’d never heard of Porphyry. But you mentioning this inspired me to go and have a skim of his essay. It was a little bit hard to follow, because he talks so much about justice and all these moral philosophy concepts that I think must’ve been really popular among the ancient Greeks, or a particular conception of justice and spirituality that I don’t really share.
Rob Wiblin: But there were some parts that were definitely recognizable as similar to modern arguments. Where he was like, “Some people respond to me when I advocate vegetarianism and they’re like, ‘But aren’t plants conscious as well? And aren’t they living? Shouldn’t we not eat them?'” And he’s like, “No, obviously animals are higher on the hierarchy of consciousness.”
Holden Karnofsky: Wow.
Rob Wiblin: “And so, we should eat the things that are the least damaging spiritually to consume.” And then he’s like, “And then some people are like, ‘But it is in the nature of humans and other animals to eat one another. And this shows that it’s morally justified.’” And then he’s like, “But surely it’s also in the nature of a crocodile to consume a human being.”
Holden Karnofsky: Wow.
Rob Wiblin: “But we don’t therefore say that it’s good when crocodiles eat human beings, so this is a shoddy argument. It’s both bad when crocodiles eat people and it’s bad when people eat animals.” Yeah. This really does remind me of modern vegetarianism—
Holden Karnofsky: I mean, could have been the first EA. It seems possible. Or just, it seems pretty impressive to me.
Rob Wiblin: I mean, it is all infused with these spirituality and teleology arguments about the nature of things, which is a little bit distant. But yeah, no, I mean, maybe we should find some more writing and then try to make better sense of it, because I’m not sure the translation I was reading was absolutely tops.
Holden Karnofsky: Yeah. It’s probably not the stuff that’s had the most effort go into translations.
Rob Wiblin: Yeah. What’s another one?
Holden Karnofsky: I think metallurgy is something I wish I understood better, just because I had a lot of trouble finding a whole coherent narrative of what happened. But I mean, metal seems like a really big deal. And just a really huge part of the story of how technology has improved and how humans have gotten the ability to do all kinds of things.
Holden Karnofsky: It was even just a little mind blowing to me that it was discovered as early as it was. It’s this involved process where you have to heat these rocks up very hot and then take what comes out of them.
Holden Karnofsky: And I think improvements in metal are a big deal, and understanding how they came about… Probably a lot of the improvements in metal did not come about through this very scientific process. It was probably just a lot of messing around, but it was a huge, huge thing. And it happened a lot faster in some periods than other periods. That’s a story I wish I knew better. Maybe something like the Roots of Progress blog will cover it better at some point.
Holden Karnofsky: While I was doing this, I was often just talking to my wife, Daniela, and just talking about how interested I was. And wish I knew more about metal and cement, and how we came up with cement. And she hated this. She still makes fun of me.
Rob Wiblin: “Not interested.”
Holden Karnofsky: To this day, she’ll just be like, “Don’t start talking about cement.” This is her go-to way of making fun of me.
Rob Wiblin: Cement’s a big deal. It’s a big deal. What’s another one?
Holden Karnofsky: Alhazen is an Islamic scholar from the early 11th century who intensively studied how curved glass lenses bend light, and produced this very mathematical, rigorous study of it. I wasn’t able to really nail it down, but I could imagine… Spectacles are believed to have been developed in, or to have reached, Europe in the 1200s, and his work really could have been a key input to that.
Holden Karnofsky: And then, in addition, microscopes and telescopes, you could think of those as being really, really central to the scientific revolution. We think of the scientific revolution as starting in the 1500s. And that’s probably roughly right in spirit, but there were these bursts of science earlier, and they weren’t all in the West.
Holden Karnofsky: And that person seems like they probably belong on some list of people who just had unbelievably important scientific, rigorous discoveries that really laid the groundwork for science. Just bending glass and moving light around turned out to be an enormous deal.
Rob Wiblin: Is there any common thread, or cause, maybe, between all of these things that you’ve been listing that you think are important topics in history that haven’t been studied enough or don’t get talked enough about?
Holden Karnofsky: The common thread is human empowerment. This is how I organized my summary, and it’s why my summary has different emphases from a history textbook, and why it’s not a history textbook. I think normally when you study history, the idea is, here’s a list of stuff that happened. This thing happened and that thing happened. And there was this war and there was that war.
Holden Karnofsky: But there’s no sense that throughout it all there was this underlying factor that was changing in a predictable way. And that’s what I was looking for. I believe that over the course of history — and it would be weird if it weren’t this way — human empowerment has gone up. Because as each year goes by, there’s more and more technology. There’s more and more people around.
Holden Karnofsky: It just seems like, as a species, our ability to do what we want to do has gone up. And that is not necessarily a good thing. It could be a good thing or a bad thing. I think there were long periods where empowerment was going up and quality of life was not going up.
Holden Karnofsky: Agriculture could even be a time when empowerment went up and quality of life went down. But the list of things that humans can do, their powers over the environment, those have been going up, generally. Pretty monotonically, I would think. And it’s interesting to just reflect on, what were the big moments that caused empowerment to go up faster and more? What were the periods during which it was accelerating and what was the impact of that?
Holden Karnofsky: Was empowerment, human empowerment, generally a force for good, or for bad? I think this is relevant when I look to the future. And I think, “Well, if we created digital people, would that be good or bad?” Because that would be a giant burst of empowerment. That would give us the ability to do all kinds of things we couldn’t do before.
Holden Karnofsky: And so, can we get any clues from the past, and say, “Well, when we get empowered, do we just start shooting ourselves in the foot and making things worse, or treating each other badly? Or do we start making things better?” That is the theme. And so basically everything in this summary is about when did humans get more empowered especially fast? And what happened as a result of that, and what happened to quality of life?
Holden Karnofsky: There’s a lot of attention… It’s like, when you do it this way, this guy Alhazen is a huge deal. Just, man. Lenses, that is a big… That gave us microscopes, telescopes, and spectacles. Wow. And then William of Orange is just like… who cares. A lot of wars. Alright. Some people were in charge, other people were in charge. Whatever. And so that was the lens I took to this thing, for better or worse.
Rob Wiblin: I’m not sure whether I can connect this story with the rest, but it is something from history that I was interested to learn recently, and we’re very deep in this conversation. So I’m just going to talk about the things that I find interesting.
Holden Karnofsky: Oh yeah. Go for it.
Rob Wiblin: I was listening to this history of India. A long series of lectures, like 20 or 30 hours. Anyway, I got to the 18th century, 19th century when the English East India Company was initially creating trading ports on the coast in India. And then over time, they started just invading parts of India, basically. And playing local princes against one another in order to just take over and govern, like a government, more and more parts of India.
Rob Wiblin: Anyway, I knew that this had happened, but I was very interested to hear that there were lots of people in England who were outraged by this. There were people in Parliament who were like, “What the fuck is this? This is a company. This is a corporation that was set up to make money trading spices. We were happy with them having trading ports on the coast of India because that was a place where we could come and sell things to Indians, and then buy stuff and make bank. That’s totally fine. But it’s totally ridiculous to have parts of India being governed by a corporation that is headquartered in London.”
Rob Wiblin: And people in Parliament kept trying to stop this from happening. I think ultimately, they lost the argument. Perhaps because the East India Company became unfathomably rich through all of this exploitation and all of this trading of things back and forth from Europe to Asia.
Rob Wiblin: Anyway, I’m always interested in hearing stories like this. And I guess it connects with Porphyry saying that vegetarianism was good thousands of years ago, because often when people are condemning folks from the past for having done things that are viewed as atrocities today, other people will say, “You have to judge them by the standards of the time.”
Rob Wiblin: And I often think, even on its own terms, this is an overstated argument. Because you can almost always find people at the time who were saying that the thing that was happening was bad. And it’s very likely that the people who were doing it, or advocating for it, or defending it, would have heard these arguments and actively rejected them. And I think that makes it a lot less defensible.
Rob Wiblin: If there was an active group of people in England saying, “All of this colonialism is an outrage. It’s totally ridiculous to have India governed from London. We shouldn’t be exploiting these people to take their money.” Then it’s like, the argument has been put, and people who didn’t accept that, they’re just morally on the hook for failing to see that that was the correct move.
Holden Karnofsky: That’s really interesting. You could extend that to a lot of things from U.S. history where people were… There were ongoing debates for sure at lots of points, about a lot of the things that people point to and say, “Well, that was just a part of the time, with slavery and such.” But yeah, I mean, I think it’s pretty fair to take a lot of that behavior and just say, “That was not okay. And we’re not going to give people a pass for it.”
Rob Wiblin: I’m going to go look at more stuff on this English East India Company thing. I hope I haven’t misrepresented stuff, because these lectures refer to there being some group in England at least, and some opponents in Parliament. But I need to go and find out how big they were. And I guess also, how they ended up losing. At least until the late 19th century. Are there any other interesting episodes from history before we move on?
Holden Karnofsky: I mean, there’s tons. I’m naming some that struck me as the most random, that I’d never heard of. There were others that everyone’s heard of, but I thought were an even bigger deal than people think. But I’ll just mention the Tanzimat, which is this series of reforms in the mid-19th century in the Ottoman Empire.
Holden Karnofsky: They abolished the slave trade, they declared political equality of all religions, they decriminalized homosexuality… I wish I knew more about this. I mean, that was a real… They were ahead of the curve in the Ottoman Empire there.
Rob Wiblin: That’s amazing. I guess that maybe makes more sense of how they really jumped forward into the modern world when they became Turkey under Atatürk.
Holden Karnofsky: It could be connected. This was not under Atatürk, this was much before him. I mean, a lot of this was me just seeing these very high-level trends and being like, “Oh, what’s that? What’s that? What’s that?” And never really digging in as much as I wish I had time to, but it could be.
Applied epistemology [02:10:55]
Rob Wiblin: Okay, enough Holden and Rob talking about stuff from history they know very little about. Let’s move on and talk about other non-history things that we don’t know very much about. Any other things that you’re planning on writing about?
Holden Karnofsky: There’s going to be the series I talked about. I’m going to write some about what I call applied epistemology, which is just… Instead of stuff like, well, what is the definition of knowledge? Applied epistemology is like, “Hey, I’m sitting here in this world where a lot of people are saying things, and I don’t know who to believe, and how do I decide?” One of the things I’m going to write about is the Bayesian mindset, which is something that I think a lot of us do in this community: you think of things in terms of probabilities and values, you try to separate probabilities and values, you try to be explicit about them, and you try to run explicit expected value calculations. I think this is a very cool thing to be doing.
Holden Karnofsky: I think it could be more acknowledged and named and described and examined in terms of the pros and cons, because it’s a practice, it’s a set of practices. It’s a set of psychological tricks. It’s got pros and cons in terms of how it plays with human psychology when you’re writing down probabilities and values. That makes you think certain ways and do certain things, and when we talk about what’s good about this, it’s mostly not that there’s various theorems about expected utility maximization. Those are, to me, very tangential — or relevant, but not the only thing going on. I’m going to write about things like that.
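To make the “explicit expected value calculation” Holden describes concrete, here is a minimal sketch in Python. The decision, the probabilities, and the values are all invented for illustration; the point is only the practice of separating explicit probabilities from explicit values.

```python
# A minimal sketch of the "Bayesian mindset": list mutually exclusive
# outcomes, attach an explicit probability and an explicit value to
# each, and compute the expected value. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of probability * value over mutually exclusive outcomes."""
    total_prob = sum(p for p, _ in outcomes)
    assert abs(total_prob - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# Hypothetical decision: a risky project vs. a safe one
# (values in arbitrary units of goodness).
risky = [(0.10, 1000.0),  # 10% chance of a big win
         (0.90, -50.0)]   # 90% chance of a modest loss
safe = [(1.00, 30.0)]     # a sure, small gain

print("EV(risky):", expected_value(risky))  # 0.1*1000 + 0.9*(-50) = 55.0
print("EV(safe): ", expected_value(safe))   # 30.0
```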
Holden Karnofsky: How far do you take self-skepticism? What do I do about the fact that I could be wrong about everything, and how far do I take that? At what point do I say, “Hey, it wouldn’t be productive for me to just say, well, half the country thinks X, so I better think X too.” I mean, where do you stop on the self-skepticism train? That’s something I’m going to write about. Relatedly, just some of the methods that I’ve picked up over the years for just digging in on problems and deciding how much to research a question before reaching a conclusion on it, and how to research it. Then I’m going to write a little bit about utopia and why it seems that there’s so little excitement about utopia, and what that means, and whether that means the whole idea of utopia is futile or not, and why it’s so hard to imagine one that seems appealing.
Holden Karnofsky: Then finally I do probably want to do a kind of extensive series on this thing that I haven’t really come up with a name for yet, and I hope to by then. I’ll call it for now ‘ambitiously impartial ethics,’ or something, which a lot of people in the EA community… We tend to call it utilitarianism, but I think there’s more to it than that. I think it’s thicker than that. I think people in the EA community have this vision of a particular approach to ethics that says, “We’re going to draw a bright line between moral patients and non-moral-patients. We’re going to decide fundamentally, we’re going to get to the root of the truth of what counts as a person and what doesn’t count as a person, and then we’re going to treat all persons equally.”
Holden Karnofsky: That’s where a lot of the weird stuff in effective altruism comes from, where it’s like, well, maybe the line includes insects, and so then we should think about insect suffering as a major issue. Maybe the line includes factory farmed animals, maybe the line includes future digital people. When I say maybe, I tend to usually think that it could or should, or at least maybe should. That is a strain of thinking in the EA community that is not just about being utilitarian. It’s about having this very ambitious vision of widening the moral circle as far as it could possibly theoretically go, and therefore being in this space where you’ve immunized yourself against being a moral monster by the standards of some future advanced civilization.
Holden Karnofsky: I think one way of thinking of this is, is there an ethics we can come up with where we’ve minimized our odds that future people will think that we were jerks? Because past people sure look like jerks a lot of the time. Because for example, they were treating people badly who we now think are people who should have been treated well. Who are we treating badly who we’ll think should be treated differently in the future? So, there is this strain of thinking, but I don’t feel like it’s ever been articulated in one place, because, again, it’s not just utilitarianism. I also think there’s major weaknesses with this strain of thinking. I think there’s major ways in which it doesn’t work. A lot of what people are trying to do… It has to stop somewhere, and I think it has to stop somewhere different and less satisfying than where I think a lot of EAs imagine it’s going to stop.
Holden Karnofsky: I originally started this series trying to explain what’s wrong with this approach to ethics, even though a big part of myself endorses it. I ended up just feeling that I had to put a lot of work into articulating it in the first place, because it’s kind of floating around in the water and it hasn’t been written down. And stuff that’s floating around in the water of avant-garde EA and never written down is going to be a major thing that I’m writing about.
Rob Wiblin: Is it possible to give a flavor of why you think taking this to its logical extreme isn’t going to be quite as satisfying as it seems like it should be, setting out?
Holden Karnofsky: I think you run into a number of problems when you try to make the moral circle include everyone. We all know that the moral circle can’t include literally every physical process, because, okay, are tables moral patients? What do they even want? If I start considering every physical process a moral patient, then for every person I help, I’m hurting another person, because you could just construct the processes differently in your head. So, it has to stop somewhere. Then it’s like, how do you draw that line? I think a lot of people want to draw the line based on something like consciousness or sentience. Does something have an inner experience? I think there’s some problems with that.
Holden Karnofsky: One problem with it is when you draw the line that way, you run right into a lot of infinite ethics problems, and related problems that are just… If we have this objective measure, and there’s no limit to how many of something there can be, you run into a lot of just weird moral quandaries that can knock the teeth out of utilitarianism and make it stop working, or make you have to choose something else weird to believe. Then I think another problem is that I feel like it’s just this whole idea of consciousness and sentience… I feel like effective altruists are just taking a lot of their uncertainties and just loading them into this concept, and then being comfortable with the fact that they don’t understand what this concept is at all.
Holden Karnofsky: I suspect — and it’s only a suspicion, and I’ll try and explain the suspicion and help people understand why I have it, but I can’t prove it and it might be wrong — I suspect that when we finally get around to deconstructing consciousness, we will be disappointed. We will feel, “This is not making us feel the level of satisfaction we expected to feel about how to decide whom to care about.” We’ve been saying, “Oh, well, is this thing conscious? Is that thing conscious?” We’re loading a lot of who do we care about onto this idea of consciousness that we haven’t really looked at, and if we ever do figure out what consciousness is, I suspect we’ll look at it and we’ll say, “You know, I don’t feel great making that the center of my morality. I may need something else.”
Rob Wiblin: I’ve got a lot of thoughts on that, but this is the grab bag question section, so we’ll have to solve the hard problem of consciousness maybe in your third appearance on the show. What’s the last one?
Holden Karnofsky: Well, the only other thing is that — this is almost more of a warning, or a disclaimer, or a ‘buyer beware,’ but there’s going to be a lot of random nonsense on my blog, and so it’s just good to know that. For years I’ve had this private email list, and I just take links that I think are cool or interesting and I just share them, because then I have a record of them, and I feel like I did something with them. A lot of times they’re old. I’m not a person who’s like, “Hey, this thing just came out, I’ll be the first person to tell you about it.” And some of them are about sports, and so you’ve got to have a quick archive finger on your email if you’re going to subscribe.
Holden Karnofsky: But a general theme that I’ve noticed when I’ve been writing this stuff is that I write a ton about how you can’t trust anything you read, and about how academic social science, the methods don’t hold up and stuff doesn’t really work, and a lot of the things people say are not really supported. Even when someone debunked someone, a lot of times the debunking is wrong, or the debunking of the debunking is wrong. So, we’re in a world of a huge amount of claims and hypotheses and information, and really, I think, a much lower reliability quotient than most people imagine.
Rob Wiblin: What does that imply? I mean, it’s a very interesting one, because it’s like, I agree with that, but then I’m like, it’s kind of fun to be in the thick of it and to take it somewhat more seriously than perhaps any of these claims deserve. What else are you going to do? You can caveat everything with, “I don’t know shit, and this is probably wrong.” Every link I could link to, I could say, “This is probably not that reliable,” but maybe it all just cancels out.
Holden Karnofsky: I don’t really think it does. This is a bit of an obsession for me. Even my sports posts are somehow about this. It’s somehow about, well, because you can understand sports really well, you can see that all the methods we use in academia just spit out garbage when you apply them to something you actually understand. I don’t know, I think I have a higher bar for believing things than most people. I just look at everything and I’m just like, “I don’t know, maybe.”
Holden Karnofsky: I think that changes how I live my life in a big way. I just ignore most stuff, and I have to make really selective, deliberate decisions about what stuff seems like it might be true if I looked into it, and would be really important if it were true, and is worth my time to look into. So, I feel like I engage with many fewer claims than most people, and think about way fewer things than most people, and go way deeper on the things I do think about. That’s a style of living my life that seems very different to me.
Rob Wiblin: So inasmuch as you think each piece of evidence is less reliable, in order to really get to grips with a question you have to read more about it, and then find some way to properly integrate all of these different pieces of evidence, because reading one article is probably just quite misleading. And that’s causing you to narrow your focus onto a smaller number of things, because you want to understand them properly, rather than just drifting about in a sea of information on all topics.
Holden Karnofsky: Exactly. I basically consider most claims just false by default, and I need to pick a few that might be true that I’m going to really try to understand.
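Holden’s “false by default” stance can be read as a base-rate argument: if most circulating claims are false, seeing one asserted only moves you so far. A hedged sketch, where the prior and likelihoods are purely illustrative assumptions:

```python
# Bayes' rule applied to "should I believe this claim I just read?".
# The prior and likelihoods below are illustrative assumptions only.

def posterior_true(prior, p_seen_given_true, p_seen_given_false):
    """P(claim is true | you saw it asserted), via Bayes' rule."""
    p_seen = prior * p_seen_given_true + (1 - prior) * p_seen_given_false
    return prior * p_seen_given_true / p_seen

# Assume only 10% of circulating claims are true, and that a true claim
# is 4x as likely to get asserted in front of you as a false one.
p = posterior_true(prior=0.10, p_seen_given_true=0.8, p_seen_given_false=0.2)
print(round(p, 2))  # 0.31 -- even a favorable likelihood ratio leaves
                    # the claim more likely false than true
```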
What Holden has learned from COVID [02:20:55]
Rob Wiblin: That’s fair enough. Okay. If that’s all the blog posts, what’s something important that you’ve learned from the COVID-19 experience?
Holden Karnofsky: I mean, there’s a lot that’s potentially learnable from COVID. It’s really funny, here’s something I would like to learn, and then I’ll get to something I think I sort of have learned, or have thought about differently. Something I would like to learn is just who actually was right, ahead of other people. Because there’s a lot of claims floating around. There’s a lot of people taking victory laps, and saying, “Well, this person was right, and we should listen to them, and this community was right, and we should listen to them.” I love the EA community, but a lot of people will say the EA community, or the rationalist community, really nailed COVID. And as their support, as their footnote, they’ll link to the Scott Aaronson post. The Scott Aaronson post gives a long list of names, and he says, “These people were right. You should listen to them.” But he doesn’t give citations. He does not give links.
Holden Karnofsky: I went through these names, and half of them had not said anything about COVID that I was able to find anywhere. I think he was giving an emotional impression, this general kind of person. It was not a research project that he was publishing. A lot of those people didn’t say anything about COVID. I really wished that someone would actually try to lay this out, and say, “Okay, what was Rob saying? What was LessWrong saying?” There was something like this on LessWrong specifically. At the same time, what was the Trump Administration saying? Because sometimes they actually did, quite early on, say some quite alarmist things. What was nerdy Twitter saying? Because I generally have the impression — at least, my current impression, which could be wrong — that nerdy Twitter probably did at least as well as the rationalist and EA communities, and probably better. Though it’s a little hard to define nerdy Twitter.
Rob Wiblin: Yeah, what’s nerdy Twitter? Who should I follow?
Holden Karnofsky: Well, I don’t know. It’s like cool epidemiologists who tweet… I don’t know. The way that I engage with Twitter is I basically follow like five people, and then I see who they follow, and I basically define nerdy Twitter as people Alexander Berger retweets or follows.
Rob Wiblin: Okay.
Holden Karnofsky: But a lot of it is not EA and rationalist, and I think it’s people who did quite well. Let’s line it up. Trump Administration, various definitions of nerdy Twitter, WHO, CDC — I think they’re going to look terrible — and the various groups, and say, “Who said what when, and how valuable was that, and how correct did that look in hindsight?” That would be awesome. I do have the impression that the CDC and WHO did really horribly, and that the EA and rationalist communities did at least a lot better than those agencies, and that nerdy Twitter did a lot better than those agencies, but the granularity… I mean, I think a lot of claims are running around that I would like to nail down a little bit better.
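One standard way to “line it up” and grade who said what when is a proper scoring rule such as the Brier score (lower is better). A minimal sketch; the forecasts and outcomes below are invented placeholders, not anyone’s actual statements:

```python
# Brier score: mean squared error between probabilistic forecasts and
# what actually happened (1 = it happened, 0 = it didn't). Lower is
# better. All track records here are invented for illustration.

def brier_score(forecasts):
    """Mean of (probability - outcome)^2 over a list of forecasts."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

nerdy_twitter = [(0.7, 1), (0.6, 1), (0.2, 0)]  # hypothetical record
public_agency = [(0.1, 1), (0.3, 1), (0.6, 0)]  # hypothetical record

print(brier_score(nerdy_twitter))  # ~0.097
print(brier_score(public_agency))  # ~0.553
```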
Rob Wiblin: I would absolutely love to see that studied more systematically. There’s going to be a big problem, for example, that you might think, did this particular academic sub-discipline do well? It could have been the case that 90% of people in that academic discipline were ahead of the curve and had a decent idea what was going on, but it wouldn’t then shock me if the 10% most vocal, or the 10% most retweeted on Twitter are completely non-representative and might’ve had a more politically interesting, unusual view. You’re totally going to identify those people much more, because people saying something that’s wrong is often far more interesting at the time.
Holden Karnofsky: I’m mostly interested in public statements, and what people were out there saying getting attention for, more than what people would have said in an opinion poll, because I don’t think it matters all that much what people would have said in an opinion poll.
Rob Wiblin: A kind of cross-cutting question here is should we learn to put more weight on authorities and people with traditional credentials, or should we learn more to trust people who are super forecasters who don’t have particular domain experience, or people who seem like they’re really on it intuitively, and what they’re saying makes sense to you? It’s a really difficult one. I think my impression is that the non-domain experts did surprisingly well, but I think that might be because I have an extremely selected narrow group of non-domain experts, and if I chose non-domain experts at random, then I’d find it was just absolute garbage.
Holden Karnofsky: I mean I think in general the non-domain experts did horribly, if you put them all in one group together. But I think if you’re listening to the right non-domain experts, then they did well. And so it’s a matter of, you want to learn general lessons about what kinds of people… But it’s a little bit more nuanced than domain experts versus non-domain experts. But I would generally say that it was a big update toward listening to Rob Wiblin over listening to the CDC. I think it’s hard to argue with that.
Rob Wiblin: Thanks, Holden. You say the sweetest things. Okay. Anything else you’ve learned from COVID, or shall we move on?
Holden Karnofsky: Well, a weird thing that I haven’t seen people reflecting on all that much, as far as I can tell, is that I think the thing that is rough is that people who were right and who had a lot of foresight had a lot of trouble being helpful. The translation of being ahead of the curve on what you know to actually helping things go better looked pretty rough to me. When I think about who helped the most, I actually look at people… For example, the 1Day Sooner founder, or maybe Tomas Pueyo: people who, instead of being most remarkable for knowing COVID was going to be a thing before other people did, actually found out at the same time as everyone else, but then jumped into the fray and really threw themselves into something with all the energy they had.
Holden Karnofsky: So, I kind of feel like “effort beat foresight” is a thing that I think might be true. Again, I’d like to nail this down. I don’t know if this is true, but it makes me nervous, and it makes me think about… I have seen EAs and rationalists taking victory laps, and I think it worries me a little to take a victory lap when you didn’t necessarily help things go much better. Our goal with the most important century with AI is for things to go better. Our goal is not to say, “Haha, we told you, we saw it coming.” And to the extent those two can come apart, I think that’s worth being very nervous about.
Holden Karnofsky: I think it’s worth being nervous that we as a community are really onto something with all this AI and existential risk stuff, and yet we have so much more work to do to find a way to translate that into having a positive impact. Maybe the best way to have a positive impact isn’t even going to lean on foresight as much as we would like it to, or wish it would, or think it would be nice for it to. Maybe we should just try to have fewer buffoons in government, and that was the right answer, and that’s what we should focus on, and that’s the right thing, and that’s what’s going to help, and we didn’t have to have insights about AI in order to get there. That would be too bad for our vanity, but maybe that’s where we are.
Holden Karnofsky: So it’s something I’ve been thinking about a little bit, because I think the analogy is interesting. There was this huge thing coming, some people saw it coming earlier than others, it turned out to be just as huge as they said, but I don’t know that they were the most helpful people, and that’s an interesting juxtaposition.
Rob Wiblin: I guess there’s a broader alarming thing, which is that lots of people, including you and me — in fact, most people who were just broadly informed about the issue — knew that pandemics could be a very serious problem, and that we should be doing more to prep for it.
Holden Karnofsky: Yeah.
Rob Wiblin: As a group of millions, tens of millions of people, we did not manage to make that happen ahead of time, at least not to a sufficient degree. That’s an example where, as a society, there was almost a consensus among informed people that we should be solving this problem, and then we kind of just let it happen.
Rob Wiblin: I want to defend a little bit the people who were saying in late January or early February that this was likely to become a global pandemic, when I think significant public authorities were playing down the risk. Because folks then were saying, “We should be analyzing exactly what policy we’re going to adopt when this becomes a massive pandemic in this country, and we should be stockpiling hand sanitizer, we should be increasing manufacturing of masks, we should be figuring out how we’re going to do work remotely so that we can adjust for all this stuff.”
Rob Wiblin: It’s true, mostly those people weren’t super listened to, but it seems a little bit unfair to say these people had… I mean, these people were saying, “We should get a month ahead of this.” If you remember, March and April were just absolute chaos, because we had sat on our hands for so long rather than actually figuring out what the policy response should be, and doing the obvious preparation. I don’t know. I agree that some people who didn’t have particular foresight but then had better ideas once things became obvious… That’s also an incredibly useful skill. It’s not only about being ahead of the curve and seeing what’s about to happen, but also knowing what is actually worth doing.
Holden Karnofsky: Well, I don’t know about the ideas. I mean, I think it was people who just had a lot of time. I kind of just feel like I would trade foresight for effort, you know? People who threw themselves into something and really spent a lot of time on it I think are among the people who I think helped a lot. I think, yeah, it would have been good to be thinking about policy earlier. I wish people had listened and had thought about policy earlier. That’s totally true. A little bit of me is just like, “Look, I was one of the people who stocked up on hand sanitizer and stuff like that, and that didn’t help at all.”
Rob Wiblin: What about masks? That might’ve helped.
Holden Karnofsky: No.
Rob Wiblin: Okay. Yeah. That’s a shame.
Holden Karnofsky: By the time people were into masks, you could get masks. I mean, they were expensive for a little while… I don’t know. Maybe I’m just reacting to the victory laps and maybe I’m overreacting a little bit, but I do worry sometimes we’re very cerebral, intellectually nerdy people in this community, and I do worry sometimes that we look at whether we were right, not at whether we helped. Some people were very excited, like, “I bought boxes of pasta, and you didn’t.” It’s like, “Yeah, and that didn’t matter.” Let’s think about what mattered and look for the patterns in that.
Rob Wiblin: Totally. Totally, yeah.
Holden Karnofsky: We still have that hand sanitizer. We have this unbelievable amount of hand sanitizer sitting in our house. I don’t think we ever used a drop.
Rob Wiblin: We had a surplus of hand sanitizer as well. It didn’t work out, but we could have been right. It could have helped.
Holden Karnofsky: It could have. It could have been really hard to get hand sanitizer forever, and really important to have it.
Rob Wiblin: On the stuff that actually did matter, yeah, Tomas Pueyo and I guess me and some other bloggers were kind of on board with, “We need to cancel stuff and start staying at home” maybe only days or possibly a week before everyone else got there.
Holden Karnofsky: Which is good, definitely good.
Rob Wiblin: But I was actually about to say, we haven’t heard the last word on whether that was actually worth it, from a cost-benefit point of view. We’re actually not going to know until we see, well, how many lives actually were saved, and what were the hidden costs of this? Because obviously the costs were enormous. And although at the time I thought that was the right way to go because it preserved option value, I don’t think it’s completely obvious ex post that it was worth the cost, relative to some other approach. I’m going to be very interested to see someone try to do a cohesive cost-benefit analysis on the lockdowns.
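As a sense of what such a cost-benefit analysis would involve, here is a deliberately oversimplified sketch. Every input is a placeholder assumption; a real analysis would need carefully sourced estimates and many more terms on both sides of the ledger:

```python
# A toy lockdown cost-benefit comparison against some counterfactual
# policy. All inputs are placeholder assumptions, not real estimates.

def net_benefit(lives_saved, value_per_life, gdp_loss, other_costs):
    """Benefits minus costs, all in the same (e.g. dollar) units."""
    return lives_saved * value_per_life - (gdp_loss + other_costs)

print(net_benefit(
    lives_saved=100_000,          # deaths averted vs. the counterfactual
    value_per_life=7_000_000,     # an illustrative value-of-statistical-life
    gdp_loss=500_000_000_000,     # lost output vs. the counterfactual
    other_costs=100_000_000_000,  # monetized non-market harms (schooling, mental health, ...)
))  # 7e11 - 6e11 = 1e11: positive under these made-up numbers
```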
Holden Karnofsky: Really early in the pandemic, I made this spreadsheet analysis about when to be really careful, and I concluded that it wasn’t time yet. I think we were probably a few days ahead of San Francisco policy, but we were I think one week behind what everyone we knew was doing. We were chilling out. Then we looked back on that decision and we were like, “We traveled. We had fun. We went to another city, had a lot of fun.” We were like, “That was our last chance to have fun, and that was absolutely the highlight of 2020.” We had a great weekend in another city. We spent the rest of the year sitting in our house, and so we were like, “Man, that was a huge win, locking down a week later.” Just our house, just our household, me, my wife, and her mom.
Rob Wiblin: You were a hero of the pandemic, Holden, going on that vacation.
Holden Karnofsky: We had fun. We got to see friends, took walks with people, yeah, went out drinking, yeah.
Rob Wiblin: My memory is just the opposite. I remember I went to a house party on the 9th of March, and at the time I was like, “This is a really bad idea. We shouldn’t be going to this house party,” but the social pressure was too great, and so I went anyway. And in retrospect I was like, “This was crazy.” One or two percent of the whole population had COVID. This could have been an absolute disaster.
Holden Karnofsky: But it was pretty knowable that the risk was not that high, and the cost-benefit didn’t say to skip it. That was the last house party you went to for like a year, so good job.
Rob Wiblin: Well, we have the rest of our lives to re-litigate March 2020, so let’s push on.
Holden Karnofsky: The point is, it’s hard to really, really get this stuff right, even when you get it impressively right compared to other people, and that is something I worry about for longtermism, and I think that is a reason that it’s not so crazy to be focused on global health and wellbeing if you’ve got better stuff to do there. That’s me shoehorning a moral in there.
Rob Wiblin: Predicting things isn’t enough.
Holden Karnofsky: Yeah.
What Holden has gotten wrong recently [02:32:59]
Rob Wiblin: What are a couple of interesting or notable things that you’ve gotten wrong over the last couple of years?
Holden Karnofsky: I think we made some big mistakes in hiring and management. I think we were at a stage at one point where our work was very poorly scoped and we just… On the longtermist side we were improvising and figuring stuff out as we went. And we just hired too aggressively at that time.
Holden Karnofsky: And I think that’s a common mistake organizations make. And I think it’s an especially easy one in these very intellectually confusing areas where it’s just, you’ve been at something for years, you feel like you should know what you’re doing, you wish you knew what you were doing. You want to feel like you know what you’re doing… You hire, and then you’re trying to give people guidance and support, and it’s too hard to do because you don’t know what you’re doing.
Holden Karnofsky: So that’s something I’ve been thinking about a lot. And I often am advising organizations to just be a little more conservative with their hiring. And a lot of times it’s hard to make the case, because until you’ve seen how it plays out it always feels like, well, isn’t it always better to just do more? And I think not always. I think a lot of times you want to make sure you’ve built a really good prototype widget or two before you build a widget factory, rather than jumping into the factory when you still have only a vague idea of what the widget is. I mean, that would be the analogy I’d use.
Rob Wiblin: Yeah.
Holden Karnofsky: So that’s something. I think in general I’m feeling pretty fired up about this idea that we could be in the most important century. And that’s something that has been a priority for us for years, for sure.
Holden Karnofsky: But thinking about it now, I wish I’d gotten into this headspace a little faster somehow, and moved a little more boldly in line with that hypothesis. I’ve been dividing my time between a lot of things, and really just been pulled in a lot of directions. And I think focus is good, and having a firm conviction in something that you want to bet on that other people aren’t, and really going all the way in on it, would be a good thing for us to do more of. And probably we could have done that earlier.
Rob Wiblin: Were there any interesting emotional or social barriers to fully embracing that worldview earlier than you did?
Holden Karnofsky: I mean, for me, I’ve spent my whole life in a state where I just, I don’t like taking big high-stakes actions on something until I feel I’ve done my homework. I think it’s just healthier, and I’m really uncomfortable with… I think it’s a bad dynamic for the EA community to be pushing people to really act on things, like, this thing could happen this century. And then someone says, “Okay, and why do you believe that?” And it’s like, “I don’t know, go talk to some people.”
Holden Karnofsky: But that’s how I’ve always been. And so for me, having gotten to the point where we’ve done these worldview write-ups, and then I’ve written up the summary of the whole thing and this Most Important Century series, and feeling like I’ve stared at it and sat with it and we’ve done what we can to find the holes in it… It’s very psychologically important to me to feel that I’ve done that. And then I also think having the ability to focus my time on it is important to me too.
Holden Karnofsky: And I do know people who can get to these things before they’ve done the amount of homework that I’ve done. Now to be fair, I think those people tend to predict I’ll never get there. And I think that turns out to be false, because they’re imagining that I don’t have an evolving picture of what level of rigor is necessary, and a flexibility about how much investigation to do.
Holden Karnofsky: But I think there probably is a way to get there faster. I don’t really know, because I have a tough time wishing generally that I had just embraced everything the first time it sounded right to me. I don’t think that would have worked out well for me either, but I think there’s probably some way.
Rob Wiblin: Was there a third thing that you thought you’d gotten wrong over the last couple of years?
Holden Karnofsky: I wrote a post a while ago called Expert Philanthropy vs. Broad Philanthropy. It’s talking about the contrast between, do you want to have a specialist who works on one cause, lives and breathes that cause, knows everyone in the cause, and they lead your work in that cause? Or do you want the generalist who maybe works in five causes, and they only fund the very best stuff they see, and they don’t know as much?
Holden Karnofsky: At the time, I said expert philanthropy seems like it’s got to be better. And I think I just changed my mind on that one. I don’t think it’s clear cut, and I think we want to do a mix, but I feel like our broad philanthropy has done better than I expected it to, and I’m interested in doing more of it. And I think it may be the better model a lot of the time, so I think that’s interesting.
Rob Wiblin: Was there some benefit to the non-specialist approach that you perhaps underestimated?
Holden Karnofsky: I think it’s about keeping the bar high. So I think the issue when you hire a specialist is, they’re very focused on their cause. You have to figure out what their budget should be, and they’re going to advocate for a bigger budget, and then they’re going to spend whatever budget they have. And you can improve your ROI a lot if you have someone who is like, “I’m not interested unless this thing is amazing.”
Holden Karnofsky: And then I do think, as with many things, it’s like, a lot of the impact comes from the very best grants, and a lot of the very best grants are actually just super obvious. And so it might be better to be in more causes with a very high bar and a very high amount of generalist effective altruist mindset, so that you’re just really funding the stuff that’s really great. And that might be worth some of the costs you get where you… I mean, you’re not going to be as knowledgeable, and you’re going to miss stuff. But maybe funding the most obvious stuff, the most amazing stuff from several causes, is better than going deeper on one.
Holden Karnofsky: And then there’s another piece of it too. I’ve thought that a really important part of grantmaking is relationships, because you want people to feel comfortable telling you the truth and giving you honest feedback. And a lot of times in order to do that, you have to be really connected to a field, and you have to really know everyone in it. And I think I’ve evolved on that a bit, just in the sense that I think it’s just hopeless for an Open Philanthropy program officer to really stay in a state where people will be honest with them. And because it’s hopeless, you’re not gaining as much as I thought you were gaining when you have someone who’s really well networked.
Holden Karnofsky: I wish it were different, but I used to think of it as our expert is a known person with friends. And our broad grantmaker is a weird person in an office that no one understands. But they’re both going to be the second one. You have to find a way to have impact anyway. It’s a sad conclusion.
Rob Wiblin: You’re saying that when someone’s making big decisions on funding, even their friends might understandably become reluctant to start criticizing—
Holden Karnofsky: Yes. Yes.
Rob Wiblin: —an organization on the basis of a rumor they heard. Because now it’s a really significant thing, because you could totally change their funding just because you’ve got something wrong.
Holden Karnofsky: Exactly, yeah. The lead grantmaker is a powerful figure, and people are going to be very careful with them. So the opportunity to get a lot of gossip and scuttlebutt, I mean, it may just not be there, or there may need to be other ways to do it. A lot of times our more junior staff do better at that because they aren’t as powerful, and that could be important.
Having a kid [02:39:50]
Rob Wiblin: Interesting. Okay, totally different topic. What’s something you’ve been working on that isn’t related to effective altruism or anything like that?
Holden Karnofsky: I mean I wouldn’t say there’s no relationship, but my wife Daniela and I are having a kid soon. I think the kid will be here by the time this podcast goes up.
Rob Wiblin: Congratulations.
Holden Karnofsky: So we’ve been preparing for that and thinking about it, and that’s been a project. And will continue to be a big project.
Rob Wiblin: Are you excited? I suppose it’d be hard to say that you’re not, on the podcast—
Holden Karnofsky: Well—
Rob Wiblin: —but I imagine you wouldn’t be going into it if you weren’t.
Holden Karnofsky: We’re both excited to have a kid. It’s not one of us pushing the other. I’ve heard that the first months can be very difficult and not necessarily very rewarding. And we’ve been trying to prepare for those coming months so that they’re not worse than they have to be. So I’ve got some trepidation, obviously, and it’s a big decision. But yeah, I mean, certainly excited.
Holden Karnofsky: Daniela and I froze embryos. She’s pregnant the natural way, but we froze embryos, and that was an interesting experience. And I ended up doing a lot of research there too on just the best way to do that. I learned that the standard clinic approach now seems to be worse than the old approach. So they do this thing called ICSI, which was originally used for male infertility, and now they do it for everyone, and it seems worse. It seems worse for the kid. So avoid it if you can.
Holden Karnofsky: I learned that for both men and women, it’s better to freeze sperm or eggs earlier in life. You’re going to get better quality gametes. And so I wish I had frozen mine earlier, and I wish Daniela had frozen hers earlier. And any listeners, if you haven’t frozen anything and you might want kids someday, I would encourage you to think about doing it. It’s obviously a very different process for men and women, but I think for both it’s a good idea.
Rob Wiblin: What’s a new kind of possible global health and wellbeing cause area that would most excite you? Not necessarily the most impactful, but one that’s fun and enthusiasm inducing.
Holden Karnofsky: I mean, the truth is I just have a lot of trust in that team, and I have a lot of sympathy with their mentality, which is just, it’s a numbers game. We’re doing ROI calculations. We want to help the most people for the least money, we don’t care what it is. It could be boring. I like the South Asian air quality thing, just because it’s… I like things that are weird and that people don’t normally talk about much.
Rob Wiblin: Me too.
Holden Karnofsky: I mean, they’re doing global aid advocacy too, which I think is great, but it has less of that feel of like, “Hey, did you know that maybe a huge amount of the world’s disease burden is coming from this pollution in South Asia that you never hear about? And it’s actually a bigger issue than most of the stuff you do hear about?”
Holden Karnofsky: So I like it when they do stuff like that. I selfishly hope they keep doing it, but I don’t really care. They could end up deciding that all the money should just go to bed nets, and that would be fine, and I’d be excited about it.
Rob Wiblin: Talking about Daniela again, what do you think of Anthropic? Do you have any comments on her new organization?
Holden Karnofsky: Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so… I am about as conflict-of-interest-y as…
Rob Wiblin: Impartiality’s over at this point of the interview.
Holden Karnofsky: Yeah, exactly. Yeah. I can disclose all day, but I’m as conflict-of-interest-y as I can be with this organization. So the only thing I’ll say is that Daniela and I are together partly because we share values and life goals. She is a person whose goal is to get a positive outcome for humanity for the most important century. That is her genuine goal, she’s not there to make money.
Holden Karnofsky: And I think Anthropic just has a lot of people who have that in common with her, which is part of the reason why she’s happy being there, both among their employees and among their investors. So I think that’s cool, but I’m not the most objective observer you could ever ask for on this topic.
Rob Wiblin: Alright. This has been a marathon two-recording-session episode, but we have reached the end, or just about the end. As a final question, we’ve talked about so much fun stuff that you’ve been writing about and learning about and doing. Is there anything else exciting and fun from your life that the audience might be interested to hear about?
Holden Karnofsky: Absolutely not. No. I’m Co-CEO of Open Philanthropy, I’ve been writing this blog on my personal time, and we’re preparing to have a kid, so that’s it. I haven’t been reading, and I’ve barely been watching TV at all. There’s nothing else going on for me. You now know everything that I’ve been doing. Nothing left. I have been managing to exercise, but barely.
Rob Wiblin: “I’m running this huge team, I’m writing all these blog posts, I’m learning about history, my wife’s pregnant, she’s starting up a new organization. How much more do you want??”
Holden Karnofsky: Yeah, we covered it. There’s nothing else that I do with my time.
Rob Wiblin: Okay, alright. Well, best of luck with parenthood, and pass on my best wishes to Daniela. I hope things go smoothly.
Holden Karnofsky: Thank you.
Rob Wiblin: And yeah, I look forward to hearing about it in a couple months’ time.
Holden Karnofsky: Yeah, we’re excited. And thanks for having me on.
Rob Wiblin: My guest today has been Holden Karnofsky. Thanks so much for coming on the 80,000 Hours Podcast, Holden.
Holden Karnofsky: Good talking.
Rob’s outro [02:44:50]
Rob Wiblin: If you’ve made it to the end of this episode, I just want to draw your attention to the Open Philanthropy Technology Policy Fellowship.
Open Philanthropy is looking for applicants for a U.S. policy fellowship program focused on high-priority emerging technologies, especially AI and biotechnology. The program will go for 6-12 months and offer training, mentorship, and support matching with a host organization for a full-time position in Washington, DC.
You’ve got until September 15th to apply, and can find out more on the Open Philanthropy website, or by clicking through the link on the blog post associated with this episode.
We’re also currently hiring a new Head of Marketing to spread the word about this podcast and all the other services 80,000 Hours offers.
As always, you can stay on top of those opportunities and hundreds of others by regularly checking our job board, at 80000hours.org/jobs.
If you go there and join our job board newsletter, you’ll get an email every two weeks when it’s updated, with a selection of some of the most interesting options.
The 80,000 Hours podcast is produced by Keiran Harris.
Audio mastering is by Ben Cordell.
Full transcripts are available on our website and produced by Sofia Davis-Fogel.
Thanks for joining, talk to you again soon.
Related episodes