#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But US government agencies are already willing to spend up to $4 million to save the life of a single citizen, which makes the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:

  • The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
  • So saving all US citizens at any given point in time would be worth $1,300 trillion.
  • If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1% in relative terms (i.e. from roughly 16.7% to 16.5%), in terms of American lives saved alone.
  • Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very large benefit-to-cost ratio, likely over 1,000:1 today. (A rough version of this arithmetic is sketched below.)
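
Here is a minimal sketch of that back-of-the-envelope arithmetic, using the figures above. The US population figure and the illustrative programme cost used for the benefit-to-cost ratio are assumptions added for the example, not numbers from the episode:

```python
# Back-of-the-envelope cost-benefit sketch using the figures from the article.
# Assumptions: US population of ~330 million, and "1%" read as a relative
# reduction of the 1-in-6 risk (i.e. from ~16.7% to ~16.5%).

value_per_life = 4e6        # up to $4 million per American (agency willingness to pay)
us_population = 330e6       # ~330 million citizens (assumed round figure)

value_all_lives = value_per_life * us_population
print(f"Value of all US lives: ${value_all_lives:,.0f}")          # ~$1,320 trillion

extinction_risk = 1 / 6     # Toby Ord's estimate for the next century
relative_reduction = 0.01   # a 1% relative reduction in that risk

benefit = value_all_lives * extinction_risk * relative_reduction
print(f"Value of a 1% relative risk reduction: ${benefit:,.0f}")  # ~$2.2 trillion

# If that reduction could be bought for, say, $2 billion (purely illustrative),
# the benefit-to-cost ratio would be over 1,000:1.
illustrative_cost = 2e9
print(f"Benefit-to-cost ratio: {benefit / illustrative_cost:,.0f}:1")
```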

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it has been directly promoted by prominent scholars such as Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood, but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed was what voters had on their minds.

Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing.

It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.

But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.

Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:

  • A few reasons Carl isn’t excited by ‘strong longtermism’
  • How x-risk reduction compares to GiveWell recommendations
  • Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
  • The history of bioweapons
  • Whether gain-of-function research is justifiable
  • Successes and failures around COVID-19
  • The history of existential risk
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Highlights

International programs to stop asteroids and comets

Carl Shulman: So in earlier decades there had been a lot of interest in the Cretaceous extinction that laid waste to the dinosaurs and most of the large land animals. And prior to this it had become clear, from the discovery in Mexico of the actual impact site, that an asteroid had hit there, which helped to exclude other stories like volcanism.

Carl Shulman: And so it had become especially prominent and more solid that, yeah, this is a thing that happened. It was the actual cause of one of the most famous of all extinctions because dinosaurs are very personable. Young children love dinosaurs. And yeah, and then this was combined with astronomy having quite accurate information about the distribution of asteroid impacts. You can look at the moon and see craters of different sizes layered on top of one another. And so you can get a pretty good idea about how likely the thing is to happen.

Carl Shulman: And when you do those calculations, you find, well, on average you’d expect that about once in a million centuries there would be a dinosaur killer–scale asteroid impact. And if you ask, “Well, how bad would it be if our civilization was laid waste by an asteroid?” then you can say, well, it’s probably worth more than one year of GDP… Maybe it’s worth 25 years of GDP! In which case you’re getting to several quadrillion dollars. That is, several thousand trillion dollars.

Carl Shulman: And so the cost-benefit works out just fine in terms of just saving American lives at the rates you would apply to, say, highway construction to reduce road accidents. So one can make that case.

Carl Shulman: And then this was bolstered by a lot of political attention that Hollywood helped with. So there were several films — Deep Impact and Armageddon were two of the more influential — that helped to draw the thing more into the popular culture.

Carl Shulman: And then the final blow, or actually several blows, was Comet Shoemaker-Levy 9 impacting Jupiter and tearing Earth-sized holes in Jupiter’s atmosphere, which provided maybe even more salience and visibility. And then the ask was: we need some tens of millions, going up to $100 million, to take our existing telescope and space assets and search the sky to find any large asteroid on an impact course in the next century. And if you had found one, then you would mobilize all of society to stop it, and it’s likely that would be successful.

Carl Shulman: And so, yeah, given that ask and given the strong arguments they could make (they could say the science is solid, and it was something the general public understood), appropriators were willing to put in costs on the order of $100 million and basically solve this problem for the next century. And they found, indeed, there are no asteroids on that collision course. And if there had been, then we would have mobilized our whole society to stop it.
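
As a rough sanity check on those numbers, here is a sketch. The world GDP figure is an illustrative assumption; only the once-in-a-million-centuries frequency, the 25-years-of-GDP damage estimate, and the roughly $100 million ask come from Carl's account:

```python
# Rough expected-value check on the asteroid survey numbers.
# Assumption: world GDP of ~$100 trillion per year (illustrative round figure).

world_gdp = 100e12                        # ~$100 trillion/year (assumed)
damage = 25 * world_gdp                   # "maybe it's worth 25 years of GDP"
print(f"Damage from a dinosaur-killer impact: ${damage:,.0f}")          # ~$2.5 quadrillion

impact_prob_per_century = 1 / 1_000_000   # about once in a million centuries
expected_loss_per_century = damage * impact_prob_per_century
print(f"Expected loss per century: ${expected_loss_per_century:,.0f}")  # ~$2.5 billion

# A ~$100 million survey that finds (or rules out) any such asteroid over the
# next century therefore pays for itself many times over in expectation.
survey_cost = 100e6
print(f"Benefit-to-cost ratio: {expected_loss_per_century / survey_cost:,.0f}:1")
```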

How x-risk reduction compares to GiveWell recommendations

Carl Shulman: Yeah, I think that’s a higher bar. And the main reason is that governments’ willingness to spend tends to be related to their national resources. So the US is willing to spend these enormous amounts to save the life of one American. And in fact, in a country with lower income, there are basic public health gains, things like malaria bed nets or vaccinations, that are not fully distributed. So by adopting the cosmopolitan perspective, you ask not what the cost is relative to a government’s willingness to pay, but rather where in the world a person would benefit the most in terms of happiness or other sorts of wellbeing. And because there are these large gaps in income, there’s a standard that’s a couple of orders of magnitude higher.

Carl Shulman: Now that cuts both ways to some extent. So a problem that affects countries that have a lot of money can be easier to advocate for. If you compare advocating for increased foreign aid with advocating for preventing another COVID-19, then for the former you have to convince decision-makers, particularly at the government level, that their countries should sacrifice more for the benefit of others. In aggregate, foreign aid budgets are far too small. They’re well under 1%, and the amount that’s really directed towards humanitarian benefit and not tied into geopolitical security ambitions and the like is limited. So you may be able to get some additional leverage if you can tap these resources from existing wealthy states, but on the whole, I expect you’re going to wind up with a many-times difference. And looking at GiveWell’s numbers, they suggest… I mean, it’s more like $5,000 to do the equivalent of saving a life, so obviously you can get very different results if you use a value per life saved that’s $5,000 versus a million.

Rob Wiblin: Okay, so for those reasons, it’s an awful lot cheaper potentially to save a life in an extremely poor country, and I suppose also if you just look for… not for the marginal spend to save a life in the US, but an exceptionally unusually cheap opportunity to save a life in what are usually quite poor countries. So you’ve got quite a big multiple between, I guess, something like $4 million and something like $5,000. But then if the comparison is the $5,000 to save a life figure, then would someone be especially excited about that versus the existential risk reduction, or does the existential risk reduction still win out? Or do you end up thinking that both of these are really good opportunities and it’s a bit of a close call?

Carl Shulman: So I’d say it’s going to depend on where we are in terms of our activity on each of these and the particular empirical estimates. A thing that’s distinctive about the opportunities for reducing catastrophic risks or existential risks is that they’re shockingly neglected. In aggregate you do have billions of dollars of biomedical research, but the share of that going to avert these sorts of catastrophic pandemics is very limited. If you take a step further back to things like advocacy or leveraged science, that is, picking the best opportunities within that space, it’s even more narrow.

Carl Shulman: And if you further consider… So in the area of pandemics and biosecurity, the focus of a lot of effective altruist activity around biosecurity is things that would also work for engineered pandemics. And if you buy, say, Toby Ord’s estimates, then the risk from artificial pandemics is substantially greater than from natural pandemics. The reason being that a severe engineered pandemic or series thereof, that is, like a war fought, among other things, with bioweapon WMD, could do damage on the order of half or more of the global population. I mean, so far excess deaths from COVID-19 are approaching one in 1,000. So the scale there is larger.

Carl Shulman: And if we put all these things together and I look at the marginal opportunities that people are considering in biosecurity and pandemic preparedness, in some of the things with respect to risk from artificial intelligence, and also in some of the most leveraged things to reduce the damage from nuclear winter (which is not nearly as high an existential risk, but does pose a global catastrophic risk, a risk of killing billions of people), I think there are things that offer opportunities that are substantially better than the GiveWell top charities right now.
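
To put Carl's comparison in one set of units, here is one hypothetical way to express an x-risk intervention as a cost per expected life saved. The programme cost and the 1% risk-reduction figure are invented for illustration; only the one-in-six risk and the roughly $5,000-per-life GiveWell benchmark come from the conversation, so treat this as a sketch rather than Carl's own calculation:

```python
# Hypothetical comparison in cost per expected (statistical) life saved.
# Invented inputs: a $2 billion programme achieving a 1% relative reduction in
# a 1-in-6 extinction risk, counting only people alive today.

world_population = 8e9          # ~8 billion people alive today
extinction_risk = 1 / 6         # Toby Ord's century-level estimate
relative_reduction = 0.01       # assumed effect of the programme (illustrative)
programme_cost = 2e9            # assumed cost (illustrative)

expected_lives_saved = world_population * extinction_risk * relative_reduction
cost_per_life = programme_cost / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved:,.0f}")       # ~13 million
print(f"Cost per expected life saved: ${cost_per_life:,.0f}")     # ~$150

# GiveWell's top charities run at roughly $5,000 per equivalent life saved, so
# under these made-up inputs the x-risk programme compares favourably; with much
# less optimistic inputs the comparison can easily flip.
givewell_benchmark = 5_000
print(f"Ratio vs. GiveWell benchmark: {givewell_benchmark / cost_per_life:,.0f}x")
```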

What would actually lead to policy changes

Rob Wiblin: It seems like if we can just argue that on practical terms, given the values that people already have, it’ll be justified to spend far more reducing existential risk, then maybe that’s an easier sell to people, and maybe that really is the crux of the disagreement that we have with mainstream views, rather than any philosophical difference.

Carl Shulman: Yeah, so I’d say… I think that’s very important. It’s maybe more central than some would highlight it as. And particularly, if you say… Take, again, I keep tapping Toby’s book because it does more than most to really lay out things with a clear taxonomy and is concrete about its predictions. So he gives this risk of one in six over the next 100 years, and 10 out of those 16 percentage points or so he assigns to risk from advanced artificial intelligence. And that’s even conditioning on only a 50% credence in such AI capabilities being achieved over the century. So that’s certainly not a view that is widely, publicly, clearly held. There are more people who hold that view than say it, but that’s still a major controversial view, and a lot of updates would follow from that. So if you just take that one issue right there, the risk estimates on a lot of these things move drastically with it.
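
One implication worth spelling out (this is an inference from the numbers Carl quotes, not something computed in the episode): if the unconditional AI risk is about 10 percentage points, and that already conditions on only a 50% chance of such AI capabilities arriving this century, then the implied risk conditional on those capabilities arriving is around 20%.

```python
# Unpacking the conditional probability implied by the figures Carl cites
# (Toby Ord's published estimates, as described in the conversation).

total_x_risk = 1 / 6                  # ~16.7% existential risk this century
unconditional_ai_risk = 0.10          # ~10 percentage points assigned to AI
p_ai_capabilities_this_century = 0.5  # credence that such AI is achieved at all

# P(catastrophe) = P(capabilities) * P(catastrophe | capabilities)
conditional_ai_risk = unconditional_ai_risk / p_ai_capabilities_this_century
print(f"Implied risk given AI capabilities arrive: {conditional_ai_risk:.0%}")        # 20%
print(f"Share of total existential risk from AI: {unconditional_ai_risk / total_x_risk:.0%}")  # ~60%
```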

Carl Shulman: And then the next largest known extinction risk that he highlights is the risk from engineered pandemics and bioweapons. And there, in some ways we have a better understanding, but there’s still a lot of controversy and a lot of uncertainty about questions like, “Are there still secret bioweapons programs as there were in the past? How large might they be? How is the technology going to enable damaging attacks versus defenses?” I mean, I think COVID-19 has shown that a lot of damage can still be inflicted, but also that it’s very hard to cause extinction, because we can change our behavior a lot and we can restrict spread.

Carl Shulman: But still, there are a lot of areas of disagreement and uncertainty here in biosafety and future pandemics. And you can see that in some of the debates about gain-of-function research in the last decade, where the aim was to gain incremental knowledge to better understand potentially dangerous pathogens. And in particular, the controversial experiments were those that altered some of those pathogens to make them more closely resemble something that could cause a global pandemic, or that might demonstrate new mechanisms for making something deadly, or that might be used, say, in biological weapons.

Carl Shulman: So when the methods of destruction are published, and the genomes of old viruses are published, they’re then available to bioweapons programmes and non-state actors. And so some of the players arguing in those debates seem to take a position that there’s basically almost no risk of that kind of information being misused. So implicitly assigning really low probability to future bioterrorism or benefiting some state bioweapons programmes, while others seem to think it’s a lot more likely than that. And similarly, on the risk of accidental release, you’ve had people like Marc Lipsitch, arguing for using estimates based off of past releases from known labs, and from Western and other data. And then you had other people saying that their estimates were overstating the risk of an accidental lab escape by orders and orders of magnitude to a degree which I think is hard to reconcile with the number of actual leaks that have happened. But yeah, so if you were to sync up on those questions, like just what’s the right order of magnitude of the risk of information being used for bioweapons? What’s the right order of magnitude of escape risk of different organisms? It seems like that would have quite a big impact on policy relative to where we stand today.

Solutions for climate change

Carl Shulman: So I’d say that what you want to do is find things that are unusually leveraged: attending to problems that are less politically salient, engaging in scientific and technological research that doesn’t deliver as immediate a result, and things like political advocacy, where you’re hoping to leverage the resources of governments that are more flush and capable.

Carl Shulman: Yeah, so in that space, clean energy research and advocacy for clean energy research seems relatively strong. Some people in the effective altruism community have looked into that and raised funds for some work in that area. I think Bill Gates… The logic for that is that if you have some small country and they reduce their carbon emissions by 100%, then that’s all they can do, costs a fair amount, and it doesn’t do much to stop emissions elsewhere, for example, in China and India and places that are developing and have a lot of demand for energy. And if anything, it may… If you don’t consume some fossil fuels, then those fossil fuels can be sold elsewhere.

Carl Shulman: Whereas if you develop clean energy tech, that changes the decision calculus all around the world. If solar is cheaper than coal (and it’s already making great progress over much of the world), and then hopefully, with time, cheaper than natural gas, then it greatly alleviates the difficulty of the coordination problem. It reduces the sacrifice needed to act. And if you just look at how little is actually spent on clean energy research compared to the benefits and compared to the successes that have already been achieved, that looks really good. So if I was spending the incremental dollar on reducing climate change, I’d probably want to put it more towards clean energy research than towards immediate reductions of other sorts, things like planting trees or improving efficiency in a particular building. I’d rather solve that global public goods problem of creating the technologies that will better solve things.

Carl Shulman: So continuing progress on solar, on storage, you want high-voltage transmission lines to better integrate renewables. On nuclear, nuclear has enormous potential. And if you look at the cost for France to build an electrical grid based almost entirely on nuclear, way back early in the nuclear era, it was quite manageable. In that sense, for electrical energy, the climate change problem is potentially already solved, except that the regulatory burdens on nuclear are so severe that it’s actually not affordable to construct new plants in many places. And they’re held to standards of safety that are orders of magnitude higher than polluting fossil fuels. So enormous numbers of people are killed, even setting aside climate change, just by particulate matter and other pollution from coal, and to a lesser extent, natural gas.

Carl Shulman: But for the major nuclear accidents, you have things like Fukushima, where the fatalities from the Fukushima release itself seem to be zero, except for all the people who were killed panicking about the thing due to exaggerated fears about the damage of small radiation levels. And we see large inconsistencies in how people treat radiation levels from different sources: you get more radiation when you are at a higher altitude, but we panic orders and orders of magnitude less about that sort of thing. And the safety standards in the US basically require plants to be as safe as possible; they don’t stop at a level of being safer than all the alternatives, or much better.

Carl Shulman: So we’re left in a world where nuclear energy could be providing us with largely carbon-free power, and converted into other things. And inventing better technologies for it could help, but given the regulatory regime, it seems like they would again have their costs driven up accordingly. So if you’re going to try and solve nuclear to fight climate change, I would see the solution as more on the regulatory side and finding ways to get public opinion and anti-nuclear activist groups to shift towards a more pro-climate, pro-nuclear stance.

Solutions for nuclear weapons

Carl Shulman: I found three things that really seemed to pass the bar of “this looks like something that should be in the broad EA portfolio” because it’s leveraged enough to help on the nuclear front.


Carl Shulman: The first one is better characterizing the situation with respect to nuclear winter. A relatively small number of people had done work on that, without much followup in subsequent periods. Some of the original authors did return to it over the last few decades, writing papers using modern climate models, which have been developed a lot because of concern about climate change and just general technological advance, to try and refine those estimates. Open Philanthropy has provided grants funding some of that followup work: better assessing the magnitude of the risk, how likely nuclear winter is from different situations, and estimating the damages more precisely.

Carl Shulman: It’s useful in two ways. First, the magnitude of the risk is like an input for folk like effective altruists to decide how much effort to put into different problems and interventions. And then secondly, better clarifying the empirical situation can be something that can help pull people back from the nuclear brink.


Carl Shulman: [The second is] just the sociopolitical aspects of nuclear risk estimation. How likely is it, from the data that we have, that things escalate? There are the known near misses that I’m sure have been talked about on the show before, things like the Cuban Missile Crisis and Able Archer in the 80s, but we have uncertainty about what the next step would have been if things had gone differently in those situations. How likely was the alternative, for example the possibility that the sub that surfaced in response to depth charges in the Cuban Missile Crisis might instead have fired its weapons?

Carl Shulman: And so, we don’t know. Both Kennedy and Khrushchev, insofar as we have autobiographical evidence (and of course they might lie for various reasons, including their reputations), said they were really committed to not having a nuclear war then. And even when they estimated and worried that there was something like a one in two or one in three chance of disaster, they were acting with uncertainty about the thinking of their counterparts.

Carl Shulman: We still have a fairly profound residual uncertainty about the likelihood of escalation from the sorts of triggers and near misses that we see. We don’t know really how near they were, because of the difficulty of estimating these remaining things. And so I’m interested in work that better characterizes those sorts of processes and the psychology involved, and what we can learn from as much data as exists.


Carl Shulman: Number three… so, the damage from nuclear winter on neutral countries is anticipated to be mostly by way of making it hard to grow crops. And all the analyses I’ve seen and talking to the authors of the nuclear winter papers suggest there’s not a collapse so severe that no humans could survive. And so there would be areas far from the equator, New Zealand might do relatively well. Fishing would be possible. I think you’ve had David Denkenberger on the show.

Carl Shulman: So I can refer to that. And so that’s something that I came across independently relatively early on when I was doing my systematic search through what are all of the x-risks? What are the sorts of things we could do to respond to each of them? And I think that’s an important area. I think it’s a lot more important in terms of saving the lives of people around today than it is for existential risk. Because as I said, directly starving everyone is not so likely, but killing billions seems quite likely. And even if the world is substantially able to respond by producing alternative foods, things like converting biomass into food stocks, that may not be universally available. And so poor countries might not get access if the richest countries that have more industrial capacity are barely feeding themselves. And so the chance of billions dying from such a nuclear winter is many times greater than that of extinction directly through that mechanism.

Gain-of-function research

Carl Shulman: So, the safety still seems poor, and it’s not something that has gone away in the last decade or two. There’ve been a number of mishaps, even in recent years, for example the multiple lab-acquired infections with, and releases of, SARS-1 after it had been extirpated in the wild. The danger from that sort of work in total is limited by the small number of labs that are doing it, and even those labs aren’t doing it most of the time. So, I’m less worried that there will be just absolutely enormous quantities of civilian research making ultra-deadly pandemics than I am about bioweapons programs. But it does highlight some of the issues in an interesting way.

Carl Shulman: And yeah, if we have an infection rate of one in 100 per worker-year, or one in 500 per laboratory-year, and the infection is with a potential pandemic pathogen… And in a lot of these leaks, someone else was infected as well, though usually not many, because the pathogens don’t have a high enough R naught. So you might say something on the order of one in 1,000 per year of work with this kind of thing for an escape, and there are only a handful of labs effectively doing this kind of thing.

Carl Shulman: So you wouldn’t reliably have expected any catastrophic releases to have happened yet. But if you scaled this up and had hundreds of labs doing pandemic pathogen gain-of-function kind of work, where they were actually making things that would themselves be ready to cause a pandemic directly, yeah, I mean, that cumulative threat could get pretty high…

Carl Shulman: So I mean, take it down to a one in 10,000 leak risk per year, and then look at COVID as an order of magnitude for damages: $10 trillion, several million dead, maybe around 10 million excess dead. And of course these things could be worse; you could have something that did 50 or 100 times as much damage as COVID. But even so, one 10,000th of a $10 trillion burden or of 10 million lives (a billion dollars, or 1,000 deaths) is quite significant. And you could imagine that these labs had to get insurance, in the way that if you’re going to drive a vehicle where you might kill someone, you’re required to have insurance so that you can pay to compensate for the damage. And if you did that, then you might need a billion dollars a year of insurance for one of these labs.
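
The arithmetic behind that insurance figure, using only the round numbers Carl gives:

```python
# Expected-harm arithmetic behind the "billion dollars a year of insurance" figure.
# Inputs are the round numbers from the discussion.

leak_risk_per_year = 1 / 10_000   # assumed annual risk of a pandemic-causing escape
covid_scale_cost = 10e12          # ~$10 trillion of damage from COVID-19
covid_scale_deaths = 10e6         # ~10 million excess deaths

expected_cost_per_year = leak_risk_per_year * covid_scale_cost
expected_deaths_per_year = leak_risk_per_year * covid_scale_deaths

print(f"Expected damage per lab-year: ${expected_cost_per_year:,.0f}")   # $1,000,000,000
print(f"Expected deaths per lab-year: {expected_deaths_per_year:,.0f}")  # 1,000

# So a lab required to insure against COVID-scale third-party damage at that
# leak risk would face premiums on the order of a billion dollars per year.
```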

Carl Shulman: And now, there’s benefits to the research that they do. They haven’t been particularly helpful in this pandemic, and critics argue that this is a very small portion of all of the work that can contribute to pandemic response and so it’s not particularly beneficial. I think there’s a lot to that, but regardless of that, it seems like there’s no way that you would get more defense against future pandemics by doing one of these gain-of-function experiments that required a billion dollars of insurance, than you would by putting a billion dollars into research that doesn’t endanger the lives of innocent people all around the world.

Rob Wiblin: Something like funding vaccine research?

Carl Shulman: Yeah. It would let you do a lot of things to really improve our pandemic forecasting, response, therapeutics, and vaccines. And so it’s hard to tell a story where this is justified. And it seems that you have an institutional flaw where people approve these studies and maybe put millions of dollars of grant money into them, but they wouldn’t approve them if they had to put in a billion dollars to cover the harm they’re doing to outside people. Currently, our system doesn’t actually impose appropriate liability or responsibility, or anything, for those kinds of impacts on third parties. There are a lot of rules and regulations and safety requirements, and duties to pay settlements to workers who are injured in the lab, but there’s no expectation of responsibility for the rest of the world. Even if there were, it would be limited, because it’d be a Pascal’s wager sort of thing: if you’re talking about a one in 1,000 risk, well, you’d be bankrupt.

Carl Shulman: Maybe the US government could handle it, but for the individual decision-makers, it’s just very unlikely to come up during their tenure, and certainly not from a particular grant or a particular study. It’s like if there were some kinds of scientific experiments that emitted massive, massive amounts of pollution, and that pollution was not considered at all in whether to approve the experiments: you’d wind up getting far too many of those experiments done, even if there were some that were worth doing at that incredible price.

Suspicious convergence around x-risk reduction

Rob Wiblin: You want to say that even if you don’t care much about [the long-term future], you should view almost the same activities, quite similar activities, as nonetheless among the most pressing things that anyone could do… And whenever one notices a convergence like that, you have to wonder, is it not perhaps the case that we’ve become really attached to existential risk reduction as a project? And now we’re maybe just rationalizing the idea that it has to be the most important?

Carl Shulman: Yeah. Well, the first and most important thing is I’m not claiming that, at the limit, exactly the same things are optimal by all of these different standards. I’d say that the two biggest factors are, first, that if all you’re adjusting is something like population ethics, while holding fixed things like being willing to take risks on low-probability things, using quantitative evidence, having the best picture of what’s happening with future technologies, all of that, then you’re sharing so much that you’re already moving away from standard practice a lot, and winding up in this narrow space. And second, if it’s true that the world is in fact going to be revolutionized, and potentially ruined, by disasters involving some of these advanced technologies over the century, then that’s just an enormous, enormous thing. And you may take different angles on how to engage with that depending on other considerations and values.

Carl Shulman: But the thing itself is so big and such an update that you should be taking some angle on that problem. And you can analogize: say you’re Elon Musk, and Musk cares about climate change and AI risk and other threats to the future of humanity, and he’s putting most of his time into Tesla. You might think that AI poses a greater risk of human extinction over the century. But if you have a plan to make self-driving electric cars that will be self-financing and make you the richest person in the world, that will then let you fund a variety of other things. It could be a great idea even if Elon Musk just wanted to promote, you know, the fine arts, because being the richest person in the world is going to set you up relatively well for that. And so similarly, if indeed AI is set to be one of the biggest things ever in this century, and to set both the fate of existing beings and the fate of the long-term future of Earth-originating civilization, then it’s so big that you’re going to take some angle on it.

Carl Shulman: And different values may lead you to focus on different aspects of the problem. If you think, well, other people are more concerned about this aspect of it. And so maybe I’ll focus more on the things that could impact existing humans, or I’ll focus more on how AI interacts with my religion or national values or something like that. But yeah, if you buy these extraordinary premises about AI succeeding at its ambitions as a field, then it’s so huge that you’re going to engage with it in some way.

Rob Wiblin: Yeah. Okay. So even though you might have different moral values, just the empirical claims about the impact that these technologies might have potentially in the next couple of decades, setting aside anything about future generations, future centuries. That’s already going to give you a very compelling drive, almost no matter what your values are, to pay attention to that and what impact it might have.

Carl Shulman: Although that is dependent on the current state of neglectedness. If you expand activities in the neglected area by tenfold, a hundredfold, a thousandfold, then its relative attractiveness compared to neglected opportunities in other areas will plummet. And so I would not expect this to continue if we scaled up to the level of investment in existential risk reduction that, say, Toby Ord talks about. Then you would wind up with other things that maybe exploit a lot of the same advantages (things like taking risks, looking at science, et cetera) but in other areas. Maybe really, really super leveraged things in political advocacy for the right kind of science, or uses of AI technologies to help benefit the global poor today, or things like that.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.