#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
By Robert Wiblin and Keiran Harris · Published October 5th, 2021
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 The interview begins [00:01:34]
- 3.3 A few reasons Carl isn't excited by strong longtermism [00:03:47]
- 3.4 Longtermism isn't necessary for wanting to reduce big x-risks [00:08:21]
- 3.5 Why we don't adequately prepare for disasters [00:11:16]
- 3.6 International programs to stop asteroids and comets [00:18:55]
- 3.7 Costs and political incentives around COVID [00:23:52]
- 3.8 How x-risk reduction compares to GiveWell recommendations [00:34:34]
- 3.9 Solutions for asteroids, comets, and supervolcanoes [00:50:22]
- 3.10 Solutions for climate change [00:54:15]
- 3.11 Solutions for nuclear weapons [01:02:18]
- 3.12 The history of bioweapons [01:22:41]
- 3.13 Gain-of-function research [01:34:22]
- 3.14 Solutions for bioweapons and natural pandemics [01:45:31]
- 3.15 Successes and failures around COVID-19 [01:58:26]
- 3.16 Who to trust going forward [02:09:09]
- 3.17 The history of existential risk [02:15:07]
- 3.18 The most compelling risks [02:24:59]
- 3.19 False alarms about big risks in the past [02:34:22]
- 3.20 Suspicious convergence around x-risk reduction [02:49:31]
- 3.21 How hard it would be to convince governments [02:57:59]
- 3.22 Defensive epistemology [03:04:34]
- 3.23 Hinge of history debate [03:16:01]
- 3.24 Technological progress can't keep up for long [03:21:51]
- 3.25 Strongest argument against this being a really pivotal time [03:37:29]
- 3.26 How Carl unwinds [03:45:30]
- 3.27 Rob's outro [03:48:02]
- 4 Learn more
- 5 Related episodes
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.
But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.
According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.
The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
- The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
- So saving all US citizens at any given point in time would be worth $1,300 trillion.
- If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
- Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. (The arithmetic is spelled out in the quick sketch below.)
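To make the numbers concrete, here is a minimal sketch of that arithmetic. The US population figure is an assumption for illustration (roughly 330 million); the other numbers are the ones quoted above.

```python
# Back-of-the-envelope version of the argument above.
us_population = 330e6        # assumed ~330 million Americans (not a figure from the episode)
value_per_life = 4e6         # up to ~$4 million per statistical life (US agency figures)
extinction_risk = 1 / 6      # Toby Ord's estimate of existential risk this century
risk_reduction = 0.01        # a 1% relative reduction in that risk

value_of_all_us_lives = us_population * value_per_life
worthwhile_spend = value_of_all_us_lives * extinction_risk * risk_reduction

print(f"Value of all US lives: ~${value_of_all_us_lives / 1e12:,.0f} trillion")            # ~$1,300 trillion
print(f"Worth spending on a 1% risk reduction: ~${worthwhile_spend / 1e12:,.1f} trillion")  # ~$2.2 trillion
```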
This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it has been directly promoted by prominent scholars like Richard Posner, Larry Summers, and Cass Sunstein.
If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?
Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed was what voters had on their minds.
Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing.
It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.
But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.
Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.
Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:
- A few reasons Carl isn’t excited by ‘strong longtermism’
- How x-risk reduction compares to GiveWell recommendations
- Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
- The history of bioweapons
- Whether gain-of-function research is justifiable
- Successes and failures around COVID-19
- The history of existential risk
- And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Highlights
International programs to stop asteroids and comets
Carl Shulman: So in earlier decades there had been a lot of interest in the Cretaceous extinction that laid waste to the dinosaurs and most of the large land animals. And prior to this, the actual site in Mexico where the asteroid hit had been found, which helped to exclude other stories like volcanism.
Carl Shulman: And so it had become especially prominent and more solid that, yeah, this is a thing that happened. It was the actual cause of one of the most famous of all extinctions because dinosaurs are very personable. Young children love dinosaurs. And yeah, and then this was combined with astronomy having quite accurate information about the distribution of asteroid impacts. You can look at the moon and see craters of different sizes layered on top of one another. And so you can get a pretty good idea about how likely the thing is to happen.
Carl Shulman: And when you do those calculations, you find, well, on average you’d expect about one in a million centuries there would be a dinosaur killer–scale asteroid impact. And if you ask, “Well, how bad would it be if our civilization was laid waste by an asteroid?” Then you can say, well it’s probably worth more than one year of GDP… Maybe it’s worth 25 years of GDP! In which case we could say, yeah, you’re getting to several quadrillion dollars. That is several thousand trillion dollars.
Carl Shulman: And so the cost benefit works out just fine in terms of just saving American lives at the rates you would, say, arrange highway construction to reduce road accidents. So one can make that case.
Carl Shulman: And then this was bolstered by a lot of political attention that Hollywood helped with. So there were several films — Deep Impact and Armageddon were two of the more influential — that helped to draw the thing more into the popular culture.
Carl Shulman: And then the final blow, or actually several blows, was Comet Shoemaker-Levy 9 impacting Jupiter and tearing Earth-sized holes in Jupiter's atmosphere, which provided maybe even more salience and visibility. And then the ask was, we need some tens of millions, going up to $100 million, to take our existing telescope assets and space assets and search the sky to find any large asteroid on an impact course in the next century. And then if you had found that, then you would mobilize all of society to stop it, and it's likely that would be successful.
Carl Shulman: And so, yeah, given that ask and given the strong arguments they could make, they could say the science is solid, and it was something the general public understood. Then appropriators were willing to put in money on the order of $100 million and basically solve this problem for the next century. And they found, indeed, there are no asteroids on that collision course. And if they had, then we would have mobilized our whole society to stop it.
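As a rough illustration of the expected-value logic in this passage: the one-in-a-million-per-century rate, the "maybe 25 years of GDP" damage figure, and the roughly $100 million program cost are Carl's; the ~$100 trillion figure for world GDP is an assumed round number, and the sketch ignores smaller but still damaging impacts.

```python
# Expected-value sketch of the asteroid survey case described above.
impact_prob_per_century = 1e-6   # ~1-in-a-million chance per century of a dinosaur killer-scale impact
world_gdp = 100e12               # assumed ~$100 trillion world GDP
damage = 25 * world_gdp          # "maybe 25 years of GDP" of damage -> several quadrillion dollars
survey_cost = 100e6              # roughly $100 million to find any large asteroid on a collision course

expected_damage = impact_prob_per_century * damage   # expected loss per century from this scenario
benefit_cost_ratio = expected_damage / survey_cost   # assumes detection would enable prevention

print(f"Expected damage per century: ~${expected_damage / 1e9:,.1f} billion")  # ~$2.5 billion
print(f"Benefit-to-cost ratio: ~{benefit_cost_ratio:.0f}:1")                    # ~25:1
```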
How x-risk reduction compares to GiveWell recommendations
Carl Shulman: Yeah, I think that's a higher bar. And the main reason is that governments' willingness to spend tends to be related to their national resources. So the US is willing to spend these enormous amounts to save the life of one American. And in fact, in a country with lower income, there are basic public health gains, things like malaria bed nets or vaccinations, that are not fully distributed. So by adopting the cosmopolitan perspective, you then ask not what the cost is relative to a government's willingness to pay, but rather where anywhere in the world a person would benefit the most in terms of happiness or other sorts of wellbeing. And because there are these large gaps in income, that's a standard that's a couple of orders of magnitude higher.
Carl Shulman: Now that cuts both ways to some extent. So a problem that affects countries that have a lot of money can be easier to advocate for. So if you're trying to advocate for increased foreign aid versus trying to advocate for preventing another COVID-19, then for the former you have to convince decision-makers, particularly at the government level; you have to convince countries that they should sacrifice more for the benefit of others. In aggregate, foreign aid budgets are far too small. They're well under 1% of national income, and the amount that's really directed towards humanitarian benefit and not tied into geopolitical security ambitions and the like is limited. So you may be able to get some additional leverage if you can tap these resources from existing wealthy states, but on the whole, I expect you're going to wind up with a difference of many times. And looking at GiveWell's numbers, they suggest… I mean, it's more like $5,000 to do the equivalent of saving a life, so obviously you can get very different results if you use a value per life saved that's $5,000 versus a million.
Rob Wiblin: Okay, so for those reasons, it’s an awful lot cheaper potentially to save a life in an extremely poor country, and I suppose also if you just look for… not for the marginal spend to save a life in the US, but an exceptionally unusually cheap opportunity to save a life in what are usually quite poor countries. So you’ve got quite a big multiple between, I guess, something like $4 million and something like $5,000. But then if the comparison is the $5,000 to save a life figure, then would someone be especially excited about that versus the existential risk reduction, or does the existential risk reduction still win out? Or do you end up thinking that both of these are really good opportunities and it’s a bit of a close call?
Carl Shulman: So I’d say it’s going to depend on where we are in terms of our activity on each of these and the particular empirical estimates. So a thing that’s distinctive about the opportunities for reducing catastrophic risks or existential risks is that they’re shockingly neglected. So in aggregate you do have billions of dollars of biomedical research, but the share of that going to avert these sort of catastrophic pandemics is very limited. If you take a step further back to things like advocacy or leverage science, that is, picking the best opportunities within that space, that’s even more narrow.
Carl Shulman: And if you further consider… So in the area of pandemics and biosecurity, the focus of a lot of effective altruist activity around biosecurity is things that would also work for engineered pandemics. And if you buy, say, Toby Ord’s estimates, then the risk from artificial pandemics is substantially greater than from natural pandemics. The reason being that a severe engineered pandemic or series thereof — that is, like a war fought, among other things, with bioweapon WMD — could do damage on the scale of half or more of the global population. I mean, so far excess deaths are approaching one in 1,000 from COVID-19. So the scale there is larger.
Carl Shulman: And if we put all these things together and I look at the marginal opportunities that people are considering in biosecurity and pandemic preparedness, and in some of the things with respect to risk from artificial intelligence, and then also from some of the most leveraged things to reduce the damage from nuclear winter — which is not nearly as high an existential risk, but has a global catastrophic risk, a risk of killing billions of people — I think there are things that offer opportunities that are substantially better than the GiveWell top charities right now.
What would actually lead to policy changes
Rob Wiblin: It seems like if we can just argue that on practical terms, given the values that people already have, it’ll be justified to spend far more reducing existential risk, then maybe that’s an easier sell to people, and maybe that really is the crux of the disagreement that we have with mainstream views, rather than any philosophical difference.
Carl Shulman: Yeah, so I’d say… I think that’s very important. It’s maybe more central than some would highlight it as. And particularly, if you say… Take, again, I keep tapping Toby’s book because it does more than most to really lay out things with a clear taxonomy and is concrete about its predictions. So he gives this risk of one in six over the next 100 years, and 10 out of those 16 percentage points or so he assigns to risk from advanced artificial intelligence. And that’s even conditioning on only a 50% credence in such AI capabilities being achieved over the century. So that’s certainly not a view that is widely, publicly, clearly held. There are more people who hold that view than say it, but that’s still a major controversial view, and a lot of updates would follow from that. So if you just take that one issue right there, the risk estimates on a lot of these things move drastically with it.
Carl Shulman: And then the next largest known source of risk that he highlights is engineered pandemics and bioweapons. And there, in some ways we have a better understanding of that, but there’s still a lot of controversy and a lot of uncertainty about questions like, “Are there still secret bioweapons programs as there were in the past? How large might they be? How is the technology going to enable damaging attacks versus defenses?” I mean, I think COVID-19 has showed a lot of damage can still be inflicted, but also that it's very hard for a pandemic to cause extinction, because we can change our behavior a lot, we can restrict spread.
Carl Shulman: But still, there’s a lot of areas of disagreement and uncertainty here in biosafety and future pandemics. And you can see that in some of the debates about gain of function research in the last decade, where the aim was to gain incremental knowledge to better understand potentially dangerous pathogens. And in particular, the controversial experiments were those that were altering some of those pathogens to make them more closely resemble a thing that could cause global pandemics. Or that might be demonstrating new mechanisms to make something deadly, or that might be used, say, in biological weapons.
Carl Shulman: So when the methods of destruction are published, and the genomes of old viruses are published, they’re then available to bioweapons programmes and non-state actors. And so some of the players arguing in those debates seem to take a position that there’s basically almost no risk of that kind of information being misused. So implicitly assigning really low probability to future bioterrorism or benefiting some state bioweapons programmes, while others seem to think it’s a lot more likely than that. And similarly, on the risk of accidental release, you’ve had people like Marc Lipsitch, arguing for using estimates based off of past releases from known labs, and from Western and other data. And then you had other people saying that their estimates were overstating the risk of an accidental lab escape by orders and orders of magnitude to a degree which I think is hard to reconcile with the number of actual leaks that have happened. But yeah, so if you were to sync up on those questions, like just what’s the right order of magnitude of the risk of information being used for bioweapons? What’s the right order of magnitude of escape risk of different organisms? It seems like that would have quite a big impact on policy relative to where we stand today.
Solutions for climate change
Carl Shulman: So I’d say that what you want to do is find things that are unusually leveraged, that are taking advantage of things, like attending to things that are less politically salient, engaging in scientific and technological research that doesn’t deliver as immediate of a result. Things like political advocacy, where you’re hoping to leverage the resources of governments that are more flush and capable.
Carl Shulman: Yeah, so in that space, clean energy research and advocacy for clean energy research seems relatively strong. Some people in the effective altruism community have looked into that and raised funds for some work in that area. I think Bill Gates… The logic for that is that if you have some small country and they reduce their carbon emissions by 100%, then that’s all they can do, costs a fair amount, and it doesn’t do much to stop emissions elsewhere, for example, in China and India and places that are developing and have a lot of demand for energy. And if anything, it may… If you don’t consume some fossil fuels, then those fossil fuels can be sold elsewhere.
Carl Shulman: Whereas if you develop clean energy tech, that changes the decision calculus all around the world. If solar is cheaper than coal — and it’s already making great progress over much of the world — and then hopefully with time, natural gas, then it greatly alleviates the difficulty of the coordination problem. It makes the sacrifice needed to do it less. And if you just look at how little is actually spent on clean energy research compared to the benefits and compared to the successes that have already been achieved, that looks really good. So if I was spending the incremental dollar on reducing climate change, I’d probably want to put it more towards clean energy research than immediate reductions of other sorts, things like planting trees or improving efficiency in a particular building. I more want to solve that global public goods problem of creating the technologies that will better solve things.
Carl Shulman: So continuing progress on solar, on storage, you want high-voltage transmission lines to better integrate renewables. On nuclear, nuclear has enormous potential. And if you look at the cost for France to build an electrical grid based almost entirely on nuclear, way back early in the nuclear era, it was quite manageable. In that sense, for electrical energy, the climate change problem is potentially already solved, except that the regulatory burdens on nuclear are so severe that it’s actually not affordable to construct new plants in many places. And they’re held to standards of safety that are orders of magnitude higher than polluting fossil fuels. So enormous numbers of people are killed, even setting aside climate change, just by particulate matter and other pollution from coal, and to a lesser extent, natural gas.
Carl Shulman: But for the major nuclear accidents, you have things like Fukushima, where the fatalities from the Fukushima release seem to be zero, except for all the people who were killed panicking about the thing due to exaggerated fears about the damage of small radiation levels. And we see large inconsistencies in how people treat radiation levels from different sources versus… You have more radiation when you are at a higher altitude, but we panic many orders and orders of magnitude less about that sort of thing. And the safety standards in the US basically require as safe as possible, and they don’t stop at a level of safer than all the alternatives or much better.
Carl Shulman: So we’re left in a world where nuclear energy could be providing us with largely carbon-free power that could be converted into other things. And inventing better technologies for it could help, but given the regulatory regime, it seems like they would again have their costs driven up accordingly. So if you’re going to try and solve nuclear to fight climate change, I would see the solution as more on the regulatory side and finding ways to get public opinion and anti-nuclear activist groups to shift towards a more pro-climate, pro-nuclear stance.
Solutions for nuclear weapons
Carl Shulman: I found three things that really seemed to pass the bar of “this looks like something that should be in the broad EA portfolio” because it’s leveraged enough to help on the nuclear front.
Carl Shulman: The first one is better characterizing the situation with respect to nuclear winter. And so a relatively small number of people had done work on that, with not much followup in subsequent periods. Some of the original authors did return to it over the last few decades, writing some papers using modern climate models (which have been developed a lot because of concern about climate change, and just general technological advance) to try and refine those estimates. Open Philanthropy has provided grants funding some of that followup work, to better assess the magnitude of the risk, how likely nuclear winter is from different situations, and to estimate the damages more precisely.
Carl Shulman: It’s useful in two ways. First, the magnitude of the risk is like an input for folk like effective altruists to decide how much effort to put into different problems and interventions. And then secondly, better clarifying the empirical situation can be something that can help pull people back from the nuclear brink.
Carl Shulman: [The second is] just the sociopolitical aspects of nuclear risk estimation. How likely is it from the data that we have that things escalate? And so there are the known near misses that I’m sure have been talked about on the show before. Things like the Cuban Missile Crisis, Able Archer in the 80s, but we have uncertainty about what would have been the next step if things had changed in those situations? How likely was the alternative? The possibility that this sub that surfaced in response to depth charges in the Cuban Missile Crisis might’ve instead fired its weapons.
Carl Shulman: And so, we don’t know, though. Both Kennedy and Khrushchev (insofar as we have autobiographical evidence, and of course they might lie for various reasons and for their reputations) said they were really committed to not having a nuclear war then. And when they estimated and worried that there was something like a one in two or one in three chance of disaster, they were also acting with uncertainty about the thinking of their counterparts.
Carl Shulman: We do have still a fairly profound residual uncertainty about the likelihood of escalating from the sort of triggers and near misses that we see. We don’t know really how near they are because of the difficulty in estimating these remaining things. And so I’m interested in work that better characterizes those sorts of processes and the psychology and what we can learn from as much data as exists.
Carl Shulman: Number three… so, the damage from nuclear winter on neutral countries is anticipated to be mostly by way of making it hard to grow crops. And all the analyses I’ve seen, and my conversations with the authors of the nuclear winter papers, suggest there’s not a collapse so severe that no humans could survive. And so there would be areas far from the equator that might do relatively well, New Zealand for instance, and fishing would be possible. I think you’ve had David Denkenberger on the show.
Carl Shulman: So I can refer to that. And so that’s something that I came across independently relatively early on when I was doing my systematic search through what are all of the x-risks? What are the sort of things we could do to respond to each of them? And I think that’s an important area. I think it’s a lot more important in terms of saving the lives of people around today than it is for existential risk. Because as I said, sort of directly starving everyone is not so likely but killing billions seems quite likely. And even if the world is substantially able to respond by producing alternative foods, things like converting biomass into food stocks, that may not be universally available. And so poor countries might not get access if the richest countries that have more industrial capacity are barely feeding themselves. And so the chance of billions dying from such a nuclear winter is many times greater than that of extinction directly through that mechanism.
Gain-of-function research
Carl Shulman: So, the safety still seems poor, and it’s not something that has gone away in the last decade or two. There’ve been a number of mishaps. Just in recent years, for example, those multiple releases of, or infections of SARS-1 after it had been extirpated in the wild. Yeah, I mean, the danger from that sort of work in total is limited by the small number of labs that are doing it, and even those labs most of the time aren’t doing it. So, I’m less worried that there will be just absolutely enormous quantities of civilian research making ultra-deadly pandemics than I would about bioweapons programs. But it does highlight some of the issues in an interesting way.
Carl Shulman: And yeah, if we have an infection rate of one in 100 per worker-year, or one in 500 per laboratory-year, of a worker being infected with a new pandemic pathogen… And in a lot of these leaks, yeah, someone else was infected too. Usually not many, because they don’t have a high enough R naught. So yeah, you might say on the order of one in 1,000 per year of work with this kind of thing for an escape, and then there’s only a handful of labs doing this kind of thing.
Carl Shulman: So, you wouldn’t have expected any catastrophic releases to have happened yet reliably, but also if you scale this up and had hundreds of labs doing pandemic pathogen gain-of-function kind of work, where they were actually making things that would themselves be ready to cause a pandemic directly, yeah, I mean that cumulative threat could get pretty high…
Carl Shulman: So I mean, take it down to a one in 10,000 leak risk and then, yeah, looking at COVID as an order of magnitude for damages. So, $10 trillion, several million dead, maybe getting around 10 million excess dead. And you know, of course these things could be worse, you could have something that did 50 or 100 times as much damage as COVID, but yeah, so 1/10,000th of a $10 trillion burden or 10 million lives, a billion dollars, 1,000 dead, that’s quite significant. And if you… You could imagine that these labs had to get insurance, in the way that if you’re going to drive a vehicle where you might kill someone, you’re required to have insurance so that you can pay to compensate for the damage. And so if you did that, then you might need a billion dollars a year of insurance for one of these labs.
Carl Shulman: And now, there are benefits to the research that these labs do. But it hasn’t been particularly helpful in this pandemic, and critics argue that this is a very small portion of all of the work that can contribute to pandemic response, and so it’s not particularly beneficial. I think there’s a lot to that, but regardless of that, it seems like there’s no way that you would get more defense against future pandemics by doing one of these gain-of-function experiments that required a billion dollars of insurance, than you would by putting a billion dollars into research that doesn’t endanger the lives of innocent people all around the world.
Rob Wiblin: Something like funding vaccine research?
Carl Shulman: Yeah. It would let you do a lot of things to really improve our pandemic forecasting, response, therapeutics, vaccines. And so, it’s hard to tell a story where this is justified. And it seems that you have an institutional flaw where people approve these studies and then maybe put millions of dollars of grant money into them, and they wouldn’t approve them if they had to put in a billion dollars to cover the harm they’re doing to outside people. But currently, our system doesn’t actually impose appropriate liability or responsibility, or anything, for those kinds of impacts on third parties. There are a lot of rules and regulations and safety requirements, and duties to pay settlements to workers who are injured in the lab, but there’s no expectation of responsibility for the rest of the world. Even if there were, it would be limited because it’d be a Pascal’s wager sort of thing: if you’re talking about a one in 1,000 risk, well, you’d be bankrupt.
Carl Shulman: Maybe the US government could handle it, but for the individual decision-makers, it’s just very unlikely to come up in their tenure, and certainly not from a particular grant or a particular study. It’s like if there were some kinds of scientific experiments that emitted massive, massive amounts of pollution and that pollution was not considered at all in whether to approve the experiments: you’d wind up getting far too many of those experiments done, even if there were some that were worth doing at that incredible price.
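To make the insurance-pricing logic above concrete, here is a minimal sketch using the round numbers from the conversation. The one-in-10,000 leak probability is Carl's illustrative figure, not a measured rate, and COVID-19 is used as the damage benchmark.

```python
# Expected-damage sketch behind the "billion dollars a year of insurance" figure above.
leak_prob_per_lab_year = 1e-4    # "take it down to a one in 10,000 leak risk" per lab-year (illustrative)
pandemic_cost = 10e12            # ~$10 trillion in economic damage, using COVID-19 as the benchmark
pandemic_deaths = 10e6           # ~10 million excess deaths, again using COVID-19 as the benchmark

expected_cost = leak_prob_per_lab_year * pandemic_cost       # ~$1 billion per lab-year
expected_deaths = leak_prob_per_lab_year * pandemic_deaths   # ~1,000 deaths per lab-year

print(f"Expected damage per lab-year: ~${expected_cost / 1e9:,.0f} billion")
print(f"Expected deaths per lab-year: ~{expected_deaths:,.0f}")
```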
Suspicious convergence around x-risk reduction
Rob Wiblin: You want to say that even if you don’t care much about [the long-term future], you should view almost the same activities, quite similar activities, as nonetheless among the most pressing things that anyone could do… And whenever one notices a convergence like that, you have to wonder, is it not perhaps the case that we’ve become really attached to existential risk reduction as a project? And now we’re maybe just rationalizing the idea that it has to be the most important?
Carl Shulman: Yeah. Well, the first and most important thing is I’m not claiming that, at the limit, exactly the same things are optimal by all of these different standards. I’d say that the two biggest factors are, first, that if all you’re adjusting is something like population ethics, while holding fixed things like being willing to take risks on low-probability things, using quantitative evidence, having the best picture of what’s happening with future technologies, all of that, then you’re sharing so much that you’re already moving away from the standard practice a lot, and winding up in this narrow space. And then second is just that, if it’s true that in fact the world is going to be revolutionized and potentially ruined by disasters involving some of these advanced technologies over the century, then that’s just an enormous, enormous thing. And you may take different angles on how to engage with that depending on other considerations and values.
Carl Shulman: But the thing itself is so big and such an update that you should be taking some angle on that problem. And you can analogize. So say you’re Elon Musk, and Musk cares about climate change and AI risk and other threats to the future of humanity, and he’s putting most of his time into Tesla. And you might think that AI poses a greater risk of human extinction over the century. But if you have a plan to make self-driving electric cars that will be self-financing and make you the richest person in the world, that will then let you fund a variety of other things. It could be a great idea even if Elon Musk just wanted to promote, you know, the fine arts, because being the richest person in the world is going to set you up relatively well for that. And so similarly, if indeed AI is set to be one of the biggest things ever in this century, and to both set the fate of existing beings and set the fate of the long-term future of Earth-originating civilization, then it’s so big that you’re going to take some angle on it.
Carl Shulman: And different values may lead you to focus on different aspects of the problem. If you think, well, other people are more concerned about this aspect of it. And so maybe I’ll focus more on the things that could impact existing humans, or I’ll focus more on how AI interacts with my religion or national values or something like that. But yeah, if you buy these extraordinary premises about AI succeeding at its ambitions as a field, then it’s so huge that you’re going to engage with it in some way.
Rob Wiblin: Yeah. Okay. So even though you might have different moral values, just the empirical claims about the impact that these technologies might have potentially in the next couple of decades, setting aside anything about future generations, future centuries. That’s already going to give you a very compelling drive, almost no matter what your values are, to pay attention to that and what impact it might have.
Carl Shulman: Although that is dependent on the current state of neglectedness. If you expand activities in the neglected area by tenfold, a hundredfold, a thousandfold, then its relative attractiveness compared to neglected opportunities in other areas will plummet. And so I would not expect this to continue if we scaled up to the level of investment in existential risk reduction that, say, Toby Ord talks about. And then you would wind up with other things that maybe were exploiting a lot of the same advantages. Things like taking risks, looking at science, et cetera, but in other areas. Maybe really, really super-leveraged things in political advocacy for the right kind of science, or maybe using AI technologies to help benefit the global poor today, or things like that.
Articles, books, and other media discussed in the show
Carl’s work
- Reflective Disequilibrium by Carl Shulman
- Sharing the World with Digital Minds by Carl Shulman and Nick Bostrom
Books
- The Precipice: Existential Risk and the Future of Humanity by Toby Ord
- The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg
- Democracy for Realists: Why Elections Do Not Produce Responsive Government by Christopher H. Achen and Larry M. Bartels
Articles
- Are We Living at the Hinge of History? by William MacAskill
- The Past and Future of Economic Growth: A Semi-Endogenous Perspective by Chad I. Jones
- The End of Economic Growth? Unintended Consequences of a Declining Population by Chad I. Jones
- Human error in high-biocontainment labs: a likely pandemic threat by Lynn Klotz
- Moratorium on Research Intended To Create Novel Potential Pandemic Pathogens by Marc Lipsitch and Thomas V. Inglesby
- Learning to summarize from human feedback by Nisan Stiennon, et al.
80,000 Hours articles and episodes
- We could feed all 8 billion people through a nuclear winter. Dr David Denkenberger is working to make it practical
- Mark Lynas on climate change, societal collapse & nuclear energy
- Andy Weber on rendering bioweapons obsolete and ending the new nuclear arms race
- Dr Owen Cotton-Barratt on why daring scientists should have to get liability insurance
- If you care about social impact, why is voting important?
Everything else
- Draft report on AI timelines by Ajeya Cotra
- What should we learn from past AI forecasts? by Luke Muehlhauser
- Clean Energy Innovation Policy by Let’s Fund
- Why are nuclear plants so expensive? Safety’s only part of the story by John Timmer
- Did Pox Virus Research Put Potential Profits Ahead of Public Safety? by Nell Greenfieldboyce
- Pedestrian Observations: Why American Costs Are So High by Alon Levy
- Wally Thurman on Bees, Beekeeping, and Coase with Russ Roberts on EconTalk
Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and why you have to be really sure to fit your air filters the right way around if you work at a bioweapons lab. I’m Rob Wiblin, Head of Research at 80,000 Hours.
You might not have heard of today’s guest, Carl Shulman, but he’s a legend among professional existential risk and effective altruist researchers.
This is the first long interview Carl has ever given, not that you’d ever be able to tell.
In this episode Carl makes the case that trying to tackle global catastrophic risks like AI, pandemics and nuclear war is common sense and doesn’t rest on any unusual philosophical views at all. In fact it passes with flying colours the same boring cost-benefit analyses society uses to decide whether to build bridges and run schools, even if you only think about this generation.
Carl has spent decades reading deeply about these issues and trying to forecast how the future is going to play out, so we then turn to technical details and historical analogies that can help us predict how probable and severe various different threats are — and what specifically should be done to safeguard humanity through a treacherous period.
This is a long conversation so you might like to check the chapter listings, if your podcasting app supports them. Some sections you might be particularly interested in have titles like:
- False alarms about big risks in the past
- Solutions for climate change
- Defensive epistemology and
- Strongest argument against this being a really pivotal time
If you like this episode tell a friend about the show. You can email us ideas and feedback at podcast at 80000hours dot org.
Without further ado I bring you Carl Shulman.
The interview begins [00:01:34]
Rob Wiblin: Today I’m speaking with Carl Shulman. Carl studied philosophy at the University of Toronto and Harvard University and then law at NYU. Since 2012 he’s been a research associate at Oxford University’s Future of Humanity Institute, where he’s published on risks from AI as well as decision theory. He consults for the Open Philanthropy Project among other organizations and blogs at Reflective Disequilibrium.
Rob Wiblin: While he keeps a low profile, Carl has had as much influence on the conversation about existential risks as anyone and he’s also just one of the most broadly knowledgeable people that I know. So, thanks for coming on the podcast, Carl.
Carl Shulman: Thanks, Rob. It’s a pleasure. I’m a listener as well.
Rob Wiblin: I hope we’ll get to talk about reasons that you’re skeptical of strong longtermism as a philosophical position, but why you nevertheless focus a lot of your work on preventing existential risks. But first, what are you working on at the moment and why do you think it’s important?
Carl Shulman: Yeah, right now the two biggest projects I’m focusing on are, one, some work with the Open Philanthropy Project on modeling the economics of advanced AI. And then, at the Future of Humanity Institute I’m working with Nick Bostrom on a project on the political and moral status of digital minds; that is, artificial intelligence systems that are potentially deserving of moral status and how we might have a society that integrates both humans and such minds.
Rob Wiblin: Nice. Yeah, I saw you had a publication with Nick Bostrom about that last year. I haven’t had a chance to read it yet. I think Bostrom just announced on his website that he’s thinking of writing a book about this question of how do you integrate potentially morally valuable digital minds into society.
Carl Shulman: Yes, we’ve been working on this book project for a while and that OUP book chapter spun out of that, Sharing the World with Digital Minds. The central theme of that paper is largely that on a lot of different accounts of wellbeing, digital minds in one way or another could be more efficient than human beings. So, that is, for a given quantity of material resources they could have a lot more wellbeing come out of that. And then, that has implications about how coexistence might work under different moral accounts.
Rob Wiblin: Nice. Yeah, maybe we’ll get a chance to talk about that later on in the conversation.
A few reasons Carl isn’t excited by strong longtermism [00:03:47]
Rob Wiblin: But yeah first off, as a preface to the body of the conversation, I’d like to kind of briefly get a description of your, as I understand it, lukewarm views about strong longtermism.
Rob Wiblin: And I guess strong longtermism is, broadly speaking, the view that the primary determinant of the kind of moral value of our actions is how those actions affect the very long-term future. Which, I guess, is like more than 100 years, maybe more than 1,000 years, something like that. What’s the key reason you’re reluctant to fully embrace strong longtermism?
Carl Shulman: Yeah. Well, I think I should first clarify that I’m more into efforts to preserve and improve the long-term future than probably 99% of people. I’d say that I would fit with my colleague Toby Ord’s description in his book The Precipice in thinking that preventing existential risk is one of the great moral issues of our time. Depending on how the empirics work out, it may even be the leading moral issue of our time. But I certainly am not going to say it is the only issue or to endorse a very, sort of, strong longtermist view that, say, even a modest increment to existential risk reduction would utterly overwhelm considerations on other perspectives.
Rob Wiblin: Yeah. Yeah, I suppose a lot of your philosophy colleagues or people at the Future of Humanity Institute are maybe more open to the idea that the long-term future is really dominant. Is there any kind of way of describing perhaps why you part ways with them on that?
Carl Shulman: Yeah, I mean, I think the biggest reason is that there are other normative perspectives that place a lot of value on other considerations. So, some examples: duties of justice where if you owe something in particular, say, to people that you have harmed or if you owe duties of reciprocity to those who have helped you or to keep a promise, that sort of thing. Filial piety — when you have particular duties to your parents or to your children.
Carl Shulman: And if we give some weight to these sorts of other views that would not embrace the most fanatical versions of longtermism, and we say they’re going to have some say in our decisions, some weight on our behavior, then that’s inevitably going to lead to some large ratios.
Carl Shulman: If you ever do something out of duty to someone you’ve harmed or you ever do something on behalf of your family, then there’s a sense in which you’re trading off against some vast potential long-term impact. It seems like the normative strength of the case for longtermism is not so overwhelming as to say all of these things would be wiped out.
Rob Wiblin: Yeah. So, I guess, some people, maybe they try to get all of the different kind of moral considerations or the different theories that they place some weight on, and then compare them all on the same scale, like a possible scale being how much wellbeing did you create?
Rob Wiblin: Whereas I guess you’re thinking that your kind of moral reasoning approach takes different moral theories or different moral considerations and then slightly cordons them off and doesn’t want to put them on a single scale from like zero to a very large number, where like one theory that values one particular thing could end up completely swamping the other, because it seems like you have a really good opportunity within that area. It’s maybe like there’s buckets that do have some flexibility between moving your effort between them, but you’re not going to let any one moral concern that you have just get all of the resources.
Carl Shulman: Yeah. And the literature on moral uncertainty talks about some of the challenges of, say, interconverting between these. Is filial piety really utilitarianism, but with a different weight for your family, or does it conceptualize the world differently? Is it more about your obligations as an agent, or should we think about it as something like the strength of different moral sentiments? And so there are places where each of those approaches seem to have advantages.
Carl Shulman: But in general, if you’re trying to make an argument to say that A is a million times as important as B, then any sort of hole in that argument can really undermine it. Because if you say, well, A is a million times better on this dimension, but then you mix in even a little probability or a modest credence that actually this other thing that B is better on dominates, then the argument is going to be quickly attenuated.
Rob Wiblin: Oh yeah. I guess the ratio can’t end up being nearly so extreme.
Carl Shulman: Yeah. And I think this is, for example, what goes wrong in a lot of people’s engagement with the hypothetical of Pascal’s mugging.
Longtermism isn’t necessary for wanting to reduce big x-risks [00:08:21]
Rob Wiblin: Yeah. Okay. Yeah. There’s a whole big debate that we could have here. And we might return to some of this moral philosophy later on. But talking about this too much might be a little bit ironic because kind of a key view of yours, which we’re hopefully going to explore now, is that a lot of these kind of moral philosophy issues matter a lot less in determining what people ought to do than perhaps is commonly supposed and perhaps matter less relative to the amount of attention that they’ve gotten on this podcast, among other places.
Rob Wiblin: So yeah, basically as I understand it, you kind of believe that most of the things that the effective altruism or longtermism communities are doing to reduce risks of catastrophic disaster can basically be justified on really mundane cost-benefit analysis that doesn’t have to place any special or unusual value on the long-term future.
Rob Wiblin: I guess the key argument is just that the probability of a disaster that kills a lot of people, most of the people who are alive today, that that risk is just alarmingly high and that it could be reduced at an acceptable cost. And I recently saw you give a talk where you put some numbers on this.
Rob Wiblin: And I guess, in brief the argument runs, kind of, the US government in terms of its regulatory evaluations is willing to pay up to something like $4 million to save the life of an American. I think it varies depending on the exact government agency, but that’s in the ballpark, which then means that saving all US lives at any given point in time would be worth $1,300 trillion. So an awful lot of money.
Rob Wiblin: And then, if you did believe that the risk of human extinction over the next 50 or 100 years is something like one in six as Toby Ord suggests is a reasonable figure in his book The Precipice, and you think you could reduce that by just 1%, so 1% of that one-sixth, then that would be worth spending up to $2.2 trillion for the US government to achieve just in terms of the American lives that would be saved.
Rob Wiblin: And I guess you think it would cost a lot less than that to achieve that level of risk reduction if the money was spent intelligently. So it just simply passes a government cost-benefit test with flying colors, with like a very big benefit-to-cost ratio. Yeah, what do you think is the most important reason that this line, this very natural line of thinking, hasn’t already motivated a lot more spending on existential risk reduction?
Carl Shulman: Yeah, so just this past year or so we’ve had a great illustration of these dynamics with COVID-19. Now, natural pandemics have long been in the parade of global catastrophic risk horribles and the risk is relatively well understood. There’ve been numerous historical pandemics. The biggest recent one was the 1918–1919 influenza, which killed tens of millions of people in a world with a smaller population.
Carl Shulman: And so I think we can look at what are all of the reasons why we didn’t adequately prepare for the COVID-19 pandemic, which has now cost on the order of $10 trillion, with excess mortality approaching 10 million. And I think a lot of that is going to generalize to engineered pandemics, to risks from advanced artificial intelligence, and so on.
Why we don’t adequately prepare for disasters [00:11:16]
Rob Wiblin: Okay. Yeah. So maybe let’s go one by one, like what’s a key reason or maybe the most significant reason in your mind for why we didn’t adequately prepare for COVID-19?
Carl Shulman: Yeah, so I’d say that the barriers were more about political salience and coordination across time and space. So there are examples where risks that are even much less likely have become politically salient and there’s been effort mobilized to stop them. Asteroid risk would fall into that category. But yeah, the mobilization for pandemics has not attained that same sort of salience, and the measures for it are more expensive than the measures to stop asteroids.
Rob Wiblin: Yeah. So do you think, like it’s primarily driven by the idea that, so just normal people like me or normal voters in the electorate, there’s only so many things that we can think about at once. And we only have so much time to think about politics and society as a whole. And that means that we tend to get buffeted around by kind of what’s in the news, like what’s a topical risk or problem to think about now?
Rob Wiblin: That means it’s very easy for regular periodic risk like a major pandemic to just kind of fall off of the agenda. And then it’s not salient to voters. And so as a result, politicians and bureaucrats don’t really see it as in their personal or career advantage to try to prepare for something that they might be blamed for just wasting money, because it’s not really a thing that people especially are talking about.
Carl Shulman: Yeah, I think there’s a lot of empirical evidence on these political dynamics. There’s a great book called Democracy for Realists that explores a lot of this empirical literature. And so you see things like, after a flood or other natural disaster, there’s a spike in enthusiasm for spending to avert it. But it’s not as though right after one flood or hurricane the chances of another are greatly elevated. It’s a random draw from the distribution, but like people remember it having just happened.
Carl Shulman: And right now in the wake of COVID there’s a surge of various kinds of biomedical and biosecurity spending. And so we’ll probably move somewhat more towards a rational level of countermeasures. Though, I think, still we look set to fall far short as we have during the actual pandemic. And so you’ve got the dynamics of that sort.
Carl Shulman: And you see that likewise with economic activity. So voters care about how the economy is doing, but they’re much more influenced by the economy in the few months right before the election. And so even if there’ve been four years of lesser economic growth, it makes sense for politicians to throw out things like sudden increases in payments and one-time expenditures, or have monetary policy differentially favor economic activity right around then.
Carl Shulman: And so, yeah, so ultimately it’s difficult for the average voter to estimate and understand how large these respective risks are. Generally a poor grip there. They don’t naturally create strong incentives in advance for politicians to deal with it because it’s not on their minds. And then you can get some reaction right after an event happens, which may be too late. And it doesn’t last well enough.
Rob Wiblin: Yeah. Yeah. Okay. So that’s one broad category of why we failed to prepare sufficiently for COVID-19 and I guess probably why we failed to prepare adequately for other similar periodic disasters. Yeah. Is there any other kind of broad category that’s important that people should have in mind?
Carl Shulman: Certainly there’s the obvious collective action problems. These come up a lot with respect to climate change. So the carbon emissions of any given country and certainly any individual disperse globally. And so if you as an individual are not regulated in what you do, you emit some carbon to do something that’s useful to you and you bear, depending on your income and whatnot, on the order of one in a billion, one in 10 billion, of the cost that you’re imposing on the world.
Carl Shulman: And that’s certainly true for pandemics. So a global pandemic imposes a risk on everyone. And if you invest in countermeasures and then you’re not going to be able to recover those costs later, then yeah, that makes things more challenging.
Carl Shulman: You might think this is not that bad because there are a few large countries. So the US or China, you’re looking at like order of one-sixth of the world economy. So maybe they only have a factor of six, but then within countries you similarly have a lot of internal coordination problems.
Carl Shulman: So if politicians cause a sacrifice now and most of the benefits come in the future — leave future generations aside, we can always talk about caring about future generations — politicians care about, like, the next electoral cycle. And so, yeah, they’re generally focused on winning their next election. And then it’s made worse by the fact that as you go forward, the person in charge may be your political opponent.
Rob Wiblin: Yeah. So anything you do to prepare for a pandemic that happens to fall during your opponent’s term is bad from your point of view, or bad from this narrow, selfish point of view.
Carl Shulman: Yeah. And we can see with the United States where you had a switchover of government in the midst of the pandemic and then attitudes on various policies have moved around in a partisan fashion.
Rob Wiblin: Yeah. So yeah, this coordination problem seems really central in the case of climate change. Because if you’re a country that imposes a carbon tax and thereby reduces its emissions, you’ve borne almost all of the cost of that, but you only get potentially quite a small fraction of the benefit depending on how large the country is.
Rob Wiblin: With the pandemic, maybe it seems to have this effect a bit less because you can do lots of things that benefit your country at the national level, like having a sensible policy to stop people with the disease from flying in, to have stockpiles of protective equipment and so on.
Rob Wiblin: And I guess there’s also business models where you can recover a lot of your costs. So if you’re a government or a university or research group that is working on biomedical science that could be useful in a pandemic, if there is a pandemic, you can potentially recover a bunch of the costs through patents and selling that kind of stuff.
Rob Wiblin: So is that maybe why you mentioned this as like a secondary factor in the pandemic case, whereas it might be kind of more central in the climate change case?
Carl Shulman: Yeah. I mean, again, I think you can trace a lot of what goes wrong to externalities, but they're more at the level of political activity — so the standard rational irrationality story. For a typical voter, it's extremely unlikely that they will be the deciding vote. There's an enormous expected value if you are the deciding vote, because then it can change the whole country. And you've written some nice explainer articles on this.
Rob Wiblin: Yeah. Yeah, I’ll stick up a link to that.
Carl Shulman: But that means there’s very little private incentive to research and understand these issues. And more of the incentive comes from things like affiliating with your team, seeming like a good person around your local subculture. And those dynamics then are not super closely attuned to what people are going to like as an actual outcome. So people will often vote for things that wind up making the country that they live in less desirable to them.
Carl Shulman: And so they would behave differently if they were deciding, “Which country should I move to because I like how things are going there?” versus “What am I going to vote on?” And then a lot of the policies that could have staunched the pandemic much earlier seem to fall into that category.
International programs to stop asteroids and comets [00:18:55]
Rob Wiblin: Interesting. Yeah. So I guess despite all of these potential pitfalls, the US has indeed run at least one pretty big program that was in large part to reduce the risk of human extinction. And that’s the famous effort by NASA I think in mostly the 90s maybe but also the 2000s to identify all the larger asteroids and comets in order to see if any of them might hit the earth at any point soon. How did that one come about?
Carl Shulman: Yeah, so this is an interesting story. In earlier decades there had been a lot of interest in the Cretaceous extinction that laid waste to the dinosaurs and most of the large land animals. And it had become clear, from finding in Mexico the actual site where the asteroid hit, that this was the cause, which helped to exclude other stories like volcanism.
Carl Shulman: And so it had become especially prominent and more solid that, yeah, this is a thing that happened. It was the actual cause of one of the most famous of all extinctions because dinosaurs are very personable. Young children love dinosaurs. And yeah, and then this was combined with astronomy having quite accurate information about the distribution of asteroid impacts. You can look at the moon and see craters of different sizes layered on top of one another. And so you can get a pretty good idea about how likely the thing is to happen.
Carl Shulman: And when you do those calculations, you find, well, on average you’d expect about one in a million centuries there would be a dinosaur killer–scale asteroid impact. And if you ask, “Well, how bad would it be if our civilization was laid waste by an asteroid?” Then you can say, well it’s probably worth more than one year of GDP… Maybe it’s worth 25 years of GDP! In which case we could say, yeah, you’re getting to several quadrillion dollars. That is several thousand trillion dollars.
Carl Shulman: And so the cost benefit works out just fine in terms of just saving American lives at the rates you would, say, arrange highway construction to reduce road accidents. So one can make that case.
Carl Shulman: And then this was bolstered by a lot of political attention that Hollywood helped with. So there were several films — Deep Impact and Armageddon were two of the more influential — that helped to draw the thing more into the popular culture.
Carl Shulman: And then the final blow, or actually several blows, was Comet Shoemaker-Levy 9 impacting Jupiter and tearing Earth-sized holes in Jupiter's atmosphere, which provided maybe even more salience and visibility. And then the ask was: we need some tens of millions of dollars, going up to $100 million, to take our existing telescope and space assets and search the sky to find any large asteroid on an impact course in the next century. And if you had found one, then you would mobilize all of society to stop it, and it's likely that would be successful.
Carl Shulman: And so, given that ask and given the strong arguments they could make — they could say the science is solid, and it was something the general public understood — appropriators were willing to put up costs on the order of $100 million and basically solve this problem for the next century. And they found, indeed, there are no asteroids on that collision course. And if there had been, then we would have mobilized our whole society to stop it.
Rob Wiblin: Yeah. Nice. So I suppose, yeah, the risk was really very low, if it was one in… one in 100 million centuries, did you say?
Carl Shulman: One in a hundred million per year. One in a million per century.
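To make the arithmetic Carl is gesturing at concrete, here is a minimal back-of-the-envelope sketch in Python. The impact rate, the 25-years-of-GDP damage figure, and the roughly $100 million survey cost come from the conversation; the ~$100 trillion world GDP figure, and the assumption that the survey retires essentially all of this century's risk, are illustrative assumptions rather than anything stated here.

```python
# Back-of-the-envelope version of the asteroid-survey cost-benefit argument.
# Figures from the conversation: impact chance ~1 in a million per century,
# damage ~25 years of GDP, survey cost ~$100 million.
# Illustrative assumptions: world GDP ~$100 trillion/year, and the survey
# removes essentially all of this century's impact risk.

world_gdp = 100e12                  # rough annual world output (assumption)
damage = 25 * world_gdp             # "maybe it's worth 25 years of GDP"
p_impact_per_century = 1e-6         # one in a million per century
survey_cost = 100e6                 # the ~$100 million appropriation

expected_loss_per_century = p_impact_per_century * damage

print(f"Damage if it happens:       ${damage / 1e15:.1f} quadrillion")
print(f"Expected loss this century: ${expected_loss_per_century / 1e9:.1f} billion")
print(f"Benefit-to-cost ratio:      {expected_loss_per_century / survey_cost:.0f}:1")
# Roughly $2.5 quadrillion of damage, ~$2.5 billion of expected loss per
# century, so about a 25:1 return on the survey under these assumptions,
# before even valuing lives at government willingness-to-pay rates.
```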
Rob Wiblin: Okay, yeah. So the risk was pretty low, but not so low that we would want to completely ignore it. And they spent up to $100 million. What does that then imply about their willingness to pay to prevent extinction with 100% probability? Does it roughly match up with the $1,300 trillion number that we were talking about earlier?
Carl Shulman: Yeah, so it’s a bit tricky. So if you look to things like World War II, so in World War II the share of Japanese GDP going to the war effort went as high as 70%.
Rob Wiblin: Wow.
Carl Shulman: And that was one of the higher ones, but still, across all the major combatants you had on the order of half of all productive effort going to the war. So it was clearly possible to have massive social mobilizations, and clearly the threat of having everyone in your country killed is even worse —
Rob Wiblin: Motivating, yeah.
Carl Shulman: — than the sort of typical result of losing a war. I mean, certainly Japan is doing pretty well despite losing World War II. And so you’d think we could have maybe this mass mobilization.
Costs and political incentives around COVID [00:23:52]
Carl Shulman: And then there’s the question though of, what do you need to trigger that? And so wars were very well understood. There was a clear process of what to do about them, but other existential risks aren’t necessarily like that. So even with COVID, so the aggregate economic cost of COVID has been estimated on the order of $10 trillion and many trillions of dollars have been spent by governments on things like stimulus payments and such to help people withstand the blow.
Carl Shulman: But then the actual expenditures on, say, making the vaccines or getting adequate supplies of PPE have been puny by comparison. There have even been complaints that someone like Pfizer might make a $10 profit on a vaccine dose that generates more than $5,000 of value. And we're having this gradual rollout of vaccines to the world.
Carl Shulman: Those vaccines were mostly discovered at the very beginning of 2020. And so if we were having that wartime mobilization, then you'd think that all the relevant factories and equipment that could have been converted would have been converted. There are limits to that, since it involves specialized and new technologies. But you'd think that, say, you'd be willing to pay $500 or $2,000 for the vaccine instead of $50, and then make contracts such that providers who can deliver more vaccine faster get more payment, so they can do expensive things to incrementally expedite production.
Carl Shulman: You make sure you have 24/7 working shifts. Hire everyone from related industries and switch them over. Take all of the production processes that are switchable and do them immediately. Even things like just supplying masks universally to the whole population: you do that right away.
Carl Shulman: And you wouldn’t delay because there was some confusion on the part of sort of Western public health authorities about it. You’d say, “Look, there’s a strong theoretical case. A lot of countries are doing it. The value if it works is enormous. So get going producing billions of N95 and higher-grade masks right away.” And certainly give everyone cloth masks right away.
Carl Shulman: And then, by the same token, allow challenge trials so that we would have tested all of the vaccines in the first few months of 2020 and have been rolling them out the whole time. So it’s like, you could have a wartime mobilization that would have crushed COVID entirely. And we actually had financial expenditures that were almost wartime level, but almost none of them were applied to the task of actually defeating the virus, only just paying the costs of letting people live with the damage.
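A minimal sketch of the per-dose arithmetic Carl describes, using the numbers from the conversation (more than $5,000 of value per dose, prices of $50, $500, or $2,000, and an aggregate COVID cost on the order of $10 trillion). The two-year spreading of that cost is an illustrative assumption used only to give a rough monthly figure.

```python
# Why paying far more per dose, and paying for speed, can easily be worth it.
# Per-dose value and the candidate prices are from the conversation.

value_per_dose = 5_000             # rough social value generated per dose
for price in (50, 500, 2_000):     # prices discussed in the conversation
    surplus = value_per_dose - price
    print(f"pay ${price:>5}/dose -> society still nets ~${surplus:,} per dose")

# Why speed dominates price: if the pandemic's ~$10 trillion aggregate cost
# is spread over roughly two years (an illustrative assumption), each month
# of delay costs on the order of $400 billion globally.
total_cost, months = 10e12, 24
print(f"~${total_cost / months / 1e9:.0f} billion of global cost per month of pandemic")
```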
Rob Wiblin: Stay home, yeah. It's an interesting phenomenon: I can't think of many countries that didn't seemingly spend more money just on economic stimulus rather than on actually preventing the pandemic. And I'm not sure that's something I would have predicted ahead of time.
Rob Wiblin: Because obviously, yes, there are lobby groups that are very interested in affecting these kinds of stimulus bills so that they get more of the goodies. But at the same time, it seems like all of them also have an enormous interest in preventing the pandemic and spending money on vaccines and masks and all of these things that would stop it, because that helps every industry. So I don't quite understand the political economy behind this somewhat peculiar distribution of effort.
Carl Shulman: Yeah, this was one thing that puzzled me as well. And I actually, I lost a bet with Jeffrey Ladish on this. So some months into the pandemic it seemed pretty clear to me that it was feasible to suppress it. And we’d already seen China go from widespread virus and bring it down to near zero. And a number of other countries, like Australia, New Zealand, Thailand, and so on, had done very well keeping it to negligible levels. And so I thought that the political incentives were such that you would see a drive to use some of these massive spending response bills to push harder.
Carl Shulman: You might’ve thought that the Trump administration, which did go forward with Operation Warp Speed — which was tremendous in terms of mobilization compared to responses to past diseases, but still far short of that full wartime mobilization — you might’ve thought they would conclude, look, the way to win a presidential election is to deliver these vaccines in time and to take the necessary steps, things like challenge trials and whatnot. And so the administration failed to do that, but that seems like it’s not just an issue of dysfunctional political incentives. That seems like serious error, at least as far as I can see, serious error, even in responding to the political incentives.
Rob Wiblin: Yeah. And it seems like it wasn't only politicians, with their motive to get reelected, who were holding this back. It was also all kinds of other groups who, as far as I can tell, were thinking about this wrong and had something of a wrong focus. So it seems like the intellectual errors were fairly widespread, at least from my amateur perspective.
Carl Shulman: Yeah. So I think that that’s right, and that problems that are more difficult to understand, that involve less sort of visually obvious connections between cause and effect, they make it harder. And they make it harder at many points in the political process, the policy process, the institutions that have to decide how to respond.
Carl Shulman: Some of them, though, seem like they were pretty much external political constraints. So Operation Warp Speed was paying for the production of vaccine before approval was granted, and that was very valuable. A bunch of heroic people within the government and outside of it helped to cause that to happen, but in doing that they really had to worry about their careers — because, among other things, mRNA vaccines had not been successfully used at scale before. And so there was fear that you will push on this solution, this particular solution will fail, and then your career —
Rob Wiblin: You’ll be to blame.
Carl Shulman: — will be very badly hurt, yeah, and you won't get a comparable career benefit. If you have a one in 10 chance of averting this disaster and a nine in 10 chance of your particular effort not working — although in fact the vaccines looked good relatively early — then that may not be in your immediate interest.
Carl Shulman: And so you have these coordination problems even within the government to do bold action, because bold action in dealing with big disasters, and trying things like probabilistic scientific solutions, means that a lot of the time you're going to get negative feedback and only a few times positive feedback. And if you don't make it possible for those successes to attract enough support and resources to make up for all the failures, then you'll wind up not acting.
Rob Wiblin: Yeah, there’s asymmetry between if things go wrong, then you get fired and live in disgrace. Versus if you save hundreds of thousands of lives, then people kind of pat you on the back, and maybe it’s a slight advance, but it doesn’t really benefit you all that much. That incentive structure really builds in a sort of extreme cowardice, as far as I can tell, into the design of these bureaucracies. I think, yeah, we kind of see that all the time, that they’re not willing to take risks even when they look really good on expected value terms, or they need an enormous amount of external pressure to push them into doing novel and risky things.
Carl Shulman: Yeah. You see some exceptions to that in some parts of the private sector. So in startups and venture capital, it’s very much a culture of swinging for the fences and where most startups will not succeed at scale, but a few will generate astronomical value, much of which will be captured by the investors and startup holders.
Carl Shulman: But even there, it only goes so far, because people don't value money linearly. So they're not going to be indifferent between $400 million with certainty and a 50% chance of $1 billion. And as a result, there's really no institution that is great at handling that sort of rare event.
Carl Shulman: In venture capital it’s helped because investors can diversify across many distinct startups. And so when you make decisions at a high level in a way that propagates nicely down to the lower levels, then that can work. So if you set a broad medical research budget and it’s going to explore hundreds of different options, then that decision can actually be quite likely to succeed in say, staunching the damage of infectious disease.
Carl Shulman: It’s then, transmitting that to the lower levels is difficult because a bureaucrat attached to an individual program, the people recommending individual grants, have their own risk aversion. And you don’t successfully transmit the desire at the top level to we want a portfolio that in aggregate is going to work. It’s hard to transmit that down to get the desired boldness at the very low level. And especially when, if you fail, if you have something that can be demagogued in a congressional hearing. Then yeah, the top-level decision-makers join in the practice of punishing failure more than they reward success among their own subordinates.
Rob Wiblin: Yeah. I mean, I guess it could plausibly not be enough motivation even for entrepreneurs in Silicon Valley, because they don't value the money linearly. But it's so much more extreme for a bureaucrat at the CDC or the FDA, because they may well make zero dollars more, and they might just face a lot of frustration getting it through, plus criticism from their colleagues. At least in Silicon Valley, you also have a kind of prestige around risk-taking and a culture of swinging for the fences that helps to motivate people in addition to just the raw financial return.
Carl Shulman: Yeah, although, of course, that can cut the other way. When you have an incentive structure that rewards big successes and doesn't hurt you for failures, you want to make sure it doesn't also fail to punish you for doing active, large-scale harm. And so this is a concern with some kinds of gain-of-function research that involve creating more pandemic-prone strains of pathogens. Because if something goes wrong, the damage is not just to the workers in the lab, but to the whole world. And so you can't take the position where you just focus on the positive tail and ignore all the zeros, because some of them aren't zeros; they're a big negative tail.
How x-risk reduction compares to GiveWell recommendations [00:34:34]
Rob Wiblin: Yeah, yeah, I’ve got a couple of questions about gain-of-function research later on. But maybe let’s come back to the questions I had about these cost-effectiveness estimates. So I guess we saw based on just the standard cost-benefit analysis style that the US government uses to decide what things to spend money on, at least sometimes when they’re using that frame to decide what to spend money on. A lot of these efforts to prevent disasters like pandemics could well be extremely justified, but there’s also a higher threshold you could use, which is if I had a million dollars and I wanted to do as much good as possible, is this the best way to do good? So I suppose for people who are mostly focused on how the world goes for this generation and maybe the next couple of generations, how does work on existential risk reduction in your view compare to, say, giving to GiveWell-recommended charities that can save the life of someone for something like $5,000.
Carl Shulman: Yeah, I think that’s a higher bar. And the main reason is that governments’ willingness to spend tends to be related to their national resources. So the US is willing to spend these enormous amounts to save the life of one American. And in fact, in a country with lower income, there are basic public health gains, things like malaria bed nets or vaccinations that are not fully distributed. So by adopting the cosmopolitan perspective, you then ask not what is the cost relative to the willingness to pay, but rather anywhere in the world where the person who would benefit the most in terms of happiness or other sorts of wellbeing or benefit. And because there are these large gaps in income, there’s a standard that would be a couple of orders of magnitude higher.
Carl Shulman: Now, that cuts both ways to some extent. A problem that affects countries that have a lot of money can be easier to advocate for. So if you're trying to advocate for increased foreign aid versus advocating for preventing another COVID-19, then for the former you have to convince decision-makers, particularly at the government level, that their countries should sacrifice more for the benefit of others. In aggregate, foreign aid budgets are far too small; they're well under 1%, and the amount that's really directed towards humanitarian benefit, and not tied into geopolitical security ambitions and the like, is limited. So you may be able to get some additional leverage if you can tap these resources from existing wealthy states, but on the whole I expect you're going to wind up with a difference of many times. And looking at GiveWell's numbers, they suggest it's more like $5,000 to do the equivalent of saving a life, so obviously you can get very different results if you use a value per life saved that's $5,000 versus a million.
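A minimal sketch of the "higher bar" Carl is describing: the gap between the two cost-per-life benchmarks, and what that stricter benchmark would imply for x-risk work counting present lives only. The world population figure and the choice of a 0.01-percentage-point risk reduction as the unit are illustrative assumptions, not figures from the conversation.

```python
# Comparing the government willingness-to-pay bar with the GiveWell bar,
# and what the GiveWell bar implies for existential risk work if we count
# only people alive today.

us_willingness_per_life = 4e6       # ~$4 million per statistical life
givewell_cost_per_life = 5_000      # ~$5,000 per life-equivalent saved
print(f"Gap between the two bars: ~{us_willingness_per_life / givewell_cost_per_life:.0f}x")

world_population = 8e9              # illustrative assumption
risk_reduction = 1e-4               # a 0.01 percentage-point cut in extinction risk
expected_lives_saved = world_population * risk_reduction
budget_to_match_givewell = expected_lives_saved * givewell_cost_per_life

print(f"Expected present lives saved by a 0.01% risk cut: {expected_lives_saved:,.0f}")
print(f"Such a cut beats the GiveWell bar if it costs under "
      f"~${budget_to_match_givewell / 1e9:.0f} billion")
```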
Rob Wiblin: Okay, so for those reasons, it's an awful lot cheaper potentially to save a life in an extremely poor country, especially if you look not at the marginal spend to save a life in the US, but for an exceptionally cheap opportunity to save a life in what are usually quite poor countries. So you've got quite a big multiple between something like $4 million and something like $5,000. But then, if the comparison is the $5,000-to-save-a-life figure, would someone be especially excited about that versus the existential risk reduction, or does the existential risk reduction still win out? Or do you end up thinking that both of these are really good opportunities and it's a bit of a close call?
Carl Shulman: So I’d say it’s going to depend on where we are in terms of our activity on each of these and the particular empirical estimates. So a thing that’s distinctive about the opportunities for reducing catastrophic risks or existential risks is that they’re shockingly neglected. So in aggregate you do have billions of dollars of biomedical research, but the share of that going to avert these sort of catastrophic pandemics is very limited. If you take a step further back to things like advocacy or leverage science, that is, picking the best opportunities within that space, that’s even more narrow.
Carl Shulman: And if you further consider… So in the area of pandemics and biosecurity, the focus of a lot of effective altruist activity is on things that would also work for engineered pandemics. And if you buy, say, Toby Ord's estimates, then the risk from artificial pandemics is substantially greater than from natural pandemics, the reason being that a severe engineered pandemic or series thereof — that is, a war fought, among other things, with bioweapon WMD — could do damage to more like half or more of the global population. I mean, so far excess deaths from COVID-19 are approaching one in 1,000. So the scale there is larger.
Carl Shulman: And if we put all these things together, and I look at the marginal opportunities that people are considering in biosecurity and pandemic preparedness, in some of the things with respect to risk from artificial intelligence, and also in some of the most leveraged things to reduce the damage from nuclear winter — which is not nearly as high an existential risk, but poses a global catastrophic risk, a risk of killing billions of people — I think there are things that offer opportunities substantially better than the GiveWell top charities right now.
Rob Wiblin: But that’s the kind of thing where if we started spending $10 billion a year globally on this slightly off-the-beaten-track effort to prevent catastrophic risk or extinction risks, then those sorts of really exceptional opportunities might dry up because we would take them, and then the GiveWell stuff might look equally good or possibly even better on the margin.
Carl Shulman: Right now we're moving into maybe the hundreds of millions of dollars of annual expenditures really targeted towards existential risks as such. If you scale that up by a factor of 10 or a factor of 100, to spending $10 billion a year (which would be about 1/10,000th of world output), it's well justified even by things like natural pandemics. But if you got up to that level, the marginal spending becomes potentially 100-fold or 1,000-fold worse per dollar, if you're getting similar gains with each doubling of expenditures as you move to progressively less low-hanging fruit. So I think from our current standpoint, I'd see the marginal extra things as delivering better returns in saving current lives. If the world as a whole, say, adopted my point of view on these things, then we would wind up spending much more on existential risk reduction. But long before you got to spending half of GDP or something like that, the incremental risk reduction you'd be getting per unit of expenditure would have plummeted by orders of magnitude, and in terms of saving current lives the marginal spending would definitely fall well below that GiveWell mark.
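A toy model of the diminishing returns Carl describes: if each doubling of annual spending buys a similar amount of risk reduction (roughly logarithmic returns), then the marginal return per dollar falls in proportion to the spending level. The specific gain-per-doubling constant below is an arbitrary illustrative assumption; only the ratios matter.

```python
# Logarithmic-returns sketch: similar gains per doubling of spend implies
# marginal returns per dollar fall roughly in proportion to total spending.
import math

gain_per_doubling = 1.0   # arbitrary units of risk reduction per doubling (assumption)

def marginal_return_per_dollar(spend):
    # d/dS of gain_per_doubling * log2(S) = gain_per_doubling / (S * ln 2)
    return gain_per_doubling / (spend * math.log(2))

for spend in (100e6, 1e9, 10e9):
    print(f"at ${spend / 1e9:>5.1f}B/yr: marginal return "
          f"{marginal_return_per_dollar(spend):.2e} units per dollar")
# Going from ~$100M/yr to $10B/yr (a factor of 100) cuts the marginal return
# per dollar by roughly 100x under this model.
```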
Rob Wiblin: So do you think this overall picture implies that organizations like 80,000 Hours and I guess the Global Priorities Institute are maybe too focused on promoting longtermism as an idea or as a philosophical position? Because it seems like if we can just argue that on practical terms, given the values that people already have, it’ll be justified to spend far more reducing existential risk, then maybe that’s an easier sell to people, and maybe that really is the crux of the disagreement that we have with mainstream views, rather than any philosophical difference.
Carl Shulman: Yeah, so I’d say… I think that’s very important. It’s maybe more central than some would highlight it as. And particularly, if you say… Take, again, I keep tapping Toby’s book because it does more than most to really lay out things with a clear taxonomy and is concrete about its predictions. So he gives this risk of one in six over the next 100 years, and 10 out of those 16 percentage points or so he assigns to risk from advanced artificial intelligence. And that’s even conditioning on only a 50% credence in such AI capabilities being achieved over the century. So that’s certainly not a view that is widely, publicly, clearly held. There are more people who hold that view than say it, but that’s still a major controversial view, and a lot of updates would follow from that. So if you just take that one issue right there, the risk estimates on a lot of these things move drastically with it.
Carl Shulman: And then the next largest risk that he highlights is the risk from engineered pandemics and bioweapons. In some ways we have a better understanding of that, but there's still a lot of controversy and a lot of uncertainty about questions like: Are there still secret bioweapons programs, as there were in the past? How large might they be? How is the technology going to enable damaging attacks versus defenses? I think COVID-19 has shown that a lot of damage can still be inflicted, but also that it's very hard for pandemics to cause extinction, because we can change our behavior a lot and restrict spread.
Carl Shulman: But still, there’s a lot of areas of disagreement and uncertainty here in biosafety and future pandemics. And you can see that in some of the debates about gain of function research in the last decade, where the aim was to gain incremental knowledge to better understand potentially dangerous pathogens. And in particular, the controversial experiments were those that were altering some of those pathogens to make them more closely resemble a thing that could cause global pandemics. Or that might be demonstrating new mechanisms to make something deadly, or that might be used, say, in biological weapons.
Carl Shulman: So when the methods of destruction are published, and the genomes of old viruses are published, they're then available to bioweapons programs and non-state actors. And some of the players arguing in those debates seem to take the position that there's basically almost no risk of that kind of information being misused, implicitly assigning really low probability to it enabling future bioterrorism or benefiting some state bioweapons programs, while others seem to think it's a lot more likely than that. Similarly, on the risk of accidental release, you've had people like Marc Lipsitch arguing for using estimates based off of past releases from known labs, from Western and other data. And then you had other people saying that those estimates were overstating the risk of an accidental lab escape by orders and orders of magnitude, to a degree which I think is hard to reconcile with the number of actual leaks that have happened. But if you were to sync up on those questions, like just what's the right order of magnitude of the risk of information being used for bioweapons, and what's the right order of magnitude of escape risk for different organisms, it seems like that would have quite a big impact on policy relative to where we stand today.
Rob Wiblin: Yeah. This is a slight hobby horse of mine, because reasonably often I encounter someone who's like, "Oh, you work on this existential risk stuff. You work on reducing global catastrophic risks, and that's interesting, but I don't personally think that it's sensible to do something that has only a one in a billion or one in a trillion chance of working, or of being any help, just because the benefit if you succeed is so enormously large." And I'm like, "That is not the reason that I'm doing this at all. It's nothing to do with this argument that while the odds are infinitesimally small, the gain would be astronomically large."
Rob Wiblin: The reason I’m concerned about this is the risks are really large, horrifyingly, unreasonably big, and there’s just tractable things that we could do that have a really meaningfully high chance of helping and preventing these terrible things from happening, just like our ancestors did lots of things to try to make the world more stable and better. It is definitely frustrating to me that people have somehow got in their head this idea that the justification for working on existential risks is that the gain is so enormous rather than that it’s a serious, obvious problem that we face over the next century.
Carl Shulman: There’s been a little bit of help or movement on that over the course of the pandemic, where the fact that effective altruism broadly had been one of the largest sources of philanthropic funding for work on biosecurity and focus on these extreme pandemic cases, and a number of folk who have been funded broadly from the effective altruism movement have played helpful or influential roles in some of the pandemic response. So that’s something. So some people who previously were making that argument about pandemics, despite the fact that there have been many pandemics in history, and even just, there have been repeated spreading coronavirus issues like the first SARS and MERS and so on.
Carl Shulman: And indeed there was one scenario-planning exercise that received some funding from effective altruists which used a coronavirus scenario as an example, because the last several emerging diseases that people were worried about were coronaviruses. Now there are some conspiracy theorists responding to that, but no, it's a thing that has happened repeatedly, and worrying about it was actually sensible. So I think some people now respond less when they hear, "Oh, effective altruists, they're so obsessed with pandemics and artificial intelligence and they should focus on real-world problems." There's some attenuation of that effect, I think.
Rob Wiblin: Yeah, I definitely have heard it gradually less over the last couple of years. I think it’s partly events like with COVID, but maybe it’s also just that we’ve been worrying about the ways that artificial intelligence could go wrong for 10 or 15 years, but I think the mainstream culture has also gradually started to see that AI can be a really big deal, and when it’s deployed in important functions, if you haven’t properly figured out how it’s going to behave in these ways and fully understand all of the consequences that it can have, then things can really go awry. That’s become a very mainstream idea, and maybe we get a bit of credit for being ahead of the curve.
Carl Shulman: Yeah, I agree. Things like algorithmic bias, accidents with self-driving cars, automated weapon systems, and even the debate about automating nuclear response, building a literal Skynet-type system, which is debated in the military literature. And especially given the unreliability of the systems now: setting aside issues about alignment of advanced systems, just the frequent failures, and systems that so poorly meet standard military robustness requirements, bring us too close to a case where a bug causes nuclear war.
Rob Wiblin: A billion deaths, yeah.
Carl Shulman: Yeah, so those issues and just the general rapid progress on artificial intelligence, I think, have somewhat changed the memetic landscape there.
Solutions for asteroids, comets, and supervolcanoes [00:50:22]
Rob Wiblin: Cool, yeah. Well we can return to AI a little bit later. But as people can probably suss out, you spent a decent fraction of the last 10 to 20 years really digging into the details of a lot of these threats that humanity potentially faces. And I’m keen to dig in risk-by-risk into a bit of that knowledge. I imagine that most listeners are somewhat familiar with the categories of risk that we’re talking about here, both because we’ve talked about them on the show before, and also just because of general knowledge. So I’m keen to maybe ask you about what solutions particularly stand out to you as seeming promising, or maybe what lessons we can learn from the history that aren’t obvious about what would be particularly useful to do today.
Rob Wiblin: So yeah, first off, is there anything more that humanity should urgently do that we haven’t already done to address the risk of asteroids or comets or supervolcanoes or other natural phenomena like that?
Carl Shulman: Yeah, I’d say that those risks are improbable enough that there are other, more pressing ways to get more risk reduction per dollar, and we can bound a lot of those natural risks just by seeing the rate at which they’ve occurred in the past and our ancestors and the ancestors of other life have survived them. Whereas we’re in the process of… We’re having very rapid technological advance over the last few centuries. Things like nuclear weapons, bioweapons, artificial intelligence being introduced for the first time. So we’re in an extraordinarily unusual period. Carl Sagan called it the “Time of Perils,” with respect to these anthropogenic human activities and technology. And the natural risks, we’re in a pretty normal period. At most you might worry more about ways in which natural phenomena interact with our technological situation.
Rob Wiblin: Yeah, I thought you might possibly say that some low-hanging fruit here might be figuring out… If there was a supervolcano eruption that made it a lot harder to produce food for a couple of years, maybe trying to plan ahead a little bit to figure out, “Well, how could we feed everyone or almost everyone through a period like that?” Not necessarily because the lack of food is going to directly cause human extinction, but because a situation in which huge numbers of people are starving might be somewhat unstable and might lead to other conflicts or a gradual degradation of civilization. But maybe that’s just kind of outweighed by the fact that a supervolcano explosion in the next century is so improbable.
Carl Shulman: In fact, the intervention of having food supplies and methods to scale up food in the event of an atmospheric disruption is one that I am interested in. I wrote about this something like 10 years ago, and had been thinking about it before that, while working through all the possible mechanisms of existential risk and how seriously we should take them. But overwhelmingly, I see that risk as being driven by nuclear winter and human activity. We have a nontrivial chance of serious nuclear war happening in this century, whereas our baseline for the really bad asteroid is something like one in a million for the century, and supervolcanism can be more likely… But again, our ancestors have survived many big volcanic eruptions and —
Rob Wiblin: There’s many more of us now.
Carl Shulman: There’s many more of us now, our technology is better able to reply to it, and just the base rate is low enough that… For example, there haven’t been any really killer ones like that in recent millennia. The base rate for really crushing blockage of the sun from that is so low that you can round that to zero while there’s still this threat of nuclear winter causing an extreme version of the same phenomenon. So I’d say, care about nuclear winter. Volcanism and asteroids, in some sense we’d eventually get to them, but they’re not near the most urgent priorities for preventing people from starving from atmospheric phenomena.
Solutions for climate change [00:54:15]
Rob Wiblin: Moving up the list to a risk that does quite a bit more probability-adjusted expected damage. What’s a particularly valuable thing that we should try to do to tackle climate change?
Carl Shulman: So I’d say that what you want to do is find things that are unusually leveraged, that are taking advantage of things, like attending to things that are less politically salient, engaging in scientific and technological research that doesn’t deliver as immediate of a result. Things like political advocacy, where you’re hoping to leverage the resources of governments that are more flush and capable.
Carl Shulman: In that space, clean energy research, and advocacy for clean energy research, seems relatively strong. Some people in the effective altruism community have looked into that and raised funds for some work in that area, and I think Bill Gates… The logic for that is that if you have some small country and they reduce their carbon emissions by 100%, then that's all they can do, it costs a fair amount, and it doesn't do much to stop emissions elsewhere, for example in China and India and places that are developing and have a lot of demand for energy. And if anything, if you don't consume some fossil fuels, then those fossil fuels can be sold elsewhere.
Carl Shulman: Whereas if you develop clean energy tech, that changes the decision calculus all around the world. If solar is cheaper than coal — and it's already making great progress over much of the world — and hopefully, with time, cheaper than natural gas too, then it greatly alleviates the difficulty of the coordination problem. It makes the sacrifice needed to do it smaller. And if you just look at how little is actually spent on clean energy research compared to the benefits, and compared to the successes that have already been achieved, it looks really good. So if I were spending the incremental dollar on reducing climate change, I'd probably want to put it more towards clean energy research than immediate reductions of other sorts, things like planting trees or improving efficiency in a particular building. I more want to solve that global public goods problem of creating the technologies that will better solve things.
Rob Wiblin: We’ll stick up a link to a report that I think was at least coauthored by John Halstead and came out of Founders Pledge, which looks at this argument in favor of —
Carl Shulman: It was from Let’s Fund.
Rob Wiblin: Ah, from Let’s Fund, sorry. That looks at this argument in favor of clean energy research and also makes, I think, some more specific suggestions about things that listeners could potentially fund themselves.
Rob Wiblin: What maybe stands out within that category? I suppose it seems to me maybe a major bottleneck to scaling up renewable energy is going to be energy storage and batteries and things like that. I’ve also read seemingly coherent things from people who think that a particularly valuable spend is working on new designs for nuclear power plants. Is there anything specific within the clean energy bucket that stands out to you?
Carl Shulman: Yeah. So continuing progress on solar, on storage, you want high-voltage transmission lines to better integrate renewables. On nuclear, nuclear has enormous potential. And if you look at the cost for France to build an electrical grid based almost entirely on nuclear, way back early in the nuclear era, it was quite manageable. In that sense, for electrical energy, the climate change problem is potentially already solved, except that the regulatory burdens on nuclear are so severe that it’s actually not affordable to construct new plants in many places. And they’re held to standards of safety that are orders of magnitude higher than polluting fossil fuels. So enormous numbers of people are killed, even setting aside climate change, just by particulate matter and other pollution from coal, and to a lesser extent, natural gas.
Carl Shulman: But for the major nuclear accidents, you have things like Fukushima, where the fatalities from the radiation release seem to be zero, except for the people who died in the panicked response, driven by exaggerated fears about the damage of small radiation levels. And we see large inconsistencies in how people treat radiation levels from different sources: you get more radiation when you are at a higher altitude, but we panic orders and orders of magnitude less about that sort of thing. And the safety standards in the US basically require plants to be as safe as possible; they don't stop at a level of being safer than all the alternatives, or much better.
Carl Shulman: So we’re left in a world where nuclear energy could be providing us with largely carbon-free power, and converted into other things. And inventing better technologies for it could help, but given the regulatory regime, it seems like they would again have their costs driven up accordingly. So if you’re going to try and solve nuclear to fight climate change, I would see the solution as more on the regulatory side and finding ways to get public opinion and anti-nuclear activist groups to shift towards a more pro-climate, pro-nuclear stance.
Rob Wiblin: Yeah, I’ll see if I can find a link to an article I read at some point this year, which was looking at trying to estimate what fraction of the cost increase in construction of nuclear power plants is due to what different causes. And this piece said, yes, a nontrivial fraction of it is because of the ever-escalating safety requirements and safety requirements that are far higher than non-nuclear infrastructure or power generation. But it also said part of it is simply that all sorts of infrastructure, even non-nuclear infrastructure, have become many multiples more expensive in the United States and also just facing ever-longer delays to their construction. You see this with subway station constructions, for example, that they’re much more expensive than they used to be, and they take much longer. And so, it may be, unfortunately if that’s right, it suggests that not only do we have to solve nuclear-specific issues, but also we’ve got this broader challenge with just doing anything that seems really expensive, or at least doing anything big and newish these days is just quite difficult in rich countries.
Carl Shulman: Yeah, that seems right. I mean, it applies even to houses. There are broad restrictions on construction, opportunities for many local actors to disrupt construction for many years (which raises the capital costs), and then regulatory requirements for approval of every tiny step, each of which is a veto point that can extract rents. So yeah, I'd say those problems are shared. And to some extent, solar helps to bypass that, because it's less feared than nuclear, and because the individual units are so small it's possible to construct them in a lot of different areas. You can have this rooftop stuff, although utility-scale installations are better, and just circumventing the regulation can be a benefit of the technology.
Carl Shulman: And it’s sort of tragic that we can’t just adjust the regulations to achieve things that were successfully achieved in the 20th century. France really could build out carbon-free electricity for their whole population at rates that were competitive with other much more polluting fossil fuels. But we live in the world we live in, and so we can look at both marginal opportunities to improve that sort of dynamic on the regulatory side while also investigating technology which might get some of its benefits by circumventing the regulatory barriers.
Rob Wiblin: Yeah. Don’t jinx it, Carl. Next we’re going to be going through a three-year council approval process just to stick up solar panels on your roof. I want to give a shout out to the blog Pedestrian Observations. I can’t remember who it’s written by, but it’s an expert in infrastructure who has done a bunch of work to try to break down why it is that infrastructure is so much more expensive than it used to be. I’ll stick up a link to a post that I’ve enjoyed on there.
Solutions for nuclear weapons [01:02:18]
Rob Wiblin: All right, moving on to another risk: nuclear weapons. That's been a threat that humanity's had hanging over it for 76 years now. What has changed about the picture over those 76 years, in terms of how serious the risk has looked and what the nature of the risk seems to be?
Carl Shulman: Nuclear weapons were an especially novel technology, and our understanding of them developed over time. In one of the earliest mentions of nuclear weapons, I believe H. G. Wells describes nuclear bombs not of the sort that we have, but ones that would undergo radioactive decay over the battlefield for some weeks, rather than a sudden critical mass where all the energy is released at once.
Carl Shulman: And then early on, during the Manhattan Project, there was fear: would this cause a chain reaction that would ignite the atmosphere and immediately destroy all life on Earth? There have been discussions in various works about how this went. There were definitely calculations done that were supposed to indicate that, no, that reaction could not happen, and indeed they were right in the event. But a number of physicists on site kept independently reinventing this concern that it might happen, and when the actual test first happened, a number of them were holding their breath.
Carl Shulman: And then this was followed a little later, of course, by another similar calculation about the blast radius of the hydrogen bomb. There they found they had actually made a mistake about whether a certain reaction could go, and so the lethal radius was unexpectedly large and it actually killed people. So they had this worry: "Are we ready to act on these sorts of flawed safety assurances?" But it's interesting that in that early period, we would've conceived of the global risk from nuclear weapons differently than we do today.
Rob Wiblin: In what way?
Carl Shulman: Early on, people assessed the damage from nuclear weapons in terms of direct destruction (fire and killing, destroying cities) and then radioactive fallout. And radioactive fallout is indeed nasty, but there's some exaggeration in fictional media about how permanently unliveable it renders an area and how it spreads, especially because it decays relatively quickly: there's intense radiation at the start, and then it falls off as you move to the slower-decaying elements released.
Rob Wiblin: Yeah, I learned a bit about this a couple years ago when I was living in the Bay Area, right around the point that North Korea seemingly got the capacity to send not a cruise missile but some sort of ICBM with a nuclear warhead as far as the west coast of the US. If I recall correctly, if you bunker down after a nuclear weapon explodes somewhere near you, then after a couple of weeks the fallout has mostly cleared and it's reasonably safe to go outside. You probably wouldn't want to be breathing in lots of dust outside, but it's kind of survivable again.
Carl Shulman: Yeah, I mean, it scales with the distance. And when people first talked about end-of-the-world scenarios (so in Dr. Strangelove, you have the doomsday device), they speculated about enhanced radiation: intentionally changing nuclear weapons to maximize lasting, land-ruining radiation to cause increased damage. In terms of things that would kill all the people in neutral countries, or reach that level of global disaster, radiation, and then the side effects of the belligerent countries being wrecked, were what people had in mind. But it was only several decades later that nuclear winter really came into focus, with both research into the climate science and then communication of that to the broader world. And from our current point of view, it seems like that is a disproportionate share of the impacts on nonbelligerent countries and of the potential to reach really near-maximal fatality levels.
Rob Wiblin: Yeah. I guess from our vantage point now, 76 years in, the risk from nuclear weapons per year is probably still disturbingly high, but we kind of know that it's not, say, 10% a year, because if it were, we almost certainly wouldn't have made it this far. But it must've been terrifying in the late 40s or 50s to people who were really in the know about this, because they were in a completely new regime; the risk could just be extremely high, and they didn't really know whether it was possible to have enduring peace and survive in a world where there are two superpowers with such destructive arsenals of these weapons. Have you ever looked into what they wrote about their own fears and their own lives at that time?
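A quick version of the survival arithmetic behind Rob's point: 76 years without a nuclear war is strong evidence against very high annual risk, though it only loosely bounds the risk at lower levels. This is pure arithmetic on the stated "10% a year" figure plus a few smaller rates chosen for comparison; it ignores complications like anthropic selection and changing risk over time.

```python
# How surprising is 76 years of survival under different constant annual risks?

years = 76
for annual_risk in (0.10, 0.03, 0.01, 0.001):
    p_survive = (1 - annual_risk) ** years
    print(f"annual risk {annual_risk:>5.1%}: chance of {years} war-free years "
          f"≈ {p_survive:.2%}")
# At 10%/year, surviving to today would have been a ~0.03% event, so that rate
# is effectively ruled out; at 1%/year survival is ~47%, so the track record
# alone can't distinguish 1% from much lower rates.
```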
Carl Shulman: Yeah. I was interested to see an anecdote from Daniel Ellsberg (former guest of the show), who is the author of The Doomsday Machine and was responsible for releasing the Pentagon Papers. He describes people working at RAND doing nuclear strategy at the beginning of the Cold War, and a number of them not wanting to take the retirement plans and invest for their retirement, because they actually thought that nuclear war was likely pretty soon, and that basically the dynamics of deterrence were unstable, as well as thinking that the Soviet Union was more willing to risk terrible destruction for conquest.
Rob Wiblin: Interesting. Okay, yeah. So there was at least a significant contingent that thought things might be more likely than not to end up in a nuclear holocaust, but I guess not everyone.
Carl Shulman: Yeah. And of course, if the situation is ill understood, then you learn the most about it at the very beginning because then you get to rule out the possibility of the situation being super unstable. And so I don’t think it was crazy to worry that that dynamic might turn out to be unstable.
Rob Wiblin: Yeah. No, not at all.
Carl Shulman: But it certainly has done a lot better than those worst fears, and not only the US-Russia or US-Soviet pairing, but other nuclear powers. There’s some independent evidence from the fact that India and Pakistan have been able to pull back from the nuclear brink even while having other sorts of confrontations. And likewise Israel and its neighbors, including during the 1973 war.
Rob Wiblin: Yeah. And I guess also the Soviet Union and China, when they kind of came to blows at the end of the 60s.
Carl Shulman: Yeah, indeed.
Rob Wiblin: Okay. I guess, today the nuclear winter potentially then killing billions of people through famine looms particularly large to us. What’s a particularly valuable thing that say, the US should do to reduce the expected damage from nuclear weapons?
Carl Shulman: Yeah. Compared to engineered pandemics and artificial intelligence, there is a relatively greater amount of activity, primarily by governments, to reduce nuclear risk. And so in engaging with it, I focused on things that were trying to be off the beaten track and getting special leverage. Whereas for something like risks from advanced artificial intelligence, it’s more of an empty field. And so I found three things that really seemed to pass the bar of “this looks like something that should be in the broad EA portfolio” because it’s leveraged enough to help on the nuclear front.
Rob Wiblin: Yeah right. Hit me. What’s number one?
Carl Shulman: Yeah. The first one is better characterizing the situation with respect to nuclear winter. A relatively small number of people had done work on that, with not much followup in subsequent periods. Some of the original authors did do a resurgence over the last few decades, writing some papers using modern climate models (which have been developed a lot because of concern about climate change, and just general technological advance) to try to refine those estimates. Open Philanthropy has provided grants funding some of that followup work: better assessing the magnitude of the risk, how likely nuclear winter is from different situations, and estimating the damages more precisely.
Carl Shulman: It’s useful in two ways. First, the magnitude of the risk is like an input for folk like effective altruists to decide how much effort to put into different problems and interventions. And then secondly, better clarifying the empirical situation can be something that can help pull people back from the nuclear brink.
Carl Shulman: So Reagan and Gorbachev… It's hard to know the psychology of individual people, but they report being influenced by the findings about nuclear winter. Knowing about this huge additional downside of nuclear war, you'd expect, would on the margin give a bit more momentum to pulling back from it, for example on the part of neutral countries wanting to exert pressure. Now, a danger here is that because of this dynamic, it can seem beneficial to appear more worried about these negative effects than the underlying work supports, and that can cause a problem of credibility.
Rob Wiblin: I see. So because there’s such a clear motivation for even an altruistic person to exaggerate the potential risk from nuclear winter, then people who haven’t looked into it might regard the work as not super credible because it could kind of be a tool for advocacy more than anything.
Carl Shulman: Yeah. And there was some concern of that sort: that people like Carl Sagan, who was both an anti-nuclear and antiwar activist, were bringing these things up. So some people, particularly in the military establishment, might have more doubt about the various choices in the statistical analysis, and the projections and assumptions going into the models, and whether they are biased in this way. For that reason, I've recommended and been supportive of funding work just to elaborate on this. But I have additionally especially valued critical work, and support for things that would reveal this was wrong if it were, because establishing that kind of credibility seemed very important. And we were talking earlier about how salience and robustness, it being clear in the minds of policymakers and the public, is important.
Rob Wiblin: Yeah. Makes sense. All right. What’s number two?
Carl Shulman: Yeah, so this is related: just the sociopolitical aspects of nuclear risk estimation. How likely is it, from the data that we have, that things escalate? There are the known near misses that I'm sure have been talked about on the show before, things like the Cuban Missile Crisis and Able Archer in the 80s, but we have uncertainty about what the next step would have been if things had gone differently in those situations. How likely was the alternative? For example, the possibility that the sub that surfaced in response to depth charges in the Cuban Missile Crisis might've instead fired its weapons.
Carl Shulman: And so, we don’t know though. Both Kennedy and Khrushchev, insofar as we have autobiographical evidence, and of course they might lie for various reasons and for their reputation, but they said they were really committed to not having a nuclear war then. And when they estimated and worried that it is like a one in two, one in three chance of disaster, they were also acting with uncertainty about the thinking of their counterparts.
Rob Wiblin: Right. When they gave those figures, I can't remember whether it was one in three or one in two, they were mostly thinking it would start with the other person initiating the conflict, because in their own minds they were extremely reluctant to do so.
Carl Shulman: Well, yeah. There may be some of that, and pressures to escalate, and maybe they would have changed their minds. But we do still have a fairly profound residual uncertainty about the likelihood of escalation from the sorts of triggers and near misses that we see. We don't really know how near they were, because of the difficulty in estimating these remaining steps. And so I'm interested in work that better characterizes those sorts of processes and the psychology, and what we can learn from as much data as exists.
Carl Shulman: Like, how is it that India and Pakistan, or Israel and its neighbors, avoided conflict? And we actually do know quite a lot about the mechanics there. For Israel, the role of the superpowers seems like it may have been helpful: both in discouraging its neighbors from invading in a way that could escalate to nukes, and with its allies demanding that it not use its nuclear arsenal. So broadly, better estimating those risks and making the estimates more robust can serve a similar role to nuclear winter work, as well as identifying the sort of choke points where risk is being escalated for reasons no one wants, like unreliable command and control systems, that sort of thing.
Rob Wiblin: Yeah. I guess the idea there is that there might be a bunch of near misses that we pay a lot of attention to that perhaps were not as near as they seem, because there can be an illusion where it looks like we got really close, but in reality there was a later step we were very unlikely to pass. Whereas there might be other cases that haven’t gotten so much attention that really were near misses when you look at them more closely and think about what was going on in the heads of the actors. Making sure we focus on the cases where things really did almost go wrong might give us a better idea of what we want to change about these systems, to make sure that we never get to those cases again.
Carl Shulman: Yeah. And unfortunately, the literature on probabilistic estimation of these things, such as it is, does not incorporate all of the insights you see from things like the superforecasting tradition. So this is something I’ve looked into ways to advance, but I still haven’t seen as much as I would like that really makes it more robust: using all the information we have, but also all that we know about reasonable probabilistic forecasting, to better estimate this nuclear risk.
Rob Wiblin: All right. That’s number two. What’s number three?
Carl Shulman: Yeah. Number three, we discussed this a little bit when it came up in the context of volcanoes. The damage from nuclear winter in neutral countries is anticipated to come mostly by way of making it hard to grow crops. And all the analyses I’ve seen, and my conversations with the authors of the nuclear winter papers, suggest there’s not a collapse so severe that no humans could survive. There would be areas far from the equator that do relatively well, New Zealand for example, and fishing would be possible. I think you’ve had David Denkenberger on the show.
Rob Wiblin: Yeah, that’s right.
Carl Shulman: So I can refer to that. And that’s something I came across independently relatively early on, when I was doing my systematic search through what all the x-risks are and what sorts of things we could do to respond to each of them. I think it’s an important area, and I think it’s a lot more important in terms of saving the lives of people around today than it is for existential risk. Because as I said, directly starving everyone is not so likely, but killing billions seems quite likely. And even if the world is substantially able to respond by producing alternative foods, things like converting biomass into food, that may not be universally available: poor countries might not get access if the richest countries that have more industrial capacity are barely feeding themselves. And so the chance of billions dying from such a nuclear winter is many times greater than that of extinction directly through that mechanism.
Rob Wiblin: Yeah. I wonder whether it would also be a risky period if you had several years of really quite significant starvation, because we didn’t do this preparation to figure out how we could feed the world through a nuclear winter. Could that lead to a kind of cascade of failures, where you end up with more conflicts between countries, or just a general breakdown in coordination globally? So another stream of benefits could be that it prevents some cascading failure that then puts humanity in a significantly worse position 10 years on.
Carl Shulman: Yeah, there are certainly indirect effects on long-term trajectories like that. And I think those are real and I’ve looked into them, including the possibility of a collapse of civilization and could it recover. And when I’ve done that, it seems in fact very likely that from just that sort of shock, civilization would recover technologically.
Carl Shulman: And so there are these long-term impacts, but when I say I’d see that mostly in terms of saving current lives, I mean that these indirect existential risk impacts are a lot attenuated relative to the probability of the event occurring. And so, if you’re talking about some probability of billions dying and then those indirect effects mean conditional on that, there is a 1% x-risk or a 0.1% x-risk, then that’s the sort of thing where I think a pluralist, non-fanatical view is going to say, “Look, this is great anyway.”
Carl Shulman: And the reason is not some kind of indirect thing of, oh, maybe it will impact x-risk in such-and-such a way. It’s that this would be most of the world’s people dying, billions of them beyond even the belligerents in the war. And that should be our focus in understanding alternative foods. Our society should have the capacity to respond to massive food disruptions, so we should test out the basics of those technologies, see what we would do in those circumstances, and perhaps carry them further. And that is valuable, I think, just on a saving-current-lives basis, especially the very early steps of just inventorying what all of our options are. How well would they work if we actually tried to feed 10 people or 100 people using them? Does it work?
Carl Shulman: If it does, then we can stand ready for New Zealand and other neutral countries to deploy that in the event. And we would get enormous value from that initial investigation. And indeed that’s something that effective altruism is already into. And so there is ALLFED that you had Dave on from and also Open Philanthropy has made grants in this space to academic researchers.
Rob Wiblin: Yeah. Another stream of benefits: it seems like the ALLFED-style program of figuring out how to produce lots of food in a disaster to prevent famine helps with nuclear war and nuclear winter, helps with asteroids, helps with supervolcanoes. I think it also potentially helps with one of the most damaging effects that climate change might have. When I spoke with Mark Lynas about climate change, we had a section on the most plausible story under which climate change goes really badly and a lot of people die, and it seems like a lot of that is mediated by some quite abrupt famine; that’s one of the most plausible ways it could result in really a lot of people dying quite quickly. So having this plan to feed the world through a nuclear winter might also allow us to feed the world through some extreme heatwave or some outlier event from climate change really going off the charts.
Carl Shulman: Yeah. That’s possible. The thing about nuclear winter is that it’s sudden and unexpected. So when you look at alternative food technologies, there can be things that are more expensive, where the reason we’re not using them is that there are currently cheaper methods available. And there can be things that differ in how fast they can be scaled up. Today people actually make food from natural gas, and it’s used as feed for fish. So that technology works and it can produce food that’s cheap enough for sale, but it involves a very complicated supply chain and very elaborate technology, so it seems a lot more doubtful that we could quickly scale it up in response to a nuclear winter event. So looking for simpler, more scalable technologies, where you could just have cultures of certain organisms and switch over conventional carpentry equipment and paper processing plants and the like, serves a different function than just responding to food prices overall increasing.
The history of bioweapons [01:22:41]
Rob Wiblin: All right, let’s move on to another risk that I think we’re both maybe even more worried about than the ones that we’ve talked about so far. I know you’ve spent a lot of time studying the history of research into bioweapons and research into dangerous pathogens for medical purposes in labs. I’d be excited to hear a story from that history and maybe what that might imply about the scale of the risk and what’s most useful to do now.
Carl Shulman: Yeah. I think if we’re talking about the history of biological weapons as it is known to the public, the big story is the Soviet bioweapons program. The vast majority of all the people who have ever worked on bioweapons worked in that program, along with the vast majority of the dollar resources and the vast bulk of the stockpiles. They had, estimates vary, but as many as 50,000 people working in that program.
Rob Wiblin: Wow.
Carl Shulman: It was a lot smaller than their nuclear program, of course, by an order of magnitude or more, but it was this enormous operation. They were working on many different pathogenic agents and they produced stockpiles of some of them, things like anthrax. And the thing that scares me the most about that history is the rate of accidental escapes and infections from that sort of work.
Rob Wiblin: Yeah, I suppose. That was running from the 50s through, I guess, until the end of the Cold War and it was called Biopreparat, right?
Carl Shulman: So bioweapons work in the Soviet Union and Russia had a longer history, but during the last few decades of the Cold War you had this vast expansion of Biopreparat, which was a sort of combined civilian and military organization. There were other closed, military-only research facilities that were not part of Biopreparat. But Biopreparat is the one we know more about, from defectors, and it alone provides enough information to draw some interesting conclusions.
Rob Wiblin: Okay. Yeah. So what was the annual rate of accidental leaks from this program?
Carl Shulman: Yeah. So first, in terms of leaks to the outside, there are some major incidents we know about. In one town where anthrax was being produced, the town was basically blasted with anthrax because of a problem with the ventilation system: the anthrax was being vented to the outside, and many people were infected. The international community learned about this and was quite suspicious, although there was debate about what had actually happened. The Soviets claimed that it was a natural case of anthrax suddenly infecting all of these people at once, right next to this bioweapons plant.
Rob Wiblin: Suppose they’ve got to say something.
Carl Shulman: They’ve got to say something. And later Gorbachev did admit the existence of the program, after all of these defections and so on. But then later there was backsliding and more denial of things that are already known. So there’s never been a full unveiling of what happened inside; it’s still a Russian state secret.
Rob Wiblin: All right. So we’ve got this leak of anthrax. What are a couple of others?
Carl Shulman: Yeah, so we had an outbreak of smallpox, which seems like it may have come from their testing facilities. They had an area where they would try releasing bioweapons in the open air, to see how they worked and killed, on nonhuman animal subjects. There was no testing on humans, according to the defectors, except in this accidental case, where the smallpox reached a small town. A number of people were infected, and they were able to identify it, rush in, and contain things by quarantine and vaccination and whatnot. And there were a number of other local outbreaks of less visibly nasty and deadly things leaking out into the surrounding environment. And then many, many, many infections of lab workers, reported by the hundreds according to defectors. And that’s in line with what we see in the visible, high-biosafety labs in liberal democratic countries, where it’s more open and we can track them.
Carl Shulman: There’s a study by Lipsitch and Inglesby looking into the possible risks of gain-of-function research. They used US biosafety level 3 labs working with dangerous select agents, and they found four infections reported (not all things get reported, it seems) over some 2,000 laboratory-years. So it’s about one per 500 laboratory-years, and across some 600,000 person-hours, so about one every 100 full-time person-years. And you had 50,000 workers. Now, not all of those workers were always doing the equivalent of being inside the high-biosafety lab: they might be working at the blackboard, or doing janitorial work in research areas that didn’t have dangerous agents. But with 50,000 workers, you’d expect to have a lot of accidents each year, and that’s what the defectors reported, including some of these releases to the outside.
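To make the arithmetic here concrete, here is a minimal back-of-the-envelope sketch using only the rates quoted in this exchange; the fraction of the 50,000 workers assumed to be doing hands-on, high-containment work is a made-up illustrative number, not a figure from the conversation.

```python
# Rough expected number of lab-acquired infections per year in a Soviet-scale
# program, using the rate quoted above (~1 infection per 100 full-time
# person-years of high-containment work).

infections_per_person_year = 1 / 100   # rate quoted in the conversation
total_workers = 50_000                 # rough size of the Soviet program
hands_on_fraction = 0.2                # hypothetical share doing high-containment lab work

exposed_person_years_per_year = total_workers * hands_on_fraction
expected_infections_per_year = exposed_person_years_per_year * infections_per_person_year

print(f"Expected lab-acquired infections per year: ~{expected_infections_per_year:.0f}")
# With these assumptions: on the order of 100 infections per year, i.e.
# "a lot of accidents each year", as the defectors reported.
```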
Rob Wiblin: Yeah. I guess naively I’d think, well, the Soviet Union seems to have had a lot of problems with its factories and with its just general competence at doing things. But you’re saying we don’t really have a better track record in the United States or the UK in terms of preventing leaks.
Carl Shulman: Yeah. Well, it’s a bit difficult to know the precise numbers from the defectors, but it seems they weren’t vastly worse. I think they were worse, but not tremendously so. And if anything, some of these facilities were maybe more organized than some of the US facilities, where you have individual labs that are going along with a regulatory requirement but maybe engage with it differently than, say, some of the military personnel would. So I’d say we can’t claim that the observed record of releases from public health facilities that are open and visible is that much better.
Rob Wiblin: Yeah. Interesting. It seems like most of the leaks were just cases where someone got infected while working in the laboratory and then fortunately it didn’t manage to then infect people outside of the laboratory. Was that because the agents they were working with weren’t super infectious? Or was it because in general, people know that they’ve infected themselves and then they quarantine?
Carl Shulman: Yeah. In general, for one thing, people were vaccinated against everything that could be vaccinated against. So all of the workers would have been vaccinated against smallpox, and smallpox and influenza were two of the agents they were working with that had caused pandemics before. But anthrax is not spread person-to-person, and that was true for most of the bacterial agents. And for the viral agents, some of them had spread person-to-person and caused localized outbreaks, but most of them do not cause global pandemics: under rich-country conditions they normally have an R naught below one. And you’d sort of expect this, because there couldn’t be that many pandemic viruses that kill a large chunk of the people they infect. Smallpox was the one like that, where it would sweep around the world and you could see a substantial fraction of infected people killed. But you couldn’t find 50 or 100 of those; otherwise our population would have been a lot lower.
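One way to see why an R naught below one only produces localized outbreaks: in a simple branching-process model, the expected total size of an outbreak seeded by a single case is 1/(1 - R0), which stays small unless R0 is very close to one. A minimal sketch, with R0 values chosen purely for illustration:

```python
# Expected final size of an outbreak started by one case when each case
# infects R0 others on average: 1 + R0 + R0^2 + ... = 1 / (1 - R0) for R0 < 1.

def expected_outbreak_size(r0: float) -> float:
    if r0 >= 1:
        raise ValueError("For R0 >= 1 the expected size is unbounded (a pandemic is possible)")
    return 1 / (1 - r0)

for r0 in (0.3, 0.7, 0.9, 0.99):   # illustrative values only
    print(f"R0 = {r0:.2f} -> expected outbreak size ~{expected_outbreak_size(r0):.1f} cases")
# Even R0 = 0.9 gives only ~10 cases on average; only pathogens with R0 > 1
# can sweep around the world.
```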
Rob Wiblin: Right. Yeah, because it’s kind of a situation where if these were pathogens that really regularly went and caused massive pandemics, then they would have done so already. And then we either would have, I guess, built immunity or developed vaccines or something.
Carl Shulman: Yeah, and in a sense, the damage that could be done by releasing a pathogen that’s already circulating in the wild is limited. So SARS has escaped from labs many times, but those labs were studying it after it had already entered the population. If there are already thousands of infectees out in the world, a release from one lab only adds about one part in several thousand to the SARS problem. But if you have a virus that isn’t circulating at all, and you’re taking it from zero to one, then that can be very damaging indeed. Insofar as you’re working with natural pathogens, though, that situation is usually not going to arise. Although there have been exceptions.
Rob Wiblin: Yeah. What’s an exception?
Carl Shulman: Yeah. One example is smallpox. After it was eradicated, there were several escapes from labs where it was being studied, and then also, of course, the bioweapons testing incident in the Soviet Union. Similarly, SARS-1 had several lab escapes that caused infections after the epidemic was over. So in general, that phenomenon, where you have a pathogen that was sweeping around and has since changed through evolution or been suppressed by human action, and then it can escape and come back: that’s something we’ve seen repeatedly, and it carries elevated danger.
Carl Shulman: One particularly troubling example of that is the 1977 flu pandemic, which killed hundreds of thousands of people. Geneticists looking at the genome of that strain found it was almost identical to one from the 1950s, which is compatible with stored samples having been used later by humans and gotten out. That’s been speculated to be from research, and also hypothesized to be from vaccine trials in China that used samples of the old virus to test new vaccines. If so, that’s human action causing this virus to exist at that time and then wreak global havoc. And that sort of thing could be worse.
Carl Shulman: So the genome of the 1918–1919 flu, which is said to have killed maybe 50 to 100 million people (that number gets thrown around, but there’s a lot of uncertainty about the exact figure), has been published. It killed an enormous number of people, and its genome is available to any bioweapons program in the world that wants to reboot it. So you have examples of diseases that are not currently circulating coming out of labs. But what we haven’t seen is someone actually engineering a novel pathogen that then gets out. And as far as we know, there have been very few opportunities for that to happen yet.
Rob Wiblin: I had never heard of that 1977 flu before. I guess that suggests that if there are any labs out there storing old flu samples from many decades ago, and I imagine there must be a bunch of them, they should be taking super strong precautions to make sure those samples don’t escape, because they could effectively just cause the same flu pandemic again now that all of the immunity has waned in the meantime.
Carl Shulman: That is a possibility. And in some cases you have close enough relatives circulating to protect against it. With flu, we need new vaccines every year because of the continuous shuffling around, so there’s some threat of that sort. And there have been other releases, like foot-and-mouth disease. There are articles we can link to: the Bulletin of the Atomic Scientists and other sources have several times catalogued these accidental releases. Usually they have not been catastrophic, because the organisms are already out there in the world. But consider a program that is trying to make new viruses, which was the objective of the Soviet program: they wanted to enhance existing pathogens, or recombine them with genetic engineering, to make things that would beat vaccines, be more contagious, be more fatal.
Gain-of-function research [01:34:22]
Rob Wiblin: Yeah. Okay, so a completely different kind of program from these bioweapons programs is scientific research: biomedical research in which scientists try to make new viruses that have additional or different capabilities from wild viruses, so-called gain-of-function research. From the above, it sounds like even labs that are reasonably well run still leak samples and viruses at a rate on the order of, I guess you mentioned, one in 500 lab-years. Maybe things have gotten a bit better now, so perhaps it’s more like one in 1,000 or one in 2,000 lab-years. But if these leaks are happening at anything like that rate, it suggests it might not be so wise to be creating and storing viruses capable of causing a global pandemic in any labs at all, because perhaps we just haven’t yet figured out how to reach the standard of safety that would be required to ensure there’s no way they can escape into the wild and cause huge amounts of damage.
Carl Shulman: Yeah. So, the safety still seems poor, and it’s not something that has gone away in the last decade or two. There’ve been a number of mishaps. Just in recent years, for example, those multiple releases of, or infections of SARS-1 after it had been extirpated in the wild. Yeah, I mean, the danger from that sort of work in total is limited by the small number of labs that are doing it, and even those labs most of the time aren’t doing it. So, I’m less worried that there will be just absolutely enormous quantities of civilian research making ultra-deadly pandemics than I would about bioweapons programs. But it does highlight some of the issues in an interesting way.
Carl Shulman: And yeah, if we have an infection rate of one in 100 per worker-year, or one in 500 per laboratory-year, and the infection is with a new pandemic pathogen… In a lot of these leaks someone else was infected too, though usually not many others, because the agents don’t have a high enough R naught. So you might say something on the order of one in 1,000 per year of work with this kind of thing for an escape, and then there are only a handful of labs effectively doing this kind of work.
Carl Shulman: So you wouldn’t have reliably expected any catastrophic releases to have happened yet. But if you scaled this up and had hundreds of labs doing pandemic-pathogen gain-of-function kind of work, where they were actually making things that would themselves be ready to cause a pandemic directly, then that cumulative threat could get pretty high.
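To make the "cumulative threat" concrete: a small per-lab-year escape probability compounds across labs and years as 1 - (1 - p)^(labs x years). Here is a minimal sketch using the roughly one-in-1,000 per-lab-year figure discussed above; the lab counts and time horizons are purely illustrative assumptions.

```python
# Chance of at least one escape of a pandemic-capable pathogen, compounding a
# small per-lab-year risk across many labs and many years.

def prob_at_least_one_escape(p_per_lab_year: float, labs: int, years: int) -> float:
    return 1 - (1 - p_per_lab_year) ** (labs * years)

p = 1 / 1_000   # per-lab-year escape risk discussed in the conversation
for labs in (5, 50, 200):            # hypothetical numbers of labs doing this work
    for years in (10, 30):
        risk = prob_at_least_one_escape(p, labs, years)
        print(f"{labs:>3} labs over {years:>2} years -> P(at least one escape) ~ {risk:.1%}")
# A handful of labs keeps the cumulative risk modest; hundreds of labs over
# decades pushes it toward near-certainty.
```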
Rob Wiblin: So, we’ve learned from COVID-19 that a global pandemic can cause on the order of tens of trillions of dollars of economic damage, and maybe something similar again in terms of death and loss of quality of life. If the risk of a leak is even on the order of one in 1,000 per year, then that corresponds to at least tens of billions of dollars of expected damage per lab-year. And even if you managed to make these labs 10 times safer than that, down to one in 10,000, we’re still talking about billions of dollars of expected damage per lab-year from the risk of an escaped pathogen. So you would really want to be pretty confident that this research is extremely valuable, or that you’ve far exceeded any historical record of lab safety, for it to seem worthwhile.
Carl Shulman: Yeah. So take it down to a one in 10,000 leak risk, and then look at COVID as an order of magnitude for damages: $10 trillion, several million dead, maybe getting to around 10 million excess dead. And of course these things could be worse; you could have something that did 50 or 100 times as much damage as COVID. But one 10,000th of a $10 trillion burden, or of 10 million lives, is a billion dollars and 1,000 dead. That’s quite significant. And you could imagine that these labs had to get insurance, in the way that if you’re going to drive a vehicle, where you might kill someone, you’re required to have insurance so that you can pay to compensate for the damage. If you did that, then you might need a billion dollars a year of insurance for one of these labs.
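Written out explicitly, the expected-damage arithmetic in this exchange looks roughly like the sketch below, using the round numbers from the conversation: COVID-scale damage of about $10 trillion and about 10 million excess deaths per escape.

```python
# Expected annual damage from one lab doing pandemic-pathogen work, at the
# leak rates and COVID-scale damages discussed above.

damage_dollars = 10e12   # ~$10 trillion in economic damage per escape
deaths = 10e6            # ~10 million excess deaths per escape

for leak_risk_per_lab_year in (1 / 1_000, 1 / 10_000):
    expected_dollars = leak_risk_per_lab_year * damage_dollars
    expected_deaths = leak_risk_per_lab_year * deaths
    print(f"Leak risk {leak_risk_per_lab_year:.4f}/lab-year -> "
          f"~${expected_dollars / 1e9:.0f}B and ~{expected_deaths:,.0f} deaths expected per lab-year")
# At one in 10,000 per lab-year, that's roughly a billion dollars and 1,000
# deaths in expectation -- the scale of insurance Carl suggests such a lab
# would need to carry.
```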
Carl Shulman: And now, there’s benefits to the research that they do. They haven’t been particularly helpful in this pandemic, and critics argue that this is a very small portion of all of the work that can contribute to pandemic response and so it’s not particularly beneficial. I think there’s a lot to that, but regardless of that, it seems like there’s no way that you would get more defense against future pandemics by doing one of these gain-of-function experiments that required a billion dollars of insurance, than you would by putting a billion dollars into research that doesn’t endanger the lives of innocent people all around the world.
Rob Wiblin: Something like funding vaccine research?
Carl Shulman: Yeah. It would let you do a lot of things to really improve our pandemic forecasting response, therapeutics, vaccines. And so, it’s hard to tell a story where this is justified. And it seems that you have an institutional flaw where people approve these studies and then they maybe put in millions of dollars of grant money into them, and they wouldn’t approve them if they had to put in a billion dollars to cover the harm they’re doing to outside people. But then currently, our system doesn’t actually impose appropriate liability or responsibility, or anything, for those kinds of impacts on third parties. There’s a lot of rules and regulations and safety requirements, and duties to pay settlements to workers who are injured in the lab, but there’s no expectation of responsibility for the rest of the world. Even if there were, it would be limited because it’d be a Pascal’s wager sort of thing, that if you’re talking about a one in 1,000 risk, well, you’d be bankrupt.
Carl Shulman: Maybe the US government could handle it, but for the individual decision-makers, it’s just very unlikely to come up during their tenure, and certainly not from a particular grant or a particular study. It’s like if there were some kinds of scientific experiments that emitted massive, massive amounts of pollution, and that pollution was not considered at all in whether to approve the experiments: you’d wind up getting far too many of those experiments done, even if there were some that were worth doing at that incredible price.
Rob Wiblin: So I guess it could be worth doing if this kind of research had a track record of massively drawing attention to very specific risks that were particularly likely to eventuate, and then causing us to respond to them ahead of time to foresee them and then prevent it from happening. But I’m not aware of gain-of-function research, at least to date, inspiring that kind of massive response that might be helpful.
Carl Shulman: Yeah, so the particularly hazardous hard core of this work does not seem to have yielded any particularly great benefits, and certainly nothing in line with the costs I was just describing. If you were going to make an argument for it, it would have to be something like: the world should be spending much more on pandemics (I think that’s right), and doing this incredibly risky stuff that endangers all the people of the world will wake them up to the greater natural danger. But I mean, I can’t accept that argument.
Rob Wiblin: It seems like there must be a better way.
Carl Shulman: It’s certainly possible there could be circumstances where the benefit is vastly higher, the risk is lower. So far, these things seem to be mostly not passing that, and not something that you would fund if you had to internalize those externalities. But yeah, I would leave space open to the possibility of that in the future.
Rob Wiblin: This seems like a case where… Yeah, people have this natural tendency to round down risks that are very low all the way down to zero. And this is a case where that really bites and damages the analysis. So, it’s like you might think, “Well, these labs are extremely secure,” and by any normal human standard, they are extremely secure and the risk of something going wrong is very low. But the problem is, even one in 10,000 — which is a very high bar, something that humans very rarely meet — even that’s not enough to suggest that the expected damage from a possible leak is less than billions of dollars, which is vastly larger than the amounts being spent. So, it really matters. Is it like one in 1,000, one in 10,000, one in 100,000? You have to be precise about the risk.
Carl Shulman: Yeah, and I think that’s the problem, or a central part of the problem, for motivating the individual labs and scientists. It’s just sufficiently unlikely that they can mostly ignore it for the purposes of their career, while the benefits of doing the research, in terms of getting a publication or getting some grant money, are right in front of them. And the same goes for some other kinds of dangerous research, like this horsepox assembly paper, where a small lab showcased the methods to assemble horsepox, a close relative of smallpox, in a way that could have been used to resurrect smallpox without having samples.
Carl Shulman: So that’s again imposing this risk on the rest of the world, by sharing the knowledge and some of the methods involved. And they had financial interests: this was in support of a commercial interest in developing and selling new smallpox vaccines. If you have a fairly likely gain that would make a substantial difference to your career, finances, or business interests, and then you have a one in 1,000 chance of… even if it’s a one in 1,000 chance of you being killed, at a typical value of a statistical life that might be something like $10,000 in cost to the person making the decision, plus some increased risk of them suffering other consequences from the release.
Carl Shulman: Even assuming that they die in the event of the release, still they would come out way ahead by going forward with the research. So, you really need input from society, and just the private incentives to avoid risk there are just orders and orders of magnitude too small.
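Here is a rough sketch of the private-versus-social incentive gap Carl describes. The value of a statistical life of roughly $10 million is an assumption consistent with the "$10,000 at one in 1,000" figure he gives, and the $10 trillion damage figure is the COVID-scale number from earlier.

```python
# Private vs. social expected cost of going ahead with a risky experiment:
# the researcher bears roughly (release probability x their own mortality cost),
# while society bears (release probability x global pandemic damage).

p_release = 1 / 1_000                  # illustrative release probability from the discussion
value_of_statistical_life = 10e6       # ~$10 million (assumed)
global_damage = 10e12                  # ~$10 trillion, COVID-scale damage

private_expected_cost = p_release * value_of_statistical_life   # ~$10,000
social_expected_cost = p_release * global_damage                # ~$10 billion

print(f"Private expected cost: ~${private_expected_cost:,.0f}")
print(f"Social expected cost:  ~${social_expected_cost:,.0f}")
print(f"Ratio: ~{social_expected_cost / private_expected_cost:,.0f}x")
# The social stake is about a million times the private one, which is why the
# private incentives to avoid the risk are "orders and orders of magnitude too small".
```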
Rob Wiblin: This has reminded me, we have this episode from April 2018, Dr. Owen Cotton-Barratt on why daring scientists should have to get liability insurance, which talks about exactly this issue. Hopefully it might get a little bit more attention in future.
Carl Shulman: Yes, yes. And yeah, that work was performed by my colleagues at FHI with some support from a foresighted insurance company. And I don’t think that policy has been adopted, but it’s a helpful device to illustrate how strange some of these decisions are.
Solutions for bioweapons and natural pandemics [01:45:31]
Rob Wiblin: So, this has been a super interesting detour into a bunch of the history here and a very interesting cost-benefit analysis. What might all of this imply about what some of the most valuable things to do to reduce the risk of bioweapons might be? Or I guess also on natural pandemics?
Carl Shulman: Yeah. So, I think the scary thing is if we consider an active large bioweapons program that isn’t just working with natural pathogens. So for the reasons we discussed before, natural pathogens were not set to get out and cause these huge pandemics, especially natural pathogens that the staff were already vaccinated against. And so, the worry is going to be that technology continues to advance and you’re no longer limited by the natural pathogens. Then there’s another Soviet-scale or even larger bioweapons program, and they’re working with many things that are highly fatal, highly pandemic, and you could have hundreds of lab years per year working on that kind of thing, depending on the ratio. It’s hard to know exactly how much work they would put into pandemic things, because with anthrax, you can aim it. You can say, “You’re going to attack this area with anthrax.” With pandemic pathogens, they’re going to destroy your own population unless you have already made a vaccine for it.
Carl Shulman: And so the US eschewed weapons of that sort towards the end of its bioweapons program before it abandoned it entirely, on the theory that they only wanted weapons they could aim. But the Soviets did work on making smallpox more deadly, and defeat vaccines. So, there was interest in doing at least some of this ruin-the-world kind of bioweapons research.
Carl Shulman: We don’t know if there are illegal bioweapons programs like these now. If they exist or come into existence, we don’t know how large they will be, and we’re not sure how large a share of their activity they would put into creating these world-ruining types of pandemic weapons that you can’t aim. But none of those conditions seems vanishingly unlikely, because we did see huge illegal bioweapons programs, ones we now know about, for most of the history of this bioweapons treaty. And they did put effort into all of the pandemic pathogens that they had available to them. So if they used 100% of the pandemic pathogens they had available, that doesn’t let us rule out that they’ll do more when they can make more.
Carl Shulman: Then yeah, you’ve got this scary world where, given those conditions and given the observed accidental release rates, it’s not like, “Maybe this will happen this century in a war,” in the way we might think about nuclear weapons use, where someone, say the leader of a country, intentionally gives the order to release a pandemic bioweapon arsenal. For that kind of intentional use we might say, “Yeah, maybe we can get a lot of decades, or a century or more, without that happening.” We’ve gone a long time since nuclear weapons were last used in war.
Carl Shulman: The worry would be that with these observed accident rates, you would actually expect the things to get out within a couple of decades. You might hope that they would take much better safety precautions than they have before, and you might hope that earlier experience with near misses in such a bioweapons program would cause them to shape up. But that’s the worry that I would highlight for bioweapons: this advancing technology. If you had the capacity to introduce new pandemic pathogens, combined with something like the Soviet history, then that looks like a world where biodisaster happens, probably soon. And so hopefully some of those assumptions are false.
Rob Wiblin: Yeah, okay. Yeah, I suppose you’ve probably heard the episode with Andy Weber or at least read a summary of it, where he’s very bullish on doing mass ubiquitous genetic sequencing to detect new pathogens early and also massively scaling up our ability to quickly deploy new mRNA vaccines, maybe other vaccines as well. I’m guessing you feel good about that policy proposal? Are there any other kinds of things that you’d want to highlight as really valuable things, perhaps for the US government to fund to reduce the kind of risk that you’ve just been talking about?
Carl Shulman: Yeah. So the universal flexible sequencing, it’s definitely more valuable to have things that can work immediately on new pathogens and pathogens that you haven’t seen before, compared to having the same amount of testing apparatus and tests made specifically to a particular pathogen. So, that’s an advantage. It does get quite expensive because if you have a new pathogen and it’s doubling every week, say, or less, then the number of places you need to sample simultaneously to find it grows exponentially. So, every doubling earlier you want to detect it, then you need twice as much. I mean, this is why it’s harder to detect SARS-CoV-2 infection right after you’ve been infected, because there’s so few viral particles in your body. That’s similarly true at the population level.
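The scaling Carl mentions can be made concrete: to detect a pathogen with, say, 95% probability when a fraction p of people are currently infected, you need on the order of 3/p random samples, so each additional doubling time of earlier detection roughly doubles the required sampling. A minimal sketch, with prevalence values chosen purely for illustration:

```python
import math

# Number of random samples needed to detect at least one infection with 95%
# probability at a given prevalence p: solve (1 - p)^n <= 0.05, i.e. n ~ 3/p.

def samples_needed(prevalence: float, detect_prob: float = 0.95) -> int:
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - prevalence))

prevalence = 1 / 1_000_000   # illustrative starting point: one infection per million people
for doubling in range(5):
    p = prevalence * 2 ** doubling
    print(f"Prevalence 1 in {round(1 / p):>9,} -> ~{samples_needed(p):>9,} samples for 95% detection")
# Each doubling of prevalence halves the sampling requirement; equivalently,
# catching the pathogen one doubling time earlier requires about twice as many samples.
```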
Carl Shulman: So I think we should think about it as: it’s worth it to have systematic surveillance anyway. It pays at the level of individual doctors’ offices and hospitals to be getting systematic disease information, since you could do things like target antibiotics or antivirals appropriately to the organism that’s actually infecting people, and then treat it as worth going all-out on at the national level.
Rob Wiblin: I guess, a very natural thing to focus on given what we’ve been talking about, is just trying to improve the security and containment on facilities that are working with really dangerous pathogens. Is there much of use that can be done there?
Carl Shulman: Yeah. I mean, it seems very likely. There have been many areas of human activity where people have drastically reduced accident rates or improved efficiency: things like car manufacturing, Six Sigma methods, the use of checklists to avoid accidentally exposing people to bacteria in the course of medical treatment. So you could do more rigorous research into the workings of these high-BSL labs, finding all of the places where things can go wrong and how often they go wrong due to human factors, and then finding ways of setting them up to remove the steps where a human can screw things up or where ventilation can fail. So many of these accidents are ventilation failures plus some human error. So you could do that kind of work.
Rob Wiblin: Yeah. As far as I know, civil aviation has just been incredible in its ability to continuously improve its safety record. I guess part of the philosophy that they have there is that you never blame it on human error or never blame any person for making a mistake, because if there was an opportunity for a person making a mistake to cause a terrible outcome, then it’s the system that’s broken and you have to find some way to fix it because people are just always going to make mistakes. Do you know if there’s perhaps enough of that mentality within the biosecurity establishment?
Carl Shulman: Yeah well, so biosecurity and biosafety are not identical. Yeah, I mean, one worry is this sort of risk compensation where you have the example of nuclear codes all being set to, I think it was 0-0-0-0, because the service carrying the weapons didn’t like the idea of having a delay to input the code. So, when people feel that the threat of a pandemic escape is low, if it’s one in 500, one in 1,000, it’s hard to get a lot of psychological momentum behind it. And so nominally protective measures —
Rob Wiblin: People start cutting corners.
Carl Shulman: Yeah, they will tend to cut corners at the margin. One thing that the aviation case I think has done well is very rigorously tracking incidents. And so you can then improve on the sort of precursors to an actual disaster, and you can have one engine fail before two engines fail. You can have a bad reading on one of the engine sensors and scale that out.
Carl Shulman: It seems like the reporting systems are not as good as you would like for this kind of work, and so you could improve there. Yeah, taking lessons from that kind of safety culture and standardization around it, it seems, could be helpful. Also, just increasing the safety standards for certain kinds of work. So right now, it seems like there are different safety requirements for different sorts of pathogens. You’d think that a pandemic release that would cost $10 trillion should have a greater level of safety requirements than anthrax that might kill one worker there.
Rob Wiblin: Yeah, you would think.
Carl Shulman: You would think, but these things are not perfectly aligned, and they become orders of magnitude less well aligned for the kinds of work that are in danger of releasing a pandemic, because that’s just so much worse than any of the other kinds of harm a lab accident can do. Some of that work even happens below the maximum existing biosafety level.
Rob Wiblin: Right. Yeah, I’ve heard a little bit about this and so I think one problem is that the kind of cost-benefit analysis that goes into deciding how much risk are we going to tolerate focuses mostly on the immediate damage that comes to people exposed in the lab, who then might get sick, and then maybe people who they directly affect, rather than the world-scale effect of what if the pandemic gets everywhere?
Rob Wiblin: Oh, and I think there’s maybe also an issue where, if you’re dealing with a pathogen that might be capable of causing a global pandemic, but hasn’t yet been demonstrated to do so, or hasn’t done so in the past, it’s not necessarily classified as maximally dangerous. So you could have a pathogen that hasn’t yet been characterized, that, through what you’ve done or through some chance event, has become much more contagious and now escapes existing human immunity. But until you’ve proven that, it doesn’t necessarily have to go in the very top facility. Is that right?
Carl Shulman: Yeah, that’s exactly right. So, I mean, there are still safety standards for that work, but they are a lower safety standard than, say, for working with smallpox or Ebola, and that’s just wrong. The expected damage from research on Ebola in labs is just microscopic compared to the expected damage of unleashing major pandemics. So if you’re doing work that’s just inventorying many wild viruses, and certainly with active gain-of-function or bioweapons research, the expected damage is orders of magnitude higher, whereas Ebola is not going to spread in an epidemic in rich countries; we’ve seen that incoming cases are relatively quickly intercepted and contained.
Rob Wiblin: Yeah. It’s interesting. I feel like on the show, a number of times before, I’ve complained about how some areas of research, especially human subject research, are incredibly bogged down and slowed by ethics concerns and concerns about potential harm people might suffer, even when it seems like there’s not really any plausible significant harm and everyone’s consenting to participate. And yet here, it seems like the only people who get much consideration in the decision-making process are the staff at the lab and whether they’re going to get sick or not, while the much bigger picture is really missed.
Carl Shulman: Yeah. I mean, that’s an exaggeration in that… So there are higher requirements for things that are airborne and transmissible on average, and certainly there’s a difference in concern. It just fails in proportionality when the expected damage caused is orders and orders of magnitude higher, but then the increment of safety is so moderate. There was a nice tweet by Daniel Eth recently that compared the case of challenge trials, where you have people who are enthusiastically consenting to participate in a trial where people are given a vaccine and then exposed to COVID, so you can quickly determine whether it works. If we had done this early on in the pandemic, we could have had vaccine distribution months earlier, millions of deaths prevented, enormous economic gain, trillions of dollars. So, that’s a case where you have enthusiastic consent from people to save lives. The risk is lower than what —
Rob Wiblin: The risk is very small.
Carl Shulman: Yeah, it’s lower than what healthcare workers and grocery store workers are undertaking because they would do it with young, healthy people. And yet, there were sort of qualms about how thorough was the consent, and you know, you’re harming people, even though they’re choosing to take the risk to help people. And we wind up not doing the challenge trials and having millions of deaths and trillions of dollars of additional loss.
Carl Shulman: But then you have areas like gain-of-function research, or, as you mentioned, inventorying new viruses coming in from animal reservoirs that are not yet known to be pandemic-capable, because they haven’t previously had a chance to become pandemic. And there you’re exposing billions of people to this risk without their consent. And we know a very large fraction of them reject the research when you poll people about this; even informed people reject being exposed to this risk. They say, “Look, the benefits to me are not worth these costs, by a lot.” And yet the researchers are free to take this risk with everyone else.
Rob Wiblin: Yeah. Sometimes the world really is just crazy, or at least inconsistent.
Successes and failures around COVID-19 [01:58:26]
Rob Wiblin: So, I guess the experience with COVID-19 has potentially taught us a bunch about how well society functions and how well it responds when there is a new pandemic. I’m not quite sure myself whether the update is in a positive or negative direction. I guess there’s been a bunch of ways in which we did a whole lot more maybe, than you might worry that we would do, but there’s also been really major failings as well. Overall, do you think the amount of effort that we put into containing and preventing damage from COVID was more than you would expect, or less?
Carl Shulman: Yeah, so as you say, there’s a mixture. It’s certainly a drastically greater response than we’ve seen to other diseases in the past. So a lot of prior public health wisdom, I think, has been violated, and a number of things that effective altruist authors had written about pandemic response, which at the time went against conventional wisdom, have now been embraced.
Rob Wiblin: Yeah. If we had a much worse pandemic, something that was even more contagious perhaps, and had a fatality rate of 10% or higher, how well do you think we would cope, given the experience we’ve had over the last 18 months?
Carl Shulman: Yeah, I think the data show that the willingness to spend trillions of dollars is there, and the willingness to suspend and adjust ordinary activities is there, at least for much of the world, and in some cases it’s very strong. So the fact that some countries have basically weathered COVID unscathed gives you a stronger argument against existential risk from anything that’s susceptible to these sorts of countermeasures. But the big problem was that not enough of the response was aimed directly at solving the technical problems and directly defeating the virus, and that we didn’t more efficiently suspend or get around various regulatory barriers to the response.
Carl Shulman: And so when I think about an engineered pandemic that kills 50% or 90% of people infected and was more contagious than COVID, then I’d see more change in personal behavior. So, in this pandemic, young people could accept a bounded risk of just fully exposing themselves and allowing for transmission, and then fatalities concentrated in the elderly and vulnerable. If the thing was more consistently fatal, then you’d see more compliance with measures to reduce exposure. I think you’d get more suspension of the regulatory stuff, not all the way to what you should, but maybe more like wartime responses. Like, World War II, you did have a lot of regulatory barriers removed successfully, although not all.
Carl Shulman: And I suspect that there will be some update from this experience in favor of doing the medical countermeasures more vigorously and more quickly. I mean, there’s a lot of movement in that direction and they would certainly have an easier time with a more severe pandemic 100 times as fatal. The individual decision-makers and politicians would have a much greater engagement with a pathogen: “Will this kill me personally and everyone I love?”, rather than seeing it more in political and career terms. Not that politics is ever removed. But yeah, so I’d see mass mobilization and I’d say this has removed the worst stories of “We will just completely not respond at all.”
Rob Wiblin: Or just socially disintegrate.
Carl Shulman: Yeah, some effective altruists from around your neck of the woods had done some analyses about travel bans for New Zealand in the event of a severe pandemic. And contra pre-pandemic conventional public health wisdom, for a severe enough pandemic it can indeed be worthwhile to have restrictions on incoming cases. Likewise, countries with prior exposure to SARS had stronger cultures around masks and whatnot, and more effective virus responses. So you can see some accumulated pseudo-knowledge being removed, and a nudge toward more effective cultural responses.
Rob Wiblin: Yeah. So it seems like the effective altruism and rationalist communities have had some wins with regard to their response to COVID-19, as well as some mistakes and some misses. I’m curious to hear what you think are some instructive examples of those. Maybe let’s do the wins first.
Carl Shulman: Yeah. So on the win side, there’s something to just… yeah, this is evidence that attention to pandemics was a good move. I don’t think it should update our estimates of the risk of pandemics particularly drastically; we already expected things like this to happen. Concretely, in improving the response, I’d say one of the most distinctive things was that a lot of EAs were supportive of challenge trials or signed up to participate in them. And EA had already been into that; a lot of EAs had volunteered for malaria challenge trials in the past. So that’s good. The leading actors in driving that forward, like 1Day Sooner, were non-EAs who had taken it up, and then a lot of EAs were supportive of it or advocating for it. But yeah, so that went forward.
Carl Shulman: And if it had happened in the strongest version, that could have been dramatic: it could have cut the damage of the pandemic by more than half. If we had run intense, quick challenge trials in the first months of 2020, and shown that the vaccines work, as we now know they do and as scientists commonly thought they would earlier on, and then combined that with massive vaccine orders, we could have gotten to much, much higher vaccination rates much earlier than today, and averted most of the deaths heretofore and probably most of the deaths that are upcoming. But that didn’t happen. Even though challenge trials ultimately did go forward, it turned out it was still a difficult enough lift to get them through, and there were enough additional barriers around things like: how do you do a challenge trial quickly? How do you get approval to culture SARS-CoV-2 and infect people with it?
Carl Shulman: Yeah. And so we did wind up having challenge trials, but they did not expedite proof of efficacy for the core vaccines. There’s still room to benefit from them on things like experimenting with different dosage levels and such. But basically it was success in moving challenge trials toward realizing their potential, or at least support of that correct movement, which came from non-EAs as well. So that’s one front where it’s not a massive success in hand, but it’s progress.
Rob Wiblin: I think from fairly early on, I got the sense that the challenge trials were going to take so long to get approved, that this was more of a play for the next pandemic. That we had to figure out some way to do it this time, so that we would have the infrastructure and the approval so that hopefully we could do it in a far more expedited fashion when the next pandemic comes along.
Carl Shulman: Yeah, I think that’s right. And then there were some other long-term plays they’d made previously, like funding flexible diagnostics. Sherlock, for example, was a technology being supported by Open Philanthropy in part in the name of its application to future pandemics. In fact it was not yet fully developed, and didn’t have a big track record, when the pandemic hit, and it’s taken most of the pandemic for it to get through to approval. But on the other hand, we’ve seen the benefits of mRNA vaccines, which are an example of a flexible vaccine platform, the kind of thing that people worried about pandemics, especially engineered pandemics, have been keen on for a long time, because it improves your ability to react to new diseases that you’re not anticipating.
Rob Wiblin: Are there any cases where people put forward ideas that maybe in retrospect just look misguided?
Carl Shulman: Yeah, so certainly there were a bunch of common misconceptions, and you’ve seen conventional wisdom pivot back and forth. For example, Western medical authorities eventually adopted the view, already held in the previously SARS-affected areas, that masks are a good idea given the state of the evidence, things like that. And those things were not always gotten right by people in that community either, although on average I think you did better following some of the EA and rationalist sources than following the average source. Still, they did not get all of those right from the start. And then at the personal level, I think EAs did a great job with microcovid.org, which actually helps people calculate their risk of COVID exposure from different activities. But you also had people who I think went overboard in reducing their COVID risk in the face of uncertainty, which can be a reasonable reaction to uncertainty, but I think some took it too far.
Rob Wiblin: Yeah. You’re thinking of people who were so worried about COVID that they paid too high a price in terms of social isolation, or other costs to their ability to get things done.
Carl Shulman: Yeah, that’s right. This is driven by things like being uncertain about a study that says, or provides weak evidence, that transmission by mechanism X is low, and then, to cover their bases, not engaging in that activity at all. Or they take the uncertainty about long-term effects of the illness, which are hard to assess in the short term, and then take extra precautions. And of course that makes sense up to a point: we do find long-term effects in other diseases sometimes, and sometimes the long-term effects are greater than the short-term effects. Like human papillomavirus, which causes a great proportion of cervical cancer: you initially get what seems like a mild infection, and then the cancer comes later. But I think some people still went too far on that front. Or, if they were going to take that many precautions, they should have then acted to reduce their uncertainty faster.
Rob Wiblin: Yeah. I guess the thing with uncertainty is that it potentially cuts both ways: the long COVID stuff could be worse than it initially seems, or maybe it could be less bad.
Carl Shulman: Well, I mean, although… it raises the expected harm. If you’re starting off from a low central estimate and your uncertainty extends over significantly higher values, then that drives the expectation up.
Rob Wiblin: Yeah. That’s fair enough.
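A tiny numerical illustration of Carl's point that uncertainty skewed toward worse outcomes raises the expected harm even when the central estimate is low. The probabilities and harm multiples below are made up purely for illustration.

```python
# If your central estimate of long-term harm is low but you put some weight on
# scenarios where the harm is much higher, the expectation can end up several
# times the central estimate. All numbers here are illustrative.

scenarios = [
    (0.80, 1.0),    # 80% chance: harm equals the low central estimate (normalized to 1)
    (0.15, 5.0),    # 15% chance: harm is 5x the central estimate
    (0.05, 20.0),   # 5% chance: harm is 20x the central estimate
]

expected_harm = sum(prob * harm for prob, harm in scenarios)
print(f"Expected harm relative to the central estimate: {expected_harm:.2f}x")
# ~2.55x: upside-skewed uncertainty raises the expectation, which is why it
# doesn't simply "cut both ways".
```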
Who to trust going forward [02:09:09]
Rob Wiblin: Has our experience over the last 18 months made you trust experts more, or trust them less? I’ve thought about this a lot, and I feel like I don’t really have a clear picture. It seems very complicated, because some experts within some categories of knowledge seem to have done very well, for some it’s been mixed, and for some it’s been quite disappointing. I don’t feel like I could easily sum up what we’ve learned about who to trust in difficult, uncertain situations going forward.
Carl Shulman: Yeah. I agree it’s complicated, and you’ll get a more accurate model by attending to the particular characteristics of different sources of advice: what their knowledge level is, what their incentives are. One update, I think, is that a lot of these institutions have a very difficult time speaking forthrightly to the public. There’s a lot of concern with using statements about the world to manipulate people, usually for their own good or for their collective good. But that means you can’t actually take at face value statements that are being made with the concern that, say, some fraction of the population will react in an irrational or bad way to some information. And so an institution that is worried about some backlash against it, or is risk averse in other ways, and is just trying to manage this collective thing, then thinks, “We can’t be forthright about certain things.”
Carl Shulman: And an example of that early on was the sort of confused line about masks in Western countries. Where Asian countries with experience with SARS were gung ho, yeah masks. There’s like basic physics, why it should work.
Rob Wiblin: Yeah.
Carl Shulman: Western authorities went more off of limited and underpowered studies that weren’t able to clearly show a benefit, but also didn’t clearly rule one out. And then you got statements coming out, like people being told to stop buying masks: “They don’t work, and they’re needed for our health workers.” Which is sort of a confusing statement, because shouldn’t they work for the health workers? And the reconciliation was, well, they work a certain amount, and they work better if you do the difficult task of properly fitting them. But they do still work, as Western countries have now collectively come to believe.
Rob Wiblin: Yeah, there was a brief period, I think in March or April, where it was actually illegal to advertise face masks in the UK as preventing disease, on the basis, I think, that it was misleading advertising. So you couldn’t sell or advertise face masks to the public on the basis that they would help with the COVID-19 pandemic. And then I think by May it was mandatory to wear them on buses and in lots of public spaces. So quite a rapid flip there. And I think less of the decision-making process for how slow the UK was to come around to something that the great majority of the world had already recognized months before: that masks do, in general, help.
Carl Shulman: Yeah. A lot of those problems are made more difficult by the existence of bad actors. There are scammers and predatory companies that will sell snake oil, and if you didn’t have them around, it would be easier to have this straightforward discourse, and for people just to talk about their best estimates on uncertain issues. So I understand the pressure to regulate claims, but it becomes a pressure to avoid ever actually stating useful, actionable advice about anything, and clearly in aggregate that’s causing problems for the system. To get accurate advice, people had to go to things like the social media accounts of individual scientists, who, freed from some of those bureaucratic and social constraints, could actually just state the best understanding of the current evidence, or to things like microcovid.org. Rather than a defensive epistemology, where every statement is overwhelmingly organized around how folk are going to misinterpret it and attack it, and you can’t have a direct conversation of the sort you can have between two parties who trust each other.
Rob Wiblin: Yeah. All right. That’s been a long detour into bio, which I think was worthwhile given that you know so much about it. Just briefly, I was going to ask something about artificial intelligence: how excited are you by mainstream attempts to make AI safer and more robust, like the kinds that are going on at DeepMind and OpenAI?
Carl Shulman: I am delighted that both of those major AI labs have taken on significant safety groups that are attempting to pursue research lines that can scale up to future scenarios. There was a recent paper on learning to summarize using human feedback. That is, instead of giving a simple objective or a limited set of labeled examples, you train one model to predict the feedback a human would give, and then that can extend and amplify the result: most of the time, the agent performing the task is getting feedback from the model of the human, and you only make more limited draws on the scarce, expensive actual human time.
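To make the human-feedback setup Carl describes a bit more concrete, here is a minimal sketch. It is not the code or architecture from the paper he mentions; the network, loss, dimensions, and random stand-in data are all illustrative assumptions. It trains a small reward model on pairwise human preferences, which a policy could then query cheaply instead of asking a human every time.

```python
# Minimal sketch of a learned reward model trained from pairwise human preferences.
# Everything here (dimensions, architecture, random stand-in data) is illustrative.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores an output's features; higher means 'a human would likely prefer this'."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry style) loss: push the preferred output's score above the rejected one's.
    return -torch.log(torch.sigmoid(r_preferred - r_rejected)).mean()

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Stand-ins for feature vectors of (human-preferred, human-rejected) output pairs.
    preferred = torch.randn(32, 64)
    rejected = torch.randn(32, 64)
    loss = preference_loss(model(preferred), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, the agent being optimized can query model(candidate_features) for cheap
# feedback on candidates, with only occasional fresh human labels to keep it calibrated.
```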
Carl Shulman: And so at the moment, the kinds of research that are being done are not close to what seemed like robust solutions or robust assurances. And so still very much the hope is that this field builds up, is able to get its problems more to an engineering level, is able to figure out the thing that it needs to solve in advance if that’s possible. And then can deal with issues that only arise once our systems are more powerful. But I’m glad that some action is being taken even though much more needs to be done and we need actual success in it to assuage my concerns.
Rob Wiblin: Makes sense. Let’s delay on fully diving into the pond of AI because it’s kind of a bit of its own beast.
The history of existential risk [02:15:07]
Rob Wiblin: I’d like to now talk about some research that you’ve told me you’ve done, where I think you and a bunch of colleagues tried to do a sort of intellectual history of the idea of existential risk. You wanted to look back over hundreds, possibly thousands of years of history and look for any significant instance of someone claiming that a particular technology or phenomenon or event could come to have a massive impact on the trajectory of humanity, with either positive or negative effects. For the sake of brevity, I’m tempted to call these events or trends or technologies “really, really big deals.” What was the goal of that background research, trying to itemize and categorize all of these potentially really big deals?
Carl Shulman: Yeah. So I think that’s actually combining several different things done at different places and times. I’ll break them up and then we can go through them. More than a decade ago, when I was first getting into this field and before that, I was interested in what the other potential big deals are, and I wanted two things. First, to make sure I wasn’t missing something huge and more important than what I was currently working on. And second, it’s helpful to get base rates for interpreting certain kinds of evidence. If you have some evidence where it seems like a certain thing is unstable or is going to deliver some huge result, then insofar as it’s possible, you really want to find similar cases, or cases where people thought they had evidence of the same kind, or where it looked similar on various dimensions, and ask, “Well, when ground truth was ultimately found, what was the past success rate?” People sometimes call this “outside view forecasting,” and it can inform our prior and help us find where important things are.
Carl Shulman: And when I’m doing this kind of search, I like to use things like exhaustive taxonomies. So things like going through all of the major scientific fields and, you know, are there risks or transformative things that people have associated with those. Trying to find comprehensive assemblies of major technologies people have discussed and the lists of possible disasters and catastrophic risks people have considered. And then if we go through all of those and see how those lists have changed over time, how the assessment of them has changed over time, then that can give us information about how seriously we should take our object-level impressions about things today.
Rob Wiblin: Okay. So one aspect of it is: if there were lots of times in the past where people suggested that something could be a really big deal, and the argument seemed reasonable to them at the time, but then it turned out that when push came to shove and we really understood things, the odds were very small or the effect would be quite small, then we should be somewhat skeptical of the analogous cases today, where we think we have good arguments that something could be really influential or have a big effect on the trajectory of humanity, like artificial intelligence or biotechnology or whatever. We should be somewhat suspicious of those arguments because we’ve had these kinds of false alarms in the past.
Carl Shulman: That’s one angle. And another angle would be: you don’t want to focus your whole career on something just because it happened to come to your attention, when it turns out there was something else in some other area of work that would deliver the same goods, but more so. You could just wind up there because you happened to already be familiar with that field, or happened to have some personal connection. An example of that is, sometimes people suggest that worry about artificial intelligence is something that comes from Silicon Valley: people in Silicon Valley think computers are powerful and they think about computers all the time, and so when they think about risks, they’re going to think about AI disaster. But sociologically, I don’t think that’s actually what happened, if you trace out the intellectual history of things.
Rob Wiblin: It’s more coming from outside.
Carl Shulman: Well, you did have early pioneers of artificial intelligence and elsewhere noticing the possibility of risks there. But then some of the recent revival of interest was triggered by people like Nick Bostrom, who’s a philosopher, not a computer scientist. And there’s of course criticism in the other direction: well, Nick Bostrom, he’s a philosopher, not a computer scientist. And then you can say, well sure, but von Neumann and Turing and Good and these other luminaries in the 50s and 60s were bringing up the possibility of these risks and issues. So there are bias arguments both ways. It’s interesting to understand what is said by people coming from different field backgrounds, and whether, when you take the broad sweep and look at everything, it is something where, for one reason or another, a professional deformation is causing overestimation or underestimation of the relative importance of an area.
Rob Wiblin: Yeah. Just as an aside, that argument has driven me mad over the years. Because it does have this character of, if you say, “Well, people in Silicon Valley or people who know about computer science and AI, they’re concerned about it.” And some people will dismiss you, saying, “Oh, well, they’re just biased because they self-selected into that because they think it’s a really big deal.” And then you can say, “Oh no, actually it’s like, there’s also a similar number of philosophers or people in medicine or any other field.” They would say, “Well, they just don’t have expertise.” So you end up in this catch-22 situation where nothing could potentially persuade someone who is keen to be skeptical to that degree, that there is a problem here.
Carl Shulman: Well usually it’s not the same people at the exact same time giving both of those arguments. And there are ways of reconciling them, but there are flawed arguments in this area. And I can empathize with your feelings.
Rob Wiblin: Yeah. Okay. Let’s maybe do the second one that you mentioned first. So that was just, are there lots of other things that could be more important than the things that you were already aware of, say in 2010? What did you find in your search?
Carl Shulman: Yeah. So at a very high level, things fall into a limited number of categories. One class is things where intelligent actors wreak havoc. Those include things like a permanent totalitarian regime, or various kinds of transformations of society: people used to be concerned about different religions and empires establishing lasting dominion. Artificial intelligence is in there, and people have had related ideas about things like genetic engineering and the introduction of new beings that way, who might be better or worse in various respects and might drive the future into a bad place, the Brave New World kind of scenarios. There’s a large class of things around war: different weapons, which can all be brought under the broad causal factor of war. Nuclear weapons, biological weapons, and swarms of robotic weapons can all fit into that category.
Carl Shulman: And then there are a number of other things that have been discussed, sort of exotic climate disruption, yada yada. There are biological mechanisms, where the danger reproduces and spreads but is not itself an intelligent actor; big classes of things like that. And then you have a set of things that are mediated through disruptions to our surrounding environment. Climate change and nuclear winter fall into that sort, along with damage to the ozone layer, astronomical events, and other things that would disrupt food production over the whole earth, or things that involve chemicals or poisons permeating the earth. And among the intelligent actors, there are also supernatural beings and aliens and things of that sort.
Rob Wiblin: And I guess, religious doomsday, did that qualify for the list?
Carl Shulman: Yeah. So religious doomsday is a little tricky. By far the vast majority of apocalyptic stories are religious in nature, and historically that’s almost all of them. So then your question is, well, what to make of that? And you can distinguish the category of things that come out of scientific, nonreligious epistemology, or that commanded expert assent based on evidence: claims you didn’t have to be raised under intense social pressure in a religious movement, or brought up in it, to buy. Because your reference class of things for which we have similar evidence would otherwise include all of the different stories about supernatural deities swooping down and destroying everything, and all of the particular instances too, because in some religions that have a doomsday story, every year there will be some preachers and religious groups saying, “Oh yeah, now is the time,” based on some flimsy evidence about faces in the clouds and the stars being right, and things like that.
Carl Shulman: So if our level of evidence for things like pandemics, and nuclear winter, and artificial intelligence disaster were on a level with that for, you know, Zeus is going to have a great flood and destroy life on Earth, then the base rate is super bad. 99.9-plus percent of them are totally bogus. I don’t actually think that the level of evidence for things like climate change and artificial intelligence disaster and so forth are at that level. I think they’re much better in systematic and predictable ways.
The most compelling risks [02:24:59]
Rob Wiblin: Okay. So yeah, you’ve got a bunch of categories there. In general, did you find that there were more or less things on this list than perhaps what you expected going in? And were they as good as or better than the stuff that you already worried about, or maybe less compelling than the stuff that already stood out to you?
Carl Shulman: So in the category of natural disruptions to the environment, there are a lot of particular, fully detailed examples. You can split asteroids and comets, which are different sorts of astronomical bodies, and then there are supernovae, gamma-ray bursts, a wide variety of these things. They’re all so unlikely, though, that for the reasons we discussed earlier, the base rate and the track record are such that as you accumulate them, they don’t really change your estimates much. But we’ve learned more about those as our mastery of astronomy has grown; in the distant past, we couldn’t even see events of this sort.
Carl Shulman: And with respect to things like pollutants, that awareness has developed as we have worked with more sorts of materials and processes, climate change being a striking example. Climate change in one sense was detected very early: you had people like Arrhenius at the end of the 19th century examining it, though his estimate of climate sensitivity was too high. And it didn’t become a widespread concern among the scientific community until some decades later, when their knowledge had advanced more and you actually were starting to observe the global warming happening. So we’ve had a number of discoveries just from learning how the basics of the natural world work, and that’s concentrated in the period of the last century or so, when it’s even become possible to understand a lot of this stuff.
Rob Wiblin: Yeah. Did you get any big updates on how much to worry about the kind of intelligent malicious actor thing or the potential for war? Which, you know, has been one of the most regularly disastrous things throughout history.
Carl Shulman: Yeah. So with respect to war, there’s definitely been some trend over the last few centuries for us to see more possible collateral damage, with nuclear weapons and biological weapons being interesting examples of that. One thing we haven’t seen is as much use of those weapons as people might’ve feared. We talked earlier about Ellsberg’s reports from RAND: initially, people feared these WMD situations would be very unstable, and so far they have been surprisingly stable, at least in the current geopolitical context. So that’s an update. And then with respect to the kinds of weapons that might have broad spillover, a lot of them were mentioned as classes relatively early on. The idea of biological weapons causing pandemics came around shortly after the germ theory of disease, and indeed biological weapons programs were not that far behind in actually trying to make it happen, or at least moving towards exploiting that capability.
Carl Shulman: And likewise, the idea of drone weapons, and even self-replicating ones, was noticed in the early part of the 20th century into the 40s. With nukes, once the physics was known, within a decade or so people had a relatively good understanding, but that came relatively close to the thing actually being developed. So nukes were unusual in that way. With bioweapons or artificial intelligence, the conceptual idea of the thing was known long before, whereas nuclear weapons came relatively quickly from the understanding of radioactivity. So you didn’t have this very long duration of knowledge of the rough kind of mechanism. People did have the idea of superbombs, and indeed we’d built enormous bombs, but nukes were an additional level that was not anticipated.
Rob Wiblin: Yeah. It’s more practical than just lots of TNT stuck together in an enormous package.
Carl Shulman: Yeah. And then as we talked about earlier, nuclear winter was a late discovery. It was well towards the end of the 20th century that it was really grokked as an effect of nuclear weapons. And then there was some modification of views about radiation and other damage and the like.
Rob Wiblin: Yeah. So it seems like you’re worried primarily about artificial intelligence, biothreats, nuclear threats, and I guess maybe great power war, and then maybe climate change. Were those kind of the same things that you worried about before you did this project? Which would then suggest that further research doesn’t massively overturn the conclusions? In fact, maybe the things that seem worst really do stand out from the list, just because there are strong reasons to think that they are the greatest threats.
Carl Shulman: That seems like the broad picture. Definitely the fact that nuclear winter was discovered late is some evidence for novel, unknown kinds of disaster appearing. I’d also add the category of civilization being stabilized and locked onto a bad trajectory. That’s one of the older ones, because it was easier to understand, and certainly early in the 20th century there was a lot of concern about that. And you can think of artificial intelligence as just a particular instance of that. That is, an AI disaster does not lead to the end of civilization on earth, but it potentially leads to a sudden change in the composition and motives and character of that civilization, and perhaps the deaths of a lot of the people, or all of the people, around at the time.
Rob Wiblin: So quite a significant change to the character potentially.
Carl Shulman: Yes, yes. And one where we might expect it to be somewhat worse, and not to share some of the criteria that we use to evaluate things, like, for example, caring about the wellbeing of the creatures in the civilization. There can be civilizations populated entirely by aliens and AI and nonhuman beings that are very good, or much less good, or bad. And if we completely lose control of where things are going, to a perhaps somewhat randomly rolled artificial intelligence program, there are a lot of ways that could go worse as we would evaluate it. So it’s a relatively extreme way in which the trajectory of our civilization could go off track from what we would like. And it could also stabilize in that way, because then you have AI systems that are able to design their own successors and prevent divergence back to where we would steer things, because the steering mechanism has been replaced.
Rob Wiblin: Okay. So maybe nuclear winter stands out as one where it seems like it could have been discovered earlier but wasn’t, and came along a little bit late, indeed after we’d been running the risk of creating it for several decades. So maybe that presages that there could be future things like nuclear winter that we could have identified, but have just failed to for whatever reason. It sounds, though, like the broad list was kind of stable: you didn’t uncover any really important risks that were as yet unrecognized by you and the people you were working with.
Carl Shulman: Yeah. There were some things that stemmed from old erroneous scientific theories or uncertainty. As we talked about, the possibility of a nuclear chain reaction igniting the earth was something that was ruled out by scientific knowledge. So there were some things of that sort. Also, there were a lot of cases where people identified the theoretical possibility of a technology and identified it as a threat, but then the technology takes a long time to reach fruition. And so you can treat that as a negative: people discussed a possible threat from some technology once it reached a certain level of development, and even though it’s advanced, it hasn’t yet reached that level of development. So you can say like —
Rob Wiblin: I guess nanotechnology is maybe in that bucket, where people have worried about ways that nanotechnology could go awry and cause harm. And to some extent, I guess we don’t know whether they’re right, because nanotechnology hasn’t really advanced that much. Though if they’d made specific forecasts about the rate of advancement and when it would have particular capabilities, then they might be wrong.
Carl Shulman: Yeah, some did make forecasts along those lines. I’d say that nanotechnology is an area associated with a certain intellectual community: people interested in advanced technology, space, and artificial intelligence, which you might associate with the broad transhumanist community. And it’s one of the cases where a lot of people were more optimistic about the pace, or just expected more technological progress in that area than there has been. There are some other areas where they’ve done reasonably well. Hans Moravec’s view from late in the 20th century about the key role of computer hardware in AI progress I think is looking a lot better, with the results of the last decade showing systematic success from scaling up, spending more compute on the same problems and cracking a lot of the solutions.
False alarms about big risks in the past [02:34:22]
Rob Wiblin: Yeah. Okay. So that was one angle, trying to find problems to work on, risks to work on that might be even better than ones that you were already aware of. Another one was just looking for the number of false positives in the past. Where someone in your same situation would’ve been incorrectly convinced to work on something that ultimately would turn out to be misguided. Did you find many examples of that? Of false alarms about really big deals or really big risks in the past?
Carl Shulman: Yeah. So interestingly, it depends on how deeply you probe into the particular risk. There are a number of things that have been commonly described as threats to the existence of humanity, or as extinction-level events, when that was never actually particularly present in the scientific literature, or there were never detailed arguments. So it’s a citation trail to nowhere. And we could discuss several of those, probably.
Rob Wiblin: Yeah. Yeah. Give us an example of one of those.
Carl Shulman: Yeah. So one thing that was interesting and informative to me was when I started looking into nuclear risk and nuclear winter. So I’d seen many cases of people talking about nuclear winter as causing human extinction. And then I was intrigued and surprised to hear that when I actually inquired, the authors of the major nuclear winter papers thought the chance of direct human extinction by way of these nuclear exchanges was extraordinarily small. Not like one in 1,000, but much lower than that.
Carl Shulman: And they had a lot of reasons: aquatic life in the oceans, biomass that you can convert, many years of food stocks in scattered places around the world, New Zealand and the lower latitudes surviving relatively well, things of that sort. But as a matter of rhetoric, there’s this tendency for claims to escalate into saying it will lead to extinction or the end of the world, maybe just because that sounds rhetorically powerful. And sometimes what people mean is that it would be the end of our civilization, because half of Americans would be dead and we might change our system of government, or we’d say it’s a new era, things like that.
Rob Wiblin: Yeah. It’s interesting, I suppose. Well, I guess it would be the end of the world for many of the people having those conversations. So it maybe makes sense for them to —
Carl Shulman: Absolutely.
Rob Wiblin: Yeah. So maybe that has some influence. And I guess also the world would have changed so much that it’s the end of our… it’s slightly conflating the end of our species with the end of the current civilizational era, which is understandable. What’s another example of something where, when you dug into it, people would have loose talk in newspapers about extinction, but the scientists knew it wasn’t going to happen?
Carl Shulman: Yeah. So you’re in the UK, and there’s this Extinction Rebellion movement, which is trying to galvanize action to mitigate and avert climate change, which is a great objective. But then they formulate the rhetoric in terms of: humanity is doomed and will be extinguished, everyone will die, your grandchildren will not grow up. And when they’re challenged for being so out of whack with the scientific consensus and asked, “What’s the evidence for the claims?”, they just sort of change the subject and carry on with the rhetoric. So they’re not even in the business of making an argument that this will actually happen. It just sounds good, and so they’ve chosen that as a rhetorical strategy.
Rob Wiblin: I see. Okay. Yeah. Any other notable cases?
Carl Shulman: Yeah. So Toby Ord discusses some of these in his book; I had a bit of correspondence with him and his research team during its creation, which was interesting. There’s a set of resource depletion arguments, which say that some resource X, sometimes it’s phosphorus or fossil fuels, is running out, and then we’re going to collapse. And sometimes people then say that goes all the way to extinction. Usually, again, there’s no citation trail that explains how that would work. And the biggest, most obvious response for a lot of these is: if you have some substitute for that resource that is scalable, and we know how much that substitute costs, then we can say, “Okay, well, if we switched over entirely to the substitute, how much of our GDP would we have to pay for it?”
Carl Shulman: And right now we spend less than 10% of GDP on energy. And there are abundant energy sources: solar, nuclear. They’re somewhat more expensive than natural gas, and you have to use batteries, but they’re not drastically more expensive. And there’s a lot of room for energy efficiency and adjustment, in the same way as with food. More than half the world’s food output goes to food waste or to feeding animals for animal agriculture, and vegetarianism reduces the amount of land you need to produce a certain quantity of calories. So if you combine those, there’s enough slack for food production to fall by half and no one starve, just by feeding the food to humans rather than letting it rot or using it in factory farming and animal agriculture. And with energy, again, if you increase spending on energy from like 10% to 15% of GDP, or take some of the adjustment by using a little less energy, that’s not the end of the world.
Carl Shulman: It’s not human extinction. And for phosphorus, it’s possible to increase recycling, but you can also expend energy to extract it from lower-grade ores and things of that sort. So there’s no obvious way in which this could actually happen for the major resources that people sometimes moot. It’s a meme in certain circles. And certainly it would be bad to have large price rises in some of these commodities, and that sometimes happens. But the story where you go to extinction either doesn’t work, or it has to pass through something else, like saying, “This moderately increases the chance of a nuclear war and biowar all happening at the same time and causing extinction.” It’s not a direct channel to doom.
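As a rough sketch of the substitution argument Carl is making (the 1.5 price ratio is an illustrative assumption chosen to match his 10% and 15% figures, not a number he cites): if a resource currently absorbs a share s0 of GDP, and a scalable substitute costs roughly k times as much per unit delivered, then switching entirely to the substitute costs on the order of

```latex
s_{\text{new}} \approx k \cdot s_0, \qquad \text{e.g. } 1.5 \times 10\% = 15\% \text{ of GDP per year}
```

That is an upper bound that ignores efficiency gains and reduced use, so the true cost would typically be smaller; the point is that it is a bounded economic hit, not a collapse.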
Rob Wiblin: I see. Okay. So it sounds like the big picture is there have been quite a lot of suggested things that could cause massive disasters for humanity or even extinction. But most of the time, they weren’t even really serious suggestions. They were suggestions being made by people who wanted to write an interesting book or publish an interesting article in the newspaper. And if you’d followed up at the time, you could have reasoned it through and realized that this was not a serious proposition.
Carl Shulman: Like the world’s not going to collapse for lack of honeybees. And if the honeybees all disappeared, we get 90% of calories from other things. And you could substitute those crops. But it’s an exciting headline to describe some research about the spread of mites among honeybees as “We’re all doomed, the honeybees are going to be gone.”
Rob Wiblin: Yeah, I think there’s a nice episode of the podcast EconTalk where they talk about what people did to respond to these issues with mites among honeybees, which is a legitimate issue in agriculture that needed to be addressed. And basically, I think across the US they spent something on the order of an extra $1 billion per year growing more honeybees, or just did more to get the honeybees they had to reproduce and build more hives. Maybe it was even less money than that. So rather than causing human extinction, it cost a tiny fraction of GDP, which is quite a large difference. And in any case, as you’re alluding to, almost all human calories come from crops that do not require pollination, like wheat and rice and so on.
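For scale, using a round figure of roughly $20 trillion for US GDP (an assumption for illustration, not a number from the episode), that extra spending works out to:

```latex
\frac{\$1\text{ billion}}{\$20\text{ trillion}} = \frac{10^{9}}{2 \times 10^{13}} = 0.005\% \text{ of GDP per year}
```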
Rob Wiblin: So even the original suggestion that we would starve for lack of honeybees never made any sense whatsoever. Okay. So I guess the general picture coming out of this is: there aren’t a lot of past false alarms that you could imagine having fallen for yourself, and most of the things that you’re worried about you kind of already knew about, and they do stand out because there are stronger arguments to be concerned. So I suppose this would have bolstered your overall confidence that the concerns you had about these disasters, or really big deals, were sound on the base rates.
Carl Shulman: Yeah, I’d say that overall. Although it highlights the remaining areas of uncertainty, like these technologies that have not yet reached the point where they’re hypothesized to cause damage: when will that happen? And the relative magnitudes of risks that are each individually serious. The update about the stability, or comparative stability, of nuclear arms races would be quite a drastic update in terms of how seriously to take that threat versus, say, accidental bioweapon release. This is a range where it really matters if one risk is 10 times as large as another, and certainly when we were talking earlier about valuations in terms of the share of GDP we would spend on averting the disaster, that really matters. But the story of “this is all completely crazy, there are a thousand things like this that didn’t pan out,” I think that’s not true. We’re still left, though, with a difficult empirical problem of assessing, as exactly as we are able given the fundamental limitations, the relative magnitudes of these different risks, and then prioritizing our interventions amongst them.
Rob Wiblin: Yeah. That makes sense. I guess it seems like there’s an era where a new potential threat is conceived and first understood, and we begin to have a picture of what the technology might look like or what the natural phenomenon is. And then it sounds like there’s a window of sometimes years, sometimes decades where people come to grips with that. I guess with nuclear winter it took a bunch of research; with bioweapons, people probably figured it out more quickly. With nukes, there was a very quick transition from people conceiving of it to having a greater understanding of what actual power it had.
Rob Wiblin: And I guess it’s possible that we’re going through that era of discovery and rapid increase in understanding with AI. Where we’re seeing these substantial increases in the capabilities and we’re getting potentially a clearer picture of what really advanced or human-level AI might actually look like in terms of the gears inside the machine. And so potentially we could get big updates about, yeah, how great is the threat and what’s to be done, just as we have about some of the past threats.
Carl Shulman: Yeah. I think that’s right. So the early AI luminaries considered the possibility of, say, automating the whole stack of the human economy and scientific research, that causing a vast acceleration of technological progress, and that opening up the possibility of removing humans from the economy, whether at the direction of nasty human regimes or by AI run amok. But in terms of a concrete understanding of the mechanisms (how we would in practice try to design AIs to do as intended, and the exact ways in which that might fail), that’s something much more informed by knowledge of the technology of the day and incremental thinking. So there’s a sense in which the obvious general shape of the problem was quickly noticed by luminaries who spent even a little bit of time thinking about it in the 40s and 50s and 60s.
Carl Shulman: But the precise character and what you would need to really assess it… By default, is our likelihood of managing to navigate some of these issues, is it 90% or 50% or 99%? That’s going to be dependent on a more refined picture. And I don’t think you can get that from just noticing the general outline of the problem. It’s going to demand a lot deeper work, which is why much of the work that I do or have recommended and supported is about refining our understanding of these empirics.
Rob Wiblin: Yeah. Just one last question on this section: is it true that these luminaries you’re talking about, the people who saw early glimpses of what AI might look like, that they thought, “Oh yeah, we might well crack this problem of general artificial intelligence and have like a human-level intelligence as soon as in 10 years or five years.” I’ve heard that, but I’ve always suspected that that might be an illusion, just because the people who make the more spectacular and most aggressive forecasts about technological advances tend to get more coverage in newspapers because it’s more interesting.
Carl Shulman: Yeah. Well I would link to a couple of studies of these past forecasts. So Luke Muehlhauser at the Open Philanthropy Project has some posts going over the intellectual history there. And so indeed there was a contingent at the beginning of the AI field who thought that yeah, maybe it’s close. And again, when you start a new field, it’s reasonable to have an enriched probability of success soon after. Because some problems are easy, some are super difficult, some are middling. But a nontrivial fraction of things are solved pretty quickly after folk try to solve them. And so a lot of… Yeah, there are many fields like that. For example, making vaccines in response to COVID. Like indeed, it happened in the first few weeks.
Carl Shulman: So if you have difficulties distributed over a wide range, some significant chunk of them are in the easy range, and so when you first start, there’s reason to put more probability on early success. And in fact, I think what it looks like is that yes, the most prominent voices saying AI was near at the time had shorter timelines than the average, and there are some examples of surveys of everyone at a conference that had longer timelines. And there’s some justification in that: hey, it was a new thing, maybe it will turn out that we can do a lot to make AI more efficient than biological systems. But looking back, we can say they were suggesting that computing elements far less than those in the brain of an ant would be sufficient to have superhuman intelligence. And people like Hans Moravec were saying all the way through: look, you’re probably going to need computation close to this level, or at least we only have the existence proof that it works with large amounts of computation.
Carl Shulman: And we can do some things more efficiently with less computation. Your calculator can far outperform you at serial arithmetic, because it has advantages that biological brains don’t and it was optimized for that task. But we don’t have an existence proof that such advantages exist to let you get away with so little computing power compared to biological animals. It may be that, far in the future, people will be able to make some supercompressed, hyperefficient thing that can do a lot with 60s-level or even 70s-level computing power. But we didn’t have the argument of, “Look, human brains can do it, other animal brains can do it, therefore we should be able to duplicate it.” That was more an argument of maybe we’ll be able to do way better than that, say in the way that nuclear power is much more efficient in energy per kilogram than biological sources, or our spacecraft can travel much faster than a cheetah. And it turns out that wasn’t true for the course of AI development, and people were mistaken to suspect that it might be.
Rob Wiblin: Yeah, super interesting.
Suspicious convergence around x-risk reduction [02:49:31]
Rob Wiblin: Let’s push on and talk about something that is kind of suspicious about this whole line of argument that we’ve been making through the conversation so far. And that’s that, well, it’s clear enough why people who think the great majority of value in the universe lies in the long-term future rather than the current century would want to prevent extinction or any other permanent limitation on what humanity can ultimately get up to.
Rob Wiblin: But you want to say that even if you don’t care much about that, you should view almost the same activities, quite similar activities, as nonetheless among the most pressing things that anyone could do. And that is kind of a suspicious convergence on the same conclusion, that the same projects could be roughly optimal for quite radically different moral values.
Rob Wiblin: And whenever one notices a convergence like that, you have to wonder: is it not perhaps the case that we’ve become really attached to existential risk reduction as a project, and now we’re just rationalizing the idea that it has to be the most important, even if the motivation that originally drew us to care about it no longer holds? Or maybe it’s just that we haven’t explored sufficiently broadly for alternative projects that would be even better from a non-longtermist perspective. But you think that this suspicious convergence can be explained or made sense of. What’s the most important driver of that convergence, in your view?
Carl Shulman: Yeah. Well, the first and most important thing is that I’m not claiming that, at the limit, exactly the same things are optimal by all of these different standards. I’d say the two biggest factors are, first, that if all you’re adjusting is something like population ethics, while holding fixed things like being willing to take risks on low-probability things, using quantitative evidence, and having the best picture of what’s happening with future technologies, then you’re sharing so much that you’re already moving away from standard practice a lot, and winding up in this narrow space. And second, if it’s true that in fact the world is going to be revolutionized, and potentially ruined by disasters involving some of these advanced technologies over the century, then that’s just an enormous, enormous thing. You may take different angles on how to engage with it depending on other considerations and values.
Carl Shulman: But the thing itself is so big and such an update that you should be taking some angle on that problem. And you can analogize: say you’re Elon Musk, and Musk cares about climate change and AI risk and other threats to the future of humanity, and he’s putting most of his time into Tesla. You might think that AI poses a greater risk of human extinction over the century. But if you have a plan to make self-driving electric cars that will be self-financing and make you the richest person in the world, which will then let you fund a variety of other things, that could be a great plan. It could be a great idea even if Elon Musk wanted to promote, you know, the fine arts, because being the richest person in the world is going to set you up relatively well for that. And so similarly, if indeed AI is set to be one of the biggest things ever in this century, and to set both the fate of existing beings and the fate of the long-term future of Earth-originating civilization, then it’s so big that you’re going to take some angle on it.
Carl Shulman: And different values may lead you to focus on different aspects of the problem. If you think, well, other people are more concerned about this aspect of it. And so maybe I’ll focus more on the things that could impact existing humans, or I’ll focus more on how AI interacts with my religion or national values or something like that. But yeah, if you buy these extraordinary premises about AI succeeding at its ambitions as a field, then it’s so huge that you’re going to engage with it in some way.
Rob Wiblin: Yeah. Okay. So even though you might have different moral values, just the empirical claims about the impact that these technologies might have potentially in the next couple of decades, setting aside anything about future generations, future centuries. That’s already going to give you a very compelling drive, almost no matter what your values are, to pay attention to that and what impact it might have.
Carl Shulman: Although that is dependent on the current state of neglectedness. If you expand activities in the neglected area, by tenfold, a hundredfold, a thousandfold, then its relative attractiveness compared to neglected opportunities in other areas will plummet. And so I would not expect this to continue if we scaled up to the level of investment in existential risk reduction that, say, Toby Ord talks about. And then you would wind up with other things that maybe were exploiting a lot of the same advantages. Things like taking risks, looking at science, et cetera, but in other areas. Maybe really, really super leveraged things in political advocacy for the right kind of science, or maybe use of AI technologies to help benefit the global poor today, or things like that.
Rob Wiblin: Yeah. So it’s not just that you have these views; it’s that you have these views, and there aren’t so many other people sharing what you think is a true, valuable insight that all of the opportunities have already been taken. So there’s low-hanging fruit there from multiple different angles.
Rob Wiblin: I thought a possible answer that you might give is that, so, a neartermist cares about the current generation surviving and flourishing because of its intrinsic value. A longtermist cares about that somewhat because of its intrinsic value, but also as a means to an end of having a flourishing long-term civilization. But they both have this term that is very central, which is that the present generation survive and ideally, I guess be doing well. And so it’s perhaps not surprising that they might converge on similar projects given that their, kind of… One person’s terminal value is the other person’s instrumental value. Does that make sense?
Carl Shulman: Yeah, it does for a certain class of scenarios, certainly. And that’s where it seems to me that you’re getting a lot of this convergence, but they can come apart. So if you have bunkers that let 10,000 people survive some apocalypse and then reboot civilization, that only saves those 10,000 people, but it makes an enormous difference to the long-term future. So that’s an obvious way they can separate. And likewise, there’s the tradeoff with things like malaria bed nets. Certainly if you move further along the diminishing returns curve for global catastrophic risks, eventually the expected lives saved would get worse than bed nets. But even before that, think of bolder strategies along the lines of bed nets, maybe really pushing for advocacy to get a system that will routinely roll out global vaccinations for new infectious diseases.
Carl Shulman: So that thing might be even more leveraged, and you don’t expect it’s going to be hyper-leveraged on things like existential risk. That’s how they can differ. And the sharpest contrast that would be purely value-based, from my point of view, would be not between bed nets and preventing accidental AI catastrophe, but between the AI accident work and something that tried to exploit just as many neglected sources of impact or leverage. And that would be something that looked maybe more technologically focused, or political advocacy, or some kind of combination that was exploiting beliefs or facts about the world that are not widely held.
Rob Wiblin: And that would allow you to have a particularly leveraged impact from a neartermist point of view as well?
Carl Shulman: That’s right. And that’s going to be a much higher bar: the most leveraged thing that you can get. And even then, you’d be giving up some of the advantages of bed nets, where you can very clearly track what’s happening locally and you don’t have to act on these long horizons. So those are limitations compared to the whole universe of ways of saving the lives of people today.
Rob Wiblin: Yeah, totally.
How hard it would be to convince governments [02:57:59]
Rob Wiblin: Okay. So from this point of view, obviously, the values do some work. But empirical beliefs are kind of absolutely key in driving people to pay attention to the kinds of things that you and I think are really important and levered opportunities to affect the world. Assuming that these beliefs are right, and I guess by that, I mean, kind of Toby’s estimates of the likelihood of human extinction or massive catastrophe in the next 100 years, which I guess was one in six overall, and maybe one in 10 chance from artificial intelligence in particular.
Carl Shulman: Yeah, or one in five conditional on advanced artificial intelligence.
Rob Wiblin: Ah, okay. Yeah.
Carl Shulman: That is, in those estimates, he was giving 50/50 for AI within the period.
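A quick back-of-the-envelope reconciliation of those figures, on the assumption (implicit in how the conditional is being used here) that an AI catastrophe can only happen if advanced AI actually arrives within the period:

```latex
P(\text{AI catastrophe} \mid \text{advanced AI arrives})
  = \frac{P(\text{AI catastrophe})}{P(\text{advanced AI arrives})}
  \approx \frac{1/10}{1/2} = \frac{1}{5}
```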
Rob Wiblin: I see. Okay. Yeah. So assuming that that view were right, how difficult do you think it would be to actually convince governments that this was right?
Carl Shulman: Yeah. That’s a tricky question, because there are a lot of different levels of being convinced or engaged. Certainly the experience with pandemics, pandemic preparation, and bioweapons does not look great for that, in that a lot of the factors that motivate action on pandemics are harder and more difficult with AI. Even having repeated empirical examples of actual pandemic disasters happening, and of things like the massive secret Soviet biological weapons program and a variety of accidental leaks and whatnot, we still get the radical underinvestments. And in the public epistemology and discourse, they were left as sort of niche issues that high-minded wonks would talk about; different factions were pushing different epistemological visions of them, and various actors wanted to exaggerate or conversely downplay those sorts of possibilities.
Carl Shulman: You didn’t have a sort of shared epistemology of, “This is the best way of assessing these things, this is something that we should trust, and so society is acting based off of this understanding of the risks.” Then for AI, we’re talking about something that is in many ways more unprecedented. The risks that we’d most worry about at the end are more different from the kind of interim troubles. And so you have a crash of a self-driving car, that’s one thing, and then you adjust the program. That maybe causes some controversy at the time but the damage is bounded.
Carl Shulman: But for AI, it seems that the really catastrophic things are ones that come about when we no longer have the ability to do that trial and error and correct things in response to feedback. You’re looking at catastrophic systemic failures. And so the worry is that you can’t have experience with very many instances at the top level. Tiny lab-scale demonstrations maybe just don’t go very far, because they seem too abstract and theoretical, in the same way that repeated leaks from past bioweapons labs didn’t, because they mostly caused localized damage. It’s hard to go from that to the kind of response you would get if, say, a Soviet bioweapon had been released and killed 500 million people.
Rob Wiblin: Yeah. So with pandemics, and to some degree bioweapons, we have this historical track record and these warnings, which motivates a bit of investment, but not nearly the efficient level of investment. With AI, to some degree it’s just going to be the case that it’s significantly more unprecedented, and things might develop quite a bit more quickly. Why is it that governments seem not really able to anticipate and prepare for unprecedented scenarios? Is this just a common phenomenon with all large organizations and all big groups of people, or is there something about government institutions in particular that means they tend to be more reactive rather than anticipating things?
Carl Shulman: Yeah. Well, the objective epistemic difficulty is certainly a core factor. It’s hard to assess and forecast these things. Different people have different views, and there are trends with certain kinds of knowledge and investigation, but the more difficult the problem is, the harder it is to get political consensus on it. Even something that is very well established to a sophisticated, honest observer can still be blocked by all sorts of misleading falsehoods when you have a lot of actors trying to influence the discussion and not that much feedback from the empirical world. Look at science: you often have scientific controversies, with fighting over two different theories and lots of people holding these views for a while, and eventually it gets pinned down by empirical evidence like repeated experiments. Things eventually become overwhelming or almost overwhelming within science. But the level of evidence you need to get that kind of social consensus seems a lot higher than what you would need to reach a best judgment that the proposition ultimately supported by experiment is quite likely true. There’s a big lag.
Carl Shulman: So the thing I would then worry about is if some of the critical decisions depend on some of these subjectively uncertain things, where it really matters, because you have to balance the risk of one possibility which is uncertain against another that is also uncertain, and there are significant tradeoffs between the two. If the first one is bigger, you can invest in blocking that risk at the expense of increasing the other, and then you really, really want to have precision there, because you can’t just have a strategy that says, “Well, we’ll act on things that are overwhelmingly certain. We’ll just push to get knowledge on these things until they’re very certain, and then we’ll act from that.” I worry that we won’t be in that situation, because people will be weighing multiple risks: the fear of, “What if others are going to use AI in a way that we don’t like, or it will change balances of power?”, things like that, and on the other hand, “What if AI lays waste to human civilization and replaces it with something much worse than the future that we could have had?” Both of those are scary, both of them are uncertain, and it’s possible that you will have one of these problems made much worse by trying to trade off in favor of the other.
Defensive epistemology [03:04:34]
Rob Wiblin: Yeah. I’ve heard you talk about defensive epistemology before. Can you explain kind of what the key components of that are?
Carl Shulman: Yeah. So people speaking in public, and organizations, have to make decisions, and then they will be attacked for them later. There are hostile actors who are eager to make them look bad for political or other advantage, or through lawsuits as well. So they restrict what they say quite a lot, and restrict what they do, in this risk-averse way.
Carl Shulman: An example that comes to mind: under the Obama administration, there was a lot of funding for clean energy research and startups, and overall that portfolio did quite well; it was beneficial and got a good return on investment. And then one or two failures in that portfolio become a big political bludgeon. Which is the opposite of, say, a venture capital portfolio. In venture capital, almost all the returns come from your top couple of hits, and VC culture is oriented around the idea that a fund is going to invest in many companies, try to diversify across them, and collectively the thing will be paid for by the successes. But they’re willing to take risks on failures; they take quite a lot of them, and that’s how it goes. There isn’t really an analog in the political system to the same extent.
Rob Wiblin: In the sense that people’s upside…
Carl Shulman: Yeah. People’s upside tends to be limited: even when you have a great idea that contributes a lot of public benefit, often there’s not much political reward. That’s especially the case for blocking unseen threats, where it requires a complicated model and understanding to even know that you’ve done any good. And then the loss from being seen to be a cause of failure or a wasted effort can be grounds for being fired, and people are very worried about being fired or losing an election, compared to how much they care about incremental gains in standing.
Carl Shulman: That results in a lot of statements, and thinking, and engaging with the public only on what seems like ironclad, defensible evidence, either because it’s so sure that it’s right, or because it’s easily defensible because it’s, say, deferring to a high-status institution. No one ever went broke hiring IBM; that may no longer be true, but it was the story. Even if the advice from a high-prestige institution is wrong, you can still defend having acted on it, in a way that you can’t defend acting on complex direct reasoning, or on the views of outsiders, intellectual entrepreneurs, or even substantial portions of a scientific field that have not been endorsed by all the institutional organs. You see that in the COVID advice: it was very difficult to get clear recommendations like the ones that microcovid.org provided, because the public authorities were unwilling to say something that might turn out to be wrong later.
Rob Wiblin: Yeah. So the issue is that these institutions hire very smart people who might well be able to interpret the evidence quite sensibly and personally reach an overall judgment that is quite calibrated and quite sensible. But because of the incentives of the institution they work for, that institution is not just going to publish their frank advice and their frank personal judgments. It’s only ever going to say anything once it’s really sure that it’s very likely to be true, or that it’s extremely defensible. Which means that the public following these organizations loses a huge amount of evidence that could potentially make their decisions better. I guess this is most obvious with the COVID case, but it also means that these institutions are much less likely to take interesting risks based on people’s inside views and their particular understandings of the world in other areas, like deciding what to invest in.
Carl Shulman: It’s a bit worse than that though. You described a scenario where there are folk with great judgment who are actively working to understand the world at the object level as well as possible. There’s some of that activity in some people’s jobs, and they do try to do it. But if you know you’re going to be constrained to rely on the defensive epistemology stuff, then you’re also going to invest less effort into building the accurate model. People who can build those models are less likely to be hired, promoted, or to win elections, and so on. There are a lot of criteria in action, and so you can wind up with the ability to understand being impaired, not just the ability to communicate it, although you see both.
Carl Shulman: Take the example of the COVID vaccines. Along the way, you had the vaccine folks saying they quite expected the vaccines to work, based on the interim evidence and whatnot, but that information wasn’t available to the public discourse, and it was fairly difficult to derive from their hedged statements. I mean, these vaccines were developed quickly, in the first few weeks, and relatively early on people thought they would work and had good reasons. But officials wouldn’t say so, because, well, what if it turns out wrong? And then that restricts things like investing more than we already did, which was a lot, in the capacity to produce them and so on.
Carl Shulman: So with AI and with bioweapons, it’s hard to state a lot of the important things in this kind of defensible epistemology. Things like “Is there a secret bioweapons program in country X?” are always going to sit in this area of difficult subjectivity. And it is possible to do better there: Tetlockian forecasting principles apply, and the improvement is measurable, but it’s not as easy to show in a discourse. So it seems we’re going to be wrestling with those problems, and therefore failing to adequately grip some of these questions in public epistemology.
Rob Wiblin: Yeah. This is kind of a judgment call. I’m curious to get your, I guess, frank personal opinion on this question, even though it’s going to be hard to refer to any published research that would necessarily justify the answer. To what degree do you think it’s people’s fear of being fired or sidelined that’s a key reason that these important institutions, I guess especially government institutions, can’t take risky decisions in their work even if they have positive expected value? I guess this is just one explanation, that people won’t put forward these proposals because they’re worried about being made fun of or being viewed as just not going along with the obvious incentives that the institution faces. I guess there could be other explanations for this behavior and I’m curious whether you think this is a big one.
Carl Shulman: Yeah. I mean, I think it’s a manifestation of a larger dynamic, a thing you could add on to it. That is to say, there are these objective career and practical consequences, and they clearly matter and have shaped the behavior of politicians, academics, officials, et cetera. But it’s also possible that people are overshooting, just because humans have very strong drives to imitate, to conform, and to be simpatico with their fellow community members. And we see things like scientists who develop innovative results tending to be more disagreeable, in the Big Five personality trait sense. That helps them have ideas that are new, and correct, and different from those around them. And we see different personality types in a lot of weird areas, areas that aren’t yet consensus or uniform standard.
Carl Shulman: So you see higher disagreeableness, an enrichment of things like Asperger’s syndrome or mild versions of it, folks who are less influenced by the pain of having others disagree with them or look askance at them. You can measure this. You can take objective questions of fact where there’s some kind of social, cultural, emotional, or tribal current that pushes people towards wrong answers on those factual questions, and then see how things go. My impression is that you do see higher accuracy on some of those things associated with an analytic cluster: relying more on things like math and data and independent thinking rather than social epistemology, where you just refer to ‘this is the view of the community, this is the view of the institution.’ The sort of thing you see more, maybe, in engineers than in salespeople.
Rob Wiblin: Yeah. I guess if someone is working in an institution where they notice that the people around them tend to be pretty conformist, and pretty risk averse personally about suggesting things that might affect their reputation even though they’re plausibly very good ideas… Is that a sign that this willingness to step forward and suggest new things that could be really good is in particularly short supply? That it’s a particularly neglected intervention in that situation, so they could potentially do a lot of good by being that person and being willing to risk getting fired?
Carl Shulman: Yeah. I mean, that’s possible although… So nonconformity is a double-edged sword. So the community knows a lot of things, which is the reason why we have evolved such strong drives to imitate, because it’s a very handy way to learn and get configured in society. No human is going to reinvent, not just stone tools, but electricity, and automobiles and such in their lifetime. We have to draw on those around us to a great extent, and so what you really want is the combination of reduced conformity with things that systematically draw you towards the truth.
Carl Shulman: So just increasing entropy, just being free to move off in random directions from consensus is no good. You want to strengthen things like science, or Tetlockian predictive forecasting, and accuracy, and track records there. And things like being able to support arguments with data, being able to, say, go with rationalist conversational norms where you put up the evidence for your claims, why you believe them, and then open up to other arguments and actively seek out disconfirming information, things like that.
Carl Shulman: When you’re diverging because of practices that we think are systematically correlated with truth, then you’re providing the engine for society because conformity can help you preserve what you’ve already got. But it’s not going to drive you towards the truth. What you want is a continuous steady tug from things like regular scientific experiments, and seeing what worked, and what didn’t, things like mathematics that draw us towards the truth. And then you can potentially have a lot of impact there because when most activity is conformist, the smaller subset of people who are actually developing and testing new ideas can wind up having a lot of influence later on because most are not in that game.
Hinge of history debate [03:16:01]
Rob Wiblin: All right. Let’s move on now and talk about some counterarguments that have been raised over the years to the idea that we do happen to be living in an especially important, and, I guess, hingey moment in history. Yeah. Regular listeners will be familiar with this kind of hinge of history debate, which was prompted, or at least put into a higher gear, by a blog post that William MacAskill wrote a couple of years ago. Can you briefly remind us of Will’s take on that before we discuss what might be right and wrong about it?
Carl Shulman: Yeah. So Will was arguing that ex ante, we should be very surprised to find ourselves among the most influential humans ever to have lived, shaping the long-term future, because there can only be a few of those. There have already been a lot of people, and even more nonhuman animals, and we’d expect there to be a lot more going forward. Certainly, if we have lasting civilizations and interstellar colonization, then there will be ludicrously more future beings than beings early on in our kind of situation. And so Will wants to say we’d need something very unusual for this particular time, out of all the millions of centuries that life has existed on Earth and the potentially much longer stretches of time and vast populations in the future, to be so important, given all of the ways that other times could be important.
Carl Shulman: And then, he says, we try to meet that with various evidence of things that actually are exceptional about this period. He winds up balancing some of these different factors and coming to a relatively lower estimate of the chance that, say, there is an extinction event or a lock-in of one shape of future society rather than another this century. Toby talks about x-risk on the order of one in six over the century, and maybe a greater probability of a lock-in this century, something that more or less sets the direction of the future compared to other tracks it could’ve gone down. Whereas Will was giving numbers more like one in 1,000 to one in 100 for this combined set in the post, based mainly on arguments like, “Sure, it seems like there’s a ton of evidence, but maybe I’m wrong about interpreting the evidence. Maybe things don’t quite work out. And so I should be substantially more skeptical than I might think from just looking at the object-level evidence.”
Rob Wiblin: That was a nice summary, but I suppose you were not super convinced by this line of argument. And I guess as is obvious already, you think that the probability of this century being especially pivotal is actually pretty high. I guess one of the first issues that you raised in the comments there is that it’s quite important that the past can affect the future, but not vice versa. Can you explain why that ends up being quite important here?
Carl Shulman: Yeah. So the obvious case is human extinction. If humanity goes extinct, then no later generation of humans will be able to cause extinction. You have set the future of humanity in a particular stable state that is going to last and preserve itself. And the same could be true of other things. Say you had the development of a stable society, like a democracy that developed educational systems and civilian control of the military and so on, such that each generation endorsed democracy and prevented overthrow by some totalitarian coup or the like, so that it remains stable in that way over very large stretches of time. Or you could have an alternative world with a stable autocracy. Again, each of these might be enabled by advanced technologies like AI.
Carl Shulman: And so any kind of stable equilibrium state like that, once you’re in it, you’re in it, and that blocks the opportunity for such an event to happen thereafter. And since I was arguing that there seem to be quite substantial chances for extinction or lock-in events of various kinds to happen this century from advanced technologies, giving significant probability to those events this century means, directly, you’re having that long-term impact. But it also shows how you just can’t have century after century of these things —
Rob Wiblin: Having a reasonably high probability —
Carl Shulman: That’s right.
Rob Wiblin: — because the early ones have a chance to go first. It’s very easy for early periods to preempt the possibility that later periods are especially important by just getting in and locking in some state forever.
Carl Shulman: And in addition, the later ones are much larger, which means that there’s less per capita influence at those times.
Rob Wiblin: Yeah, yeah. So this is an argument… This preemption issue is a reason to think that even if the future is very long, we shouldn’t be that surprised if the most hingey period comes extremely early on in that entire process. Because if the chance of any century being particularly hingey is just reasonably high, then you probably will get some extinction or lock-in event relatively early, and that’s just not peculiar.
Carl Shulman: Yeah. And it’s possible that’s already happened. If abiogenesis is rare, then the particular century in which abiogenesis happened was certainly very important in that sense. Now, Will defines things in terms of human history, or maybe post-human history. But that’s an obvious sense in which it could already be true, if abiogenesis is rare and lucky.
Technological progress can’t keep up for long [03:21:51]
Rob Wiblin: Yeah. I guess another important consideration in your mind is that our current rate of technological progress is much faster than it was in the past. Also, it seems like it is much faster than it can be in the distant future as well. Can you first explain why it is that we know that this rate of technological progress can’t keep up for very long? And then maybe after that we’ll talk about why that’s so relevant.
Carl Shulman: Yeah. So the simple point is that of all of the technological advance in all of history, so much has happened in the last few centuries, the Industrial Revolution and beyond. And if we look at the really long scope of history, the advance of technology shows up as increases in population, which is our best way of measuring incremental technology along the way: things like the adoption of agriculture, then improved agricultural methods, getting better with metal to clear forests, and so on. That growth rate was accelerating over the long sweep of human history, over 10,000 years, 100,000 years. Whereas in the ancient past it would have taken several thousand years for world population to double, it got to the region of decades in the past few centuries. And so already with a doubling time of decades, if you go a few thousand years forward, you wind up with —
Rob Wiblin: Big numbers.
Carl Shulman: — economic growth. You get to the point where you have, “Oh yeah, there’s going to be a quadrillion dollars for every atom in the universe.” And so it clearly can’t go that far unless we have totally new physics to support it. Then on the flip side, consider if we had a resumption of acceleration. In the last 50-plus years, global growth has not been accelerating as it had been before, and that’s something that may be caused by the demographic transition; we could get into that later. But if the acceleration resumed, it’s even more severe, because if you go from thousands of years per doubling to decades, and you do that again, then you’re at a timescale of months. And then you very quickly wind up with other limits and bottlenecks coming in.
Carl Shulman: But it looks like in aggregate we could cover a lot of what is technologically possible over a fairly short period of time. Just really exploiting the Earth and the solar system would get you a large percentage of all the cumulative growth and research effort there could ever be, and along with it, a lot of the technology there could ever be. We also think it’s front-loaded, because sometimes you hit a physical limit: you can’t send signals faster than light; you can’t get more than 100% efficiency in a heat engine or a solar cell. So that’s reason to expect a lot of all the technology that will ever exist to come along in the next century or next few centuries, and it’s certainly a reason why we can’t keep up historical acceleration rates, or even current rates of economic growth, for very long.
Rob Wiblin: Yeah. So I think it’s going to be fairly easy for people to grasp why, if you have an accelerating rate of growth, hyperexponential growth in technology and the economy such that the doubling time keeps shrinking, then pretty quickly you end up with a situation that just cannot continue for very long. You’ll outgrow the Earth and you’ll outgrow the solar system relatively fast. But growth, you’re saying, has not been accelerating for the last 50 years; we’re just growing at an exponential rate. How long could we keep that up?
Carl Shulman: Yeah. So one way to think about it is: how many orders of magnitude of growth are left, as allowed by physics? Within the solar system, solar energy is more than a billion times what gets to the Earth, and our civilization’s current energy use is less than one 1,000th of what reaches the Earth. If you have trillion-fold growth, that’s about 40 doublings, and then maybe add some additional growth from improvements in efficiency and whatnot. So 40 doublings, and if each doubling takes 25 years, that’s 1,000 years, so 10 centuries. At that point, we’re not talking about millions of centuries; we’re talking about a compressed period. And if we were living in that world, it would be a lot less obvious: to argue that this is the most important century as opposed to three centuries from now would be difficult. There, I’d be getting into details about how growth works and how technological change works. And I think the case is actually better that growth will resume accelerating, or continue decelerating, rather than stay on a constant exponential for hundreds or thousands of years.
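[To make the arithmetic concrete, here’s a minimal back-of-the-envelope sketch of the headroom calculation Carl describes. It isn’t from the episode; the figures are just the round orders of magnitude mentioned in this exchange, so treat the numbers as illustrative rather than a formal model.]

```python
# Minimal sketch (not from the episode) of the growth-headroom arithmetic above.
# Assumptions: current energy use is ~1/1,000th of the sunlight reaching Earth, and total
# solar output is ~1 billion times what reaches Earth, so the ceiling is roughly a
# trillion-fold above today. All figures are illustrative orders of magnitude.
import math

headroom = 1_000 * 1_000_000_000        # ~1e12: trillion-fold growth to solar-system limits
doublings = math.log2(headroom)         # ~39.9, i.e. about 40 doublings
years_per_doubling = 25                 # steady exponential growth, doubling every ~25 years
years_to_limit = doublings * years_per_doubling

print(f"{doublings:.0f} doublings x {years_per_doubling} years/doubling = {years_to_limit:,.0f} years")
# -> about 40 doublings x 25 years/doubling = ~1,000 years, i.e. roughly 10 centuries.
```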
Rob Wiblin: Okay. But I suppose the bottom line from both of these points is that — whether growth is speeding up or constant, at least in those two scenarios — we should expect a decent fraction of all the technological progress that will ever happen to happen in the next 100 years. Granting that that’s the case, why does that suggest that this century would be especially pivotal?
Carl Shulman: Yeah. So the frame of “the most important century out of millions” is carving the world up in terms of calendar years and time. The alternative would be to say that when some important change happens, it’s not necessarily just because you had more rolls of the same dice in an almost identical situation. You’ve now had tens of millions of years of ants doing their thing. There are lots of random mutations and changes in ants, and sometimes branches of ants go and occupy new niches or are transformed in some ways. But we see more dramatic changes with the introduction of new technologies, new climatic conditions, some kind of systemic factor.
Carl Shulman: Looking at human history and the changes that happen, I’d say there’s a good case for more than half of the variance in basic ways of living and such being tied to technological changes, rather than to things that were happening randomly at a fixed rate in the same environment. Like wars between ancient city-states: those wars happened back and forth, but you could go through 500 years of them and still say, “Oh, different kings are wearing crowns, but things are fairly similar.”
Carl Shulman: If we’re looking for something like lock-in events or human extinction, this may be especially important. If lock-ins become possible at some level of technology — and they weren’t for past technologies — then once a lock-in event happens, there are no opportunities after that. And when lock-ins become possible, they may occur at different rates, and it may be relatively quick that one happens once the door is open. So I’d say lock-in largely hasn’t been possible in the past, there’s a fairly good case that it will become possible eventually, and when it does become possible, there’s this preemption argument that you expect things to happen around then rather than randomly distributed over the next 10 billion years.
Rob Wiblin: Yeah, yeah. So basically if somewhere on the chain between no technology and technological completion — where we’ve figured out basically everything useful that there is to know — there is some technology that quite quickly opens the door for either extinction or a lock-in event, then we should think the likelihood of that happening is basically proportional to how much of the ladder we climb in any given century. And the basic math suggests that we’re going to climb a macroscopic amount of the total technological ladder in the next century, which means there’s a corresponding risk that this will turn out to be the most hingey, or at least the most pivotal, of all the centuries to come.
Carl Shulman: Yeah. And supplemented by this preemption thing. So if there are five lock-in technologies coming up, and one of the first two winds up causing a lock-in event, then that’s still the most important thing, even though more lock-in technologies, now superfluous, were going to be developed later. And of course we have particular areas that seem especially prone to this: basically the automation of the mind, artificial intelligence. That means replacing humans as the bottleneck for the operations of civilization and, to the extent there was a steering wheel, as the hands at the steering wheel. And then the design, engineering, and understanding of those minds, which is essential for things like creating minds that are very stable, or creating social systems based on well-understood and well-tested software, or software minds that might remain stable over millions or billions of years.
Rob Wiblin: Just another quick factor that you mentioned in your comments that seems important is that if things go well in the future, then there’ll be far more people alive than there are now. Why does that push us to act now, I guess to act as if this century is particularly hingey rather than saving up resources for a future time?
Carl Shulman: Yeah. Well, in some way, this is an artifact of… So Will’s framework was something about most influential per capita, with the idea being, “Am I very well positioned relative to a similar person in some future century?” And so naturally, if you go to have a billion times as many people or a billion billion times as many people, it can’t be the case that half of them have the ability to voluntarily cause a lock-in event one way or another, because if any of them do, they preempt the rest. Also, each one commands on average so much smaller a portion of all of the resources of the world. Whereas, today, out of several billion people, you might on average command one seven billionth of the resources. If you happen to be fortunate enough to have more financial resources or be part of a more narrow scientific community, you might be 100, 1,000 times better placed than the average to make change. Then as one out of trillions of trillions —
Rob Wiblin: It seems harder.
Carl Shulman: — it seems, yeah, pretty doubtful. If we think that a lot of interventions to shape the world are in some way related to a balance of forces, say investments in certain destructive things or active harmful efforts by some, then your influence may be more proportional: do you have more resources to push X than the opposition to X? Things like that.
Rob Wiblin: Yeah. A little bit like voting. You’re more likely to swing an election when there’s fewer voters than when there’s lots of voters.
Carl Shulman: Indeed. Indeed.
Rob Wiblin: So you had this other point, which is that you think a lot of the uncertainty about humanity’s long-term prospects is going to be resolved this century. Is that conceptually separate from the things we’ve talked about so far, or is it more just an implication of the above?
Carl Shulman: Yeah, it’s more of an implication. Speed-up just gives you that, because if you’re speeding through so many technologies, then you’re resolving the uncertainties associated with each of those technological transitions; there’s less left. And then it’s more likely that you’ve had a lock-in of one kind or another, like extinction, or the creation of a society that is stable enough to reliably stop weapons of mass destruction from blowing it up shortly thereafter.
Carl Shulman: Then there’s an additional boost to this effect if developments like AI are expediting technological progress and also the speed of the fastest players. If advanced AI systems undergo a million subjective years in a year because they’re operating on fast computers, then a world that is stable for, say, 10 years has to be stable for 10 million subjective years of these fast-moving systems, and through several major technological transitions.
Carl Shulman: And that’s another source of convergence, in that if existing people want to live out their natural lives or extended lives, they need the world to be robust enough not to destroy them through that many technological transitions and that amount of subjective activity. So if people avert catastrophe and create some kind of stable safe state — stable in some respects, with lots of change and expansion still possible, but stably not destroying itself, stably not being ruined — then there’s a strong drive, out of immediate short-term concern, to create something that’s stable over vast change, and then that thing may be stable over the long term.
Rob Wiblin: So there’s a way that a number of readers have misinterpreted Will’s argument that would make it substantially more powerful than Will himself thinks it is. And unfortunately, it’s related to anthropics, which is notoriously a confusing and mind-bending thing to explain. But is it possible to explain in any simple way the mistake that some folks have made in interpreting Will’s blog post?
Carl Shulman: Well, the simplest way is to say that some people have interpreted it as an example of the philosophical Doomsday argument. And to say: no, as far as I can tell from talking to Will, that’s not what he’s intending, so don’t think that. Rather, without getting into anthropics and dubious anthropic theories, it’s just a question of how likely these different worlds are: a world where a post-industrial civilization winds up at the “hinge of history” versus ones where you also have an industrial civilization and it winds up not being hingey. Each of those has some plausibility in advance. Then you want to ask about the details of those worlds and the details we see in our world: how likely are they given each of those two hypotheses? It could be that in the worlds where we are the hinge of history, it’s written in a big neon sign on the moon, and then we would update on that and know we’re definitely the most important century or definitely not.
Carl Shulman: But then the challenge, I think — and it’s discussed some, about the stability of arguments and other people who have thought they were in the most important century — is, well, what if people have a strong tendency to think they’re in the most important century, and they would generate arguments and evidence as good as what we see now regardless of whether the case was correct? There, I would mostly refer back to the discussion of what past people thought about existential risks and lock-ins. The reference class winds up being basically religious cults, then a very small handful of scientifically based concerns, and also quite a lot of systematic trend in how people have assessed those. So I’d see less vulnerability to those worries than Will does, but this is a different and more empirical question about just how often people give arguments this good, and how sensitive that is to the world. It doesn’t involve anthropics or really depend on exactly how many people there are in the future and things like that.
Strongest argument against this being a really pivotal time [03:37:29]
Rob Wiblin: Yeah. I guess flipping things around, in your view, what’s the strongest argument that we’re not likely to be at a really pivotal time in history?
Carl Shulman: Yeah. So there’s one set of arguments related to the simulation hypothesis that would say we seem to be an industrial-era civilization, but actually we’re not. There could be vast numbers of beings who are given the false impression that they’re in an extraordinarily important position, far more than could actually be in that position, because in a vast galactic history you can only have a few of those originals, but you can have arbitrarily many of the latter. Think of how many biographies and biopics of Roman emperors there have been compared to the actual number of Roman emperors, and if the audience for such programs increased by a trillion trillion times, you’d get a lot of coverage. Likewise, our interest in the dinosaurs or the first living things is great, and if our society kept expanding, we would still give a nontrivial percentage of attention to early paleontology and early history. So there’s that kind of story.
Carl Shulman: That’s more plausible than the traditional Doomsday argument, because it’s not saying that we should update based on situations we know that we’re not in. It’s saying we don’t have enough evidence to distinguish between the situation we appear to be in and that situation being manufactured or a hoax or something like that. I think we can mostly set these issues aside if you’re taking a sort of a global strategic view. You ask like, “Well, if some people are actually in the situation we appear to be in, and maybe a bunch are in facsimiles thereof, what would be the policy you would want to set for the lot of them?” And I think it would be a policy that those who are actually very influential and helping to set the course for history take reasonable responses to that, even if —
Rob Wiblin: That creates a bunch of work for the —
Carl Shulman: A bunch of work for some of the simulated beings. But mostly I think that’s a maybe interesting side detour; it’s not going to drastically change a lot of these arguments. It might interact with a bunch of agent-relative things, but I’m not going to get into that.
Rob Wiblin: That seems sensible.
Carl Shulman: Yeah. Then if we’re going to address more substantive “the world is not an illusion or a simulation” kinds of stories, I think the obvious ones are, first, a global catastrophe that sets back civilization for centuries. If 99% of people died from super COVID and we didn’t get back to our current technology level for another 500 years, then obviously this is not the most important century. And then there’s the possibility that we wind up with technological stagnation. I mentioned earlier the demographic transition: historically, economic and technological growth drove faster population growth, but recently those have been decoupled. So we’re no longer expanding the potential research labor force as fast as we used to. We’ve been making up for it by increasing the percentage of people who work in innovation, but that means you’re increasingly having to draw on people who are less enthusiastic about or suited to the fields. Also, you’re bounded: you can’t have 110% of people working in science.
Rob Wiblin: And I guess you can’t have people spending 100% of their life doing their PhD either.
Carl Shulman: Yeah. So Chad Jones has two recent pieces (2020 and 2021) analyzing growth over the past century and finding that a lot of it comes from transitory elements that involve reallocating more of society away from things like reproduction and towards more paid work and more innovative activity. But as you go forward, those get exhausted. And if you don’t get to technologies like artificial intelligence that can reboot growth, where AI can expand its population even while the human population stays fixed — if you miss out on those things, and Moore’s law grinds to a halt, and it turns out you need significantly more compute and cumulative research work to get out of this trap of stagnant or slow population growth — then you could potentially have centuries in that stagnant state. I don’t think that’s very likely. Also, I don’t think it would be that many centuries because —
Rob Wiblin: We’re just too close?
Carl Shulman: — in that world… Well, that’s one reason. I do think we’ll probably get there this century. But even setting that aside, we do have some groups, religious sects with strong cultural norms in support of having children, that can manage to have eight kids under modern conditions. So if you give it 200 years, that’s enough time for high-fertility cultural norms and such to spread and become dominant. So I don’t think you can stretch this over 1,000 years, but it’s certainly a story in which we don’t get to technological milestones like advanced AI and advanced biotechnologies. But it requires that those technological milestones be harder than they look, on the high end of difficulty.
Rob Wiblin: That’s super interesting. Where did you say we could read more about that? Because I don’t think I’ve heard about this possible scenario many times, if at all.
Carl Shulman: Yeah, yeah. These are recent pieces by Chad Jones.
Rob Wiblin: Okay, yeah. We’ll stick up a link to that in the show notes. What do you think about the idea that maybe this century is especially pivotal in expectation, sure, but so will be the 22nd century and the 23rd century and the 24th century and they’re all in the same ballpark just for similar reasons, because we don’t know when these particularly pivotal innovations are actually going to be thrown up? Does that seem like a plausible scenario to you?
Carl Shulman: Well, then we’d have to get into the details of forecasting to refine it. Obviously, you need more data to distinguish the 21st from the 22nd century than to distinguish this millennium from any of the previous thousand or next thousand millennia, and we have a much stronger case for “most important millennium” than for “most important century.” When I go into that data, things like Ajeya Cotra’s biological anchors report and other available economic and research data, I tend to expect these transformative technologies will more likely come this century, which then just doesn’t leave enough room for the later ones. So that view of equal importance in the 21st, 22nd, 23rd, 24th centuries requires that it has to be —
Rob Wiblin: It can’t go over 100%.
Carl Shulman: Yeah. And then —
Rob Wiblin: So it can’t be 50% plus 50% plus 50%. Instead, it could be like 50% plus 25% plus 12 and a half percent. It’s tailing off.
Carl Shulman: It’s that, or it has to be lower. And then that just requires getting into the details of computer hardware, how it has improved as we’ve invested more in it, and how AI performance has responded to scaled-up application of compute and research labor. We can’t get into all of it here and now, but that puts me in that frame. Then there’s an element, again, of preemption: if we don’t have this kind of radical transformation this century, it may be because we’re in that stagnant world. And if we get into the stagnant world, then the per-century probability dives down and the remainder is spread out more.
Carl Shulman: And likewise with collapse: if 99% of people are killed, our estimates of recovery time are spread out over centuries. So that still gives something of a peak, since the cases where we don’t get to transformative outcomes this century are ones where something really messes up the engine of progress — whether that is collapse or stagnation — and then those are spread a little more thinly, not over 10,000 years, but over more centuries.
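[To spell out the arithmetic behind that exchange, here is a minimal sketch. It is not from the episode, and the 50% per-century hazard rate is purely illustrative: if each century carries some chance of a lock-in or extinction event, conditional on no earlier century having already had one, then the unconditional per-century probabilities fall off geometrically and can never sum to more than 100%.]

```python
# Minimal sketch (not from the episode) of the preemption arithmetic discussed above.
# Assumption: each century has probability p of a lock-in/extinction event, conditional
# on no earlier century having already locked things in. The value p = 0.5 is only
# illustrative, matching Rob's "50% plus 25% plus 12 and a half percent" example.

def pivotal_probabilities(p: float, n_centuries: int) -> list[float]:
    """Unconditional probability that the k-th century (k = 1, 2, ...) is the pivotal one."""
    return [p * (1 - p) ** (k - 1) for k in range(1, n_centuries + 1)]

print(pivotal_probabilities(0.5, 5))        # [0.5, 0.25, 0.125, 0.0625, 0.03125]
print(sum(pivotal_probabilities(0.5, 50)))  # approaches 1.0 but never exceeds it
```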
How Carl unwinds [03:45:30]
Rob Wiblin: All right. I’ve got a lot more questions for you. Always keen to pick your brain on this AI and philosophy stuff. But I think this episode has arguably gone on long enough, so we might have to save that for another day, perhaps. I guess to wrap up, we’ve been talking about some extremely serious stuff and people might be left with the impression that you basically spend all of your time reading very long papers and books and so on, but I’m sure that that can’t quite be the case. What do you do in order to unwind and take your mind off these mind-bending and sometimes quite depressing things about the world?
Carl Shulman: Yeah. One of my niche interests is seeing the hot springs of the world. While I was in California, it was nice being able to circulate through quite an amazing number of great ones within a few hours of the Bay Area. There’s one lovely one in the ocean north of San Francisco: when the tide is out, there’s a hot spring that people separate from the ocean with an artificial barrier to keep the seawater from mixing in. Then when the tide comes back in, it’s submerged. It’s a lovely little hidden gem, at least for the hot spring lovers among us.
Rob Wiblin: What do you particularly like about hot springs?
Carl Shulman: Oh, I’m not one to buy into mystical healing power stories. They’re just lovely and relaxing, especially out in beautiful nature, where you can look up at the moon and the stars.
Rob Wiblin: Well, that’s very wholesome. Is there anything that you, I guess, watch on TV in order to waste time, if you do waste much time?
Carl Shulman: Yeah, probably like a lot of your guests, I’m a Rick and Morty fan. And with respect to scary and mind-bending big worlds, I guess I would go more with the Rick and Morty perspective, existentially aware humor rather than the sort of Lovecraftian “Egads, it’s a doomed universe, second law of thermodynamics, giant squid monsters beneath the Atlantic.” Or the Pacific. I guess the Pacific.
Rob Wiblin: I suppose that, yeah, the philosophy of Rick and Morty is like, “Everything is messed up, but let’s try to enjoy the ride,” which I guess is maybe one thing to keep in mind if you’re dabbling in all these views.
Carl Shulman: Well, yeah, and try and steer it in a better direction. But this is what we’ve got, so have some fun.
Rob Wiblin: Yeah. Nice. Well, with that in mind, my guest today has been Carl Shulman. Thanks so much for coming on the 80,000 Hours Podcast, Carl.
Carl Shulman: Thanks.
Rob’s outro [03:48:02]
Rob Wiblin: If you’ve listened to the end of this episode, you’re probably the sort of person who should think about applying to speak with our team one-on-one for free.
We’ve made some hires and so are able to speak personally with more readers and listeners than we’ve ever been able to do before.
Our advisors can talk over your philosophical views, which problem might be a good fit, look over your plan, introduce you to mentors, and suggest specific organizations that might be able to use your skills. Just go to 80000hours.org/speak to learn more and apply.
Alright, the 80,000 Hours Podcast is produced by Keiran Harris.
Audio mastering by Ben Cordell.
Full transcripts are available on our site and made by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes