Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone in the world can use if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and it happens at year nine instead of year 10 because of some of these interventions we did, now we start to think: if that’s really bad, if people using this technology cause huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster.

Matt Clancy

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

They cover:

  • Whether scientific progress is actually net positive for humanity.
  • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
  • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
  • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
  • Why Matt is sceptical that AGI could really cause explosive economic growth.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

How could scientific progress be net negative?

Matt Clancy: I was like, “Obviously this is the case. It’s very frustrating that people don’t think this is a given.” But then I started to think that taking it as a given seems like a mistake. And in my field, economics of innovation, it is sort of taken as a given that science tends to almost always be good, that progress in technological innovation tends to be good. Maybe there are some exceptions with climate change, but we tend not to think about that as a technology problem; it’s more that we treat a specific kind of technology as bad.

But anyway, let me give you an example of a concrete scenario that was sort of the seed of my starting to reassess, and to think it’s interesting to interrogate that underlying assumption. So suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.
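To spell out that arithmetic (my gloss, not a calculation from the episode): progressing 10% faster for 10 years means completing roughly 11 years’ worth of science, so you end up about one year ahead of the counterfactual.

```python
# Illustration only: a sustained 10% speedup over a 10-year window.
years = 10
speedup = 1.10
progress_done = years * speedup   # ~11 "years' worth" of scientific progress
print(progress_done - years)      # ~1 year ahead of where we'd otherwise be
```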

Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone in the world can use if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and it happens at year nine instead of year 10 because of some of these interventions we did, now we start to think: if that’s really bad, if people using this technology cause huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster.

And in fact, it could even be worse. Because what if, in year 10, that’s when AGI happens, for example. We get a super AI, and when that happens the world is transformed. We might discuss later why I have some scepticism that it will be so discrete, but I think it’s a possibility. So if that happens, maybe if we invented this cheap genetic engineering technology after that, it’s no risk: the AI can tell you, “Here’s how you mitigate that problem.” But if it comes available before that, then maybe we never get to the AGI, because somebody creates a super terrible virus that wipes out 99% of the population and we’re in some kind of YA dystopian apocalyptic future or something like that.

So anyway, that’s the sort of concrete scenario. And your instinct is to be like, come on. But you start to think about it and you’re like, it could be. We invented nuclear weapons. Those were real. They can lead to a dystopia and they could end civilisation. There’s no reason that science has to always play by the same rules and just always be good for humanity. Things can change. So I started to think it would be interesting to spend time interrogating this kind of assumption, and see if it’s a blind spot for my field and for other people.

Non-philosophical reasons to discount the far-future

Matt Clancy: So remember, ultimately we’re sort of being like, what’s the return on science? And there’s a bunch of reasons why the return on science could change in the distant future. It could be that science develops in a way in the future such that the return on science changes dramatically — like we reach a period where there’s just tonnes of crazy breakthroughs, so it’s crazy valuable that we can do that faster. Or it could be that we enter some worse version of this time of perils, and actually science is just always giving bad guys better weapons, and so it’s really bad.

But there’s a tonne of other scenarios, too. It could be just that we’re ultimately thinking about evaluating some policy that we think is going to accelerate science, like improving replications or something. But over time, science and the broader ecosystem evolves in a way that actually, the way that we’re incentivising replications has now become like an albatross around the neck. And so what was a good policy has become a bad policy.

Then a third reason is just like, there could be these crazy changes to the state of the world. There could be disasters that happen — like supervolcanoes, meteorite impacts, nuclear war, out-of-control climate change. And if any of that happens, maybe you get to the point now where like, our little metascience policy stuff doesn’t matter anymore. Like, we’ve got way bigger fish to fry, and the return is zero because nobody’s doing science anymore.

It could also be that the world evolves in a way that, you know, the authorities that run the world, we actually don’t like them — they don’t share our values anymore, and now we’re unhappy that they have better science. It could also be that transformative AI happens.

So, long story short: the longer time goes on, the more likely it is that the world has changed in a way where you can no longer predict the impact of your policy. So the paper simplifies all of this; it doesn’t care about the specific scenarios. Instead, it just says, we’re going to invent this term called an “epistemic regime.” And the idea is that if you’re inside a regime, the future looks like the past, so the past is a good guide to the future. And that’s useful, because we’re saying things like: 2% growth has historically occurred, so we think it’s going to keep occurring in the future; health gains have looked this way, so we think they’re going to look this way in the future. As long as you’re inside this regime, we’re going to say that’s a valid choice.

And then every period, every year, there’s some small probability the world changes into a new epistemic regime, where all bets are off and the previous stuff is no longer a good guide. And how it could change could be any of those kinds of scenarios that we came up with. Then the choice of discount rate becomes like, what’s the probability that you think the world is going to change so much that historical trends are no longer a useful guide? And I settle on 2% per year, a 1-in-50 chance.

And where does that come from? Open Phil had this AI Worldviews Contest, where there was a panel of people judging the probability of transformative AI happening. And that gave you a spread of people’s views about the probability we get transformative AI by certain years. If you look in the middle of that spread, you get something a little less than 2% per year. Then Toby Ord has this famous book, The Precipice, and in there he has some forecasts about x-risk that isn’t derived from AI, but that covers some of those disasters. I also looked at trend breaks in the history of economic growth — there’s been one since the Industrial Revolution, and maybe we should expect something like that again.

Anyway, we settle on a 2% rate. And the bottom line is that we’re sort of saying people in the distant future don’t count for much of anything in this model. But it’s not because we don’t care about them; it’s just that we have no idea if what we do will help or hurt their situation. Another way to think of 2% is that, on average, every 50 years or so, the world changes so much that you can’t use historical trends to extrapolate anymore.
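A minimal sketch of that discounting logic (my illustration, not the report’s actual model): if each year carries an independent 2% chance that the epistemic regime ends, the expected regime length is 1/0.02 = 50 years, and benefits arriving t years out get weighted by the probability that historical trends still apply.

```python
# Sketch only, assuming an independent 2% annual chance the regime changes,
# after which we can no longer predict whether our policy helps or hurts.

p = 0.02            # annual probability the epistemic regime ends
print(1 / p)        # expected regime length: 50.0 years

# Weight on benefits arriving t years out = chance the regime still holds.
for t in (10, 50, 100):
    print(t, round((1 - p) ** t, 2))   # ~0.82, ~0.36, ~0.13
```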

How technology generates huge benefits in our day-to-day lives

Matt Clancy: I’ve had 40 years of technological progress in my lifetime. And if you use the framework we use to evaluate this in this model, it says if progress is 1% per year, my life should be like 20% to 30% better than if I was my age now, 40 years ago. So I thought, does it seem plausible that technological progress for me has generated a 20% to 30% improvement? So I spent a while thinking about this. And I think that yeah, it is actually very plausible, and sort of an interesting exercise, because it also helps you realise why maybe it’s hard to see that value.
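As a rough check on that arithmetic (my illustration; the report’s mapping from material gains to the 20% to 30% wellbeing figure isn’t reproduced here), 1% annual progress compounds to roughly 50% higher material living standards over 40 years:

```python
# Rough arithmetic only: compounding 1% annual technological progress over 40 years.
# Translating this gain into the 20-30% wellbeing improvement quoted above depends
# on the report's framework (e.g. diminishing returns), which isn't modelled here.

growth_rate = 0.01
years = 40
gain = (1 + growth_rate) ** years - 1
print(f"{gain:.0%}")   # ~49% higher material living standards than 40 years ago
```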

Like, I like to work in a coffee shop, and because of the internet and all the technology that came with computing, I can work remotely in a coffee shop most days for part of the day. I like digital photography. These are just trivial things. And not only do I like taking photos, but I’ve got them on my phone, and the algorithm is always showing me a new photo every hour. My kids and I look through pictures of our lives way more often than I ever did as a kid looking at photo albums.

A little bit less trivial is art, right? So my access to some kind of art is way higher than if I’d lived 40 years ago. The Spotify Wrapped came out in November, and I was like, oh man, I spent apparently 15% of my waking hours listening to music by, it said, 1,000 different artists. Similarly with movies: I’m watching lots of movies that would be hard to access in the past.

Another dimension of life is learning about the world. I think learning is a great thing, and we just know a lot more about the world, through the mechanisms of science and technology and stuff. But there’s also been this huge proliferation of ways to ingest and learn that information in a way that’s useful to you. So there’s podcasts where you can have people come on and explain things; there’s explainers; data visualisation is way better; there’s YouTube videos; large language models are a new thing; and so forth. And there’s living literature reviews, which is what I write. So like a third of what I spend my time doing didn’t exist 40 years ago.

Another dimension of what makes life worth living and valuable is your social connections and so on. And for me, remote work has made a big difference for that. I grew up in Iowa; I have lots of friends and family in Iowa. And Iowa is not necessarily the hotspot of economics of innovation stuff. But I live here, and I work remotely for Open Phil, which is based in San Francisco. And then remote work also has these time effects. I used to commute 45 minutes each way for my work. And I was a teacher, a professor, so that was not all the time — I had the summers off and so on — but still, saving 90 minutes a day is a form of life extension that we don’t normally think of as life extension, but it’s extending my time.

And then there’s tonnes of other things that have the same effect, where they just free up time. When I was a kid, I’d drive to the store with my parents and walk the shop floors to get the stuff you need. Now we have online shopping, and a lot of the mundane stuff just comes to our house, shipped and automated. We’ve got a more fuel-efficient car, so we’re not going to the gas station as much. We’ve got microwave-steamable vegetables that I use instead of cooking in a pot. We’ve got an electric snow blower that doesn’t need seasonal maintenance. Just a billion tiny little things. Every time I tap to pay, I’m saving maybe a second. And once you add that up with the remote work and the shopping, I think it’s giving me back weeks per year that would otherwise go to stuff I don’t want to be doing.

But I can keep going. It’s not just that you’re freed from stuff you’d rather not be doing. There are other times when it helps you make better use of time that might otherwise not be available to you. All those odd little moments when you’re waiting for the bus, or for the driver to get here, or for the kettle to boil, or at the doctor’s office, whatever: you could be on your phone. And how you use that time is on you, but you have the option to learn more and do interesting stuff.

Audio content is the same: for like a decade, half the books I’ve “read” per year have been audiobooks and podcasts. And I’m sure there are people listening to this podcast right now while doing something during which they otherwise wouldn’t be able to learn anything about the world — driving or walking or doing the dishes or folding laundry or something like that. So that’s all the tonnes of tiny little things.

And this is just setting aside medicine, which is equally valuable to all that stuff, right? I’ve been lucky in that I haven’t had life-threatening illnesses, but I know people who would be dead if not for advances in the last 40 years, and they’re still in my life because of this stuff. And then I benefited, like everyone else, from the mRNA vaccines that ended lockdown and so forth.

So, long story short: it seems very plausible to me that the framework we’re using, which says this should be worth 20% to 30% of my wellbeing over a 40-year span, is in the right ballpark. I’m luckier than some people in some respects, but I’ve also benefited less than other people in some respects. Like, somebody who had a medical emergency and wouldn’t be alive today without modern medicine could say they benefited more from science than me. So if this is happening to lots of people now and in the future, that’s where I start to think that $70 in value per dollar starts to seem plausible.

Can science reduce extinction risk?

Matt Clancy: One interesting theoretical argument is that if you discover some kind of really advanced scientific solution to problems, a huge group like governments can coordinate around it — you can get the world to coordinate around making this thing happen, like making mRNA vaccines or something, and faster science could basically tilt the balance so that we have more of those frontier things available to these big actors. And presumably that kind of cutting-edge stuff, that takes the coordination of hundreds or thousands of people, wouldn’t be available to sort of bad actors.

But I ended up coming away a little bit pessimistic about this route, just because the reduction in risk that you need to hit seemed really large. So the way I looked at this was: rather than pausing science for a year and evaluating that, what if you got an extra year of science somehow — through efficiency gains, the equivalent of an extra year — and that allowed you to reduce the danger from this time of perils by some amount. How big would that reduction need to be?

And I thought a reasonable benchmark is however much we reduce all-cause death per year. That’s what science is trying to figure out now; we have a bunch of scientists working on it, and we can look at how much life expectancy tends to go up each year. So that’s just a ballpark place to start. And if that’s what you get from an extra year, reducing the annual probability of death by some small amount, and you apply that to these estimates from the superforecasters or the domain experts, it’s just not big enough to make a difference. It doesn’t move the needle very much in this model.

Luisa Rodriguez: Is that really the best we could do though, if we tried really hard to do better science? Like, if we had some sense of what the risks were, and we have at least some sense, could better science just be really, really targeted at making sure those risks don’t play out?

Matt Clancy: I mean, I think it is possible, so I don’t say that this is definitive or anything, but I say it would need to be better than we’re doing against the suite of natural problems that are arising.

So I’ll give you two reasons: a reason to be optimistic and a reason to be pessimistic. A reason to be optimistic is that we actually do have pretty good evidence that science can respond rapidly to new problems. Prior to COVID-19, the number of people doing research on that kind of disease was all but zero. Afterwards, like one in 20 papers — this is based on a paper by Ryan Hill and other people — was in some way related to COVID. So basically 5% of the scientific ecosystem, which is a huge number of people, pivoted to working on this.

You can see similar kinds of responses in other domains too. The funding for the different institutes in the National Institutes of Health tends to be very steady: it just goes up with inflation or whatever, and doesn’t move around very much. But there were a few times in the last several decades when it moved around quite a lot. One was in response to the AIDS crisis, and another was after 9/11, when there were a lot of fears about bioterrorism. These are both examples where something big and salient happened, and the ecosystem did change and pour a lot of resources into trying to solve those problems.

The counterargument, the reason you might want to be pessimistic, is that it took a long time to sort out AIDS, for example. And in general, even though science can begin working on a problem really quickly, that doesn’t mean a solution will always be quick in arriving. Throughout the report, we assume that the benefits from science take 20 years before they turn into some kind of technology — that’s under normal science, not any kind of accelerated crisis response — and then they take even longer to spill out over the rest of the world. So, you know, the research underlying the mRNA vaccines that ended COVID quickly had been going on for decades. It wasn’t like COVID hit and only then did we do all the work to figure out how to solve it.

Are we already too late to delay the time of perils?

Luisa Rodriguez: So in theory, the time of perils might be decades from now, and pausing science would in fact meaningfully delay its start. But I guess it also seems possible, given the time it takes for technology to diffuse and maybe some other things, that we could just already be too late.

Matt Clancy: That’s right, yeah. That’s another analysis we look at in the report: that, as you said, we could be too late for a lot of reasons. One simple scenario is that all of the know-how about how to develop dangerous genetically engineered pathogens is sort of already out there, latent in the scientific corpus, and AI advances are just going to keep moving forward whether or not we do anything with science policy. That’s driven by other things, like what’s going on at OpenAI or DeepMind, and eventually it’s going to be advanced enough to sort of organise and walk somebody through how to use this knowledge that’s already out there. So that’s one risk: it hasn’t happened yet, but there’s nothing left to be discovered that would be decisive in this question.

Luisa Rodriguez: Would that mean we’re just already in the time of perils?

Matt Clancy: The time of perils is part of the framework for estimating the probability of these outcomes and whether you’d want to bring them forward. Technically we wouldn’t be in it, but we would be too late to stop it; it’s coming at a certain point. And I guess this is also a useful point to say that it doesn’t have to actually be this discrete thing where we’re out and then we’re in; it could be smooth, where it just gets riskier and riskier over time. This whole paper is sort of a bounding exercise: a simple way to approach it that I think will tend to give you the same flavour of results as if you assumed something more complicated.

Luisa Rodriguez: Sure. I guess just being in this time of heightened risk doesn’t guarantee anything terrible. In theory, we still want to figure out what the right policy is. So if we’re in that world where we’re too late to pause science, what exactly is the policy implication from your perspective?

Matt Clancy: I mean, I think it’s basically this quote from Winston Churchill: “If you’re going through hell, keep going.” So we’re in a bad place, and there’s no downside to science in this world, right? If you think things can’t get worse, if you think that the worst stuff has already been discovered and it’s just waiting to be deployed, well, then you’ve got to bet that if we accelerate science, we get all these normal benefits. It’s not going to get worse, because that die has already been cast, and maybe there’s a way to get to the other side and so forth. So I think that’s the implication. It’s a bad place to be, but at least it’s clear from a policy perspective what the right thing to do is.

Luisa Rodriguez: Yeah. Do you have a view on whether we are, in fact, already too late?

Matt Clancy: I’m not a biologist; I’m an economist. But it’s another one where I’m like, I don’t know, 50/50.

The omnipresent frictions that might prevent explosive economic growth

Matt Clancy: So one factor that can matter is that it might take time to collect certain kinds of data. A lot of AI advances are based on self-play, like AlphaGo or chess: the systems play themselves, so they can really rapidly play a tonne of games and learn skills. But what’s the equivalent of that in the physical world? Well, we don’t have a good enough model of the physical world to self-play in: you actually have to go out into the physical world, try the thing, and see if it works. And if it’s something like testing a new drug, that takes a long time to see the effects.

So there’s time. It could be about access to specific materials, or maybe physical capabilities. It could be about access to data: you could imagine that if people see their jobs getting obsoleted, they might refuse to cooperate with sharing the data needed to train on that stuff. There’s social relationships people have with each other; there’s trust when they’re deciding who to work with and so forth.

And if there are alignment issues, that could be another potential problem. There are incentives that people might have, and people might be in conflict with each other and able to use AI to try to thwart each other. You could imagine legislating over the use of IP, intellectual property rights, with people on both sides using AI, and the process not getting much faster because they’re both deploying a lot of resources at it.

So the short answer is that there are lots of these kinds of frictions. I think you see this a lot in attempts to apply AI advances to any particular domain: it turns out there’s a lot of sector-specific detail that has to be ironed out. And there’s kind of this hope that those problems will sort of disappear at some stage. And maybe they won’t, basically.

One example that I’ve been thinking about recently with this is: imagine if we achieved AGI, and we could deploy 100 billion digital scientists, and they were really effective. And they could discover and tell us, “Here are the blueprints for technologies that you won’t invent for 50 years. So you just build these, and you’re going to leap 50 years into the future in the space of one year.” What happens then?

Well, this is not actually as unprecedented a situation as it seems. There’s a lot of countries in the world that are 50 years behind the USA, for example, in terms of the technology that is in wide use. And this is something I looked at in updating this report: what are the average lags for technology adoption?

So why don’t these countries just copy our technology and leap 50 years into the future? In fact, in some sense, they have an easier problem, because they don’t have to even build the technologies, they don’t have to bootstrap their way up. It’s really hard to make advanced semiconductors, because you have to build the fabs, which take lots of specialised skills themselves. But this group, they don’t even have to do that. They can just use the fabs that already exist to get the semiconductors and so forth, and leapfrog technologies that are sort of intermediate — like cellular technology instead of phone lines and stuff like that. And they can also borrow from the world to finance this investment; they don’t have to self-generate it.

But that doesn’t happen. And the reason it doesn’t happen is because of tonnes of little frictions that have to do with things like incentives and so forth. And you can say there’s very important differences between the typical country that is 50 years behind the USA and the USA, and maybe we would be able to actually just build the stuff.

But I think you can look at the absolute best performers that were in this situation. They changed their government, they got the right people in charge or something, and they did this about as well as it can be done. Like the top 1%. They don’t have explosive economic growth; they don’t converge to the US at 20% per year. You do very rarely observe countries growing at 20% per year, but it’s almost always because they’re a small country that discovered a lot of natural resources. It’s not through this process of technological upgrading.

Articles, books, and other media discussed in the show

Open Philanthropy’s Innovation Policy programme is currently seeking pre-proposals for living literature reviews — contact Matt if you have ideas!

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.