#213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn’t exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.

Consider the case of nuclear weapons: in this compressed timeline, there would have been just a three-month gap between the Manhattan Project’s start and the Hiroshima bombing, and the Cuban Missile Crisis would have lasted just over a day.
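To make the compression arithmetic concrete, here is a minimal sketch in Python. The historical durations are rough approximations, used purely for illustration:

```python
# Toy arithmetic only: a "century in a decade" is roughly a 10x compression
# of calendar time. Durations below are approximate, for illustration.
COMPRESSION = 100 / 10  # 100 years of progress in 10 years

events_days = {
    "Manhattan Project start to Hiroshima": 3 * 365,  # roughly Aug 1942 to Aug 1945
    "Cuban Missile Crisis": 13,                       # the famous "13 days"
}

for name, days in events_days.items():
    print(f"{name}: ~{days} days -> ~{days / COMPRESSION:.1f} days compressed")

# Prints:
#   Manhattan Project start to Hiroshima: ~1095 days -> ~109.5 days compressed
#   Cuban Missile Crisis: ~13 days -> ~1.3 days compressed
# i.e. roughly the "three-month gap" and "just over a day" figures quoted here.
```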

Robert Kennedy, Sr., who helped navigate the actual Cuban Missile Crisis, once remarked that if they’d had to make decisions on a much more accelerated timeline — like 24 hours rather than 13 days — they would likely have taken much more aggressive, much riskier actions.

So there’s reason to worry about our own capacity to make wise choices. And in “Preparing for the intelligence explosion,” Will lays out 10 “grand challenges” we’ll need to quickly navigate to successfully avoid things going wrong during this period.

Will’s thinking has evolved a lot since his last appearance on the show. While he was previously sceptical about whether we live at a “hinge of history,” he now believes we’re entering one of the most critical periods for humanity ever — with decisions made in the next few years potentially determining outcomes millions of years into the future.

But Will also sees reasons for optimism. The very AI systems causing this acceleration could be deployed to help us navigate it — if we use them wisely. And while AI safety researchers rightly focus on preventing AI systems from going rogue, Will argues we should equally attend to ensuring the futures we deliberately build are truly worth living in.

In this wide-ranging conversation with host Rob Wiblin, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:

  • Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
  • The three types of intelligence explosion that occur in order: software, technological, and industrial
  • Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
  • How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
  • Ways AI could radically improve human coordination and decision making
  • Why we should aim for truly flourishing futures, not just avoiding extinction

This episode was originally recorded on February 7, 2025.

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

Highlights

A century of history crammed into a decade

Will MacAskill: So we’re thinking about 100 years of progress happening in less than 10. One way to get a sense of just how intense that would be is imagine if that had happened in the past. So imagine if in 1925 we’d gotten a century’s worth of tech progress in 10 years. We should think about all the things that happened between 1925 and 2025 — including satellites, biological and chemical weapons, the atomic bomb, the hydrogen bomb, the scale-up of those nuclear stockpiles. We should think about conceptual developments: game theory, social science, the modern scientific method; things like computers, the internet, AI itself.

Rob Wiblin: The decolonisation movement.

Will MacAskill: Then of course, yeah, social and political movements as well: decolonisation, second- and third-wave feminism, fascism, totalitarianism, totalitarian communism…

Rob Wiblin: Yeah, the rise and fall of communism.

Will MacAskill: Exactly, yeah. Postmodernism. So all of these things are happening over the course of 10 years in this thought experiment.

Human decision making and human institutions don’t speed up though. So just taking the case of nuclear weapons: in this accelerated timeline, there’s a three-month gap between the start of the Manhattan Project and the dropping of the nuclear bomb in Hiroshima. The Cuban Missile Crisis lasts a little over a day. There’s a close nuclear call every single year.

This clearly would pose an enormous challenge to institutions and human decision making. And Robert Kennedy, Sr., who was a crucial part of the Cuban Missile Crisis, actually made the comment that if they’d had to make decisions on a much more accelerated timeline — like 24 hours rather than the 13 days they had — they probably would have taken much more aggressive, much riskier actions than they in fact did.

So this thought experiment is to try to really get a sense of just the sheer amount of change, including multiple different sorts of change. And we are talking about a century in a decade; I actually think that the amount of technological development we might get might be much larger again: we might be thinking about many centuries, or even 1,000 years in a decade.

And then if you think about the thought experiment there, it’s like you’ve got a mediaeval king who is now trying to upgrade from bows and arrows to atomic weapons in order to deal with this wholly novel ideological threat from this country he’s not even heard of before, while still grappling with the fact that his god doesn’t exist and he descended from monkeys.

Rob Wiblin: Which they found out a couple of months ago.

Will MacAskill: Which they found out like two months ago, exactly. Clearly the sheer rate of change poses this enormous challenge.

Rob Wiblin: Yeah. So I think people might be thinking this would all play out completely differently, because some things can be sped up incredibly quickly as AI gets faster, and other things can’t.

And that is exactly your point: that there’ll be some stuff that will rush ahead — I suppose we could imagine that there could be a real flourishing of progress in pure mathematics very soon as a result of AI — while things in the physical world might happen a little bit more slowly. Some biological work that requires experiments might progress more slowly. I guess anything to do with human institutions and human decision making, that slows down in relative terms.

And that is exactly why, because some stuff slows down relative to other things, we end up with whatever problems result from a shortfall of the things we couldn’t speed up. Is that basically the issue?

Will MacAskill: Yeah, exactly. So if it was literally the case that time speeds up by a factor of 10 —

Rob Wiblin: It’s not even clear exactly what that would mean.

Will MacAskill: It’s actually a philosophical question whether that’s even possible, as in whether that would be a meaningful difference. But instead, what is happening is uneven: even in the century-in-a-decade scenario, it’s not going to be exactly the same tech progress — because like you say, some areas of science are slowed by regulation or by slow physical experiments, or by just the need to build capital. So building the Large Hadron Collider, for example, takes a lot of time.

But the crucial thing is that human reasoning, human decision making, and human institutions don’t speed up to match the pace of technological development.

What does a good future with AGI even look like?

Will MacAskill: I mean, you should really pause and reflect on the fact that many companies now are saying what we want to do is build AGI — AI that is as good as humans. OK, what does it look like? What does a good society look like when we have humans and we have trillions of AI beings going around that are functionally much more capable?

There’s obviously the loss of control challenge there, but there’s also just the like —

Rob Wiblin: Sam Altman, I’ve got a pen. Can you write down what’s your vision for a good future that looks like this?

Will MacAskill: What’s the vision like? How do we coexist in an ethical and morally respectable way? And it’s like there’s nothing.

Rob Wiblin: Deafening silence.

Will MacAskill: Careening towards this vision that is just a void, essentially. And it’s not like it’s trivial either. I am a moral philosopher: I have no clue what that good society looks like.

Rob Wiblin: I think people aren’t spelling it out because as soon as you start getting into concrete details, if you describe any particular vision, people will be like, “This is super objectionable in this respect.”

Will MacAskill: This is part of the issue: it’s super objectionable in all respects. I think the one that’s most common is you’ve just got humans in control of everything and these AI servants doing exactly whatever people want, in the same way that software does whatever we want at the moment. But as soon as you think maybe, and quite probably, those beings have moral status, that no longer looks like an attractive vision for future society.

Rob Wiblin: Closer to a dystopia.

Will MacAskill: Exactly. Whereas then, go to the other side where they have rights and so on…

Rob Wiblin: It’s like now humans are totally disempowered.

Will MacAskill: Exactly. So that doesn’t seem good either.

Rob Wiblin: I guess we’ll do some middle thing. But what’s that?

Will MacAskill: What is that?

Rob Wiblin: It’s just going to be some combination of objectionable in these two ways. I suppose because the numbers are just going to be so off, there’s going to be so many more AGIs than humans, even giving them a tiny amount of weight just swamps us immediately, politically.

Will MacAskill: Yes, exactly. Maybe let’s talk about that later if we get time.

Rob Wiblin: Another objection would be that if there’s a fact of the matter about what is right and wrong here, with all the improvements in science and technology and intellectual progress, we’ll be able to figure that out, and we’ll be able to act on that information in future. Why shouldn’t that super reassure us?

Will MacAskill: I think for two reasons. The main one is that I expect the worry is not that people won’t know what to do; it’s that they won’t care. So, I don’t know, let’s take animals today and factory farming and so on.

Rob Wiblin: Are you saying that we have any ethical or moral insights today that we don’t all act on well?

Will MacAskill: I know it’s a bold and provocative claim.

Rob Wiblin: Explain how that would play out.

Will MacAskill: Sometimes I really put my neck out on things. So take animal welfare today. There’s a lot of information that is publicly available that is, in fact, directly inconsistent with things that people believe. But the problem is not really that people don’t know about animal suffering.

Rob Wiblin: Or couldn’t find out.

Will MacAskill: Exactly, they could quite easily find out, in fact, and people deliberately choose not to. The deep problem is that people do not care about nonhuman animals unless they’re getting a lot of social pressure to do so and so on. And it’s obviously inconsistent.

Rob Wiblin: And it’s at zero cost to them.

Will MacAskill: Yeah. They care about dogs and pets and so on. People are very inconsistent on this. Similarly, in the future the worry is just that, whatever the facts, people won’t in fact care.

And then how the balance of power and considerations plays out might well be quite contingent on early decisions. Potentially, if you started off kind of locked into some regime where AI is treated as software, with exactly the same legal framework, then that gives the power and the incumbency to those who do not care about digital beings.

If instead, there’s some other framework, or even some other set of norms, or even the thought of like, “We don’t really know what we’re doing, so we’re going to use this legal framework and it must end in two decades and then we start again with a clean slate,” that could result in essentially the advocates for the interests of digital beings having more competitive power at the point of time that legal decisions are made.

AI takeover might happen anyway — should we rush to load in our values?

Rob Wiblin: We’re discussing if there is a lock-in relatively soon, how can we make it go better?

You talk about one approach would be doing better AI “value loading” — where I guess you try to make sure that the most powerful AGIs are taught firstly to reason and think about what is good and what is right, and also potentially taught to care about doing what is right for its own sake, rather than merely following whatever instructions they’re given.

That immediately poses a problem that you would be specifically training them to not follow human instructions and to do what their own internal reasoning tells them is good or right. This increases the chance of takeover, surely?

Will MacAskill: You could do both, is the thought. So you can imagine someone who’s a morally concerned citizen. Think of a human. Actually let’s even imagine they’re an ideologue or something. They’re just super libertarian; they just want the US to become a libertarian paradise. So they’ve got this strong goal — but they’re only willing to work within the bounds of the law; they’re not willing to be deceptive and so on. I’m kind of OK with that. I’m very scared of the libertarian who’s willing to do anything and is much smarter than me.

In the case of AI, you have corrigible AI. Like I said, I want it to be risk averse as well, so it doesn’t even get that much payoff from taking over. There are other things you could do. Maybe it’s myopic: it discounts the future very heavily, so again, it doesn’t get that much payoff from a takeover. But then it’s also corrigible. Perhaps it’s got nonconsequentialist restrictions: it just really doesn’t want to lie, really doesn’t want to do anything violent. Also, it just knows to always check in with you.

But nonetheless, there’s still a difference between, if I’m asking that AI for advice, for example, does it just tell me whatever it thinks I want to hear? That’s one thing. Or maybe what it thinks I would want to hear if I were sufficiently reflective? That would be a lot better. Or thirdly, it’s just a virtuous agent with its own moral character and is willing to push back?

There’s obviously a spectrum here. But I could be asking it to do something that is immoral but legal, and not within its guardrails and so on, and it might say, “I am the AI. I’m ultimately going to help you, but I really think this is wrong.” It could do that, or it could refuse.

So there’s a big spectrum, I think, within the AI character. And I think that’s really important for two reasons. One is because I do expect there to be a lot of relying on AI advice. And I actually think it will matter what the character of the AI is like when human beings are relying on AI advice.

In the same way as if you imagine the president surrounded by cronies and yes-men and sycophants, that’s going to result in one outcome. You could imagine another one where the president is surrounded by expert advisors; they have somewhat different opinions, perhaps, but they’re willing to push back.

Rob Wiblin: I guess ultimately they might do what the president wants if the president really is just not persuaded by what they say, but they’ll attempt to provide good advice.

Will MacAskill: Yeah. And then there are also shades to the “aligned with what” question — of just, what do we want it aligned with? You know, you could have this perfectly aligned AI that does all sorts of very harmful things. But you can also build in more constraints, such that even if something is legal but immoral, it still refuses to do it.

But there’s two reasons I think this avenue is promising, and they have somewhat different implications. The first is easier to understand, which is just how it affects our social epistemology and decision making.

The second one is kind of a Hail Mary to guard against the possibility of AI takeover. If AI does take over, there’s still this huge range of possible outcomes. One where it kills everybody. Second, where it disempowers humanity but doesn’t kill everybody, and in fact leaves perhaps humanity with a lot of resources and we are all personally much better off than we were; it’s just that we’re no longer in control of the future. Or perhaps it tortures all of humanity because maybe it’s even a sadist, or it just uses us all in experiments.

Or it goes and does something just really good, in fact, with the future. It’s like a moral reasoner itself, figures out what’s correct, judges that humanity was on the wrong track, and does it all. It’s still misaligned, it still takes over, it’s still bad; it’s not the outcome we want.

But there’s this huge range of possible outcomes, even conditional on AI takeover. It’s not just like a one or a zero.

Lock-in is plausible where it never was before

Rob Wiblin: Let’s actually back up a minute and think about this issue of lock-in, because that’s going to come up again and again through the conversation, and it’s highly related to this issue of seizure of power.

So there’s this risk that within the next century, reasonably soon, we could end up tying our hands somehow, getting really stuck on a path from which we can’t escape, even if many or most people might not want humanity to be going down that route.

I think it’s interesting that despite being at this game for 10,000 years, humanity doesn’t feel locked in at all. At the moment we have this very freewheeling, kind of chaotic nature, where it’s very unclear what direction we’re going, and there’s no one group or person who is especially powerful, who has grabbed so much power that they can really control the direction of things.

Why is that? Why haven’t we had any lock-in so far?

Will MacAskill: Yeah, lock-in is the idea that some institution or set of values kind of closes down the possibility space for future civilisation. And note that definition is neither good nor bad necessarily. Obviously there’s a lot to worry about with lock-in, and we’ll talk about that, but the key thing when we talk about lock-in as well is that it’s indefinite.

So there have been many attempts to lock in values and institutions in history — some successful, some not. In What We Owe the Future, I talk about a lot of examples from history, from shortly after the Confucian period onwards.

One example that I think a lot about is the constitution of the United States. It is very hard to change the US Constitution — much harder than in other countries. That’s I think an example of at least temporary lock-in, where a relatively small group in the late 18th century decide how the country is going to be governed, and make it extremely hard to change that. And that’s still guiding the American political system in a very significant way today.

Now, you’ve asked why have we not had lock-in in the past? I think we have to some extent, actually. So there’s only one human species. That is because Homo sapiens outcompeted their rivals.

Rob Wiblin: A polite way of putting it.

Will MacAskill: Well, in some cases interbred as well. I do think of that as a sort of lock-in. But thankfully it seems like the future is still very open.

There are other things that could well amount to lock-in. I actually think the US Constitution: it’s quite plausible to me that the US wins the race to AGI, becomes such a large part of the economy that it’s a de facto world government, and that guides just how the very long-term future goes — in which case Madison and Hamilton were in fact locking things in in a really indefinite way.

Rob Wiblin: Although even then I guess you’re locking in more of a process than a very specific set of values that we’re just going to then operationalise.

Will MacAskill: Yeah, that’s right. So there are ways in which I think, once we get to AGI and beyond, things get really quite different.

Let’s say that the American founding fathers wanted to lock in a very specific set of ideals or values — I’m not actually speaking to their true psychology, but supposing they really did want to. Well, it’s hard for them because they’ll die, so they can’t continue to enforce that afterwards. It’s also not even clear what they might want.

So we have the Supreme Court adjudicating the meaning of the Constitution. That obviously changes over time. However, with AGI, we can have very specific sets of goals encoded in the AGI. We can have that, we can have many backups. We can use error correction so that the code that specifies those goals doesn’t change over time.

So we could have a constitution that is enshrined in the AGI, and that just persists indefinitely. In the past, you’ve got these attempts to lock in values on institutions, but there’s just a decay over time. But over time that decay rate can get lower and lower. And I think with AGI it can get to essentially zero over time.
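As a rough illustration of the error-correction point (this is not from the paper; the goal text and numbers are made up), many independently stored copies of a specification plus majority voting can keep it from drifting even if individual copies get corrupted:

```python
# Toy illustration (not from the paper): redundant copies plus majority voting
# can preserve a stored specification indefinitely against random corruption.
import random
from collections import Counter

GOAL_SPEC = "constitution-v1: a fixed, hypothetical value specification"

def corrupt(text: str, p: float) -> str:
    """Randomly replace each character with probability p (simulating drift)."""
    return "".join(
        random.choice("abcdefghijklmnopqrstuvwxyz ") if random.random() < p else ch
        for ch in text
    )

def majority_restore(replicas: list[str]) -> str:
    """Recover each character position by majority vote across all replicas."""
    return "".join(Counter(chars).most_common(1)[0][0] for chars in zip(*replicas))

random.seed(0)
replicas = [corrupt(GOAL_SPEC, p=0.05) for _ in range(101)]  # 101 drifting copies
print(majority_restore(replicas) == GOAL_SPEC)  # almost certainly True
```

The only point of the sketch is that, unlike human institutions, a digital specification can in principle be copied, checked, and restored indefinitely, which is what pushes the decay rate towards zero.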

Rob Wiblin: I guess many dictators or many people who have had a strong ideology that they wanted to push on people and have promulgated forever, they’ve tried to create a sort of lock in, but they get old and die. Their followers don’t believe exactly the same thing. So the ideas drift over time and then they die.

And then there’s another generation that believes a different set of things. You can’t even clone yourself to create another person who has the same genes who might be more inclined to believe the same thing. You’re constantly getting this remixing at all times, which keeps things a bit uncontrolled. It’s impossible for anyone to really impose their will indefinitely, because they just can’t. There’s no argument that’s persuasive enough to get all future generations to insist on it.

But with AGI, that completely changes, because you can make an AI that has whatever goals it has, and you can just copy it. Firstly, it need never be destroyed, and you can just make an unlimited number of copies of it. And even if it’s drifting over time, you can just reset it back to factory settings and have another go. It’s like a total revolution in your ability to lock things in.

Will MacAskill: Yeah. That was extremely well explained. You should really work on this stuff. Yeah, exactly.

ML researchers are feverishly working to destroy their own power

Will MacAskill: I’ve talked about this a bit, but one is just spreading some of the important ideas — so just what the intelligence explosion is, where we are, what challenges we’ll face. Actually trying to get buy-in on that ahead of time.

Second is just empowering more responsible actors. No matter how much we try to distribute power, a bunch of people are going to have a lot of power at a very crucial point in time. They’re going to be making really consequential decisions. I want those people to be humble and cooperative and farsighted and morally motivated and communicative and accountable and so on.

Rob Wiblin: [laughs] We’re pretty set on all of those points, I would say.

Will MacAskill: Yeah, luckily this one actually is already taken care of and we really don’t need to worry about that one.

Rob Wiblin: Yeah, but if things get worse, then we could intervene and try to make them better.

Will MacAskill: And that’s something that people can influence in terms of who you vote for, which companies you buy products from.

Machine learning researchers, they currently have an enormous amount of power that they will predictably lose.

Rob Wiblin: That they are feverishly working to keep away from themselves.

Will MacAskill: Yeah. So there’s actually this amazing confluence of self-interest among them, and also potential benefits for the world — because this is a way in which we can move some amount of power away from just the leaders, and, if governments get more involved, away from the governments running the project. You could imagine this kind of union of concerned computer scientists, probably informal, but just a group saying, “We will only build this thing that will replace us under XYZ conditions.”

Rob Wiblin: It’s only a few thousand, maybe like 10,000 at most, who are relevant to this, and are likely to be relevant by the point that these decisions have to be made.

Will MacAskill: Yeah, exactly. And that’s just a way in which it could be another lever basically to steer things in a good direction so that it’s not wholly just economic and military incentives driving the shape of the technology.

Rob Wiblin: Just to spell this out a little bit more clearly for people at the back, there’s a few thousand ML researchers who currently have an enormous potential influence over the future because they have all of this intellectual knowledge. They’re the people who, over this relevant short period of time, are the ones who are going to be automating AI research and setting off this chain reaction, the intelligence explosion.

They are currently working very hard to automate themselves to the point where they are no longer required for this process and the AI will be able to entirely do AI research better than they could all by itself. At which point their leverage completely evaporates and they no longer have any control over anything. Currently they are working to bring about their own disempowerment while asking for nothing other than a salary in the meantime. I think that they should bargain for more.

Will MacAskill: That’s very well put, Rob. It’s very well put. And I should say ML researchers are often just good people, nice people.

Rob Wiblin: Many of my best friends are ML researchers. Please, folks. Please.

Will MacAskill: Exactly. And there will be very hard decisions that will be much easier to make if there were some sort of informal community or union or something like that. That could be, “Hey, this company is just not holding itself to high enough safety standards. We’re leaving.”

Rob Wiblin: “This company stinks. We’re going to go to a better company doing the same thing” — literally probably on another block down the road.

Will MacAskill: Down the street. Or secondly, in cases with much larger government involvement, saying, “I’m only going to work on this project under XYZ conditions.” And the conditions can be a really low bar.

Rob Wiblin: It can be, “We won’t build billions of killer mosquitoes.”

Will MacAskill: “Any AI we train has to be aligned with the US Constitution” — if it’s built in the United States.

Rob Wiblin: “It wouldn’t assist with a coup.”

Will MacAskill: It wouldn’t assist with a coup. Pretty low-bar stuff. But yeah, that lever does not currently exist, as far as I can tell.

People distrust utopianism for good reason

Rob Wiblin: Interesting. I guess the upshot is, let’s worry a little bit less about the extinction thing. Instead what we should do is try to take whatever future we’re going to have and turn it into this utopia, the very best possible future, which we think is an incredibly fragile and incredibly narrow target.

I think to many people that might sound a little bit utopian. And I guess people have negative associations with utopianism because it tends to justify bad behaviour. It can just seem kind of naive, I guess. People who are utopian tend to be very locked into quite a narrow perspective on what’s good; that’s the sort of mindset that drives people towards utopianism.

Why isn’t this the sort of bad utopianism that has justified all kinds of bad things?

Will MacAskill: It’s a great question, and I’m very aware of the fact that I use the term utopia a lot. I spell it with an E: eutopia.

Rob Wiblin: To try to get some distance from the associations of the concept.

Will MacAskill: Yeah. Though it sounds the same. But yeah, utopia is extremely unpopular as a term and as an idea. And honestly, I think for very good reason. Utopianism was much more popular at the end of the 19th century, early 20th century. And some things happened. It didn’t age well.

But lots of the depictions of utopia didn’t age well either. My partner and I have a bit of a shared hobby of reading old utopian fiction, and it’s really remarkable the extent to which those utopias now look like dystopias, even though not that much moral progress has happened since: it’s only been a century or so, which is small in the grand scheme of things. But very often the societies are totalitarian; very often they bake in the moral blindspots of their time.

Thomas More, the person who coined the term, depicted a utopia of amazing abundance, this wonderful society in many ways. But every household owned two slaves. Not so appealing nowadays. Similarly, Aldous Huxley, who wrote Brave New World, also wrote a piece of utopian fiction called Island. Again, it’s this very technologically advanced society, but also in touch with nature. And the adults had sex with children. It’s just like, whoa.

Rob Wiblin: Wasn’t Aldous Huxley doing that in the ’30s? I’m surprised that was the kind of thing someone would write in the ’30s.

Will MacAskill: This is definitely a digression, but even in the ’60s and ’70s, the French existentialists and philosophers were signing an open letter to say that there should be no age of consent, or it should be much lower than 16. It’s really easy to forget how quickly moral attitudes change, and how morally different things were even just 100 years ago, or even less.

So depictions of utopia often, in fact, end up looking really quite dystopian. I think that’s true from Plato’s Republic onwards. And the lesson is we shouldn’t be trying to depict some particular vision of the future and aim directly towards that. In fact, that’s terrifying. That’s what we should be guarding against.

This is why I’m trying to develop and promote this idea of a viatopia. That’s a way station instead: it acknowledges that we don’t know what the ideal future looks like, and is a state of society that we just think is good enough that it’s on track to get to a really good future.

I’ve talked before about the long reflection as an idea, that would be one implementation or one proposal for what viatopia would look like. But there’s other potential proposals too, like the idea of a morally exploratory society, which I talk about in What We Owe the Future, or the idea of a grand bargain between different value systems.

And my hope at least is that this can give a positive vision — which is extremely lacking in the world today — for the post-AGI future, without the terrifying and often totalising kind of utopian impulse that we saw in the early 20th century in particular.

Non-technological disruption

Will MacAskill: So look through the past few centuries or 1,000 years: many of the ideas that upend society are intellectual. It’s like communism or fascism or atheism or the idea of universal human rights or feminism and so on. We should expect loads more of them. That is a priori work — that’s like sitting in an armchair kind of generating arguments. It’s true the diffusion of the ideas will be slower, and perhaps so slow that the effect is greatly mitigated. But I do think, just imagine some of the really big, groundbreaking —

Rob Wiblin: I guess we’ve had this whole culture war over the last 20 years or so — people who are more social justice oriented and then a big reaction against that — if that played out in one year rather than 10 years.

Will MacAskill: Yeah, exactly. But also potentially ideas like, maybe we’re in a simulation, and the arguments are just extremely good. And you’ve got these superintelligences: they’re great at all of these other domains, clearly our epistemic superiors, and there are loads of them. That would be potentially quite disruptive.

Or perhaps just extremely strong arguments against the idea of there being an objective moral truth. Perhaps that’s something that people in fact don’t really internalise. And now you’ve got these AI advisors, just being like, “What are you doing? You have these goals and you keep not acting on them because of this false belief that you have.”

They call these “disruptive truths.” It’s obviously just extremely hard to predict what they would be. I don’t think people had anticipated ideas like atheism or abolitionism or communism before they became prevalent.

The 3 intelligence explosions

Rob Wiblin: In the paper you lay out the three different types of intelligence explosion that can occur. So there’s a software intelligence explosion, then there’s an intellectual one, and then an industrial one. Is that right?

Will MacAskill: Technological and then industrial.

Rob Wiblin: Yeah. So maybe I need to have this explained again. Can you explain the three different types of intelligence explosion? Because they each create different dynamics, and they occur at different stages.

Will MacAskill: Sure. So there’s this separate paper written with Tom Davidson and Rose Hadshar, which is describing these different intelligence explosions.

Previously I was arguing that even if you didn’t have this recursive improvement, just the sheer rate of progress at the moment would be enough to drive forward a century in a decade. But in fact, I think we will have this recursive improvement, and that will speed things up even more.

The first is a software feedback loop, where AI systems get really good at designing better algorithms, and so they make better algorithms, which means you can make even better AI that helps you make even better algorithms, and so on. So that’s a software feedback loop.

Second is a chip quality or technological feedback loop, where AI gets really good at chip design or the other aspects of just making higher quality chips, where you get more computational power per dollar.

And then the third is the industrial explosion and the chip production feedback loop. That’s where you now have AI and robotics: rather than needing human labour to produce more chips, and in fact to produce goods in general, you now have AIs controlling robots, such that you can have wholly autonomous factories producing goods, including chips and so on. And that would be another kind of feedback loop, where the more computational power you have, the more AIs you have, and the higher quality AIs you have.
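Here is a toy numerical sketch of how those three loops compound. It is not the model from the Davidson and Hadshar paper; the growth rates and start times are invented purely to illustrate the dynamic that each loop feeds the others, and that the later loops switch on later because they require physical build-out:

```python
# Toy model only: three multiplicative feedback loops. Each loop's improvement
# rate grows with total effective AI capacity, and the later loops switch on
# later (chip fabs and robot factories take time to build).
software, chip_quality, chip_count = 1.0, 1.0, 1.0
R_SOFTWARE, R_QUALITY, R_PRODUCTION = 0.10, 0.03, 0.02  # invented per-step returns
START_QUALITY, START_PRODUCTION = 5, 10                 # invented start times

for step in range(20):
    capacity = software * chip_quality * chip_count  # effective AI research capacity
    print(f"step {step:2d}: capacity ~{capacity:,.1f}")
    software *= 1 + R_SOFTWARE * capacity ** 0.5          # software feedback loop
    if step >= START_QUALITY:
        chip_quality *= 1 + R_QUALITY * capacity ** 0.5   # chip quality (technological) loop
    if step >= START_PRODUCTION:
        chip_count *= 1 + R_PRODUCTION * capacity ** 0.5  # chip production (industrial) loop
```

The only takeaway from the sketch is that growth accelerates as each successive loop comes online, which is the compounding dynamic described above.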

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].
