Transcript
Cold open [00:00:00]
Nikita Lalwani: There’s a lot of uncertainty with this fast-takeoff scenario. If it does happen, that could really change the calculus on the nuclear deterrence question.
Sam Winter-Levy: In a nuclear world, once you have nuclear weapons, the winner of contests and these bargaining dynamics is determined less by the balance of power — like which side has more weapons, a stronger economy, more resources — and more by the balance of nerves: which side has more resolve, which side is more willing to run a higher risk of nuclear escalation, which side cares more about a particular outcome.
Nikita Lalwani: Imagine going to the president today and saying, “This AI system can tell us with 100% certainty where all of China’s nuclear weapons are.” Would he accept that statement unquestioningly, or would he have some doubt as to whether the system was foolproof?
Sam Winter-Levy: I think the AI community, AI experts, are really best placed to understand the technologies that are emerging, but they are not expert on nuclear weapons and nuclear deterrence. And conversely, the nuclear community knows that field of work, but they’re just not necessarily entirely on top of the frontier of AI breakthroughs.
Who are Nikita Lalwani and Sam Winter-Levy? [00:01:03]
Luisa Rodriguez: Today I’m speaking with Nikita Lalwani and Sam Winter-Levy. Nikita is a lawyer and policymaker. Previously she served as Director for Technology and National Security at the White House, where she focused on issues related to US–China technology competition, and before that as a senior advisor to the director of the CHIPS Program office at the US Department of Commerce. Sam is a fellow at the Carnegie Endowment for International Peace, where he focuses on the intersection of national security and AI.
They’ve both just coauthored a really great piece for Foreign Affairs called “The end of mutual assured destruction? What AI will mean for nuclear deterrence.”
Thanks for coming on the podcast, Nikita and Sam.
Nikita Lalwani: Thanks so much for having us.
Sam Winter-Levy: Great to be here.
How nuclear deterrence actually works [00:01:46]
Luisa Rodriguez: You think AI experts need to pay more attention to nuclear deterrence. What is the case for that?
Sam Winter-Levy: The basic case here is that nuclear deterrence is a key check on the ability of even technologically very powerful states to impose their political preferences on their less technologically advanced adversaries. And so long as nuclear deterrence remains in place, we think the economic and military advantages produced by AI, while very significant, will not necessarily allow states to completely dominate one another. States will continue to temper their actions for fear of nuclear reprisal, and there will remain pretty significant limits on states’ ability to impose their political preferences on their major nuclear-armed adversaries.
But conversely, if AI does undermine nuclear deterrence, then the technology really could make a state unrivalled in its capabilities to threaten, coerce and dominate its adversaries.
So we think this question, of whether nuclear deterrence persists or erodes in this kind of age of AI that we’re entering, will fundamentally shape both the risks and the stakes of the AI competition.
Luisa Rodriguez: Yeah, it seems like a really big deal which way this question goes. I think there are a bunch of nitty-gritty details to get into to really understand the role of nuclear weapons in a world where one country or maybe multiple countries are building AGI.
But to understand those, I want to quickly run through some key concepts: nuclear deterrence and mutual assured destruction. First, can you give just a very quick refresher on what exactly nuclear deterrence is and how it works?
Nikita Lalwani: Yeah, absolutely. Very quickly, deterrence refers to the practice of dissuading an adversary from taking certain undesirable actions up to and including a nuclear attack. And the way deterrence works is by making your adversary think it should not act — either because its objectives are too costly or too uncertain, or because the penalties it will incur from acting outweigh any benefits.
The concept of deterrence has been at the heart of international relations theory and international politics for much of the last century. I’ll just highlight two features of deterrence that I know we’ll come back to later in this discussion.
The first is that, for as long as nuclear deterrence has been in place, it has been difficult for states to fully impose their political preferences on nuclear-armed rivals for fear of nuclear retaliation.
The second is that the central pillar of nuclear deterrence really comes down to what is known as the “second-strike capability,” which is a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own.
Luisa Rodriguez: I want to come back to the second-strike capability, but first I’m interested: to what extent does nuclear deterrence actually prevent states from coercing and pressuring their adversaries? It seems like clearly this is true to some extent, but it also seems like there are big exceptions, like North Korea. What exactly are the limits of deterrence?
Sam Winter-Levy: Yeah, so nuclear deterrence definitely doesn’t end the possibility of coercion, intense competition, pressure between states, or even the risk of outright war. But in the conventional view of nuclear theorists, people like Thomas Schelling and Robert Jervis, nuclear weapons change the nature of these competitions between states.
In a nuclear world, once you have nuclear weapons, the winner of contests and these kind of bargaining dynamics is determined less by the balance of power — like which side has more weapons, a stronger economy, more resources — and more by the balance of nerves: basically the idea of which side has more resolve, which side is more willing to run a higher risk of nuclear escalation, which side cares more about the particular outcome.
What this means is that there’s generally an upper bound on the kinds of coercion that states can exercise against their nuclear-armed adversaries. Because if you attempt to coerce them on matters that are sufficiently important to them — things like territorial integrity or regime survival, the core political interests of a state — you run the risk of nuclear escalation.
Just to give one example to try and motivate this: if you think about the US and Russia, right now the US is economically far superior; its economy is like 15 times Russia’s. It’s conventionally far superior as well. But there are real limits on how much leverage the US can exercise over Russia — because on issues that the Russians care about sufficiently, they can always threaten moves that can add risk of nuclear escalation or add risks of a nuclear accident. And that puts a pretty significant limit on how much influence the US can really wield over them.
Luisa Rodriguez: Yep, that feels important, and I think it’s going to come up again. For now, I want to understand the second strike better. So it’s not really about the number of warheads, or even about the yield of the weapons in an arsenal, like how big the bombs are. Basically, what matters is this secure second strike. So can you explain what exactly it is and why it’s important?
Nikita Lalwani: Yeah, for sure. A secure second strike, as I was saying before, is the cornerstone of nuclear deterrence. It’s the ability of a state to retaliate after absorbing a nuclear attack. So long as two nuclear powers can credibly maintain a second-strike capability that can inflict unacceptable damage on their adversary, a first strike would be suicidal.
To maintain a secure second-strike capability, your forces need to be survivable — which means that a substantial portion of your nuclear force should be able to survive any potential adversary attack, and endure throughout crises and conflict.
States, by and large, pursue different strategies to ensure the survivability of their nuclear forces. But to take the United States as an example, we have a triad of strategic nuclear forces: we have land-based intercontinental ballistic missiles; submarine-launched ballistic missiles, which are carried by nuclear-powered submarines; and we have an airborne force of bombers.
And there’s a lot of redundancy built in and different approaches to survivability: for example, keeping weapons in hardened silos, or moving them around, or making them extremely difficult to find underwater. And we do all this to try to ensure we maintain a retaliatory capability.
Sam Winter-Levy: It might be worth flagging that different countries have emphasised different approaches to securing a second-strike capability. So the US has the triad that Nikita mentioned. The UK only has nuclear submarines. Russia and China have road-mobile launchers, which are basically large trucks carrying nuclear missiles that drive around roads and highways. That was considered politically unviable in the US, because people would be so freaked out to see nuclear weapons just driving around their cities, so the US does not have road-mobile launchers.
So different states have taken different approaches, but they all revolve around this idea of resilience, redundancy, and survivability.
Luisa Rodriguez: Nice. Yeah, I’d never really known why the US doesn’t have road-mobile launchers, but it makes sense just because it’s not politically viable. I’m surprisingly unsettled by the idea of nuclear weapons driving down the highway in the US, so I’m kind of glad we don’t do that.
What exactly would it mean to undermine the secure second strike, practically speaking?
Nikita Lalwani: This question really gets at the heart of what we’ve been interested in exploring, and helps set the stage for a more rigorous exploration of how AI might or might not undermine nuclear deterrence.
There are probably many ways to think about this, but the way we’ve thought about it is that, broadly speaking, there are three ways that a state could undermine an adversary’s second-strike capability:
- The first is that a state could destroy a rival’s entire nuclear arsenal in what is called a “splendid first strike.” This would require pinpointing all of an adversary’s nuclear weapons, including the locations of nuclear submarines and mobile launchers.
- The second is that a state could prevent a rival from launching a retaliatory strike by disabling its nuclear command, control, and communications networks. Those are the networks that support nuclear decision making and communicate those decisions to nuclear forces in the field for execution.
- And then finally, a state could strengthen missile defences such that a rival could no longer credibly threaten nuclear retaliation.
So the question overall is whether advances in AI will give states these capabilities. And there’s a kind of second question that I’m sure we’ll get into as well, which is whether and how these capabilities would then translate into geopolitical dominance.
AI vs nuclear submarines [00:10:31]
Luisa Rodriguez: OK, I’m interested in getting into the details. I do find it super plausible that AI could be really good at solving the kinds of problems that humans can’t really solve, the very problems that currently keep this second strike secure. But I can also imagine that, while intuitively it feels like AI could kind of magically solve everything, maybe it’s harder than it sounds. So I’m interested in going through different elements of different countries’ nuclear arsenals, one by one, and trying to see how realistic this is, starting with nuclear submarines.
So nuclear subs, if I remember correctly, are extremely, extremely survivable because it’s really hard to track things that are moving very slowly and quietly in the ocean. Can you explain why that is?
Sam Winter-Levy: Yeah. You’re definitely right that nuclear subs are generally viewed as the most survivable leg of the triad, particularly for the US. The US’s subs are extremely hard to find. That’s really for three core reasons.
The first is just that the ocean is huge. I think it’s easy to underestimate how big the ocean is. The scale of the search problem here is just enormous: like tens of millions of square miles of ocean that you are trying to find these relatively small objects in.
Second, as you said, water kind of blocks a lot of what we generally use to detect things. Seawater is almost totally opaque to electromagnetic radiation, so techniques such as radar that work pretty well for surface and airborne targets are just much less useful for finding submarines. Remember, they can remain submerged for months at a time, potentially, when they’re out on patrol.
States are generally limited to acoustic methods, but only low-frequency sound waves propagate significant distances in water — and in the process, they interact with all sorts of complicated oceanographic phenomena that distort sounds. A lot of the ocean just contains a lot of ambient noise, and oceans are actually getting noisier over time with increased commercial shipping. And submarines just produce very low levels of signal relative to this background noise.
Third, and finally, modern subs are engineered for silence. So their propeller designs, their hull coatings, their internal machinery, it’s all just designed to minimise noise. Nuclear-powered submarines can be so difficult to detect from other subs that they sometimes collide underwater.
Luisa Rodriguez: That’s an incredible fact. I had never heard that before. And it does feel like it really helps my intuition to understand how hard subs are to find in the water.
I also agree that it feels really difficult to truly understand how massive and opaque oceans are. Part of it is that we see them on the map, and we see the surface area and we don’t see the depth — and the volume of it is a big part of why this is such a difficult problem. It’s not just the surface of a country where you have road-mobile missiles. This is a much larger area.
Sam Winter-Levy: Exactly. It’s like finding needles in a haystack — where those needles are designed to be as difficult to find as possible, and where you need to find every single needle at the same moment with zero margin for error. So yeah, an extremely difficult search problem.
Luisa Rodriguez: Yeah, so that really helps there. I think the other thing that I kind of forget is that the oceans have like mountains in them. I kind of picture it as mostly empty except for fish — and if you wanted to find a submarine, that would look different from a fish, and you’d use these techniques and it just couldn’t possibly be that hard. But they’re actually very, very complex environments. I guess on the day to day that is not that salient to me, but when I actually think about it, I’m like, that is why this is such a hard problem.
Sam Winter-Levy: Yep, that’s right.
Luisa Rodriguez: What is the most compelling story that you could give for how AI would allow adversaries to track and target nuclear submarines? It seems like the problem is hard, but what is the version that people who are really bullish on AI think is going to explain how AI does this successfully?
Sam Winter-Levy: There’s maybe two ways to think about this, and we can kind of bracket the first one and come back to that later potentially.
The first one is going to be true of many of these different types of questions, which is: maybe AI can invent things that we can’t anticipate in advance, just like totally new methods that it’s just very hard to think through now. So we can maybe come back to that set of answers.
The other way of thinking about this is: what are the methods that AI could use that are most continuous with what states currently do, and that just seem most plausible? And there the story would go something like: you can use machine learning to just integrate data from thousands of sensors, filtering out noise, identifying these faint signatures that humans would miss.
In particular, you can use AI to just fuse noisy data from multiple different sensors, including sonar, but also what are called “magnetic anomaly detectors.” So subs have steel hulls, which means they disturb local magnetic fields, which you might be able to pick up. And then also satellite-based synthetic aperture radar, which might be able to identify very small wake patterns generated on the ocean’s surface by passing submarines.
So you can fuse all of that data. Maybe you could also use autonomous underwater vehicles that can kind of patrol, continuously coordinate with satellites, and build a kind of persistent ocean-wide surveillance net. And if you combine all of that data with signals intercepts and human intelligence and all these other sources of data that states have, maybe you could use AI systems to track submarines more effectively. You might also be able to use AI systems to hack into the systems that states themselves use to track and communicate with their own submarines.
So that’s kind of one plausible version of this story for how states could use AI to track nuclear submarines.
Luisa Rodriguez: Nice, nice. It sounds compelling to me, but also my sense is that there are some reasons to think that this wouldn’t actually work. Can you go through some of those reasons?
Sam Winter-Levy: Yeah. So I think first it sounds easy when you say, “Just integrate all these different data from different sensors.” But this is just an extremely hard technical problem, and there are probably limits to what these improvements can yield. Again, I should bracket that there is a lot of uncertainty in every part of this: we’re talking about future technological developments, there’s uncertainty about what states’ current capabilities are, there’s uncertainty about the nature of technology. So huge bracket for just big uncertainty here.
But that said, this is just a really hard technical problem. The physics of the undersea domain are brutal. Any system will struggle to continuously identify, track, and monitor multiple targets amid ocean background noise, especially as submarines get quieter and oceans noisier, which is currently happening.
The number of uncrewed underwater vehicles you would need to render even part of the ocean transparent is just going to be enormous. Most of these vehicles have pretty low endurance, and are pretty limited in their covert communication range.
And if you wanted to deploy millions of sensors around the oceans, that’s also going to be challenging, because if you want universal coverage, you’re going to have to put some of those sensors in contested waters, which will incur risks of sabotage and interference of its own.
So I think the first reason to think this is difficult is just that it’s a hard technical problem. Second, it’s just unlikely that states are going to do nothing in response.
And this is going to be true for every element of this discussion here. This is going to be a move/countermove dynamic. And states have a lot of countermeasures that they can use to tip the scales in their favour: they can jam signals, they can manipulate sensor data with deceptive decoys — for instance, they can just play recordings of a submarine through an amplified underwater acoustic source.
And states can also use a lot of these technologies to protect their own submarines. Russia and China, for example, can use these underwater sensors, underwater networks to protect their submarines, which generally operate in waters closer to their own home territory where it’s easier to protect them. I think there are just a lot of ways that states can add uncertainty to every step of the process of detecting, tracking, and targeting submarines, so it will likely remain a kind of probabilistic affair.
And states are probably not going to want to target their adversaries’ nuclear submarines unless they’re very confident that they can take them all out at once — because each submarine could carry as many as 200 warheads, so if even one submarine escapes, that’s probably enough to deprive you of any meaningful notion of victory.
So yeah, it’s a really hard problem. I would just flag one good recent paper on this is from Tom Stefanick, which I’m sure we can put in the show notes. In that paper, he goes through a lot of these move/countermove scenarios that are likely to play out. And in his view he thinks that ballistic missile submarines are likely to remain reliable second-strike nuclear forces over the next 20 years and beyond. That’s his take on this.
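Sam’s “zero margin for error” point can be made concrete with a toy probability calculation. This is a minimal sketch with purely illustrative numbers — the per-target kill probability and target count are assumptions, not figures from the episode:

```python
# Toy model: a "splendid first strike" succeeds only if *every* platform
# is destroyed. With an independent kill probability per target, the
# overall success probability is the product, and it collapses fast.

def first_strike_success(p_kill: float, n_targets: int) -> float:
    """Probability that all n_targets are destroyed, assuming an
    independent per-target kill probability of p_kill."""
    return p_kill ** n_targets

# Even 95% confidence per target against 50 dispersed platforms leaves
# a better-than-90% chance that at least one survives to retaliate:
p = first_strike_success(0.95, 50)
print(f"{p:.3f}")  # ~0.077
```

Since a single surviving submarine could carry scores of warheads, even these toy numbers illustrate why an attacker would need near-certainty on every target, not just a large overall edge.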
Luisa Rodriguez: OK, yeah. So maybe one thing to put a pin in is that this assumes that progress is meaningful, but not so fast that an adversary won’t have any time at all to respond and build up some kind of defence capability that’s meaningful. So yeah, pin in that and maybe we’ll talk about some scenarios where actually progress is too fast for adversaries who don’t have AI or AGI to respond to.
In the world where things are moving at a pace where countries can respond with countermeasures, is it plausible that there is still a sudden enough breakthrough in something like acoustic modelling or real-time data fusion to create a short-lived offensive window where nuclear subs are identifiable? And maybe it doesn’t need to be forever; it’s just like if there’s a window, a state could choose to use it, knowing with some confidence that their adversary hasn’t been able to build up defences yet.
Sam Winter-Levy: Yeah, it’s possible. Maybe one interesting piece of history here is that there were periods during the Cold War, particularly from the ’60s to the ’80s, where the US was actually pretty good at tracking Soviet submarines. That was partly for reasons of geography and partly for reasons of technology: Soviet submarines weren’t as quiet as American ones, and they also had to pass through very narrow chokepoints to get out to the open ocean — most famously the gap between Greenland, Iceland, and the UK. So there was this window where the US Navy could track them pretty effectively. The US didn’t have this problem, because it has direct access to tens of millions of square miles of ocean from both the west coast and the east coast.
So it is possible that you get windows where states will feel more confident at different times. But again, it kind of comes back to this whole issue here, which is just the level of confidence you need and the degree of redundancy, the degree of different systems that you need to take out simultaneously. It’s not enough to get significantly better at tracking other states’ submarines; you need to know that you have every submarine and every road-mobile launcher and every ICBM, which I’m sure we’ll come on to in a little bit.
So I definitely don’t want to rule out the possibility of windows emerging, and we can come on to that later, but it remains just a very challenging situation where states are likely to be spending a lot of resources staying on top of this problem.
Luisa Rodriguez: Yeah, OK. Let’s come back to the uncertainty and actually what would countries do if they thought maybe they had these kinds of opportunities.
AI vs road-mobile missiles [00:22:21]
Luisa Rodriguez: Let’s move to road-mobile missiles. You said in China and Russia there’s more of an emphasis on road-mobile missiles — which, again, sounds actually really terrifying if that’s a place where you live. What makes these so hard to find?
Nikita Lalwani: The basic story here is very similar to what Sam was just talking about with submarines: road-mobile launchers, like subs, are concealed, camouflaged, and they don’t stay in one place for very long. They’re carried on vehicles that can hide under netting, under bridges, in tunnels, and they’re driven from one concealed location to the next.
Taking a step back, the survivability of these launchers really depends on the competition between a hider’s ability to keep them concealed on the one hand, and a seeker’s ability to locate and track them on the other.
At least historically, hiders have had some real advantages: they can send mobile launchers to remote locations, they can move mobile launchers in short bursts at times that are selected specifically to make them difficult to track — for example, at night or under extensive cloud cover.
Finders, by contrast, have to overcome several obstacles. They have to track all mobile launchers deep within enemy territory. During a crisis, they’d have to find the launchers and destroy them all more or less simultaneously in coordination with attacks on the rest of a country’s nuclear forces. And they’d have to do all of this over a relatively short period of minutes or hours, all while the adversary is doing everything it can to thwart surveillance and information-gathering capabilities.
Luisa Rodriguez: Yeah, yeah. I mean, that both sounds extremely hard, and also I again have this feeling like this just sounds like the perfect problem for AI to solve. Can you start actually by laying out that case that plausibly AI has a shot at this?
Sam Winter-Levy: Sure, yeah. In many ways, this is a proven application of AI systems. The United States probably already has large amounts of data from satellites, signals intercepts, aircraft, and so on — but that data currently outpaces the ability of human analysts to digest.
And states probably also have a significant stock of images of these mobile launchers. Many of them are just paraded in public through Beijing. You can find open source images of these launchers, along with information about their signatures, how fast they drive, their weight, and so forth. You could plausibly use these images and signatures to train machine learning algorithms to dramatically speed up the processing of intelligence, making it easier to conduct operations against these vehicles.
So in many ways, like you said, this is sort of a classic “pattern recognition within a big dataset” type problem that in many ways could be suited to AI. The key thing here is that AI can reduce the area you need to search dramatically, and similarly reduce the area you would need to attack by orders of magnitude, potentially.
Just to bring home what this entails: in most of the unclassified modelling on this, once you find these launchers, you basically bombard that area with nuclear warheads of your own. States only have so many nuclear warheads, so you have to pick your targets relatively carefully. So if you can use AI systems to dramatically shrink the search and targeting areas, it could plausibly increase the odds that you can pull off a splendid first strike against these road-mobile launchers.
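To put rough numbers on “shrinking the targeting area by orders of magnitude,” here is a back-of-the-envelope sketch. The lethal radius and area figures are illustrative assumptions, not numbers from the episode or from the unclassified modelling Sam mentions:

```python
import math

# Back-of-the-envelope barrage model: if you must blanket the whole
# uncertainty area around a mobile launcher, the number of warheads
# required scales linearly with that area.

def warheads_for_barrage(uncertainty_area_km2: float,
                         lethal_radius_km: float = 5.0) -> int:
    """Warheads needed to blanket an area, assuming each warhead
    destroys targets within a circle of radius lethal_radius_km."""
    per_warhead_km2 = math.pi * lethal_radius_km ** 2  # ~78.5 km^2
    return math.ceil(uncertainty_area_km2 / per_warhead_km2)

# If AI-driven tracking cuts the uncertainty area from 100,000 km^2
# to 1,000 km^2, the required barrage shrinks a hundredfold:
print(warheads_for_barrage(100_000))  # 1274
print(warheads_for_barrage(1_000))    # 13
```

The design point is simply that warhead requirements track search area, so even an imperfect AI system that narrows the plausible location of a launcher changes the attacker’s arithmetic dramatically.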
Luisa Rodriguez: Yeah, OK. So that’s the case for. What options would the defending countries have for keeping their road-mobile launchers hidden?
Nikita Lalwani: As with every element of nuclear deterrence, there are likely to be move/countermove reactions. There’s another great paper that we can put in the show notes: Thomas MacDonald (previously at Carnegie, now at Lawrence Livermore) has a recent paper on this where he describes some of the countermeasures that hiders could adopt to keep their launchers hidden — some of which are very straightforward and low tech and others of which are more complicated.
Most simply, hiders can adopt old-fashioned, low-tech solutions. For example, covering roads with netting or constructing decoys — decoys being vehicles that are made to look and act like mobile launchers, but aren’t actually mobile launchers. That would both increase the number of vehicles finders have to monitor and potentially strike and decrease their certainty of any given detection.
More dramatically, in a crisis states could use anti-satellite weapons to destroy or impair satellites, which would create holes in coverage that hiders could exploit to move their launchers. This would obviously be a provocative measure, so less likely during peacetime, but could be used during a conflict scenario.
The big picture here is that although it’s impossible to predict exactly how states will react, there are countermeasures available to them, and if they care about the survivability of their mobile launchers, they have every incentive to use them. Of course, finders can then innovate as well, so one might expect a measure/countermeasure cycle that could potentially lead to greater instability.
Sam Winter-Levy: Maybe one thing to take away from this whole section is that, although AI systems may make it easier to track submarines and to track road-mobile launchers, defending states have a lot of options that they can take to try to shore up the survivability of their second-strike capabilities, or at the very least, to inject enough uncertainty into the belief of a state that might be considering launching a first strike that they’re going to think very hard about doing so.
And many of those options available to them do not require equivalent levels of technological sophistication. Many of these are pretty low-tech measures that just massively expand the search area, or massively increase the number of targets you would need to strike — through things like making your launchers drive faster: that’s not a high-tech move, but it can make launching a first strike significantly more difficult to pull off.
Luisa Rodriguez: Yeah, I do think that the low-tech-ness is striking to me. It doesn’t have to be the case that other countries have AI as good as the leading country; it just has to be the case that AI continues to struggle to see through chicken wire over a highway, which feels like a surprise.
AI vs missile defence systems [00:28:38]
Luisa Rodriguez: Let’s turn to missile defence. I remember when I first learned about missile defence, it sounded like it could be a huge deal. If you can defend against incoming nuclear missiles, you can basically make it impossible for them to achieve this second strike even if they have plenty of missiles left.
After decades, my impression is that no one can do missile defence reliably. And there’s a famous analogy that shooting a missile out of the air is like hitting a bullet with a bullet. Can you explain what exactly makes it so hard?
Nikita Lalwani: Yeah. So missile defence systems must do the nearly impossible: detect a launch, track potentially hundreds of missiles travelling through space at 20 times the speed of sound, estimate their future trajectories, and destroy them with interceptors — all in less than 30 minutes, which is the rough flight time for most land-based missiles travelling between the United States and Russia or China. This was actually just dramatised pretty effectively in A House of Dynamite, the new Kathryn Bigelow movie on Netflix.
As some others have pointed out, it’s likely that US missile defence would ultimately be effective against a lone ICBM, but its hit rate would fall dramatically if faced with a barrage of missiles. As you say, missile defence has been likened to trying to hit a bullet with a bullet. In actual fact, it’s probably a little bit easier than that — hitting a bullet with a bullet is obviously incredibly difficult — but it’s still very, very hard.
Sam Winter-Levy: It might be worth just clarifying the distinction between missile defence where you’re trying to defend the continental United States against a nuclear strike from Russia or China, from things like Iron Dome in Israel, which listeners might have a sense is very effective. Defending the United States against a full-scale nuclear strike from Russia or China is just a vastly harder proposition than protecting small areas of a small country from pretty small rockets launched from Lebanon or from other neighbouring states in the Middle Eastern context.
And just to give you a sense of how expensive and how difficult this is: right now, the US is talking about potentially spending around $3.5 trillion over 20 years to build a missile defence system that might be able to block North Korea’s ICBM arsenal, but definitely not one the size of Russia’s.
Luisa Rodriguez: Right. Wow. Yeah, that is striking. So the thing that I want to make sure I understand is: it sounds like for a very limited strike, it is plausible that the US could defend against it using its missile defence system. And that’s because, while the system doesn’t have a 100% success rate, if there were a small enough number of missiles incoming, it’s good enough that it would have a decent shot at taking all of those down. The issue is that the US doesn’t have enough interceptors, and the success rate isn’t high enough, so a massive strike — say from Russia, or even China, which has missiles in the hundreds — would overwhelm the US missile defence system. Is that right?
Sam Winter-Levy: Yeah, I think that’s right. And I think there are two issues that complicate that in addition.
One is that you don’t know where a strike is coming from necessarily, like in the case of submarines. It’s not the case that you can just put all your missile defence systems around one particular city to protect against one attack trajectory. These strikes could be coming from anywhere.
And the second is that states can use decoys alongside warheads. Distinguishing between these decoys and the real warheads is a very challenging problem. So that’s another way in which you can overwhelm missile defence systems, because it’s often much cheaper to build additional decoys than it is to build additional interceptors.
So the economics combined with the physics of missile defence is just very challenging.
Luisa Rodriguez: Makes sense. How plausible do you think it is that very advanced AI could solve some of these problems?
Sam Winter-Levy: As with every part of this question, I think AI can certainly help with elements of the problem:
- Software advances can make it easier to predict a missile’s trajectory and speed up decision making once you detect a launch.
- Machine learning algorithms can potentially rapidly analyse data from multiple sensors to distinguish actual warheads from decoys, which are built to mimic the radar and heat signatures of real warheads and are much cheaper to deploy.
- And maybe, through advances in materials science, AI might produce lighter, more agile interceptors that are cheaper and more manoeuvrable in flight.
But AI will probably still face pretty big limits here. There are lots of really hard technical challenges. In particular, an AI-hardened system of missile defence is going to depend on machine learning algorithms that are trained on large, reliable datasets regarding decoys and countermeasures. That’s data that US adversaries just have every incentive to obscure. US rivals could also try to confuse AI algorithms by manipulating missile tests, including disturbances or perturbations intended to poison datasets for machine learning if they believe they’re going to be observed.
And then, even if AI can harden missile defence systems and improve them, none of these developments will take place overnight. Missile defence architectures take years to develop, and US adversaries will not just kind of stand by and watch that play out. Again, not to bombard you with papers to put in the show notes, but Laura Grego has a pretty good paper on how this offence/defence competition is likely to play out in the missile defence domain.
And attackers just maintain really significant advantages:
- They can launch from unexpected directions.
- They can use hypersonic missiles, which are much more manoeuvrable than ICBMs.
- They can overwhelm defences with coordinated salvo attacks using large numbers of decoys. And they can directly target those defence systems if they really want to.
So I think most experts in this area think that missile defence remains a domain that is really tilted against the defender. Attackers just have a lot of options here for getting around missile defence.
One last thing to note is that even in a world where you have perfect missile defence — so an unlikely world, but even if you’re in that world — I think states can still resort to more creative delivery methods. They can use uncrewed undersea vehicles released from submarines near important ports. Russia is already developing exactly this kind of exotic means of nuclear delivery: they have this nuclear-powered autonomous torpedo called Poseidon, supposedly capable of travelling thousands of miles. And states could also try to smuggle or pre-position small nuclear devices in enemy territory, along the lines of the recent Ukrainian attack on Russian airfields, where they smuggled weapons deep into Russian territory.
So even with perfect missile defence, states may still have options to deliver nuclear weapons if they really want to.
AI vs nuclear command, control, and communications (NC3) [00:35:20]
Luisa Rodriguez: OK, so that’s missile defence. The last big question mark is around nuclear command, control, and communications. In theory, one country could undermine an adversary’s ability to launch a second strike without finding the adversary’s hidden nuclear weapons or solving the problems of missile defence: by attacking its command and control instead. What are the key components of nuclear command, control, and communications, and how do nuclear powers currently keep those components survivable?
Nikita Lalwani: Just as a quick refresher: nuclear command, control, and communication systems are designed to monitor the conditions of nuclear forces, develop and update nuclear plans, and gather and understand information about adversary forces and possible targets. In the context of an attack scenario, NC3 must enable decision makers to assess information, consult with other parties, and then direct US forces and personnel to implement nuclear decisions.
To do all of this, NC3 infrastructure, at least in the United States, consists of more than 150 different systems, including infrared satellites that look for the hot flare of missile launches, ground-based early warning radars, air surveillance radars, nuclear detonation detectors, fixed and mobile command centres, and then communications facilities — both ground-based and space-based — that connect civilian leadership to US military forces and others.
Many parts of the NC3 system are already vulnerable to attack, and we should definitely talk more about cyber vulnerabilities here. But the basic way that states have tried to keep their systems survivable is by building in resilience. For example, some command bunkers are buried like 700 metres underground, which is deep enough to survive even a direct hit from a large nuclear weapon. In space, nuclear powers have sent hundreds or thousands of satellites into orbit. And in the air, the curvature of the Earth limits the distance at which surveillance radar can track airborne command posts.
Luisa Rodriguez: OK. So in theory, how might AI get closer to decapitating an adversary’s command and control systems? At least some of these components seem pretty invulnerable. It’s hard to imagine how AI solves this problem of a nuclear weapon not being able to reach a very hardened bunker. What’s the best case?
Sam Winter-Levy: I think AI might make it easier to track some of the mobile command posts on land. It might make it easier to more precisely target airborne command posts. Maybe AI-enhanced anti-satellite weapons could make it easier to target satellites that provide early warning of incoming nuclear attacks.
But other aspects of states’ nuclear command and control systems are pretty robust against AI developments in particular. For instance, with bunkers that are 700 metres underground, tracking them is not the problem. The problem is that even if you drop a nuclear bomb right on top of them, they’re likely to survive. So that part of it seems less vulnerable to AI breakthroughs.
I’d say the biggest area of uncertainty, and the most plausible pathway through which AI could have an impact here, is the cyber domain. You could envisage sophisticated cyber operations, supercharged by AI, that might allow states to penetrate a rival’s command and control networks, disable early warning systems, or disrupt the transmission of orders. There’s huge uncertainty in assessing this, but vulnerabilities that could allow one state to penetrate another’s nuclear networks may well already exist.
Luisa Rodriguez: Right. Yeah, I’m interested in that, because it does seem like if there was enough redundancy with digital and analogue parts of the system, this cyber thing wouldn’t be decisive. But maybe there are entire components that have been made digital and where there just aren’t analogue redundant components anymore.
Sam Winter-Levy: I think there’s definitely going to be redundancy. Maybe one way you could think about cyber here is that it potentially buys time in coordination with everything else you have to do. So potentially you can delay a state’s response, and that might give you time to track their submarines and track their road-mobile launchers. So even buying time there may be helpful.
But as you said, states invest a huge amount in resilience and redundancy in different systems that run on different frequencies and use different software — to the extent that they even use digital technology at all. So yeah, I think resilience and redundancy are going to be baked into most aspects of states’ command and control systems.
Luisa Rodriguez: The other thing that sticks out to me here is that cyber in particular feels like a place where AI is having a big impact already, and will probably have an even bigger impact than on the other parts of this question.
Sam Winter-Levy: Sure. Though it’s worth flagging that the question of whether AI will ultimately benefit cyberdefenders or cyberattackers is obviously a big open question. Big arguments on both sides there. But states will almost certainly use AI systems to try to shore up their cyberdefences as well.
Luisa Rodriguez: And what defences might keep nuclear command, control, and communications survivable?
Sam Winter-Levy: I think the key challenge here, as with almost every part of the splendid first strike that we’ve been discussing, is that a state needs to destroy as much as possible of every component of nuclear command and control at the same time, with as little warning as possible. That’s the key thing that makes this such a hard challenge. And as we said, a few kinds of these assets — really deep bunkers, command and control aircraft — are so difficult to destroy or disable that they’re likely to remain survivable even given plausible improvements in strike capabilities.
I think the cyber domain is the hardest place to assess what the defences would look like. Vulnerabilities almost certainly exist — but so do defences and patches. And I think there are still going to be big challenges for an attacking state in the cyber domain.
I can just name three quickly:
- First, nuclear-armed states are likely to expend a tonne of resources trying to defend their NC3 systems, which are probably among their best protected networks. They’ll use multiple redundant networks, they’ll use different software, they’ll use their own AI cyberdefences.
- Second, cyber operations would probably require persistent access to an adversary’s system. But persistent access risks detection and possible retaliation. Anything that looks like a cyberattack on another state’s nuclear command and control systems could trigger this whole escalatory spiral that states are going to want to be very cautious about.
- And third — and this is a recurring theme with a lot of these elements of a splendid first strike — it’s very hard to test how well your malware actually works here in a counter-NC3 campaign. You can’t test it against the actual target networks, because that might look like the beginning of a splendid first strike. So you’ll need to use virtual or perhaps even physical models of the target. But simulations will only be as good as your available knowledge, and given how secret and classified and protected these networks are, your knowledge is going to be limited.
So this testing issue, I think it affects every part of this. You can’t test pulling off a splendid first strike against every nuclear submarine, but you also can’t test it in the cyber domain.
Nikita Lalwani: Just taking half a step back: Even if you do destroy a command and control system, it may not be enough to prevent retaliation. The UK, for example, has adopted procedures to allow its submarine commanders to assess whether a nuclear strike has destroyed the country. And if they determine that it has, they then open what’s known as a “letter of last resort,” which is carried on the submarine and may include instructions from the prime minister to launch a nuclear response.
Luisa Rodriguez: Whoa.
Sam Winter-Levy: Yeah. And the Russians have something similar. Famously, during the Cold War they had what was called the Dead Hand system, also known as Perimeter, which was designed to automatically trigger retaliation if it judged that a state’s national command authority had been destroyed. So even if you can take out a state’s nuclear command and control system, they may well still have measures in place to try to make sure that retaliation still takes place.
Luisa Rodriguez: Yeah, that just feels really huge to me.
AI won’t break deterrence, but may trigger an arms race [00:43:27]
Luisa Rodriguez: I guess a thing that is striking to me about all of this is both the extent to which it really depends on this offensive/defensive back and forth between attackers and defenders, and also the extent to which a lot of this rides on how much uncertainty a country is willing to accept in attempting a splendid first strike.
So I want to ask some questions about that, the first one just being: If this survivability ends up depending a bunch on whether rivals invest thoughtfully and sufficiently in adaptation and defence, how confident are you that that will actually happen?
Sam Winter-Levy: I think, as you’ve probably gathered, a recurring theme here is that this is a move/countermove dynamic, where states are responding both to technological changes and to the moves of their adversaries. And that’s really been the story of nuclear deterrence for the past 80 years. This is not new with AI. AI may increase the speed at which it takes place, but this move/countermove dynamic has played out since the early years of the Cold War.
Now, states may adapt at different speeds from each other. Austin Long and Brendan Rittenhouse Green have a great paper (that we can put in the show notes) on anti-submarine warfare during parts of the Cold War. In their view, there were these windows, potentially from the 1960s to the 1980s, where the Soviets were a little bit slow to adapt — both the ways in which they used their nuclear submarines and also the technology that they adopted in terms of how quiet those submarines were. And that gave the US Navy an advantage for a brief period during the Cold War — never to the extent that you could pull off a splendid first strike, but to the extent that the US was better than is commonly acknowledged at tracking Soviet submarines.
So it’s definitely not a guarantee that states will adapt, or that they will adapt at the same speed, but given the stakes in the nuclear domain, given this is likely a top priority of any major nuclear-armed state, things would have to go wrong for them not to be paying significant attention to this problem set. Especially given that, in many of these areas we’ve been discussing, they don’t necessarily need to be at the technological frontier to shore up the survivability of their second-strike forces, and the onus is entirely on the attacker to be able to get close to 100% certainty to pull off this remarkably difficult sensor fusion challenge — where you’re trying to target submarines and mobile launchers and hardened silos all at once, with no room for error and no margin for testing.
So yeah, a lot of this does depend on states acting, it does depend on equilibrium behaviour shaking out, and there is the possibility that windows could emerge — but the balance is tilted against the state that might want to launch a splendid first strike.
Luisa Rodriguez: Yeah, I guess that seems good from the perspective of nuclear deterrence holding, but it seems potentially bad from the perspective that arms races themselves seem dangerous. How worried are you about this?
Sam Winter-Levy: I think when we describe these move/countermove dynamics, that gives us a degree of confidence that nuclear deterrence, in the sense of a secure second-strike capability, will survive.
But as you say, there are big costs associated with these dynamics, which will play out in response to technological change or to fears of technological change, because exactly this move/countermove scenario is essentially a form of nuclear arms racing. As states feel more insecure about their second-strike capabilities, they disperse launchers, build up more warheads, build more decoys, and take other moves to shore up those capabilities. That’s very expensive: not just in terms of the sheer economic costs, but also the political costs, the mutual hostility and distrust that play out in this kind of security dilemma story.
Luisa Rodriguez: Yeah, the Cold War wasn’t a good time.
Sam Winter-Levy: Exactly. The Cold War wasn’t a good time. So even if second-strike capabilities survive as a result of this kind of move/countermove scenario, that’s still a very destabilising, potentially scary world to live in. So we definitely don’t want to seem sanguine about the impact of AI on nuclear stability, even if truly undermining a state’s second-strike capabilities is a hard lift for states to pull off.
And it also just increases the risk of accidents. So if you’re having your mobile launchers drive faster to make them harder to target, that’s just a more accident-prone type of move to take. If you’re reducing signals communications because you’re worried about a state intercepting them, or if you are delegating launch authority to lower levels because you’re worried about a state interfering with your nuclear command and control systems, those are all moves that might shore up second-strike capabilities, but also increase the risk of accident. And that’s also quite a scary world to live in.
Luisa Rodriguez: Totally. It seems like another one of the big considerations here is that advanced AI will likely make it possible to destroy many more of an adversary’s missiles, but it still seems extremely difficult to guarantee a 100% splendid first strike, where you take out all of their missiles and you’re confident in advance that you will. So if an adversary has even a few nuclear weapons left, and can still communicate to their launchers that yes, we want to launch a second strike, that can be enormously costly for the attacking country, even if it isn’t a full-scale second strike.
So how close to a splendid first strike could a country get? I guess both in terms of percent of arsenal destroyed and also in terms of being confident enough in advance to be willing to take on this risk?
Sam Winter-Levy: Yeah. So maybe the question here is: even if you can’t take out 100% of an adversary’s nuclear arsenal, how important is it if you can get much closer to 100% than in the absence of AI capabilities? I would say this is just an area of huge debate in the nuclear policy field. I’m just going to gloss over a lot of nuances here — listeners should read more about this — but broadly speaking, there are kind of two views in the nuclear policy debate.
One view is generally dominant in the academic community. This is the view that’s associated with the classic theories of mutually assured destruction. On this view, unless you have very high confidence that you can track and destroy all of an adversary’s nuclear weapons, which they generally think is infeasible, then the relative size of countries’ survivable nuclear forces doesn’t really matter. So even if AI could help the US take out 90% of Russian or Chinese nukes so that it could sort of “win” a nuclear exchange, that’s not super meaningful — because 10% of Russian or Chinese nukes getting through is just still going to be absolutely devastating for the US. And on this view, using these AI capabilities to target other states’ second-strike systems is itself dangerous and destabilising and pointless.
Then there’s a second view, generally known as the “damage limitation” approach. It’s long been US government policy, and it receives some, though more limited, support from the academic community. On that view, it’s generally seen as good strategy to deliberately hold an adversary’s nuclear forces at risk, in an attempt to limit the damage they can do to you. In this view, there really is a huge difference between losing two American cities and losing 20 American cities. So they think that even if AI capabilities can’t guarantee a 100% success rate (maybe you can’t pull off a splendid first strike in the sense of taking out every single Russian or Chinese nuke), you can still meaningfully limit damage. Or at least, if the other side thinks that you can meaningfully limit damage, that will make US threats to escalate more credible, which could contribute to deterrence.
So this question of what meaningful damage limitation is, ultimately that’s not a technological question, that’s a political one. And it’s going to be very difficult to define ex ante what will constitute “unacceptable damage” to a nation-state: it’s going to depend on the stakes and the leaders and psychology and all sorts of factors that it’s hard to reason about in advance. But I would just say this is a live area of debate in the nuclear policy field.
Luisa Rodriguez: Yeah, I do find it pretty compelling that a 300-weapon nuclear attack is extremely different to a five-weapon nuclear attack. And if a country could take out 295 missiles with reasonable confidence — and I guess “reasonable” is still very high because of the stakes — I can imagine that being a pretty different decision to opening oneself up to that 300-missile attack. So that does feel like a pretty important, cruxy debate.
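The gap between destroying “most” and “all” of an arsenal can be sketched with the same kind of simple arithmetic. The per-weapon kill probability and the 300-weapon arsenal below are illustrative assumptions, not estimates of any real capability:

```python
# Illustrative sketch: even a very high per-weapon kill probability leaves
# a substantial chance that some warheads survive a first strike.
# Assumes each weapon is destroyed independently with probability p_kill.

def prob_total_disarm(p_kill: float, n_weapons: int) -> float:
    """Probability that every one of n weapons is destroyed."""
    return p_kill ** n_weapons

def expected_survivors(p_kill: float, n_weapons: int) -> float:
    """Expected number of weapons that survive the strike."""
    return n_weapons * (1 - p_kill)

# 99% per-weapon kill probability against a 300-weapon arsenal:
print(prob_total_disarm(0.99, 300))   # ~0.049: only about a 5% chance of a clean strike
print(expected_survivors(0.99, 300))  # 3.0 warheads expected to survive
```

On the mutual-assured-destruction view, those three expected survivors are the whole story; on the damage-limitation view, the drop from 300 to 3 is itself strategically meaningful.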
Technological supremacy isn’t political supremacy [00:52:31]
Luisa Rodriguez: Pushing on: let’s assume that AI does enable states to find all of an adversary’s nuclear weapons — so a proper splendid first strike. Would we expect a state with that capability to be able to impose its will on other states?
Nikita Lalwani: I think this is a really important question, because it gets at a lot of the inherent uncertainties here and also the difficulties of predicting whether and how technological advantages will give states decisive strategic advantages. We could probably have a whole discussion just on this, but let me just offer a few observations.
First, to make the somewhat annoying move of fighting the hypothetical, I just think it’s hard to imagine a circumstance where a state has 100% certainty that it knows the location of all of an adversary’s nuclear weapons. That’s because that would require the state to also have 100% certainty that they’ve seen through any countermeasures. And as the RAND scholar Ed Geist has written, AI tools could potentially be harnessed to optimise military deception in ways that offset perceived advances in situational awareness.
The second is that even if AI could get a state to a 100% find rate, there’s uncertainty as to what a state would do with that information. Imagine going to the president today and saying, “This AI system can tell us with 100% certainty where all of China’s nuclear weapons are.” Would he accept that statement unquestioningly, or would he have some doubt as to whether the system was foolproof? Even if he did accept that statement, would he be certain enough to use that information to attempt a splendid first strike? Keep in mind that these are not capabilities that can be tested in advance.
So there is a lot of uncertainty there.
Sam Winter-Levy: And just to put a finer point on it: launching a splendid first strike involves launching hundreds, potentially thousands, of nuclear weapons at another state based on a belief that you have a 100% probability of pulling it off. That’s just a huge gamble to act on.
And I think there’s still this broader question of willingness to act on a capability. The United States had nuclear weapons for a period before the Soviet Union did, but it didn’t act on that advantage. There are various historical reasons for that, but the broader reason is that ethical, political, and international norms often constrain states from using the full extent of their available power. US leaders at the time did not think the public would tolerate launching a preemptive nuclear strike on the Soviets. They didn’t want to trigger another war. They didn’t want to be seen as the aggressor in a new conflict.
So there are a bunch of other considerations that might continue to constrain states, even in a world where a state had the potential to pull off a splendid first strike.
And I think one final point here is that, even if you do have unchallenged technological advantage, that doesn’t always translate straightforwardly into the political dominance some people talk about when they talk about AI giving you the ability to just impose your political preferences worldwide — complete dominance and control of the type that people like Dan Hendrycks have written about.
Just to give one example, the US clearly had unquestionable technological dominance over Vietnam and over the Taliban, and just suffered an unambiguous defeat in both cases after a couple of decades of trying to impose its political preferences. So this whole question of the relationship between technological power and political power is just a little bit more complicated than the most straightforward stories might imply.
Luisa Rodriguez: Yeah, yeah. This point about the US having an opportunity to achieve something like a decisive strategic advantage when it had nuclear weapons before everyone else did, and not really doing anything with that advantage, seems important to me.
Sam Winter-Levy: I think it gets to the kind of underspecified nature of this concept. It just kind of conflates a lot of different things in one phrase.
Luisa Rodriguez: Yep, that makes sense.
Fast AI takeoff creates dangerous “windows of vulnerability” [00:56:43]
Luisa Rodriguez: Pushing on: so far we’ve assumed that AI evolves somewhat gradually. It’s been moving quickly, but not at the extremely, extremely fast pace that some people think it could move at. I think people who really think that AI could basically give one country a decisive strategic advantage by undermining nuclear deterrence are mostly imagining this world where there’s really fast or recursive AI takeoff — where an AI system goes from subhuman level to human level to superhuman level in a matter of weeks or months.
How should we think about fast takeoffs in the context of nuclear deterrence?
Sam Winter-Levy: Taking a step back, the critical question here is likely to be the relative speed of two different processes. The first is the speed, in calendar months or years, at which AI progress proceeds and translates into advantage. And the second is the speed at which other states, whose nuclear arsenals might be newly threatened, can adapt.
And if the first of those (how fast AI progress is taking place) is faster than the second (how fast states are able to adapt) — which could be the case either because you are in one of these very fast takeoff worlds that you just described, or because states are just kind of slow to respond because of bureaucratic reasons or political reasons or any number of other reasons — then you get these windows of vulnerability and instability with year-to-year fluctuation, which can be particularly dangerous. I think a fast takeoff just exacerbates some of those issues. But even if AI progress is not so fast, as we’ve discussed, if it outpaces the ability of states to adapt, then you get these kinds of dangerous windows of opportunity.
I think one factor that complicates this is that, in the case of AI-enabled intelligence processing, US adoption of AI capabilities could be relatively invisible to adversaries. So if you suddenly have a discontinuous leap, or maybe a continuous but just very rapid leap, in the ability of states to use AI systems to process the intelligence, signals, and data they’re already collecting, then other states may not know that this breakthrough has occurred.
So potentially you could get more significant windows of opportunity opening up, as opposed to in industrial explosion scenarios, where we’re coating the ocean in underwater sensors and building massive missile defence architectures. In that world, there are just going to be visible changes to the physical environment that other states are going to be able to see and respond to, and I think that will likely give states more time to respond with countermeasures of their own.
Luisa Rodriguez: I guess I’m still curious: you’ve outlined all of these constraints that mean that even if AI is progressing significantly, it’s still pretty difficult to get anywhere near a certain splendid first strike. How many of those constraints still hold if we’re talking about this super-fast-takeoff world?
Sam Winter-Levy: I think even in these fast-takeoff scenarios, some constraints are likely to remain.
First, on the technical side, some technical constraints will surely remain. As we’ve discussed, these are very hard technical problems to solve, and powerful AI systems won’t be able to evade the laws of physics. I would also flag that Edward Geist has an argument that some of these problems may just be computationally intractable, even for extremely advanced AI systems. I don’t know if I would go that far, but these are very hard problems to solve.
But let’s say the technical constraints evaporate. There are still going to be a lot of political and institutional constraints that will slow a state’s ability to respond. Even if technology changes overnight, states don’t generally integrate advanced technology at the same speed; that rarely happens. Doing the kind of testing you need to do, updating doctrine, updating bureaucratic systems: all of this takes much longer in general than a technological breakthrough — especially when the stakes of getting it wrong are so high, when you need to avoid triggering a preemptive response, and when you’re rehearsing thousands of steps with no room for error.
So even if technological breakthroughs occur overnight, I think the political constraints, the institutional constraints, and some of the normative constraints (whether leaders will actually act on these powers when they’re weighing legitimacy and public signoff before rolling the dice) are likely to last longer. How long they last is unknowable, with huge variation between states and between leaders in how quickly they respond, but it’s certainly going to lag behind the speed at which the technology on its own is advancing.
Luisa Rodriguez: Yeah, I do find it helpful to reframe it that way. The thing that feels compelling to me is that the stakes will feel so high for a country that thinks it could gain this "decisive strategic advantage," but they will also feel just as high for that same country if it gets this wrong: it would face the existential risk of a second strike much bigger than it hoped, even though it thought it had a splendid first-strike capability. So yeah, that's feeling quite salient to me.
Sam Winter-Levy: That latter risk is just much more salient and visible. States know what nuclear war looks like. Living in an AGI world where another state has won this race, I think there’s just so much more uncertainty about what that looks like, whether it really is that bad an outcome. There’s just huge uncertainty there.
Luisa Rodriguez: Yep, that makes sense. How should governments account for this possibility of fast takeoffs in the context of nuclear deterrence?
Nikita Lalwani: I think the first thing to say is that governments should just take this possibility seriously, even if they think it’s low probability. In terms of how to respond, I think the lowest-hanging fruit is probably just to increase state capacity. You need people in government who understand AI capabilities and how they’re evolving, and who are able to translate that knowledge into actionable nuclear policies.
Ideally, you’d also have dialogue between nuclear experts and AI experts — including between people in government and people in the frontier AI labs — to understand how the technology is evolving and also what that means specifically for nuclear deterrence.
Sam Winter-Levy: Yeah, I think building dialogue between these two communities is really important — because I think the AI experts are really best placed to understand the technologies that are emerging, but they are not experts on nuclear weapons and nuclear deterrence. And conversely, the nuclear community knows that field of work, but they’re just not necessarily entirely on top of the frontier of AI breakthroughs. So just building dialogue between these two communities, both in government and outside government, I think is really important if you’re worried about fast-takeoff scenarios where things could start to change very quickly.
Nikita Lalwani: We just want to emphasise that there's a lot of uncertainty with this fast-takeoff scenario, and if it does happen, that could really change the calculus on the nuclear deterrence question. And so, as we've written in our Foreign Affairs piece, and as we've tried to say throughout this podcast, it's something that governments should be taking seriously and monitoring, so that if it does seem like we're approaching a fast-takeoff world, there are actions we can take to reduce the risks.
Luisa Rodriguez: Yeah, I buy that. It does seem, from the bit of learning I've done for this episode, that there's basically no dialogue between these communities at the moment, even though the fields overlap in a way that seems super important for people to be tracking.
OK, so that's fast takeoff speeds. We've talked about how one reason to think nuclear deterrence won't prevent one country from having a decisive strategic advantage is that AGI might threaten adversaries' secure second strike. But there are also people who argue that AGI could give a country a decisive strategic advantage without having much impact on nuclear survivability at all. I think roughly the argument here is something like: nuclear weapons, the technology that has in large part determined the global balance of power, get eclipsed by faster, smarter, non-nuclear forms of power.
So I guess the question is: is it plausible to you that a state with superintelligent AI could coerce its nuclear peers (and non-nuclear peers, for that matter) through economic, information, or cyber channels, without needing to use nuclear arsenals?
Sam Winter-Levy: Yes, I think it’s certainly possible that new forms of competition may emerge, new technologies may emerge, that kind of route around nuclear weapons, potentially. AI is going to matter for national security in all sorts of ways, many of which are kind of hard to foresee in advance.
But so long as states retain second-strike capabilities, as long as they can still credibly threaten to unleash devastation on an adversary's cities, I think there are good reasons to expect nuclear deterrence to still matter and still constrain a state's actions.
You mentioned economic growth. The United States and South Korea together right now have a combined 1,000-fold economic advantage over North Korea, but pretty limited ability to coerce the North Koreans to do things they don't want to do on issues they care about significantly. So AI could turbocharge a state's economic power, but so long as these systems of nuclear deterrence remain in place, there may still be limits on what that power can achieve.
As for information operations and cyber operations, these are all things to monitor. But the track record of states using economic sanctions, information operations, and cyber operations to coerce their nuclear-armed opponents is extremely mixed, to put it very generously. It's just really hard to coerce nuclear-armed states to do things they don't want to do on issues they care about sufficiently. And I think that's likely to persist so long as nuclear deterrence remains in place.
Luisa Rodriguez: Yeah, it does feel important to notice these cases where countries that are radically dominated economically are still just completely unwilling to bend on the issues that are most important to them.
Sam Winter-Levy: Exactly.
Nikita Lalwani: In addition to things governments should do in a fast-takeoff world, there are certain no-regrets moves that they can take regardless of what world we're in:
- Ensuring that policy processes include AI experts alongside nuclear ones to encourage dialogue between these two sometimes disparate communities.
- Conducting rigorous reviews of nuclear systems to check for vulnerabilities that could be exploited by advanced AI, especially in cyberspace. Herb Lin has a great book about cyber vulnerabilities within the nuclear system. I think shoring those up would be very helpful.
- Carefully calibrating any statements about the need to race to advanced AI, or about the importance of being the first to develop a sort of wonder weapon, since that rhetoric runs some risk of exacerbating risky and costly nuclear competition.
- And then finally, it's more important than ever to maintain channels of communication and pathways to reduce the risk of inadvertent escalation or calamity. It's important to double down on arms control dialogues and to strengthen the significant ethical, political, and legal constraints on the use of nuclear weapons in the first place.
Book and movie recommendations [01:08:53]
Luisa Rodriguez: OK, that was a lot and it was all pretty heavy. I’m curious if you guys have a book or a movie recommendation that you think our audience might enjoy?
Nikita Lalwani: Yeah, just to crib from a colleague who gave this answer on another podcast, a book I really loved this year is George Saunders's book A Swim in a Pond in the Rain. He basically takes a set of short stories by Russian writers and then writes essays about those stories. I think it's just a wonderfully human endeavour, telling stories about the world and then trying to understand them. And it ends up being this really beautiful meditation on the value of reading and writing, and on understanding how human beings think and operate in the world. So I would highly recommend that book.
Luisa Rodriguez: Nice. OK, great.
Sam Winter-Levy: And maybe I can just give one TV recommendation, which is a British TV show called the Up series, which basically started in the ’60s and they follow a group of kids —
Luisa Rodriguez: This is my favourite! Sorry to interrupt.
Nikita Lalwani: We’re obsessed.
Sam Winter-Levy: The first episode in this show is called Seven Up! They got a group of seven-year-old kids in the '60s, and then they check back in on them every seven years: Seven Up, 14 Up, 21 Up, and so on, all the way through to 63 Up, which I think is the most recent one. And you just see these people's lives unfold over the course of many hours of documentary footage, broken down into these seven-year increments. There's just so much humanity conveyed by the passage of time and the stuff of life that these people's lives are filled with. Even if they're not super dramatic lives, you realise how much drama and tragedy and triumph is in every individual's life.
Luisa Rodriguez: Yeah, I could not recommend that series more highly too. Massive plus one. And also I could talk about it for hours, but we have used up all of the time we have. So thank you so much for coming on. My guests today were Nikita Lalwani and Sam Winter-Levy. I really appreciate it. Thank you.
Nikita Lalwani: Thanks so much for having us.
Sam Winter-Levy: Thank you so much for having us.