#238 – Sam Winter-Levy and Nikita Lalwani on how AI won’t end nuclear deterrence (probably)

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that could define the stakes of today’s AI race.

Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:

  • Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
  • Would road-mobile launchers still be able to hide in tunnels and under netting?
  • Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
  • Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?

Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.

This episode was recorded on November 24, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore

The interview in a nutshell

Nikita Lalwani (former White House National Security Council director for technology and national security and nonresident Carnegie Endowment scholar) and Sam Winter-Levy (Carnegie Endowment fellow) argue that the most significant impact of AI on global security isn’t its economic or conventional military boost, but whether it can undermine nuclear deterrence. If nuclear-armed states maintain a “secure second-strike capability,” AI’s ability to confer total geopolitical dominance will remain limited.

Nuclear deterrence is the ultimate check on AI-driven power

As long as an adversary can respond to an attack with a devastating nuclear strike, even a state with a massive lead in AGI cannot fully impose its will:

  • In a nuclear world, victory is determined less by the balance of power (who has the most conventional forces or economic might) and more by the “balance of nerves” — which side is willing to risk nuclear escalation to protect its core interests.
  • Nuclear weapons put a ceiling on coercion; the US is 15 times wealthier than Russia, yet it cannot exert total leverage because Russia can credibly threaten nuclear reprisal.
  • Deterrence relies on a “secure second-strike capability”: the ability to retaliate after absorbing a first strike.

AI could improve targeting — but physics and countermeasures bite hard

AI plausibly helps with:

  • Fusing vast sensor datasets to track submarines.
  • Recognising road-mobile missile patterns from satellite imagery.
  • Hardening or penetrating missile defence and cyber systems.

But severe constraints remain:

  • Submarines: The ocean is vast, noisy, and hostile to sensing. You must find every sub, simultaneously, with zero margin for error.
  • Mobile missiles: Hiders can use decoys, camouflage, tunnels, night movement, and anti-satellite attacks — and many of these responses are low-tech.
  • Missile defence: Economics favour the attacker; decoys are cheaper than interceptors (see the cost sketch after this list).
  • Command and control: Redundancy, deep bunkers, and preauthorised or even automated retaliation systems (e.g. the UK’s “letter of last resort” and the Soviet Union’s Dead Hand) make decapitation unreliable.
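
To make the cost-exchange point concrete, here is a toy calculation. Every price and quantity below is a hypothetical round number chosen purely for illustration, not a real system cost:

```python
# Toy cost-exchange arithmetic behind "economics favour the attacker".
# All figures are hypothetical round numbers, not real system costs.
interceptor_cost = 10_000_000   # per interceptor
decoy_cost = 100_000            # per credible decoy
shots_per_object = 2            # defender fires 2 interceptors per incoming object

decoys_per_warhead = 50         # attacker pairs each real warhead with 50 decoys
attacker_spend = decoys_per_warhead * decoy_cost
defender_spend = (decoys_per_warhead + 1) * shots_per_object * interceptor_cost

print(f"attacker adds ${attacker_spend:,} of decoys per warhead")   # $5,000,000
print(f"defender needs ${defender_spend:,} of interceptors")        # $1,020,000,000
```

On these stylised numbers the defender spends roughly 200 times what the attacker does, which is the heart of the cost-exchange problem for missile defence.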

Crucially, you can’t test a real first strike or malware campaign against the actual target. That makes the required level of confidence nearly impossible to attain.

Sam and Nikita’s view: AI may increase capability, but achieving near-100% certainty remains brutally hard.
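
To see why near-perfect certainty is so demanding, consider how per-target confidence compounds across a whole arsenal. A minimal sketch, with purely illustrative numbers:

```python
# Why "near-100% certainty" compounds brutally: a first strike must destroy
# every target at once, so confidence multiplies across the whole force.
# Both numbers below are illustrative assumptions, not real estimates.
n_targets = 300        # hypothetical count of silos, mobile launchers, and subs
p_per_target = 0.99    # very optimistic per-target kill probability

p_all = p_per_target ** n_targets
print(f"P(every target destroyed) = {p_all:.1%}")   # about 4.9%
# Even 99% confidence per target leaves a ~95% chance that at least one
# weapon survives to retaliate.
```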

Cyber may be the most plausible path to “decapitating” a nuclear response

While command bunkers 700 metres underground are immune to AI-targeted strikes, the digital networks connecting leadership to those weapons are not. This leads to several complications in the risk calculus:

  • AI-supercharged cyberattacks might not permanently disable a response, but they could delay it long enough for an attacker to finish off road-mobile launchers and submarines.
  • Vulnerabilities that could allow a state to penetrate another’s command-and-control networks likely exist.
  • But you cannot test a cyberattack against a nuclear network without risking an accidental nuclear war, leaving any attacker with huge uncertainty.
  • And even if a cyberattack works, some states have systems designed to launch a retaliatory strike even if their command-and-control networks are disabled.

Why “dominance” is a massive, unlikely gamble

Even if a country develops a 95%-effective first-strike capability using AI, the geopolitical calculus remains largely unchanged, for two reasons:

  • Unacceptable damage: If just 5% of a Russian or Chinese arsenal survives, it can still impose vast costs on the United States (see the sketch after this list). Most leaders will not roll the dice on a “splendid first strike” unless they have 100% certainty — which AI cannot provide.
  • The history of restraint: When the US held a temporary nuclear monopoly in the Cold War, it did not use it to conquer the world because of ethical, political, and normative constraints.
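
For a rough sense of scale, here is the arithmetic, using an arsenal size in the ballpark of public estimates and a hypothetical strike effectiveness:

```python
# Back-of-the-envelope illustration of "unacceptable damage".
deployed_warheads = 1_600    # rough public estimate of deployed strategic warheads
strike_effectiveness = 0.95  # hypothetical AI-enabled first-strike kill rate

surviving = deployed_warheads * (1 - strike_effectiveness)
print(f"surviving warheads: {surviving:.0f}")   # about 80
# Even a small fraction of ~80 thermonuclear warheads reaching major cities
# would be catastrophic, hence the term "unacceptable damage".
```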

Fast AI takeoff scenarios and “decisive strategic advantage” deserve more scrutiny — but face underappreciated constraints

A very fast AI takeoff could shrink the time adversaries have to adapt, potentially opening dangerous windows of vulnerability. But even in fast-takeoff worlds:

  • Political and institutional constraints lag behind technology: Integrating breakthroughs into military doctrine, testing, and updating bureaucratic systems takes far longer than any technological advance — especially when the stakes of getting it wrong mean nuclear retaliation.
  • Leaders would need to gamble on AI perfection: Launching a splendid first strike means firing hundreds of nuclear weapons based on a belief in 100% success probability, using capabilities that can’t be tested in advance. The risk of failure is more vivid and salient than the uncertain benefits of “winning.”
  • Technological dominance doesn’t automatically translate to political dominance: The US had unchallenged superiority over Vietnam and the Taliban and still suffered defeat. North Korea resists coercion despite a 1,000-fold economic disadvantage versus the US and South Korea combined.

Urgent “no-regrets” moves for governments now

If AI progress moves fast, “windows of vulnerability” could open before states have time to adapt. Nikita and Sam recommend:

  • Ramp up dialogue between AI researchers at frontier labs and nuclear deterrence experts; currently, these two communities rarely speak.
  • Conduct rigorous reviews of nuclear systems for AI-exploitable vulnerabilities, especially in cyberspace.
  • Calibrate rhetoric carefully: framing AI as a “wonder weapon” or emphasising the need to “race” risks exacerbating nuclear competition.
  • Double down on arms control dialogues to reduce the risk of accidental escalation during an AI-driven arms race.

Highlights

Nuclear deterrence: the “balance of nerves” trumps the “balance of power”

Luisa Rodriguez: To what extent does nuclear deterrence actually prevent states from coercing and pressuring their adversaries?

Sam Winter-Levy: Nuclear deterrence definitely doesn’t end the possibility of coercion, intense competition, pressure between states, or even the risk of outright war. But in the conventional view of nuclear theorists, people like Thomas Schelling and Robert Jervis, nuclear weapons change the nature of these competitions between states.

In a nuclear world, once you have nuclear weapons, the winner of contests and these kinds of bargaining dynamics is determined less by the balance of power — like which side has more weapons, a stronger economy, more resources — and more by the balance of nerves: basically, which side has more resolve, which side is more willing to run a higher risk of nuclear escalation, which side cares more about the particular outcome.

What this means is that there’s generally an upper bound on the kinds of coercion that states can exercise against their nuclear-armed adversaries. Because if you attempt to coerce them on matters that are sufficiently important to them — things like territorial integrity or regime survival, the core political interests of a state — you run the risk of nuclear escalation.

Just to give one example to try and motivate this: if you think about the US and Russia, right now the US is economically far superior; its economy is like 15 times Russia’s. It’s conventionally far superior as well. But there are real limits on how much leverage the US can exercise over Russia — because on issues that the Russians care about sufficiently, they can always threaten moves that raise the risk of nuclear escalation or of a nuclear accident. And that puts a pretty significant limit on how much influence the US can really wield over them.

Will AI undermine nations' credible ability to retaliate after a nuclear strike?

Nikita Lalwani: The central pillar of nuclear deterrence really comes down to what is known as the “second-strike capability,” which is a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. … So long as two nuclear powers can credibly maintain a second-strike capability that can inflict unacceptable damage on their adversary, a first strike would be suicidal. …

Broadly speaking, there are three ways that a state could undermine an adversary’s second-strike capability:

  • The first is that a state could destroy a rival’s entire nuclear arsenal in what is called a “splendid first strike.” This would require pinpointing all of an adversary’s nuclear weapons, including the locations of nuclear submarines and mobile launchers.
  • The second is that a state could prevent a rival from launching a retaliatory strike by disabling nuclear command, control, and communications networks. Those are the networks that support nuclear decision making and communicate those decisions to nuclear forces in the field for execution.
  • And then finally, a state could strengthen missile defences such that a rival could no longer credibly threaten nuclear retaliation.

So the question overall is whether advances in AI will give states these capabilities. And there’s a kind of second question — which is whether and how these capabilities would then translate into geopolitical dominance.

Why nuclear subs would be so hard to track down, even with AI

Sam Winter-Levy: Nuclear subs are generally viewed as the most survivable leg of the [US’s nuclear arsenal]. The US’s subs are extremely hard to find. That’s really for three core reasons.

The first is just that the ocean is huge. I think it’s easy to underestimate how big the ocean is. The scale of the search problem here is just enormous: tens of millions of square miles of ocean that you are trying to find these relatively small objects in.

Second, as you said, water kind of blocks a lot of what we generally use to detect things. Seawater is almost totally opaque to electromagnetic radiation, so techniques such as radar that work pretty well for surface and airborne targets are just much less useful for finding submarines. Remember, they can remain submerged for months at a time, potentially, when they’re out on patrol.

States are generally limited to acoustic methods, but only low-frequency sound waves propagate significant distances in water — and in the process, they interact with all sorts of complicated oceanographic phenomena that distort sounds. A lot of the ocean just contains a lot of ambient noise, and oceans are actually getting noisier over time with increased commercial shipping. And submarines just produce very low levels of signal relative to this background noise.

Third, and finally, modern subs are engineered for silence. So their propeller designs, their hull coatings, their internal machinery, it’s all just designed to minimise noise. Nuclear-powered submarines can be so difficult to detect from other subs that they sometimes collide underwater.

Luisa Rodriguez: That’s an incredible fact. I had never heard that before. And it does feel like it really helps my intuition to understand how hard subs are to find in the water. … It’s not just the surface of a country where you have road-mobile missiles. This is a much larger area.

Sam Winter-Levy: Exactly. It’s really like finding needles in a haystack — where those needles are designed to be as difficult to find as possible, and where you need to find every single needle at the same moment, with zero margin for error.
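
To get an intuition for the acoustics Sam describes, here is a toy passive-sonar budget. Every decibel figure is an illustrative ballpark rather than a real system parameter, but the shape of the problem is real: the signal fades with range while ambient noise stays constant.

```python
# Toy passive-sonar budget: received signal-to-noise ratio (SNR) vs range.
# All values are rough, illustrative decibel figures, not real parameters.
import math

source_level = 120.0    # dB re 1 uPa @ 1 m: a quiet modern submarine
ambient_noise = 70.0    # dB re 1 uPa: moderate shipping and sea-state noise
array_gain = 20.0       # dB: gain from a towed hydrophone array
detection_snr = 10.0    # dB: SNR needed for a reliable detection

def transmission_loss(range_m, transition_m=1_000.0):
    """Spherical spreading near the source, cylindrical spreading beyond ~1 km."""
    if range_m <= transition_m:
        return 20 * math.log10(range_m)
    return 20 * math.log10(transition_m) + 10 * math.log10(range_m / transition_m)

for r in (1_000, 10_000, 100_000):  # 1 km, 10 km, 100 km
    snr = source_level - transmission_loss(r) - ambient_noise + array_gain
    verdict = "detectable" if snr >= detection_snr else "lost in noise"
    print(f"range {r // 1000:>3} km: SNR = {snr:5.1f} dB ({verdict})")
```

On these stylised numbers the detection range collapses to roughly a kilometre, which is why searching tens of millions of square miles of ocean for a quiet boat is so forbidding.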

Luisa Rodriguez: What is the most compelling story that you could give for how AI would allow adversaries to track and target nuclear submarines? …

Sam Winter-Levy: There’s maybe two ways to think about this. … The first one is going to be true of many of these different types of questions, which is: maybe AI can invent things that we can’t anticipate in advance, just like totally new methods that it’s just very hard to think through now. …

The other way of thinking about this is: what are the methods that AI could use that are most continuous with what states currently do, and that just seem most plausible? And there the story would go something like: you can use machine learning to just integrate data from thousands of sensors, filtering out noise, identifying these faint signatures that humans would miss. …

So you can fuse all of that data. Maybe you could also use autonomous underwater vehicles that can kind of patrol, continuously coordinate with satellites, and build a kind of persistent ocean-wide surveillance net. And if you combine all of that data with signals intercepts and human intelligence and all these other sources of data that states have, maybe you could use AI systems to track submarines more effectively. You might also be able to use AI systems to hack into the systems that states themselves use to track and communicate with their own submarines. …

I think first it sounds easy when you say, “Just integrate all these different data from different sensors.” But this is just an extremely hard technical problem, and there are probably limits to what these improvements can yield. … The physics of the undersea domain are brutal. …

So I think the first reason to think this is difficult is just that it’s a hard technical problem. Second, it’s just unlikely that states are going to do nothing in response. … This is going to be a move/countermove dynamic. And states have a lot of countermeasures that they can use to tip the scales in their favour: they can jam signals, they can manipulate sensor data with deceptive decoys — for instance, they can just play recordings of a submarine into an amplified underwater acoustic source.

And states can also use a lot of these technologies to protect their own submarines. … I think there are just a lot of ways that states can add uncertainty to every step of the process of detecting, tracking, and targeting submarines, so it will likely remain a kind of probabilistic affair. And states are probably not going to want to target their adversaries’ nuclear submarines unless they’re very confident that they can take them all out at once — because each submarine could carry as many as 200 warheads, so if even one submarine escapes, that’s probably enough to deprive you of any meaningful notion of victory.

Low-tech defences might still work well against AI

Nikita Lalwani: Road-mobile launchers, like subs, are concealed, camouflaged, and they don’t stay in one place for very long. They’re carried on vehicles that can hide under netting, under bridges, in tunnels, and they’re driven from one concealed location to the next. The survivability of these launchers really depends on the competition between a hider’s ability to keep them concealed on the one hand, and a seeker’s ability to locate and track them on the other.

At least historically, hiders have had some real advantages: they can send mobile launchers to remote locations, they can move mobile launchers in short bursts at times that are selected specifically to make them difficult to track — for example, at night or under extensive cloud cover. …

Sam Winter-Levy: In many ways, this is a proven application of AI systems. The United States probably already has large amounts of data from satellites, signals intercepts, aircraft, and so on — but that data currently outpaces the ability of human analysts to digest it.

And states probably also have a significant stock of images of these mobile launchers. Many of them are just paraded in public through Beijing. You can find open-source images of these launchers, along with information about their signatures, how fast they drive, their weight, and so forth. You could plausibly use these images and signatures to train machine learning algorithms to dramatically speed up the processing of intelligence, making it easier to conduct operations against these vehicles.

So in many ways, like you said, this is sort of a classic “pattern recognition within a big dataset” type problem that in many ways could be suited to AI. The key thing here is that AI can reduce the area you need to search dramatically, and similarly reduce the area you would need to attack by orders of magnitude, potentially. …
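
As a concrete illustration of the kind of pattern-recognition pipeline Sam describes, here is a minimal PyTorch sketch: fine-tuning a pretrained vision model to flag launcher-like vehicles in satellite image tiles. The whole setup is hypothetical, and the training batch is random placeholder data; a real pipeline would involve labelled imagery, georeferencing, and far more besides.

```python
# Minimal sketch: fine-tune a pretrained CNN as a binary classifier over
# satellite image tiles ("launcher-like vehicle" vs "background").
# The batch below is random placeholder data; everything is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head for 2 classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tiles = torch.randn(8, 3, 224, 224)   # stand-in for 8 labelled image tiles
labels = torch.randint(0, 2, (8,))    # stand-in labels

model.train()
loss = loss_fn(model(tiles), labels)  # one training step
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

As Sam notes, the value is throughput: a classifier like this can triage millions of tiles so that human analysts only review the handful flagged as candidates.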

Luisa Rodriguez: What options would the defending countries have for keeping their road-mobile launchers hidden? …

Nikita Lalwani: Most simply, hiders can adopt old-fashioned, low-tech solutions. For example, covering roads with netting or constructing decoys — vehicles made to look and act like mobile launchers, but that aren’t. That would both increase the number of vehicles finders have to monitor (and potentially strike) and decrease their certainty in any given detection.

More dramatically, in a crisis states could use anti-satellite weapons to destroy or impair satellites, which would create holes in coverage that hiders could exploit to move their launchers. This would obviously be a provocative measure, so less likely during peacetime, but could be used during a conflict scenario.

The big picture here is that although it’s impossible to predict exactly how states will react, there are countermeasures available to them, and if they care about the survivability of their mobile launchers, they have every incentive to use them. Of course, finders can then innovate as well, so one might expect a measure/countermeasure cycle that could potentially lead to greater instability. …

Sam Winter-Levy: Defending states have a lot of options that they can take to try to shore up the survivability of their second-strike capabilities, or at the very least, to inject enough uncertainty into the belief of a state that might be considering launching a first strike that they’re going to think very hard about doing so. And many of those options available to them do not require equivalent levels of technological sophistication. Many of these are pretty low-tech measures to just massively expand the search area, or massively increase the number of objects you would need to target — through things like making your launchers drive faster: that’s not a high-tech move, but it can make launching a first strike significantly more difficult to pull off.

Geopolitics of powerful AI

Luisa Rodriguez: Let’s assume that AI does enable states to find all of an adversary’s nuclear weapons — so a proper splendid first strike. Would we expect a state with that capability to be able to impose its will on other states? …

Nikita Lalwani: First, to make the somewhat annoying move of fighting the hypothetical, I just think it’s hard to imagine a circumstance where a state has 100% certainty that it knows the location of all of an adversary’s nuclear weapons. That’s because that would require the state to also have 100% certainty that they’ve seen through any countermeasures. …

Second is that even if AI could get a state to a 100% find rate, there’s uncertainty as to what a state would do with that information. Imagine going to the president today and saying, “This AI system can tell us with 100% certainty where all of China’s nuclear weapons are.” Would he accept that statement unquestioningly, or would he have some doubt as to whether the system was foolproof? Even if he did accept that statement, would he be certain enough to use that information to attempt a splendid first strike? Keep in mind that these are not capabilities that can be tested in advance. So there is a lot of uncertainty there.

Sam Winter-Levy: And just to put a finer point on it: launching a splendid first strike here involves launching hundreds, potentially thousands of nuclear weapons at another state based on a belief that you have 100% probability of pulling it off. That’s a huge gamble. That’s just a huge move to act on.

And I think there’s still this broader question of willingness to act on a capability. The United States had nuclear weapons for a period of time before the Soviet Union did, but it didn’t act on that advantage. There were various historical reasons, but the broader one is that ethical, political, and international norms often constrain states from using the full extent of their available power. US leaders at the time did not think the public would tolerate launching a preemptive nuclear strike on the Soviets. They didn’t want to trigger another war. They didn’t want to be seen as the aggressor in a new conflict.

So there are a bunch of other considerations that might continue to constrain states even in a world where one had the potential to pull off a splendid first strike.

And I think one final point here is that even if you do have unchallenged technological advantage, that doesn’t always translate straightforwardly into the kind of political dominance some people have in mind when they talk about AI giving you the ability to just impose your political preferences worldwide: the complete dominance and control that people like Dan Hendrycks have written about.

Just to give one example, the US clearly had unquestionable technological dominance over Vietnam and over the Taliban, and just suffered an unambiguous defeat in both cases after a couple of decades of trying to impose its political preferences. So this whole question of the relationship between technological power and political power is just a little bit more complicated than the most straightforward stories might imply.

Do fast AI takeoff speeds shift the balance?

Sam Winter-Levy: The critical question here is likely to be the relative speed of two different properties. The first is what is the speed in calendar months or years at which AI progress proceeds and translates into advantage? And the second is what is the speed at which other states, whose nuclear arsenals might be newly threatened, adapt?

And if the first of those (how fast AI progress is taking place) is faster than the second (how fast states are able to adapt) — which could be the case either because you are in one of these very fast takeoff worlds that you just described, or because states are just kind of slow to respond because of bureaucratic reasons or political reasons or any number of other reasons — then you get these windows of vulnerability and instability with year-to-year fluctuation, which can be particularly dangerous. I think a fast takeoff just exacerbates some of those issues. But even if AI progress is not so fast, if it outpaces the ability of states to adapt, then you get these kinds of dangerous windows of opportunity.

I think one factor that complicates this is that, in the case of AI-enabled intelligence processing, US adoption of AI capabilities could be relatively invisible to adversaries. So if there’s suddenly a discontinuous leap, or even a continuous but very rapid leap, in the ability of states to use AI systems to process the signals and data they’re already collecting, then other states may not know that this breakthrough has occurred.

So potentially you could get more significant windows of opportunity opening up, as opposed to industrial-explosion scenarios where we’re coating the ocean in underwater sensors and building massive missile defence architectures. In that world, there are going to be visible changes to the physical environment that other states will be able to see and respond to, and I think that will likely give them more time to respond with countermeasures of their own.

Luisa Rodriguez: You’ve outlined all of these constraints that mean that even if AI is progressing significantly, it’s still pretty difficult to get anywhere near a certain splendid first strike. How many of those constraints still hold if we’re talking about this super-fast-takeoff world?

Sam Winter-Levy: I think even in these fast-takeoff scenarios, some constraints are likely to remain.

First, on the technical side, some technical constraints will surely remain. As we’ve discussed, these are very hard technical problems to solve, and powerful AI systems won’t be able to evade the laws of physics. …

But let’s say the technical constraints evaporate. There are still going to be a lot of political and institutional constraints that will slow a state’s ability to respond. Because even if technology changes overnight, states don’t generally integrate advanced technology at the same speed. That rarely takes place. Doing the kind of testing you need to do, updating doctrine, updating bureaucratic systems: all of this takes much longer in general than the technological breakthrough itself — especially when the stakes of getting it wrong are so high, when you need to avoid triggering a preemptive response and rehearse thousands of steps with no room for error.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.