#61 – Helen Toner on the new 30-person research group in DC investigating how emerging technologies could affect national security

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think machine learning could alter 21st century life in a similar way.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.

How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.
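To make that 'recognise patterns, learn from them, judge new cases' idea concrete, here is a minimal sketch in Python. It is purely illustrative and not from the episode: the toy data and the pass/fail task are made up, and it uses scikit-learn's decision tree as one stand-in for the many kinds of ML algorithm Helen discusses.

```python
# Illustrative only: a tiny model that learns a pattern from labelled
# examples, then applies that learned 'intuition' to an unseen case.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: each row is [hours_studied, hours_slept],
# and the label records whether that hypothetical student passed.
X_train = [[8, 7], [7, 8], [2, 4], [1, 6], [9, 5], [3, 3]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = passed, 0 = failed

model = DecisionTreeClassifier()
model.fit(X_train, y_train)      # learn the pattern from past cases

# Judge a new case the model has never seen; with this toy data the
# learned rule amounts to "studied more than ~5 hours => likely pass".
print(model.predict([[6, 7]]))   # expected output: [1]
```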

Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.

Rapid advances in ML, and its many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.

We cover:

  • Why immigration is the main policy area that should be affected by AI advances today.
  • Why talking about an ‘arms race’ in AI is premature.
  • How the US could remain the leading country in machine learning for the foreseeable future.
  • Whether it’s ever possible to have a predictable effect on government policy.
  • How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
  • Whether it’s possible to become a China expert and still get a security clearance.
  • Whether access to ML algorithms can be restricted, or if that's just not practical.
  • Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
  • Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Highlights

I think maybe a big misconception is around autonomous weapons: of all the effects that AI is likely to have on security and on warfare, how big a part of that is specifically autonomous weapons versus all kinds of other things. I think it's very easy to picture in your head a robot that can harm you in some way, whether it be a drone or some kind of land-based system, whatever it might be. But in practice, while I do expect those systems to be deployed and I do expect them to change how warfare works, I think there's going to be a much deeper and more thoroughgoing way in which AI permeates through all of our systems. It's similar to how electricity in the early 20th century didn't just create the possibility of electrically powered weapons; it changed the entirety of how the armed forces worked. It changed communications, it changed transport, it changed logistics and supply chains.

And I think similarly, AI is going to affect how absolutely everything is done, so I think an excessive focus on weapons misses most of the picture, whether that focus comes from people on the outside who are concerned about what weapons might be developed, or from the inside perspective of thinking about what the Department of Defense, for example, should be doing about AI. I think the most important stuff is actually going to be getting the digital infrastructure in order. They're setting up a massive cloud contract to change the way they do data storage and all of that. Thinking about how they store data, how it flows between different teams, and how it can be applied: when we look back in 50 or 100 years, I think that is going to be a much bigger part of how AI has actually had an effect.

I do think there’s a lot of room for people who care about producing good outcomes in the world and who are able to skill up on the technical side, and then also operate effectively in a policy environment. I just think there’s a lot of low-hanging fruit to slightly tweak how things go, which is not going to be some long-term plan that is very detailed, but is just going to be having a slightly different set of considerations in mind.

An example of this, and it's kind of a grandiose example: in the Robert Caro biography of LBJ, there's a section where he talks about the Cuban Missile Crisis, and he describes Bobby Kennedy having a significant influence over how the decision-making went, simply because he was thinking about the effects on civilians more than he felt the other people in the room were. That slight change in perspective meant that his whole approach to the problem was quite different. I think that's a pretty once-in-a-lifetime, once-in-many-lifetimes experience, but I think the basic principle is the same.

If we were doing the Malicious Use of Artificial Intelligence report again today, the biggest question in my mind is how we should think about uses of AI by states that, certainly to me and to many Western observers, look extremely unethical. I remember at the time we held the workshop, there was some discussion of whether we should be talking about AI that is used in ways that have bad consequences, or AI that is used in ways that are illegal, or what exactly the framing should be. We ended up with this framing of malicious use, which I think excludes things like surveillance, for example. And for me, a really big development over the past couple of years has been seeing how the Chinese government has been using AI as one part, though certainly only one part, of a larger surveillance regime, especially in Xinjiang, where Uyghur Muslims are being imprisoned.

I think if we held the workshop again today, it would be really hard. At the time, our motivation was thinking, “Well, it would be nice to make this a report that can be sort of global and shared, that basically everyone can get behind, where there are clearly good guys and bad guys, and we're really just talking about the really bad guys here”. And I think today it would be much harder to cleanly slice things in that way and to exclude this use of AI from the category of deliberately using AI for bad ends, which is sort of what we were going for.

In government work and in policy work, [it's so important to get] buy-in from all kinds of different audiences with all kinds of different needs and goals. If you're trying to put out some policy document, you need to understand who has to sign off on it and what considerations they're weighing. An obvious example is that members of Congress care a lot about reelection. That's a straightforward example. But anyone you're working with at any given agency is going to have different goals they're trying to fulfill, and navigating that space is sort of a complicated social problem. Being able to do that effectively is, I think, a huge difference between people who can have an impact in government and those who have more trouble.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.