#61 – Helen Toner on the new 30-person research group in DC investigating how emerging technologies could affect national security
By Robert Wiblin and Keiran Harris · Published July 17th, 2019
From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.
Some think machine learning could alter 21st century life in a similar way.
In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.
Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.
Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.
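To make ‘recognising patterns and developing intuitions’ concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the digit-recognition task is just a toy stand-in for the kinds of judgement calls described above.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Learn to recognise handwritten digits from labelled examples,
# then apply the learned 'intuition' to cases never seen before.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)         # learn patterns from the examples
print(model.score(X_test, y_test))  # accuracy on unseen cases
```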
Rapid advances in ML, and the many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.
But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?
In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.
We cover:
- Why immigration is the main policy area that should be affected by AI advances today.
- Why talking about an ‘arms race’ in AI is premature.
- How the US could remain the leading country in machine learning for the foreseeable future.
- Whether it’s ever possible to have a predictable effect on government policy.
- How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
- Whether it’s possible to become a China expert and still get a security clearance.
- Whether access to ML algorithms can be restricted, or whether that’s just not practical.
- Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
- Whether AI could help stabilise authoritarian regimes.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Highlights
I think maybe a big misconception is around autonomous weapons and all of the effects that AI is likely to have on security and on warfare: how big a part of that is specifically autonomous weapons versus all kinds of other things. I think it’s very easy to picture in your head a robot that can harm you in some way, whether it be a drone or some kind of land-based system, whatever it might be. But I think in practice, while I do expect those systems to be deployed and I do expect them to change how warfare works, there’s going to be a much deeper and more thoroughgoing way in which AI permeates through all of our systems, in a similar way to how electricity in the early 20th century didn’t just create the possibility of electrically-powered weapons, but changed the entirety of how the armed forces worked: it changed communications, it changed transport, it changed logistics and supply chains.
And I think similarly, AI is going to just affect how absolutely everything is done, and so I think an excessive focus on weapons is a mistake, whether that comes from people looking from the outside and being concerned about what weapons might be developed, or from the inside perspective of thinking about what the Department of Defense, for example, should be doing about AI. I think the most important stuff is actually going to be getting its digital infrastructure in order. They’re setting up a massive cloud contract to change the way they do data storage and all of that. Thinking about how they store data and how it flows between different teams and how it can be applied, I think that is going to be a much bigger part, when we look back in 50 or 100 years, of how AI has actually had an effect.
I do think there’s a lot of room for people who care about producing good outcomes in the world and who are able to skill up on the technical side, and then also operate effectively in a policy environment. I just think there’s a lot of low-hanging fruit to slightly tweak how things go, which is not going to be some long-term plan that is very detailed, but is just going to be having a slightly different set of considerations in mind.
An example of this, this is kind of a grandiose example, but in the Robert Caro biography of LBJ, there’s a section where he talks about the Cuban Missile Crisis, and he describes Bobby Kennedy having a significant influence over how the decision-making went there, simply because he was thinking about the effects on civilians more than he felt like the other people in the room were. And that slight change in perspective meant that his whole approach to the problem was quite different. I think that’s a pretty once in a lifetime, once in many lifetimes experience, but I think the basic principle is the same.
If we were doing the Malicious Use of Artificial Intelligence report again today, the biggest question in my mind is how we should think about uses of AI by states that, to me certainly and to many Western observers, look extremely unethical. I remember at the time that we held the workshop, there was some discussion of should we be talking about AI that is used that has bad consequences, or should we be talking about AI that is used in ways that are illegal, or what exactly should it be? And we ended up with this framing of malicious use, which I think excludes things like surveillance, for example. And for me, a really big development over the past couple of years has been seeing how the Chinese government has been using AI, only as one small part but certainly as one part of a larger surveillance regime, especially in Xinjiang, with Muslim leaders who are being imprisoned there.
I think if we held the workshop again today, it would be really hard. At the time, our motivation was thinking, “Well, it would be nice to make this a report that can be sort of global and shared and that basically everyone can get behind, that there’s clearly good guys and bad guys, and we’re really just talking about the really bad guys here”. And I think today it would be much harder to cleanly slice things in that way and to exclude this use of AI from this categorization of deliberately using AI for bad ends, which is sort of what we were going for.
In government work and in policy work, [it’s so important to get] buy-in from all kinds of different audiences with all kinds of different needs and goals. Being able to understand, if you’re trying to put out some policy document, who needs to sign off on that and what considerations they’re weighing. An obvious example is, if you’re working with members of Congress, they care a lot about reelection. That’s a straightforward example. But anyone you’re working with at any given agency is going to have different goals that they’re trying to fulfill, and so you have to try and navigate that space; it’s sort of a complicated social problem. Being able to do that effectively, I think, is a huge difference between people who can have an impact in government and people who’d have more trouble.
Articles, books, and other media discussed in the show
Helen and CSET
- Helen speaking to the United States-China Economic and Security Review Commission: Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy.
- Current career opportunities at CSET
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018). Brundage and Avin et al.
- Helen speaking with Julia Galef on the Rationally Speaking Podcast
- Helen on Twitter
80,000 Hours articles
- AI strategy and governance roles on our job board
- The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman
- The Schwarzman Scholarship: An exciting opportunity to learn more about China and get a Masters in Global Affairs by Helen Toner and Robert Wiblin
- A new recommended career path for effective altruists: China specialist by Benjamin Todd and Brian Tse
- Philosophy academia career review: Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and William MacAskill
Other articles mentioned in the episode
- The Metamorphosis by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher in The Atlantic
- How sure are we about this AI stuff? by Ben Garfinkel
- Wikipedia on the German tank problem
- Wikipedia on Export of cryptography from the United States
- DARPA’s Cyber Grand Challenge
- Better Language Models and Their Implications (GPT-2) by OpenAI
- The Path to Power: The Years of Lyndon Johnson by Robert A. Caro
- Searching for truth in China’s Uighur ‘re-education’ camps by John Sudworth, BBC News
- San Francisco votes to ban city use of facial recognition technology by Jeffrey Dastin, Reuters
- XKCD comics: ‘Physicists’ and ‘Here to Help’
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s guest is helping to found a new think tank in Washington DC focused on guiding us through an era where AI is likely to have more and more influence over war, intelligence gathering and international relations.
I was excited to talk to Helen because we think working on AI policy and strategy could be an opportunity for some of our listeners to have a very large and positive impact on the world, and Helen has managed to advance her career in the field incredibly quickly, so I wanted to learn more about how she’d managed to do that.
It’s also just a fascinating and very topical issue. Just today Henry Kissinger, Eric Schmidt and Daniel Huttenlocher wrote an article in The Atlantic warning that AI could destabilise everything from nuclear détente to human friendships.
If you want some background before launching into this episode I can recommend listening to episode 31 with Allan Dafoe for a brisk and compelling description of the challenges governments might face adapting to transformative AI. But you certainly don’t need to listen to that before this one.
Before that, just a quick announcement or two.
Firstly, if you’re considering doing a Philosophy PhD, our newest team member, Arden Koehler, just finished rewriting our career review of philosophy careers. We’ll link to that in the show notes.
Secondly, last year we did a few episodes about operations management careers in high impact organisations, especially non-profits. I just wanted to flag that our podcasts and articles on that topic have been pretty successful at encouraging people to enter the area, which has made the job market for that career path more competitive than it was 12 months ago. That said, we still list a lot of operations-related roles on our job board, at current count 113 of them.
Speaking of our job board, I should add that it currently lists 70 jobs relating to AI strategy & governance for you to browse and consider applying for. Needless to say, we’ll link to the job board in the show notes.
Finally, in the interests of full disclosure, note that the biggest donor to CSET where Helen works is also a financial supporter of 80,000 Hours.
Alright, without further ado, here’s Helen.
Robert Wiblin: Today I’m speaking with Helen Toner. Helen is the Director of Strategy at Georgetown University’s new Center for Security and Emerging Technology, otherwise known as CSET, which was set up in part with a $55 million grant from Open Philanthropy, which is their largest grant to date. She previously worked as a senior research analyst at Open Philanthropy, where she advised policymakers and grant-makers on AI policy and strategy. Between working at Open Phil and joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI ecosystem as a research affiliate for the University of Oxford’s Center for the Governance of AI. Thanks for coming on the podcast, Helen.
Helen Toner: Great to be here.
Robert Wiblin: I hope to get into talking about careers in AI policy and strategy and the time that you spent living in China, but first, what are you doing at the moment, and why do you think it’s really important work?
Helen Toner: Yes, I’ve spent the last six to nine months setting up this center that you just mentioned, the Center for Security and Emerging Technology at Georgetown. Basically the mission of the center is to create high-quality analysis and policy recommendations on issues at the intersection, broadly, of emerging technology and national security. But specifically right now, we are focusing on the intersection of AI and national security as a place to start and a place to focus for the next couple of years.
Helen Toner: We think this is important work because of how AI is gradually reshaping all kinds of aspects of society, but, especially relevant to our work, reshaping how the military, intelligence and national security more generally function and how the US should be thinking about them. And we think that getting that right is really important, and getting it wrong could be really bad. The amount of work currently being put into analyzing some of the more detailed questions about how that looks and what the US government should be doing in response seemed to us a little bit lacking, and so we wanted to bring together a team that could really look into some of those questions in depth and try and come up with more accurate analysis and better recommendations.
AI Policy
Robert Wiblin: Let’s dive into actually talking about some AI policy issues and what people get right and what people get wrong about this. A couple of weeks ago, you gave evidence to the US-China Commission, which I guess is a commission that was set up by Congress to report back to them on issues to do with technology in the US and China.
Helen Toner: That’s right.
Robert Wiblin: And the title of your presentation was Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy. We’ll stick up a link to your testimony there.
Helen Toner: Yeah, that was the title of the hearing that I testified at.
Robert Wiblin: Oh, that was the hearing? Okay, right. You didn’t write that yourself. How was that experience?
Helen Toner: It was very interesting. It was a real honor to go and testify to that commission, and it was in a Senate committee hearing room, which is a very intimidating place to speak. It was encouraging as well that the questions they sent in advance, which I prepared written testimony for, were very related to the types of topics that CSET had already been working on; the hearing itself was mostly Q&A. Actually, while I was preparing, I was kind of scrolling through our Google Drive, looking at the first and second draft reports that people had been putting together and just cribbing all of their answers, which was really great.
Robert Wiblin: How is DC thinking about this issue? Were the people who were interviewing you and asking you questions very engaged? It sounds like maybe they’re really on the ball about this.
Helen Toner: Yeah, it’s definitely a big topic. A huge topic in the security community generally is the rise of China, how the US should relate to China. And AI is obviously easy to map onto that space. There’s a lot of interest in what AI means for the US-China relationship. I was really impressed by the quality of the commissioners’ questions. It’s always hard to know in situations like this if it’s the commissioners themselves or just their excellent staff, but I would guess that at the very least they had really, really good staff support, because they asked several questions where it’s easy to ask a slightly misinformed version of the question that doesn’t really make sense and is kind of hard to answer straightforwardly, but instead they would ask a more intelligent version that showed that they had read up on how the technology worked and on what was concerning and what made less sense to be concerned about.
Robert Wiblin: That’s really good. Are the government and the commission approaching this from the perspective of like, “Ah, no, China is rising and threatening the US”? Or is there more interest in the potential of the technology itself as well?
Helen Toner: Definitely different answers for the US government as a whole, although it’s hard to answer anything for the US government as a whole, versus this particular commission. This commission was actually set up specifically to consider risks to the US from engagement with China. I believe it was set up during the process of China entering the World Trade Organization, when there was much more integration between the US and China, to be a kind of check: to consider, are there downsides, are there risks we should be considering? This commission and this hearing were very much coming from the perspective of: what are the risks here? Should we be concerned? Should we be placing restrictions or withdrawing from certain types of arrangements, and things like that.
Robert Wiblin: Given that, what were the key points that you really wanted to communicate to the commissioners, make sure they remembered?
Helen Toner: I think the biggest one was to think about AI as a much broader technology than most specific technologies that we talk about and think about. I think it’s really important to keep in mind that AI is this very, very general purpose set of technologies that has applications and implications for all kinds of sectors across the economy and across society more generally. And the reason I think this is important is because I think commissions like the US-China Commission and other parts of government are often thinking about AI the way they might think about a specific rocket or an aircraft or something like that, where it is both possible and desirable to contain the technology or to secure US innovations in that technology.
Helen Toner: The way that AI works is just so different, because it is a much more general-purpose technology and also one where the research environment is so open and distributed, where almost all research innovations are shared freely on the internet for anyone to access. A lot of development is done using open source platforms like TensorFlow or PyTorch that for-profit companies have decided to make open source and share freely. A big thing that I wanted to leave with the commission was that if they’re thinking about this as a widget that they need to lock safely within the US’s borders, they’re going to make mistakes in their policy recommendations.
Robert Wiblin: I guess they’re imagining it as kind of a tank or something like that, some new piece of physical equipment that they can control. And the temptation is like keep it for ourselves, make sure that no one else can get access to it. But that’s just like a total fantasy in the case of a piece of software or a much more general piece of technology like machine learning.
Helen Toner: Yeah, especially in the case of machine learning, where it’s not a single piece of software. I think it’s likely there will be, and in fact there already are, controls that apply to specific pieces of software doing specific, for example, militarily relevant things. But if you’re talking about AI or machine learning as a whole, I sometimes find it useful to mentally replace ‘AI’ with ‘advanced statistics’. I think I got that from Ryan Calo at the University of Washington.
Robert Wiblin: We have to keep the t-test for ourselves.
Helen Toner: Right. Whenever you’re saying something about AI, try replacing it with advanced statistics and see if it makes sense.
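To make the substitution concrete: the ‘advanced statistics’ in question is freely available in open-source libraries, which is part of why restricting access is so hard. A minimal sketch of the t-test Rob jokes about, using SciPy; the two samples here are invented for illustration.

```python
from scipy import stats

# Two made-up samples, e.g. a metric measured under two conditions.
a = [0.71, 0.74, 0.69, 0.72, 0.75]
b = [0.66, 0.68, 0.70, 0.65, 0.67]

# Standard independent two-sample t-test.
t_stat, p_value = stats.ttest_ind(a, b)
print(t_stat, p_value)
```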
Robert Wiblin: I think there were some statistical methods that people were developing in World War I and World War II that they tried to keep secret.
Helen Toner: Oh, interesting.
Robert Wiblin: [crosstalk 00:07:11] analysis.
Helen Toner: Is that related to cryptography or something else?
Robert Wiblin: Oh, a lot of it was cryptography, but no, also other things. There’s that famous problem where they were trying to estimate the number of tanks that Germany had produced. The Germans were stupid enough, it turned out, to literally give their tanks sequential serial numbers, and so the Allies tried to use the serial numbers they observed on the tanks they destroyed to calculate how many existed. I think that was a difficult problem that they put a bunch of resources into, and having that estimate was regarded as strategically advantageous. There were probably various other cases, although they wouldn’t expect to be able to keep that secret beyond a year or two, right?
Helen Toner: Yeah. Well, or maybe-
Robert Wiblin: They’re in the middle of a total war there, as well. It’s a very different situation.
Helen Toner: Very different, and also, that I think is more analogous to a specific application, so perhaps a specific machine learning model or something like that, or a specific dataset that is going to be used to train a critical system of some kind. Yeah, I think protecting statistics more generally from spreading would be a much heavier lift.
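For the curious, the textbook frequentist answer to this ‘German tank problem’ estimates the total from the largest serial number observed. A minimal sketch of that estimator; the numbers are invented.

```python
import random

def estimate_total(observed_serials):
    """Minimum-variance unbiased estimator for the German tank problem:
    N_hat = m + m/k - 1, where m is the largest serial number seen and
    k is the number of tanks observed."""
    k = len(observed_serials)
    m = max(observed_serials)
    return m + m / k - 1

# Toy check: 2,000 tanks exist; serial numbers of 30 are observed.
random.seed(0)
sample = random.sample(range(1, 2001), 30)
print(estimate_total(sample))  # close to the true 2,000
```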
Robert Wiblin: Yeah. I’ll put up a link to that story about the tanks. Hopefully I haven’t butchered it too badly. What bad things do you think will happen if the US does take the approach of trying to bottle advanced statistics and keep those advances to themselves?
Helen Toner: I think essentially, if you think of AI as this one technology that has military implications and that you need to keep safely within your borders, then you would expect that there are various things you can do to restrict the flow of that information externally. Two obvious examples would be restricting the flow of people, so restricting immigration, perhaps from everywhere or perhaps just from competitor or adversary nations. And a second would be putting export controls on the technology, which would actually have a similar effect, in that for export-controlled technologies, like aerospace technologies, for example, there’s what’s called a ‘deemed export’ rule. Basically, if you have a lab in the US doing something, and a foreign national walks in and starts working on it, that’s deemed an export, because the technology is in effect being exported into their foreign brain.
Robert Wiblin: I guess we’ve both got foreign brains here right now.
Helen Toner: Indeed, yeah. I’m working on it. I think those kinds of restrictions make sense, first, if the technology is possible to restrict, and second, if you’re going to get buy-in from researchers that it’s desirable to restrict. So yeah, you can say if you’re working on rockets, rockets are basically missiles. You don’t want North Korea to be getting your missile technology. You probably don’t want China to be getting your missile, you probably don’t want Turkey to be, whatever. It’s very easy to build expectations in the field that that needs to stay in the country where it’s being developed.
Helen Toner: And AI is different in two ways. One is that I think it just would be really, really hard to actually effectively contain any particular piece of AI research. And then second, and this reinforces the first one, it’s going to be extremely difficult to get buy-in from researchers that this is some key military advance that the US needs to contain. And so I think the most likely effect of anything that researchers perceive as restrictive or as making it harder for them to do their work is mostly going to result in the best researchers going abroad. Many American researchers, if they wanted to go somewhere else, would probably look to Canada or the UK, but there are also plenty of people currently using their talents in the US who are originally Chinese or originally Russian, who might go home or might go somewhere else. It just seems like an attempt to try and keep the technology here would not actually work and would reduce the US’s ability to continue developing the technology into the future.
Robert Wiblin: I’m not sure if this story is quite true either, but I think I remember reading that there are some encryption technologies that are regarded as export controlled by the United States, but are just widely used by everyone overseas. It’s kind of this farcical thing where they’ve defined certain things as dangerous, but of course it’s impossible to stop other people from copying and creating them, and so it just is an impediment to the US developing products that use these technologies. Maybe I’ll double check that that’s true. But you could imagine that it is, and that’s kind of indicative of just how hard it is to stop software from crossing borders.
Helen Toner: Yeah, I don’t know about that specific case. It certainly sounds plausible. A thing that is not the same but is kind of analogous is that if you’re speaking with someone who holds a security clearance, you can get into trouble if you share with them information that is supposed to be classified, but actually everyone has access to. Things like talking about the Snowden leaks can be really problematic if you’re talking to someone who holds a clearance and who is not supposed to be discussing that information with you, even though that has been widely published.
Robert Wiblin: Yeah. I guess, is that just a case where the rules were set up for a particular environment, and they didn’t imagine this edge case where something that was classified has become completely public but hasn’t been declassified, and they’re stuck? Everyone knows and everyone’s talking about it, but you can’t talk about it.
Helen Toner: I guess so. Again, I don’t know the details of this case. I just know that it’s something to look out for.
Robert Wiblin: Are there any examples of software advances or ideas that people have managed to keep secret for long periods of time, as a kind of competitive advantage?
Helen Toner: Yeah. I think the best, most similar example here would be offensive cyber capabilities. Unfortunately, it’s a very secretive area, so I don’t know many details. But that’s certainly something where we’re talking entirely in terms of software, and there do seem to be differences in the capabilities between different groups in different states. Again, it’s perhaps more analogous, each technique is perhaps more analogous to a single AI model as opposed to the field of machine learning as a whole.
Robert Wiblin: Yeah, and I guess the whole cyber warfare domain has been extremely locked down from the very beginning. I guess machine learning is almost the exact opposite. It’s extremely open, even, I think, by the standards of academic fields.
Helen Toner: That’s right, and I think again here the general purpose part comes into play, where I think if computer security researchers felt like their work could make massive differences in healthcare and in energy and in education, maybe they would be less inclined to go work for the NSA and sit in a windowless basement. But given that it is in fact purely an offensive or a defensive technology, it’s much easier to contain in that way.
Robert Wiblin: That was your main bottom line for the committee: you’re not going to be able to lock this down so easily, so don’t put on export controls and things like that. Did you have any other messages that you thought were important to communicate?
Helen Toner: Yeah. I think the biggest other thing would be to really remember how much strength the US draws from the fact that it does have these liberal democratic values that are at the core of all of the institutions and how the society works as a whole, and to double down on those rather than … I think it’s easy to look to China and see things that the Chinese government is doing and ways that Chinese companies relate to the Chinese government and things like that and feel kind of jealous. But I think ultimately the US is not going to be able to out-China China. And so instead it needs to do its best to really place those values front and center.
Robert Wiblin: What do you think people get most wrong about the strategic implications of AI? I’m especially wondering if there are exaggerated fears that people have, which maybe you read about in the media and roll your eyes at in the CSET offices?
Helen Toner: Yeah. I think maybe a big one is around autonomous weapons and all of the effects that AI is likely to have on security and on warfare: how big a part of that is specifically autonomous weapons versus all kinds of other things. I think it’s very easy to picture in your head a robot that can harm you in some way, whether it be a drone or some kind of land-based system, whatever it might be. But I think in practice, while I do expect those systems to be deployed and I do expect them to change how warfare works, there’s going to be a much deeper and more thoroughgoing way in which AI permeates through all of our systems, in a similar way to how electricity in the early 20th century didn’t just create the possibility of electrically-powered weapons, but changed the entirety of how the armed forces worked: it changed communications, it changed transport, it changed logistics and supply chains.
Helen Toner: And I think similarly, AI is going to just affect how absolutely everything is done, and so I think an excessive focus on weapons is a mistake, whether that comes from people looking from the outside and being concerned about what weapons might be developed, or from the inside perspective of thinking about what the Department of Defense, for example, should be doing about AI. I think the most important stuff is actually going to be getting its digital infrastructure in order. They’re setting up a massive cloud contract to change the way they do data storage and all of that. Thinking about how they store data and how it flows between different teams and how it can be applied, I think that is going to be a much bigger part, when we look back in 50 or 100 years, of how AI has actually had an effect.
Robert Wiblin: Do you think that people are too worried or not worried enough about the strategic implications of AI, all things considered?
Helen Toner: Just people in general, just all the people?
Robert Wiblin: People in DC.
Helen Toner: I think that still varies hugely by people. I suspect that the hype levels right now are a little bit higher than they should be. I don’t know, I do like that classic line about technology, that we generally overestimate how big an effect it’s going to have in the short term and underestimate how big it’ll be in the long term. I guess if I had to overgeneralize, that’s how I’d do it.
Robert Wiblin: You mentioned that people are quick to draw analogies for AI that sometimes aren’t that informative. People very often reach for this analogy to the Cold War and nuclear weapons, and talk about an ‘AI arms race’. And I have to admit, I find myself doing this all the time, because when I’m trying to explain to people why we’re interested in the strategic and military implications of AI, that’s a very easy analogy to reach for. And I guess that’s because nuclear weapons did dramatically change the strategic game for war studies, or for relations between countries. And we think that possibly AI is going to do the same thing, but that doesn’t mean that it’s going to do it in anything like a similar manner. Do you agree that it’s a poor analogy, and what are the implications of people reaching for an analogy like nuclear weapons?
Helen Toner: Yeah, I do think that’s not a great analogy, though it can be useful in some ways; no analogy is perfect. The biggest thing is this question of to what extent is this a discrete technology that has a small number of potential uses, versus being this big umbrella term for many, many different things? Nuclear weapons are almost the pinnacle of discreteness. You can say, “Does this country have the capability to create a nuclear weapon, or does it not? If it does, how many does it have, of which types?” Whereas with AI, there’s no real analogy to that. Another way I find it useful to think about AI is as just sort of gradually improving our software. You can’t say, “Is this country using AI in its military systems?”
Helen Toner: Even with autonomous weapons, you run into the exact same problem: is a land mine an autonomous weapon? Is an automated missile defense system an autonomous system in some way? And I think the strategic implications of a very discrete technology, where you can check whether an adversary has it and counter it, are very different from those of gradually improving all of our systems, making them work better and making them need less human involvement. It’s just quite a different picture.
Robert Wiblin: Yeah, it does seem like there’s something quite odd to talk about or to really emphasize an arms race in a technology that as far as I can tell is predominantly used now by companies to suggest videos for you to watch and music that you’re really going to like, far more than it’s been used for military purposes, at least as far as I can see at the moment. Do you agree with that?
Helen Toner: Yeah, I do agree, and again this relates to overestimating the short-term effects: right now the machine learning systems that we have seem very poorly suited to any kind of battlefield use, because battlefields are highly dynamic, highly unpredictable environments. There’s an adversary actively trying to undermine your perception and your decision-making ability. And the machine learning systems that we have are just so far from ready for an environment like that. They really are pretty brittle. They’re pretty easy to spoof. They do unpredictable things for confusing reasons. So I think really centering AI weapons as the core part of what we’re talking about is definitely premature.
Robert Wiblin: Yeah. I think I’ve heard you say before that you expect the first place AI will really start to bite as a security concern is cybersecurity, because that’s an environment where it’s much more possible to use machine learning techniques, since you don’t have to have robots or deal with a battlefield. Do you still think that?
Helen Toner: Yeah. I mean, in general it’s much easier to make fast progress in software than in hardware. And certainly if we’re talking about states using it, the US system for procuring new hardware is really, really slow. With software, I won’t say that they’re necessarily better, but the way that they handle cyber warfare, as far as I know, is pretty different. I think it will be much easier for them to incorporate new technologies, tweak what they’re doing, and gradually scale up the level of autonomy, as opposed to saying, “Okay, now we’re going to procure this new autonomous tank that will have capabilities X, Y, and Z,” which is going to be a much clunkier and longer-term process.
Robert Wiblin: When people have asked me to explain why we care about AI policy and strategy, I’ve found myself claiming that it’s possible that we’ll have machine learning systems in future that are going to become extremely good at hacking other computer systems. And I find myself wondering after saying that, is that actually true? Is that something that machine learning is likely to be able to do, to just give you vastly more power to break into an adversary country’s computer systems?
Helen Toner: I expect so. Again, I’m not an expert in cybersecurity, but machine learning does well in areas where you can get fast feedback. If you can simulate an environment, for example the software infrastructure of an adversary, your system can learn quickly how to find vulnerabilities and how to erase its own tracks so that it can’t be detected. Compare that to something like robotics, where it’s much harder to gather data quickly. And there is already plenty of automation of some kind used in these hacking systems, it’s just not necessarily learned automation; it might be hand-programmed. Again, I would love to know more about the technical details so I could get more specific, but from the outside it looks like very fertile ground for ML algorithms to gradually play a larger and larger role.
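As a toy illustration of the fast-feedback loop Helen describes, here is feedback-guided random search against a made-up, instrumented target. This is the crude, hand-programmed end of the automation spectrum she mentions, not machine learning proper, and the target function is entirely hypothetical.

```python
import random

def run_target(data: bytes) -> int:
    """Stand-in for instrumented target software: returns a feedback
    score (think 'code coverage'). Entirely hypothetical."""
    score = 0
    if data.startswith(b"HDR"):
        score += 1
        if len(data) > 8:
            score += 1
            if data[8] == 0xFF:  # pretend this branch hides a bug
                score += 10
    return score

def mutate(data: bytes) -> bytes:
    """Randomly replace one byte of the input."""
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

# Feedback-guided search: keep any mutation that increases the score.
best = b"HDR" + bytes(10)
best_score = run_target(best)
for _ in range(10_000):
    candidate = mutate(best)
    score = run_target(candidate)
    if score > best_score:
        best, best_score = candidate, score
print(best_score, best)
```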
Robert Wiblin: Do you know if ML algorithms have already been used in designing cyber attacks or just hacking computers in general, or is that something that’s yet to break into the real world?
Helen Toner: I don’t believe that it’s widely used. There was a competition run by DARPA. It was called the Cyber Grand Challenge or something, which was basically an automated hacking competition. This was in 2016. And I believe that the systems involved there did not use machine learning techniques.
Robert Wiblin: You mentioned earlier that electricity might be a better analogy for artificial intelligence. Why is that, and how far do you think we can take the analogy? How much can we learn from it?
Helen Toner: Yeah. The reason I claim it’s a better analogy, though again no analogy is perfect, is that electricity is a technology that has implications across a whole range of different sectors of society, and it basically changed how we live, rather than just making one specific thing or a small number of specific things possible. And I think that is what we’re seeing from AI as a likely way for things to develop. Who knows what the future holds? I don’t want to say it’s definite.
Helen Toner: In terms of how far you can take it, it’s a little hard to say. One piece that I would love to look into more, I was actually just before the interview looking up books that I could read on the history of electrification, is thinking about this question of infrastructure. Electricity is so clearly something where you can’t just buy an electric widget and bring it in and now your office is electrified, but you really need to sort of start from the ground up. It seems to me like AI is similar, and it would be really interesting to learn about how that happened, both in public institutions but also in people’s homes and in cities and in the countryside, and how that was actually rolled out. I don’t know, I’ll get back to you.
Robert Wiblin: This analogy to electricity has become a little bit more popular. I think Benjamin Garfinkel wrote this article recently, trying to be a bit more rigorous about evaluating how strong the arguments are that artificial intelligence is a really important leverage point for trying to influence how well the future goes. And I guess when I imagine it more as electricity rather than as nuclear weapons, then it makes me feel a little bit more skeptical about whether there’s much that we can do today to really change what the long-term picture is or change how it pans out.
Robert Wiblin: You can imagine an electricity and security analysis group in the late 19th century trying to figure out how to deal with the security implications of electricity, and trying to make that go better. Maybe that would have been sensible, but it’s not entirely obvious. Maybe it’s just that the illusion of distance makes it seem like everyone’s going to end up with electricity soon, so it doesn’t have big strategic implications. But perhaps it did. Have you given any thought to that issue?
Helen Toner: Not as much as I would have liked to. And again, maybe I should go away and read some books on the history of electricity and then get back to you. I do expect that there could have been more thought put into the kinds of technologies that electricity would enable and the implications that those would have. And that is something that we haven’t begun doing at CSET, but that I would be really interested to do in the future. So far we’ve been focused on this US-China competition angle, but it would be really interesting to think through beyond autonomous weapons what types of changes might AI make and what would that imply? So yeah, in the electricity case, that might be, if you have much more reliable, much faster communication between commanders and units in the field, what does that imply? How does that change what you can do? I don’t know how much that was thought through in advance and how much it might have been possible to think through more in advance. But it would be interesting to learn more about.
Robert Wiblin: Yeah, it would be really interesting to find out whether people thought that electricity had really important security implications, and whether they worried that the country that got electricity first and deployed it would have a massive advantage and a lot of influence over how the future went. I guess it kind of makes sense. It was the rich countries of the time that probably electrified earliest, and maybe that really did help them with their colonial ambitions and so on, because they just became a lot richer.
Helen Toner: Yeah. Certainly I think it also makes it clear why it’s a little strange to say, “Oh, who’s going to get AI first? Who’s going to get electricity first?” It’s like it seems more like who’s going to use it in what ways, and who’s going to be able to deploy it and actually have it be in widespread use in what ways?
Robert Wiblin: I guess if you imagine each different electrical appliance as kind of like an ML algorithm, then maybe it starts to make a little bit more sense, because you can imagine electronic weapons, which I guess didn’t really pan out, but you could have imagined that the military would use electricity perhaps more than we see them using it today. And then people could have worried about how much better you could make your weapons if only you could electrify them.
Helen Toner: Yeah, perhaps.
Robert Wiblin: If that’s the case, it seems like AI is like electricity, and it seems like the US government would have to restructure just tons of things to take advantage of it. So it seems kind of likely that actual application of AI to government and security purposes is going to lag far behind what is technically possible, just because it takes so long. Military procurement is notoriously slow and expensive, and it takes a long time for old infrastructure to be removed and replaced by new stuff. I think nuclear systems until recently were still using floppy discs that had totally stopped being manufactured, which actually I think … You’re face-palming, but I think-
Helen Toner: That’s horrible.
Robert Wiblin: No. Well, I’m not sure it is, because it had proven to work. Do you really want to fiddle with something in nuclear systems?
Helen Toner: That’s fair.
Robert Wiblin: Yeah. I think there was a case for keeping it, which they did point out. Anyway, the broader point is that government systems in general are replaced slowly, and mission-critical military systems sometimes even more slowly. Is it possible that it will just be a little bit disappointing in a sense, and the government won’t end up using AI nearly as much as you might hope?
Helen Toner: Yeah, I think that’s definitely possible, and I do think that the places where it will be implemented sooner will be those areas that are not mission-critical and not security-critical. You know, much of DOD is basically one huge back office: logistical and HR and finance systems. There’s an increasing number of commercial off-the-shelf products you could buy that use some form of machine learning to streamline things like that. And so I expect that we’ll see that before we see two drone swarms battling it out with no humans involved, over the South China Sea or wherever it might be.
Robert Wiblin: Yeah, I suppose. I wonder whether that can tamp down the arms race: if both the US and China expect that the other government won’t actually be able to apply ML systems, or won’t take them up very quickly, then you don’t have to worry about one side getting ahead really quickly. There’s just slow government bureaucracy on both sides, so you don’t worry about the other side tooling up way faster than you can.
Helen Toner: Yeah, I think that definitely maybe tamps it down a little bit. I do think that the whole job of a military is to be paranoid and thinking ahead about what adversaries might be developing. And there’s also been a history of the US underestimating how rapidly China would be able to develop various capabilities. So I think it’s natural to still be concerned and alarmed about what might be being developed behind closed doors and what they might be going to field with little warning.
Robert Wiblin: Are there any obvious ways in which the electricity-to-AI analogy breaks down? Any ways that AI is obviously different than electricity was in the 19th century?
Helen Toner: I think the biggest one that comes to mind is just the existence of this machine learning research community that is developing AI technologies and pushing them forward and finding new applications and finding new areas that they can work in and improving their performance. And the fact that that community is such a big part of how AI is likely to develop, I don’t believe there’s an analogy for that in the electricity case. In a lot of my thinking about policy, I think considering how that community is likely to react to policy changes is a really important consideration. I’m not sure there is something similar in the electricity case.
Robert Wiblin: I thought you might say that this analogy would be that electricity is a rival good, a material good, that two people can’t use the same electricity. But with AI as software, if you can come up with a really good algorithm, it can be scaled up and used by millions, potentially very quickly.
Helen Toner: Yeah, that’s true as well, definitely.
Robert Wiblin: I guess that’s another way that it could be transformative a bit more quickly, perhaps, because you don’t necessarily need to build up as much physical infrastructure.
Helen Toner: Yeah, that could be right.
Robert Wiblin: People have also sometimes talked about data as the new oil, which has always struck me as a little bit daft, because oil is rivalrous: two people can’t use the same barrel of oil, whereas data is easily copied. And the algorithms that come out of training on a particular set of data can be copied super easily. It’s completely different from oil in that sense. Do you agree that’s a misleading analogy?
Helen Toner: I do, and I think it’s for the reason that you said, but also for a couple other reasons, a big one being that oil is this kind of all-purpose input to many different kinds of systems, whereas data in large part is very specific to, or what kind of data you need for a given machine learning application is pretty specific to what the machine learning application is for. And I think people tend to neglect that when they use this analogy.
Helen Toner: The most common way that I see this come up is people saying that well, I think Kai-Fu Lee coined the phrase that if data is the new oil, then China is the Saudi Arabia of data. This is coming from the idea that well, China has this really large population, and they don’t have very good privacy controls, so they can just vacuum up all this data from their citizens, and then because data is an input to AI, therefore the output is better AI. Is this some fundamental disadvantage for the US?
Helen Toner: I kind of get where people are coming from with this, but it really seems like it is missing the step where you say, “So what kind of data is who going to have access to? And what are they going to use it to build?” I would love to see more analysis of what kind of AI-enabled systems are likely to be most security-relevant, and I would bet that most of them are going to have very little to do with consumer data, which is the kind of data that this argument is relevant to.
Robert Wiblin: Yeah, I guess the Chinese military will be in a fantastic position to suggest products for Chinese consumers to buy on whatever their equivalent of Amazon is, using that data. But potentially it doesn’t really help them on the battlefield.
Helen Toner: Right. And if you look at things like satellite imagery or drone imagery, and how to process that and turn it into useful applications, the US has a massive lead there. That seems much more relevant than any potential advantage that China has.
Robert Wiblin: Oil is mostly the same as other oil, whereas the data is not the same as other data. It’s kind of like saying PhD graduates are the new oil. The thing is, PhD graduates in what? Capable of doing what? They’re all very specific to particular tasks. You can’t just sub in 10 PhD graduates.
Helen Toner: Yeah. And I mean, there are definitely complications that come from things like transfer learning, which is getting better and better: that’s where you train an algorithm on one dataset, and then you use it on a different problem, or you retrain it on a smaller version of a different dataset. And things like language understanding, where maybe having access to the chat logs of huge numbers of consumers has some use in certain types of language understanding. So I don’t know. I don’t think it’s a simple story, but I guess that’s the point. I think the story people are telling is too simple.
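A minimal PyTorch sketch of the transfer learning pattern Helen describes: freeze a network pretrained on one dataset (here ImageNet, assuming torchvision 0.13+), and retrain a small head for a new problem. The batch and the five-class task are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a network pretrained on ImageNet for a new task with less data.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads weights
for p in backbone.parameters():
    p.requires_grad = False                 # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new 5-class head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a stand-in batch of 'new' data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```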
Robert Wiblin: Let’s push back on that for a second. Let’s say that we get some kind of phase shift here, where we’re no longer just programming machine learning systems to perform one specific task on that kind of data, but instead we do find a way to develop machine learning systems that are good at general reasoning. They learn language, and they learn general reasoning principles, and now it seems like these machine learning algorithms can perform many more functions, eventually going to novel areas, and learn to act in them the same way that humans do. Is that something that you consider at all? Is that a vision that people in DC think about or that people at CSET think about at this point?
Helen Toner: Not something that I consider in my day job. Definitely something that’s interesting to read about on the weekends. I think in DC there’s a healthy skepticism to that idea, and certainly given that CSET is focused on producing work that is going to be relevant and useful to decisions that are coming up in the near term, it’s not really something that’s in our wheelhouse.
Robert Wiblin: Something I saw you arguing in your testimony is that the AI talent competition, inasmuch as there is one, is the US’s to lose. I guess a lot of people imagine that over time, China is going to probably overtake the United States in AI research, in the same way that it is overtaking the US economy just through force of population. But I guess you think that’s wrong?
Helen Toner: Yeah, I do, and I think it’s because it’s really easy to underestimate the extent to which the US is just a massive hub for global talent. When I was in China, I had two friends who were machine learning students at Tsinghua University, a very prestigious Chinese university. I was asking them about where they were hoping to get their internships over the summer, and it was just so obvious for both of them that the US companies were by far the best place to get an internship, and therefore would be super competitive, and therefore they probably wouldn’t get it, and so they’d have to go to a different place. And I think it’s really easy to overlook that from within the US, how desirable it is to come here.
Helen Toner: And I included in my testimony at the end a figure that came from a paper looking at global talent flows. The figure relates to inventors, so holders of patents, which is not exactly the same as AI researchers, obviously. But I included it because it’s just really visually striking. Basically, it’s looking at different countries and their net position, in terms of how many inventors, where an inventor is a patent holder, I think, how many inventors they import versus export.
Helen Toner: First off, China is a massive net exporter, so they’re losing something, I’m just eyeballing this chart, around 50,000 people a year, net leaving China. And then all these other countries, they’re sort of around that same range, in the thousands or maybe tens of thousands, and most of them are sort of either exporting or they’re very, very slightly importing. And then you just have this massive spike at the far right of the chart for the United States, where its net importer position is around 190,000 people, which is just way off the scale of what all these other countries are doing.
Helen Toner: I haven’t seen a chart like that for AI researchers or for computer science PhDs, but I would guess that it would be pretty similarly shaped. I think China is going to gradually do a better job of retaining some of its own top talent at home, but I really can’t see it, short of massive political change, really can’t see it becoming such a hub for people from other countries. And certainly if you think about the prospect of the US losing 50,000 really talented people to go live in China because they think it’s a better place to live, I just think that’s completely ludicrous really.
Helen Toner: And again, this comes back to the point of the United States leaning into the advantages that we do have, and those really do include political freedom, freedom of expression and association, and even just having clean air and good infrastructure. Maybe that last point, the good infrastructure, is one where China can compete. But everything else, I think the US is in a really strong position if it will just maintain them.
Robert Wiblin: Yeah, I think I’ve heard Tyler Cowen make the argument that it’s clear that DC isn’t taking AI that seriously, because they’ve done absolutely nothing about immigration law to do with AI. There’s no particular program for AI researchers to come into the United States, which you’d think there would be if you were really worried about your competitive situation and losing technological superiority in that technology. If you think the US government should do anything about AI, is changing immigration laws so that AI scientists can come to America the no-brainer?
Helen Toner: Yeah, I definitely think that’s the no-brainer if you ignore political considerations, and the problem is that immigration is just this hugely political issue here. There’s so much deadlock on all sides. If you try to make some small obvious-seeming change, then people will want it to become part of a larger deal. One person I heard who worked a lot on immigration policy said that if you try to put any kind of immigration legislation through the Congress whatsoever, it’s just going to snowball and become comprehensive immigration reform, which is then this huge headache that no one wants to deal with.
Helen Toner: So I do think it’s the obvious low-hanging fruit, political considerations aside, but the political considerations are really important. So in our project on this, we are looking into changes that don’t need legislation, that can just go through agencies or be done through executive action, in the hope that those could actually be achieved. I don’t know, I think Tyler Cowen’s quote is cute, but it doesn’t necessarily reflect the way that government actually works.
Robert Wiblin: You said in your testimony that you thought it would be pretty dangerous to try to close up the openness of the current AI ecosystem. How could that backfire on the US?
Helen Toner: The thing I’m most concerned about would be the US government taking actions in that direction that don’t have a lot of buy-in from the research community. The AI research community cares a lot about the ability to publish work openly, to share it, to critique it. There was a really interesting release recently from OpenAI, where they put out this language model, GPT-2, which could generate convincing pieces of text. When they released it, they deliberately said they were only going to release a much smaller version of the model, and not the full version, because of concerns that it might be misused. The reaction within the research community was one of real outrage, which was really interesting, given that they explained what they were doing and said it was explicitly for reasons of public benefit, basically. And still they got all this blowback.
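To make the staged release concrete: the small GPT-2 checkpoint that OpenAI did publish can now be run in a few lines. Below is a minimal sketch using the Hugging Face transformers library, which is an assumption made purely for illustration (it is not how OpenAI originally distributed the model), and the prompt and sampling settings are arbitrary.

```python
# Minimal sketch: generating text with the small GPT-2 checkpoint that was
# publicly released. Uses Hugging Face `transformers` (an illustrative
# assumption; OpenAI's own release used different code and tooling).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # "gpt2" = the small ~124M-parameter model
set_seed(0)  # make the sampled output reproducible

for out in generator("Emerging technology could affect national security by",
                     max_length=40, num_return_sequences=2):
    print(out["generated_text"])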
Helen Toner: And so I think if the US government took actions to restrict publishing in a similar way, it would be much more likely to do so in a way that the AI research community would see as even worse. I do think that would prompt at least some significant number of researchers to choose a different place to work, not to mention slowing down the US’s ability to innovate in the space, because there obviously are a lot of great symbiotic effects you get when researchers can read each other’s work openly, when they’re using similar platforms to develop on, and when there are shared benchmarks to work from.
Robert Wiblin: So yeah, I guess an attempt like that to try to stay ahead of everyone else could end up with you falling behind, because people just jump ship and leave and want to go do research elsewhere. And then also, your research community becomes kind of sclerotic and unable to communicate.
Helen Toner: Right. And so I do think there’s plenty of room for, and maybe a need for, a conversation about when complete openness is not the right norm for AI research, and I really applaud OpenAI for beginning to prompt that conversation. But I think it’s very unlikely that the government should be the one leading it.
Robert Wiblin: Let’s just be a little bit more pessimistic here about the odds of CSET having a positive impact for a second. What reason is there to think that the US government is realistically going to be able to coordinate itself to take predictably beneficial actions here? Could it be that it’s just better for the government to just stay out of this area and let companies that aren’t so threatening to other countries just lead the way in this technology?
Helen Toner: Yeah. I think I would not describe the effect we’re trying to have as trying to get some kind of coordinated whole of government response that is very proactive and very large. Instead, I would think that there are going to be government responses to many aspects of this technology, some of which may be application-specific, a regulation around self-driving cars or what have you, and some of which may be more general. So there’s definitely been a lot of talk about potential restrictions on students or restrictions on companies and whether they’re able to work with US partners.
Helen Toner: I think there are going to be actions taken by different parts of the government, and we would hope that our work can help shape those actions to be more productive, more likely to have the effects they’re intended to have, and better grounded in a real understanding of the technology. That’s as opposed to trying to carry out some grand AI strategy, which I agree would be kind of dicey even if you could get the strategy executed, and it would certainly be extremely difficult to get to the point where any coordinated strategy is being carried out.
Robert Wiblin: 80,000 Hours were pretty excited for people to go into AI policy and strategy and do the kind of thing that you’re doing. But the biggest pushback I get is from people who are skeptical that it’s possible to reliably inform policy on such a complicated topic in a way that has any reliable effect. Even if you can understand the proximate effects of your actions and the things that you say, the effects further down the chain of causation are so hard to understand, and the government system you’re part of is so chaotic and full of unintended consequences, that even someone who’s very smart and understands the system as well as anyone can is still going to be at a bit of a loss to figure out what to say that will help rather than hurt. Do you think there’s much to this critique of AI and other difficult policy work?
Helen Toner: I think it’s a good critique in explaining why it doesn’t make sense to come up with grand plans that have many different steps, involve many different actors, and solve everything through some very specific course of action. But I also think the reality of how so much of policy works is that there are people who are overworked, who don’t have time to learn about all the different areas they’re working on, and who have lots of different things on their minds. Maybe they’re thinking about their career, maybe they’re thinking about their family, maybe they’re hoping to do a different job in the future.
Helen Toner: But I do think there’s a lot of room for people who care about producing good outcomes in the world and who are able to skill up on the technical side, and then also operate effectively in a policy environment. I just think there’s a lot of low-hanging fruit to slightly tweak how things go, which is not going to be some long-term plan that is very detailed, but is just going to be having a slightly different set of considerations in mind.
Helen Toner: An example of this, and it’s kind of a grandiose example, but in the Robert Caro biography of LBJ there’s a section about the Cuban Missile Crisis, and he describes Bobby Kennedy having a significant influence over how the decision-making went, simply because he was thinking about the effects on civilians more than he felt the other people in the room were. And that slight change in perspective meant that his whole approach to the problem was quite different. That’s a once-in-a-lifetime, maybe once-in-many-lifetimes, experience, but I think the basic principle is the same.
Robert Wiblin: I guess it’s because working with government, you get this huge potential leverage from the power and the resources that the government has access to, and then on the flip side, you take this hit that it’s potentially a lot harder to figure out exactly what you should say, and there’s a good chance that the actions that you take won’t have the effect that was desired. You’ve kind of got to trade off these different pros and cons of using that particular approach to try to do good.
Helen Toner: Yeah, and there’s a difficult thing when you’re deciding how you want to shape your career: it’s hard to choose a path where you will predictably end up in a situation with a lot of leverage over some important thing. It’s more likely that you’ll find something where you can either make slight changes often, or where there’s some chance that an important situation will come up and you’ll get to play a role in it.
Helen Toner: But then the problem is, if you go with the option where there’s a chance a big situation will come up and you’ll get to play a role in it, there’s a much greater chance that it won’t, and you’ll spend most of your career doing much less important stuff. So there’s a difficult set of prioritization and motivation questions involved: is that the kind of career you want to have? And how should you feel about the fact that, looking back, you’ll most likely feel like you didn’t accomplish that much, even though ex ante there was a chance you would get to be part of an important time?
Robert Wiblin: All the way back in February 2017, there was a two-day workshop in Oxford that led to a report we’ve talked about on the show a few times before, The Malicious Use of Artificial Intelligence, which had fully 26 authors from 14 different institutions writing what was, I guess, a consensus view on the concerns you all had about how AI might be misused in future. You were one of the many authors of that report. Two years after it was written, how do you think it holds up, and what might you say differently today?
Helen Toner: I think it holds up reasonably well. The workshop was held in February 2017, and then the report, building on the workshop, was published in February 2018. Something that was amusing at the time was that we had mentioned in the report the possibility that machine learning would be used to generate fake video, essentially. I believe in the workshop we talked about it being used for political purposes, and then in the meantime, between the workshop and the report, there were actually the first instances of deepfakes being used in pornography. It was interesting to see that we’d got close to the mark, but not necessarily hit the mark, on how it might be used.
Helen Toner: I think the biggest thing, if we were doing it again today, the biggest question in my mind, is how we should think about uses of AI by states that, certainly to me and to many Western observers, look extremely unethical. I remember at the time we held the workshop, there was some discussion of whether we should be talking about AI that is used in ways that have bad consequences, or AI that is used in ways that are illegal, or what exactly it should be. And we ended up with this framing of malicious use, which I think excludes things like surveillance, for example. And for me, a really big development over the past couple of years has been seeing how the Chinese government has been using AI, as only one small part but certainly one part of a larger surveillance regime, especially in Xinjiang, with the Muslim Uyghurs who are being imprisoned there.
Helen Toner: I think if we held the workshop again today, it would be really hard. At the time, our motivation was thinking, “Well, it would be nice to make this a report that can be sort of global and shared, that basically everyone can get behind, where there are clearly good guys and bad guys, and we’re really just talking about the really bad guys here.” And I think today it would be much harder to cleanly slice things in that way and to exclude this use of AI from the category of deliberately using AI for bad ends, which is sort of what we were going for.
Robert Wiblin: One thing that troubled me a lot from my interview with Allan Dafoe was the relationship between AI and authoritarian states, and perhaps their improved capacity to track people, analyze their moods and beliefs, and basically make it very hard for people to engage in organized dissent and potentially change their country for the better. I heard you say on Julia Galef’s podcast that you think people’s fears about the Chinese social credit system are somewhat overblown, and that’s something we talked about and were a bit worried about in that interview. Maybe I could get you to explain why you think the social credit system isn’t perhaps everything people have made it out to be, and also just comment on whether people should be worried about the implications of ML and big data for authoritarian states.
Helen Toner: Yeah. The social credit question is really interesting. I worry a little bit that when I say the Western coverage of social credit has been overblown, people hear, “Oh, actually there is nothing to be worried about here.” I do think the Chinese government has plenty of quite concerning plans for what they would like to do with social credit, but those are very much at the prototype stage. More importantly, another thing I find irritating is that people want to talk about the social credit system as if it’s the be-all and end-all of Chinese state control, when it actually seems to be a relatively small part.
Helen Toner: In fact, the widespread surveillance, the introduction of these camps in Xinjiang, things like that, are part of a much larger infrastructure and apparatus that I think is extremely concerning. Certainly also the Great Firewall and the censorship within chat apps, for example. If you post something on WeChat that is being censored, it might just be deleted without your noticing, or you send a message to a friend and it just doesn’t get sent.
Helen Toner: So I don’t know. I guess the short story is that I do think this is really concerning, and I think it’s likely to continue. On the other hand, I’m not convinced yet of how much of a threat it poses to US security. In the circles I move in, there’s a lot of concern that this sort of Chinese authoritarianism and population control, and also the fact that Chinese companies are selling these systems abroad, has some really concerning geopolitical implications for the US. And honestly, I haven’t heard a fully fleshed-out version of that argument that I’ve found very compelling.
Helen Toner: So I guess where I end up landing is thinking that this is really, really concerning for the populations that it affects, and something that I find really troubling, while also thinking that I’m not sure that from a security perspective the US needs to be especially concerned about it. And so perhaps we should be responding to it from a different perspective, from a human rights perspective or something like that.
Robert Wiblin: Couldn’t you worry that if this technology becomes really easy to operate, then the temptation for the US to use it on its own population could become too great, or it could end up being misused in more subtle ways? The US isn’t going to turn into China overnight, but there might be some technologies of population control that we would just rather didn’t exist.
Helen Toner: Yeah, I think that’s definitely something to be concerned about. That, I think, is not usually what people in DC are talking about when they’re concerned about the spread of authoritarianism. But I definitely think it’s concerning, and there are interesting discussions to be had here. For example, San Francisco recently putting a moratorium on facial recognition, saying this is a technology that is not mature enough to be used in these kinds of law enforcement contexts, is a really interesting step, I think. It would be good to see more consideration of how we should think through the deployment of technologies like this, which obviously can have some safety and security benefits. I felt super safe in China; there’s just very little petty crime. And then thinking through how to trade that off against political freedom, privacy, all those considerations.
CSET
Robert Wiblin: Yeah, let’s dive into talking about CSET for a little bit. Lots of people have ideas for research institutes that could be set up. How is it that CSET actually managed to get off the ground?
Helen Toner: I think it was a confluence of a few different factors. I think most important was this kind of demand within DC for more information about AI, better understanding of AI, and a better understanding of what implications it has and what implications it doesn’t have, because a lot of people working in DC, especially working in government roles, have so many different topics that they need to be able to understand and react to. And so when some new topic comes up, especially one that is highly technical like AI, it’s very hard for them to quickly get spun up on what they should be doing about it and how they should be thinking about it, and very hard for them to tell the difference between good ideas and good analysis and sort of bad ideas or oversimplified cliches.
Helen Toner: One piece of why CSET got set up was this demand for information that we saw in what we thought was an important space. Another piece that was really important was that Jason Matheny, who’s now the Executive Director of CSET, was able to come in and lead it. He has experience as director of IARPA, which is basically an organization that funds the development of new technologies for the intelligence community in the US government, and is really knowledgeable about AI and about broader technology issues generally, as well as having this network and this understanding of how DC worked. And so I think that was an important piece.
Helen Toner: And then certainly a third important piece was that we were able to find Georgetown as an institutional home. It both has this really great cachet in DC as a place to do work on security, with lots of experts there across all the different areas relating to national security more generally, and also has the connections to decision-makers. So the fact that Georgetown was willing to host us, and that the dean of the School of Foreign Service was really excited about making CSET work, made it much, much easier for us to get started and hit the ground running, because we were hosted within a university that people know and respect.
Robert Wiblin: What exactly is the gap that CSET fills that was previously vacant? Is it AI research specifically, or maybe a technical understanding of technologies that’s kind of lacking in DC? Or is it just that not many people are looking into emerging technologies and what security implications they have, across the board?
Helen Toner: It’s looking at how the national security establishment in DC should be and shouldn’t be responding to developments in technologies like AI. As these changes begin to happen in the external world, policymakers obviously want to think, should we be doing something about this, should we be funding something, should we be regulating something, should we be procuring something for our military? And there are so many options, and it can be really difficult to know which ones to take. So the gap that we wanted to fill was looking at what that group of people, policymakers and decision-makers, should be doing based on AI, which obviously takes an understanding both of the technology and of what is already happening and what is likely to happen in the future, as well as understanding the national security side of it. How does the military work? How does the intelligence community work? What are they trying to do? How are they trying to do it? What kinds of changes are feasible or not feasible?
Robert Wiblin: So people who aren’t already convinced, what’s the case that emerging technology and security is just a really important issue that more people should be thinking about?
Helen Toner: I think looking at historical analogies is maybe a helpful way to go here. If you look back at what we now call security studies (King’s College still calls it war studies, for example), at how war has changed over time and how the relationships between states have changed over time, it’s just really clear that the development of new technologies has a huge effect on that. If you think back to the difference between fighting with bows and arrows on horseback and having access to even really simple guns, that was just a huge difference, and it totally changed not just how you should fight on a single battlefield but also how you should structure your forces, how you should plan, how you should provision them, all of that kind of thing.
Helen Toner: And when you look at more recently developed technologies, things like aircraft or, obviously, nuclear weapons, similarly there are these huge knock-on effects for how countries relate to each other, how they would relate on the battlefield, and how that affects whether they want to come to a conflict situation at all. Not to mention, more recently, the way developments in cybersecurity and other technologies have created an entirely new field of ways that hostile or semi-hostile states can relate to each other, off the battlefield but still in a slightly adversarial setting. It really changes just about everything about how countries relate to each other and how they make decisions: what they think is acceptable and unacceptable, and what they don’t want to do because they feel deterred by some potential future action.
Helen Toner: And I think navigating transitions like that gracefully can reduce the chances of unintended conflict. One example I find compelling, though I haven’t dug into the historical record of whether it’s correct, is the idea that World War I was partially a result of changes in force structure and transportation, even trains: once one army began boarding the trains and getting closer to the front, the other countries involved had to begin the same process, which started a mobilization into war that was really hard to pull back from. Again, I’m not 100% sure that’s an accurate causal story, but it certainly makes sense.
Robert Wiblin: It could have been.
Helen Toner: It could have been. And I think we could see similar things now, even things like what is the appropriate way to retaliate to cyber attacks and how does that apply to cyber attacks on corporations versus cyber attacks on public assets, is a sort of a more modern example.
Robert Wiblin: Yeah, okay. So changes in technology potentially change the balance of power between countries, and also change the dynamic of the threats they might pose to one another. I guess the most famous one is mutually assured destruction. Nuclear weapons set up this situation where we had to figure out what kind of stable arrangement you can have with them. With intercontinental ballistic missiles, an attack can arrive within just ten minutes or so, so you have to rearrange everything so that you don’t end up going to war, whereas previously, obviously, it would have taken much, much longer.
Helen Toner: Right. Maybe another example is the advent of machine guns and the way they were used in World War I. It seems possible that if military leaders had realized in advance how trench warfare was going to look, a lot of horrific human suffering could have been avoided in those trenches, because there just wasn’t a theoretical understanding, or a way of thinking about military strategy, that took into account the way machine guns could just mince people up over and over again.
Robert Wiblin: Yeah. That might have been good to know ahead of time.
Helen Toner: Right.
Robert Wiblin: It sounds like you’re mostly focused on state actors. Are you also interested in how technology can enable terrorist groups or non-state actors?
Helen Toner: Yeah, definitely interested in it. We hope that our research is going to be useful and is going to be policy-relevant and relevant to decisions that our key audiences are having to make. A lot of the topics we choose to work on are driven by what we’re hearing in conversations with those people. Right now a lot of that is focused on how AI relates to US-China competition, and so that’s a lot of the work that we’ve begun doing. Certainly, non-state actors come up. I think it’s an important topic. It’s one that I’ve thought about a little less.
Robert Wiblin: Do you have any examples of successes in technology and security studies in the past that you can look to as exemplars of how you might be able to help going forward?
Helen Toner: Yeah, the go-to example would be RAND in the early Cold War era, thinking through the strategic implications of how nuclear weapons worked and what that implied for deterrence, and eventually, as you mentioned, coming up with mutually assured destruction as a stable deterrence regime.
Robert Wiblin: You have decided to focus basically exclusively on AI for the first couple of years. Were there any other technologies that were in the running as potential focus areas at first?
Helen Toner: Yeah, definitely. Our motivation here was thinking about which of these technologies is there a real demand to hear more about? And also, which of these technologies do we think are actually likely to have big effects in the next few years? We certainly considered working on biotechnology, synthetic biology, that kind of work. There, essentially we think that the Center for Health Security at Johns Hopkins is just doing really excellent work, and we didn’t want to be stepping on their toes, and so for the sake of that and also for the sake of focusing on one technology, we did decide to stick with AI for now.
Helen Toner: Others come up as well. Quantum computing comes up a lot as a potential area we might focus on in the future. Hypersonic weapons are an area of huge interest to the US military. Essentially, the way we want this to work is that our work is shaped, again, by what is needed and what is useful right now. It may well be that in two years we look around, and my guess would be that AI continues to be really relevant and continues to change the game in new ways, in which case we would continue working on it. But we wanted to leave open the possibility that AI and machine learning do a bit of a blockchain and kind of fall off the relevance radar, in which case we would be able to move on to some other technology where we could be more useful. Or if AI continues to be relevant, and something else seems under-explored, then we could obviously spin up new and additional programs.
Robert Wiblin: Did you hear that, listeners? Blockchain, no longer relevant. I’m sure there are like seven furious listeners right now. What is CSET’s model or theory for how it can have an impact? Is it talking to policymakers or publishing articles? What’s the vision?
Helen Toner: Yeah, it’s a mix. The bulk of our work is going to be proactive longer-term research projects that we choose and work on over the course of several months each. Those topics are, as I mentioned, informed by what we’re hearing would be interesting to policymakers, but then that’s sort of us thinking about what framing we want to use, what exact version of the question we think is going to be most useful. And those will usually, we think … Everything that I say about CSET is subject to the caveat that we’ve existed for six months, so who knows what we’ll look like six months from now? But our plan would be that most of those research projects would end up with a longer report, accompanied by some shorter and more accessible version, for example, an op-ed or something like that.
Helen Toner: And then those outputs can also be used to go and talk to various people in government who are interested in hearing about the results that we’ve come up with. Having a newly-published report is always a good excuse to get in touch with people and a good excuse for them to bring you in. And as well as that more proactive longer-term work, we’ll also do some shorter-term more reactive things. We’ve already done a little bit of this, so that’s looking into more specific questions that some specific agency or office is interested in knowing the answer to and trying to give them a fast turnaround answer on that.
Robert Wiblin: This is a slightly hard question to figure out exactly what I’m asking, but how much of the impact do you think will come from doing really complicated analysis and having great insights, versus just having your head screwed on and knowing something about this topic, and preventing people’s fantasies from getting away from them? You know, people hear about security threats, and then I think there’s a risk that people in government can lose their heads and do really silly things. And just talking to someone who actually knows about the area can bring us back to planet Earth. Do you think that just being experts who are not going to lose their heads is a big win?
Helen Toner: I’m not sure I would put it like that. I would say instead that I don’t expect most of the value we add to come from deep thinking that produces brilliant new ideas no one has ever thought of before, but rather from having the time and space to look into an issue and try to figure out what the status quo is and what the other options are. So for example, one of our big projects right now responds to a topic lots of people wanted to know about: AI expertise is such an important input to AI progress, so how is the US doing on that? How is China doing? How could we do better? How does it all work?
Helen Toner: It’s just having the time to look into the statistics on where students and researchers are, how they actually move between countries, and how that seems to be affected by immigration policy, for example. And then, if you want to dig into the immigration policy angle, which one of our fellows is doing, it’s identifying very specific changes to the US regulatory environment or the application processes: not just which visas are available to people, but what conditions those visas are available under, what paths there are between student status and permanent resident status, things like that. Us having the time and space to think through those very concrete options, and making it as easy as possible for policymakers to then come in and say, “Oh yeah, great, I want to do these three things they put in their white paper,” is, I think, a really underappreciated value add, because again, so many of the people with decision-making power in these roles just have no time to think through different options and come up with really specific plans.
Robert Wiblin: You’ve only been around for six months. Do you have any messages for policymakers at this point, or is it just too early and you don’t want to jump the gun before you’ve actually thought things through?
Helen Toner: We definitely have some early recommendations and early conclusions. More fleshed-out versions of them will be coming out in the reports that we will begin to publish over the next couple months. One set of recommendations is definitely around this human capital question. How can the US better attract and retain top AI talent? There are just specific recommendations that will be coming out in an immigration policy report about tweaks to the H-1B process, tweaks to the green card process, changes to how the organizations that are running these screening and application processes allocate their funding, things like that.
Helen Toner: Another set of recommendations we’ve put out was a comment on a request for information from NIST, the National Institute of Standards and Technology, asking for input on AI standards. There we recommended that NIST try to develop a national AI test bed, where different AI products and applications can be tested and evaluated, and also that NIST take on the difficult work of establishing technical standards for the safety and reliability of AI systems. Having standards like that, I think, could dramatically improve our ability to deploy AI systems in important settings and to know whether those systems are reliably going to do what we want them to do.
Helen Toner: And I think the really tough thing there is going to be developing these technical standards. Once some organization is able to do that, I would expect many different institutions around the world to then pick up on those technical standards. So that was a set of recommendations we had for NIST.
Robert Wiblin: It seems like other groups that might take this on in academia or think tanks or in the government mostly haven’t allocated a lot of resources to thinking big picture about the implications of AI. Why do you think that is?
Helen Toner: I think there are a few reasons. One reason is that, again, resources are tight, time is tight, and it’s hard to allocate resources to a brand new thing. Another reason is that there’s a real lack of people who can understand both the technical and policy sides of this space. That feeds into CSET’s two primary goals: one is to produce research and recommendations on these topics and hopefully influence policy decisions or the discourse directly, and the second is to train up the next generation of people who can think through these problems and take on government roles.
Helen Toner: Because there is a real lack of people who have sufficient familiarity with machine learning and the details of how the technology works, and who can then also turn around and go talk to an undersecretary of defense, speak that person’s language, understand the concerns they’re facing day to day, and talk in detail about topics that are going to be useful to them. We really hope that a few years from now, CSET will have played a role in making sure there’s a much larger set of people who can traverse both of those worlds.
Robert Wiblin: Looking at your staff page, the people you’ve managed to hire in the last six months, it seems like you’ve invested really heavily in data scientists and people who know a lot about data science. What’s the plan there, having such technical people?
Helen Toner: Basically our research staff is divided into two teams. One is our analysis team: research fellows and research analysts who are leading and executing the research projects I’ve talked about a little bit. The other is the data science team, whose role is essentially to gain access to data that the analysis team can use in that research. The hypothesis here is that there are huge amounts of openly available data on AI research: openly published papers, data on AI investment, and data on specific people, such as job postings and resumes.
Helen Toner: And honestly, the hypothesis is that there’s a lot of insight to be gained here that other actors may not be looking for, and that the intelligence community in particular has perhaps overlooked precisely because these sources are openly available. We’re hoping that by having access to all of that data and combining it in intelligent ways, we’ll be able to notice trends and find insights that others may miss.
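As a flavor of the kind of open-source analysis Helen describes, here is a minimal sketch, assuming the public arXiv API and the Python feedparser library, that counts recent machine learning preprints by submission year. The category, result cap, and the choice of arXiv itself are illustrative assumptions, not a description of CSET’s actual data pipeline.

```python
# Toy open-source-data example: count recent machine learning preprints on
# arXiv by submission year, using the public arXiv Atom API.
# Illustrative only; real analyses combine many richer sources.
from collections import Counter
import feedparser

URL = ("http://export.arxiv.org/api/query"
       "?search_query=cat:cs.LG&start=0&max_results=200"
       "&sortBy=submittedDate&sortOrder=descending")

feed = feedparser.parse(URL)                 # fetch and parse the Atom feed
years = Counter(entry.published[:4] for entry in feed.entries)  # "2019-07-16T..." -> "2019"
for year, count in sorted(years.items()):
    print(year, count)
```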
Robert Wiblin: Is it also going to be the case that these people with data science backgrounds have a better understanding of ML from a technical point of view and what are sensible things to say about it and what things are technically uninformed?
Helen Toner: Yeah, partially, though we are also hiring for a machine learning and AI fellow. Our data scientists certainly have a better understanding of machine learning than many of our analysis team staff, just because they are coming from a more technical background, but we would love to hire someone full time who is really deep in the foundations of machine learning research, who can advise us on that. And we do have several nonresident fellows. We have two who are working at top AI labs, and we also consult with them about whether what we’re saying makes sense, if we’re making any silly mistakes.
Robert Wiblin: I’m familiar with a lot of organizations that have started up, but I think none that came onto the scene with such prestigious organizational affiliations and people with a lot of experience and impressive credentials. Has that made it a lot easier to fundraise and hire for the organization and allow you to grow much more quickly than if you were fresh off the boat?
Helen Toner: I think so, yeah. We’ve been really, really fortunate in that. It’s been great that everyone has been so excited to see a new organization in this space, and that people have been up for taking a risk on us, because it’s certainly always a bit of a thrill ride joining a new organization while it’s still figuring out what it’s doing. It’s been really nice to have the stability and the credibility that come from, for example, being based at Georgetown, so that people feel like, “Oh yeah, maybe things are a little bit wild at this organization right now, but they’re working on it, and it’s going to settle down and turn into something great.”
Robert Wiblin: What made you decide that this was your best opportunity to have an impact personally? Were there any other opportunities that you were considering that you almost took instead?
Helen Toner: I think this was pretty obviously the right thing for me to do once the option came up. I had been interested for a few years in this intersection of AI and national security specifically. People sometimes talk about AI policy more broadly, but the national security angle always seemed to me like one that was really important and relatively concrete and tractable as well. I knew that I wanted to work in that space. I had been working at Open Philanthropy and doing some work on it, and I decided to take a break and go spend some time in China. While I was in China, kind of just in learn-and-explore mode, this opportunity came up to work in DC with exactly the communities I wanted to be learning about and helping, and to work with Jason and at Georgetown, and it was just such an obvious choice.
Robert Wiblin: What have you learned about how to get people to take you seriously in DC, not having been there until recently?
Helen Toner: Well, one thing is: get older. It’s been fun moving from the Bay Area, where I feel like everyone wants to be the child prodigy, the 21-year-old billionaire startup founder, to a city where I feel like, “Oh yeah, every year I age is actually good for my career. This is great.” Beyond that, I think there’s no magic to it. It’s knowing what you’re talking about, thinking about what the person you’re talking with is interested in and what they’re looking for, and trying to help them achieve that. Certainly having a relatively firm technical understanding of AI, and being able to give grounded explanations of why you hold the opinions you do and why maybe some things you’re hearing from other people aren’t as valid. I think it’s all relatively straightforward stuff like that.
Robert Wiblin: Speaking of age, I read an article recently pointing out that the Democratic leadership team is something like 78, 79, and 79 years old, which I think collectively makes them older than the US Constitution. Is that just politicians, or are people in the national security scene or the civil service also often working past normal retirement age?
Helen Toner: I think that’s probably mostly that specific group. In the government more generally, I think people are usually having pretty regular careers.
Robert Wiblin: You’re an Australian like me. I guess you’ve lived most of your life in Australia. Has that been an impediment to integrating yourself into the US policy world?
Helen Toner: So far, not really, but again I’ve been in DC for six months, so ask me again in 10 years, I guess. Certainly, working in security, there is much more suspicion of foreign nationals. I’m very fortunate to be Australian, which I think is possibly the least suspicious country from a US perspective. Maybe New Zealand squeaks in with being even less suspicious.
Robert Wiblin: They don’t think you’re a sneaky Australian plotting an invasion of the American mainland?
Helen Toner: Well, I don’t know when the last time was that Australia did something the US didn’t approve of. I think it was probably a long time ago. So no, I mean I’m definitely unsure how it’s going to go. I’m in the process of applying for a green card. I hope to be naturalized in the future. I think probably the fact that I spent a significant amount of time in China is likely to be more of a hurdle to, for example, a future security clearance application, but I would guess that that, compounded with originally being a foreign national, probably doesn’t help. So I don’t know. Apparently the deputy chief of staff in the White House is a New Zealander with a strong accent, which I take as a sign of encouragement.
Robert Wiblin: Yeah. Do you have to wait until you get your green card or until you’re a citizen before you can apply for a security clearance?
Helen Toner: Yes. There are some clearances that are open to people from the Five Eyes countries; that’s an intelligence alliance between the US, UK, Canada, New Zealand, and Australia. But for the types of clearances that would be relevant to me, I would need to be a US citizen.
Robert Wiblin: Is this a very competitive space, because AI is kind of a hot issue? Or is it the case that, because it’s a new area, it’s relatively easy to break into, since there aren’t a lot of incumbents you have to edge out?
Helen Toner: I think in one sense it’s hard to break into, if break into means getting a job specifically in this space, because there aren’t that many jobs specifically working on AI and security, but I think it’s easy to break into in the sense that there are very few people who are really skilled up on both sides of it. And I would expect that in the future the number of specific jobs on this are only going to increase. So I think if you’re interested in it, my advice would be to learn the basics of machine learning, do a Coursera course on the technical details of machine learning, and then try and marinate in the national security world, come to understand how that world thinks, what kinds of issues are on their mind.
Helen Toner: Learn about the relevant history, the relevant policy today. Maybe you want to specialize in an area that’s adjacent to AI, like cybersecurity, where there are more existing positions and more of an established field. And then I think if you can show that you are thoughtful on AI, as well as having that really strong security background, then I think you’ll be a really strong candidate when new roles do open up.
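If “learn the basics of machine learning” sounds abstract, a first course typically gets you to roughly this level: training and evaluating a simple classifier. Below is an illustrative sketch with scikit-learn; the dataset and model are arbitrary choices for demonstration, not a specific recommendation from the interview.

```python
# Illustrative only: the kind of exercise an introductory ML course covers.
# Fit a simple classifier on a toy dataset and measure held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```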
Careers
Robert Wiblin: Let’s talk for a couple of minutes about career options in this area, before we move on to talking about China at greater length. What kind of people is CSET looking to hire over the next couple years, and maybe do you have any vacancies at the moment or coming up in the next few months?
Helen Toner: The biggest vacancy we have, and we expect to continue having, is for research fellows. These are people who are relatively early career. They have a graduate degree, they have a couple of years of research experience, and they come to us to work on basically leading research projects. They work with our leadership team to develop a project idea, and they then execute on that over a few months, and then they start again and start a new project.
Helen Toner: Good candidates for that role will span both the technical and the policy sides of things. We love getting people from security studies backgrounds, but we’re also happy to get people from all kinds of other backgrounds: economics, law, history, international relations, whatever else. Ideally, they have an interest in working in national security policy and in learning about it, and also some amount of background knowledge on AI, even if it’s just casually acquired, plus a willingness to think technically.
Robert Wiblin: What other roles outside of CSET in security and technology studies are you excited about seeing people go into?
Helen Toner: In general, I’m excited about seeing people go into roles in this space and take with them an understanding of technology, especially of AI. There are plenty of roles in government, and roles in think tanks in DC, places like the Center for Strategic and International Studies (CSIS) or the Center for a New American Security, that are part of this conversation. I think it’s really valuable to be in this space, learning how to think through the kinds of problems the national security establishment is preoccupied with.
Robert Wiblin: Are there any academic centers other than CSET, or are you the obvious one?
Helen Toner: Yeah, I think we’re the obvious one if you’re interested in US national security policy. There’s also the Center for the Governance of AI at Oxford, which looks at somewhat bigger-picture and more global issues.
Robert Wiblin: What about the Leverhulme Centre for the Future of AI?
Helen Toner: Future of Intelligence? Yeah, they’re at Cambridge University. Again, I think they also do a fair amount of work with the UK government. If you’re a British citizen or interested in the UK angle, that would be a good place for sure.
Robert Wiblin: For American citizens, what about just going and working in relevant government agencies or the intelligence services or the military directly?
Helen Toner: Yeah, I think that’s a great idea. I believe 80,000 Hours actually has articles on the kinds of steps you might want to take before that, and what kinds of degrees it might be good to get. Going and getting that experience directly is really valuable; certainly we’re hoping that research fellows working at CSET will be able to go on and later take government positions. We notice a real difference in approach and background context between people who have worked in some kind of government agency and seen how things work on the ground in practice, and people who have not yet had that experience. I think it’s extremely valuable.
Robert Wiblin: What kinds of things should people who want to go into this field be learning about now, other than, I guess obviously, machine learning technical understanding? Are there things that are just assumed common knowledge that you would need to fit in in this scene?
Helen Toner: Yeah, I definitely think it’s valuable to learn about military history, current military operations and national security policy, cybersecurity, and things like that. There’s a lot to know there and a lot to understand, and I think having a detailed model of what the concerns and considerations and norms and common concepts are is absolutely necessary.
Robert Wiblin: Is that something that I guess people who have an academic interest in technology and security, or perhaps have primarily a focus on machine learning or AI, that they tend to lack? They haven’t paid so much attention to international relations or history of war?
Helen Toner: Yeah, that’s right. It’s very much a case of any given field having a lot of detail and a lot of depth, and so if you’re coming at it as an outsider (I think there’s a great xkcd comic about this, actually), it’s easy to think you have a good sense of things when actually you don’t. In the same way that folks from DC might want to talk about AI as one single thing, which can be frustrating for people with a more technical background, who want to say, “What are we talking about? Are we talking about deep learning? Do you want to talk about convolutional neural networks? What’s going on?”
Robert Wiblin: Advanced statistics.
Helen Toner: Right. Similarly, I think it can be very frustrating if people come in and say things like, “Oh well, the US government should … ” Or even, “The State Department should … ” or “The Department of Defense should do XYZ.” And instead, it’s much more helpful if you can come in and really understand how things get done and how decisions get made and make suggestions for very specific ways that things could go differently.
Robert Wiblin: Yeah, does that mean that potentially just getting any role in the US government and building experience of how these organizations work might be helpful, even if it’s not as directly related to technology and security as you might like?
Helen Toner: Yeah, my sense would be that that’s basically right, though you’d want to get advice from someone who’s familiar with the space to check. I’m sure that some roles come with expectations of, “Oh well, if you’ve done this, then maybe you’ll want to do that next,” and so then maybe you’re not such a good fit for some other role. So you’d want to have some awareness of what kinds of transfers are possible, but I certainly think that fundamentally that’s right.
Robert Wiblin: Are there any other kinds of skills or experience that would help someone have a productive and valuable career in AI policy and strategy that you think are worth highlighting?
Helen Toner: Yeah, maybe the main one we haven’t talked about much so far is the importance, in government and policy work, of getting buy-in from all kinds of different audiences with all kinds of different needs and goals. That means being able to understand, if you’re trying to put out some policy document, who needs to sign off on it and what considerations they’re weighing. An obvious example: if you’re working with members of Congress, they care a lot about reelection. That’s a straightforward one. But anyone you’re working with at any given agency is going to have different goals they’re trying to fulfill, and navigating that space is a complicated social problem. Being able to do that effectively is, I think, a huge difference between the people who can have an impact in government and the people who have more trouble.
Robert Wiblin: Yeah, paying attention to the specific organizational incentives of different people is something you have to do in any organization. Do you think it’s even more the case, or maybe it’s just a more complicated calculation, in government careers?
Helen Toner: I think it’s mostly that the US government is a larger bureaucracy than you’ll find just about anywhere. As you say, it’s the same types of problems that you would encounter in any large firm or large university or something like that, and it’s just that the scale is especially large.
Robert Wiblin: I guess, does that mean that you’re often interacting with people whose incentives you don’t really understand? It’s like when you go into an office at first, you don’t quite understand what different people’s goals are, who they’re accountable to. If you’re constantly interacting with new people, then it’s always a little bit opaque what the hell’s going on.
Helen Toner: Yeah, that seems right, though I don’t know. I feel the need to say that this is certainly a skill that I would like to gain, an experience I would like to gather, because I personally haven’t worked in the US government, obviously.
Robert Wiblin: Are there any kind of entry level positions here that it might be worth mentioning, or ways of meeting people or getting some relevant experience to get your foot in the door if you’re not yet qualified to take potentially the more advanced positions at CSET?
Helen Toner: I would guess the best thing to do is to enroll in grad school and look for internships. There’s a really well-established culture in DC of taking on interns over the summer, and that’s just a good way in. Often the work you’re doing is not that exciting, maybe going to events and taking notes, things like that. But it’s a great way to meet people, get established in the space, and get a sense of how things work. And if you’re serious about working in this space, you’re going to need a graduate degree of some kind anyway.
Robert Wiblin: Which graduate degrees? I guess you’ve got a master’s in public administration, security studies, or possibly even a PhD in security studies?
Helen Toner: Yeah. It depends a little bit on the role, and I think actually 80,000 Hours has some good articles on this that list more in detail. I’m actually about to start a master’s in security studies at Georgetown, so I obviously think that that’s a good program. Johns Hopkins, SAIS, has a really well-respected master of arts in international relations. Law degrees are actually very respected for policy careers. And then depending on the position that you would hope to end up in, maybe a master’s is fine or maybe you want a PhD.
Helen Toner: I guess one other route I would mention is for people with a STEM background, and especially a STEM PhD: there are established programs to take technical people and inject them into policy roles. The most well-known one is the AAAS Fellowship. There’s also TechCongress, which takes people from tech backgrounds and places them in Congressional offices as staffers. There are, I think, a couple of others along those lines. So if you’re eligible for a program like that, it’s also a really great way to dive into the space.
Robert Wiblin: I guess we might cut this section a little shorter than we otherwise would, because as you said, we’ve got several pretty lengthy and detailed articles about US AI policy careers in particular, which we’ll link to in the show notes, and which you absolutely should read if you’re interested in going into the area. We’ve also talked a little about this in some previous interviews, which I’ll pitch in the outro after the interview’s over, if you’d like to learn more.
China
Robert Wiblin: Okay, let’s turn to talking about China, your experience there, and what you managed to learn about it, if anything, in your nine months there. Did you learn very much while you were there? Does it inform your work today? I do wonder whether, living in a country for nine months, one wants to be modest about how much one can realistically understand of Chinese culture and history.
Helen Toner: Yeah, absolutely. I think it was extremely interesting and extremely valuable, and I’m really glad I did it, but I feel in no way qualified to call myself an expert on China or claim that I learned these top 10 tips for understanding Chinese AI policy.
Robert Wiblin: Number one, the food is really good.
Helen Toner: That’s definitely number one. So no, I feel like I got a better sense of things like how people think of the different companies and the different products those companies make, or what people tend to talk about and tend not to talk about. I certainly got a better sense of how censorship works, how the Great Firewall works, and so on. Again, as good a sense as you can get in nine months, which is not very good, but better than nothing, hopefully.
Robert Wiblin: Maybe something you did manage to learn a bunch about is the ex-pat community. You were in Beijing, right?
Helen Toner: That’s right.
Robert Wiblin: Presumably that’s more like 10,000 people or maybe even less than that. What did you learn about the ex-pat community?
Helen Toner: A friend of mine who is still in Beijing, actually, and has lived there for a few years, commented to me that the ex-pat community in China is a really fun mix of people who are really smart and thoughtful and thinking ahead, who recognize that China is going to be an important player in the future, and people who are just kind of odd and maybe couldn’t have made it elsewhere, and who can come to China and find something interesting to do.
Robert Wiblin: Yeah, I’ve heard that there’s this phenomenon. I’ve had some friends who’ve gone and lived in China, and there’s a phenomenon of Westerners going and living in China. Initially, I think the Chinese were kind of impressed with Westerners, because they assumed that they must be very successful to be going overseas, and then they realized that very often these are people who are coming there because their life hadn’t been going so well in the West. Maybe the reputation of ex-pats went a little bit downhill there. I’m not sure whether you’ve heard that narrative.
Helen Toner: I haven’t heard that specific story. Certainly, two decades ago, when there were far fewer foreigners in China, there were all kinds of stories of getting hired to go and stand at some expo and just be the white person for this or that company. That’s much less common now, I would guess, just because foreigners are more visible. Who knows?
Robert Wiblin: A dime a dozen.
Helen Toner: Something I actually found surprising, though it was obvious in retrospect: I’m used to the idea that when I’m in a foreign country and go to some kind of tourist location, there’s going to be lots of English signage, there are going to be lots of foreigners, and people are going to be super used to seeing all kinds of different faces and different types of people there. But because domestic tourism is so huge in China, with maybe a billion people able to move around the country and visit the cities, the tourist sites in Beijing, for example, are a real hub for people coming from the countryside.
Helen Toner: So whereas in most of Beijing people are used to seeing foreigners roaming around, in these tourist sites, firstly, there’s hardly any English signage, because the vast majority of people coming there are Chinese. But secondly, those were always the places where people would stop me and ask for my photo and would kind of point and stare, which I didn’t get at all in the rest of Beijing. So that was, I guess, a little bit of a flip from what I was expecting.
Robert Wiblin: Among the more accomplished, I guess, China watchers who are living in Beijing, what did you learn about that community?
Helen Toner: I’m not sure that many of them actually live in Beijing. But I met, I think, largely the people that I now follow on these kinds of issues; I just follow them on Twitter, and they’re spread throughout, some in China, some in the West, and for some reason some in Vietnam and Thailand. I guess maybe those are more fun places to live while still being close to East Asia. One thing I do feel I was able to learn more about during my time there was who is in that space, what kinds of opinions they have, and who I think is more or less thoughtful about things. That’s something that I’m definitely continuing to keep an eye on.
Robert Wiblin: Just on the lighter side, do you have any advice for people who want to go to China as tourists? The 80,000 Hours team actually went to China for a month back in, I think, 2015. We lived in Chengdu and worked from there. It was pretty challenging actually working using Google Drive and Docs and WhatsApp and all that, given they were kind of blocked by the Firewall, but we managed to use VPNs to make it work. I’ve got to say, Chengdu was amazing. The food’s incredible. The people were really friendly and happy to see us. And it was, to be honest, a much nicer and better organized city than practically any of the richest cities I’ve been to in America or the UK or Australia. Maybe that was a very non-typical experience.
Helen Toner: No, I’ve heard great things about Chengdu. I didn’t get to go there, but I’d love to. And certainly, Sichuan food is incredible. Tips, I mean, definitely sort out your VPN before you get there, because you can’t do it once you’ve arrived. I do think that finding someone who can show you around a little bit, who speaks Chinese, will just open up things that you might not notice if you’re trying to navigate by yourself. Compared to, say, traveling in Europe, it’s more difficult to travel there if you only speak English. Shanghai is pretty achievable, but even Beijing, and I would certainly assume Chengdu, would be pretty challenging if you didn’t speak Chinese. I know that Ben Todd, who I assume was with you, he’s a good Chinese speaker.
Robert Wiblin: Fortunately, surprisingly, two of my colleagues at the time, one of them spoke Chinese really quite well, and the other spoke it passably, well enough to be a tourist, because they had both lived in China before. We were a surprisingly China-familiar group, although I had a very hard time. I couldn’t speak to the taxi driver, I couldn’t speak to anyone most of the time.
Robert Wiblin: I noticed that over the last few weeks you’ve been tweeting a few supportive things about the protests going on in Hong Kong at the moment. Do you ever worry that saying things like that, and expressing opinions about Chinese policy, might make it harder to get a visa to go to China, or antagonize the contacts that you have within China?
Helen Toner: Yeah, it’s definitely something that I think about, like how much I should be self-censoring on these topics. It’s probably difficult to hold and express positions that both the Chinese government and the US government will think are acceptable. I think at some point you need to, as someone working in this space, think through to what extent you want to leave open the option of spending a lot of time in China, interacting a lot with Chinese experts, because if you want to do that, then you’re probably going to need to self-censor a lot more.
Helen Toner: I don’t know, I guess I don’t see that as likely to be a huge part of my career. I would love to take opportunities to do that where possible, but I mean, there’s so much that the CCP is doing that I think is really awful. And I don’t think the return to staying silent about that is big enough to justify it. So I will continue to post about things like the protests in Hong Kong, the concentration camps in Xinjiang, and so on.
Robert Wiblin: Do you think it’s actually realistic that tweeting about how you support the protestors in Hong Kong would mean that you might not be able to get a visa there? Of course, there are, in a sense, two million people in China going out and doing these protests. They’re not going to arrest them all, and they’re probably not going to kick them all out of the country either. You’re raising an eyebrow there.
Helen Toner: They might arrest some of them. I definitely think that having a history of saying anti-Chinese things probably makes my chances worse. I don’t know how much worse; the situation on this is changing pretty rapidly. Journalists, for example, are really the case where there’s a lot of scrutiny applied to your past record, trying to figure out if you’re going to come in and just say bad things about China. And certainly there have been several cases recently of journalists at major outlets not having their visas renewed for reasons like that.
Helen Toner: It’s a little harder to say, given that I wouldn’t be applying for a journalist visa. But unfortunately, on the visa application form for China, you have to check what kind of work you’re doing, and there are various different sections. And there’s one section off on the side where the options are journalist, religious worker basically, and NGO worker. Those are the three risky categories of people who might be coming in to try and change minds. And unfortunately, I believe that as an employee of Georgetown I would count as an NGO worker, though I might see if there was a different category that I fit in, since it’s not really a typical NGO.
Robert Wiblin: Another concern that we hear is the exact reverse of this: rather than being viewed as too critical of China, being viewed as too cozy with it. As you were alluding to, I’ve understood that the US might be less likely to give you a security clearance if you’ve spent too much time in China or perhaps have too many close Chinese contacts. How do you thread the needle here between having a great familiarity with China, if you want to be a specialist in the area, and actually being able to use those skills and get a security clearance, or even a position in the US government?
Helen Toner: Yeah, I definitely think it’s a shame that the existing infrastructure that the US has built in order to protect itself from counterintelligence threats and so on does mean that a lot of the people working on China day to day have not spent that much time in China, if any, and they’re certainly not traveling there for their work, because it causes a massive headache with clearances if you’re planning to travel to a country like that. But I believe that one of the biggest concerns in a clearance application process, for example, is whether you have close and ongoing contacts with nationals of that country.
Helen Toner: And so what I hope, and what I believe has been the case for others, is that it’s possible to spend time in China and just not maintain contact with people that you may have met there, to make it clear in your clearance application that although you may have spent time there to learn, you don’t have any kind of conflicts of interest or any potential for sabotage by some buddy of yours who happens to be Chinese.
Robert Wiblin: Who’s going to blackmail you or something?
Helen Toner: Right.
Robert Wiblin: Maybe I’m just being really naïve here, but isn’t it pretty strange to think that an American who’s grown up in America, or an Australian who’s grown up in Australia, is going to go to China, hang out there for a year or two, make some Chinese friends, and be so sympathetic to Chinese Communism or the Chinese system that they would side with them and become a counterintelligence agent? Maybe this has just happened from time to time, and one such person can be so damaging that they have to be very cautious. But as a naïve outsider, it seems to me like it’s not a great trade-off, trying to protect yourself from that at the cost of most of your China experts having barely spent any time in China.
Helen Toner: Yeah, I mean, it’s definitely a matter of the trade-off. I don’t think it’s crazy to expect that there will be cases of that. I think there’s a long history of foreign countries recruiting US nationals to spy for them, essentially. So I think it’s not at all crazy to be concerned about that. As you say, the issue is at what cost, and my guess is that the US is currently not hitting that trade-off at the optimal spot, but it’s very hard to say. And it’s especially hard to say without access to all of the classified information that exists on this.
Robert Wiblin: It seems that in the past, Soviet Communism really did have some ideological draw for people in the West. It’s hard for me to imagine that Americans growing up in America are going to be so attracted to Xi Jinping Thought that they’re going to be very keen to side with China. But maybe we’ll drop that point. I guess I’m not really in a position to judge.
Helen Toner: I mean, you could also be doing it for reasons other than ideological commitment, right?
Robert Wiblin: Yes. I guess sometimes I forget about that, because I’m such an ideologically-driven person.
Helen Toner: So pure.
Robert Wiblin: What do you think people get most wrong about China? I imagine there are a lot of misunderstandings that people have, both about the government’s intentions and just about what the culture is like.
Helen Toner: Yeah. I mean, I think a big one in this space is underestimating the extent to which the Chinese Communist Party has stability and control as its real primary goal. I think it’s really easy from an American perspective to reach for Cold War analogies, where the Soviet Union had this ideology that it really wanted to spread to the entire world, and it was very actively hostile to the US as an opposing ideology. It’s easy to slot China into that mold, and I don’t think it’s an especially good fit. Which is not to say that I think there’s nothing for the US to be concerned about, but I think it’s really important to recognize how much effort China is going to continue to put into that sort of population control issue.
Helen Toner: And there’s two ways you could interpret that. One could be, and I think maybe a common one is, oh, they care a lot about population control, so they’re going to develop surveillance technologies and they’re going to develop all this stuff that’s going to help them then also internationally. A different way, which actually makes more sense to me, is to think oh, they care so much about this population control stuff, and it’s going to need such a large proportion of their attention and their resources that they’re going to have less left over to think about international issues. Either one could be right, but honestly the second one seems more logical to me. I guess we’ll see.
Robert Wiblin: Again with the ideology, I suppose Soviet Communism was an expansionary ideology that had this hopeful vision of spreading itself to help most of the world by turning it Communist. I guess you’re saying China doesn’t really have that anymore. They’re not aiming to ideologically convert people outside of China; it’s not of great interest to them. And so I suppose to some extent, inasmuch as we don’t mess with their goals inside of China, there’s potentially not that much conflict between the US and China, or at least there needn’t be.
Helen Toner: I’m not sure that I would put it that simply, but I think there’s work needed to be done to figure out, for example, if the balance of naval forces between China and the US changes, what does that imply for things like Taiwan, or what does it imply for the South China Sea? I don’t think it’s immediately straightforward that they’re just going to stick right within their boundaries. They’re definitely very concerned about being able to protect their borders and maintain their territorial integrity. But yes, I do think it’s very different from the Soviet threat.
Robert Wiblin: There’s been a lot of hyperventilation over the last year or two about the Chinese government’s investments in ML, and how they’re going to try to beat the United States. Is that accurate? Is the government making a huge push to be a world leader in machine learning research?
Helen Toner: Yeah.
Robert Wiblin: Okay, cool.
Helen Toner: I think they are.
Robert Wiblin: So that’s just right.
Helen Toner: There’s a whole other discussion you could have about how likely they are to succeed and what parts will succeed and so on. But no, I think there’s no doubt that they are taking more concerted and more serious action than the US government is.
Robert Wiblin: Do they view that as a military or strategic thing? Or is that more of an economic move?
Helen Toner: Their messaging is certainly almost entirely economic. It’s hard to look at that external messaging and draw strong conclusions about their motives, but yeah, I guess this is another thing that was interesting about spending time in China: there is so little discussion of arms races or AI races there, and it’s much more about how to take advantage of this great potential boon. Again, at least in the public messaging.
Robert Wiblin: You might have been able to sense that there’s a bit of skepticism from me about some of these AI strategy issues. I do wonder how much of this idea of an AI arms race between the US and China is just kind of a fantasy in our heads, that maybe people have gotten really excited about in the US, but that’s just not actually happening in real life.
Helen Toner: Yeah. I think the way that I’ve found it more helpful to think about, which I think is actually more prevalent in DC even, is thinking of AI as one aspect of this larger US-China competition. So I definitely think that it is a real thing that China is growing in wealth and military power and is sort of taking a new place on the world stage that it hasn’t had in a long time. And I think that absolutely has implications for the US. AI is this interesting sub-part of that larger trend. I think sometimes AI can seem glamorous and exciting and can end up occupying more of the discussion than it deserves, but I think that overall framing makes a little more sense to me.
Robert Wiblin: Even looking more broadly, how much do you think there’s perhaps a bias in the United States, or just among everyone, to be scared of this new rising power? It’s a different ethnicity, a different language. It’s very easy to get spooked by China, even though it perhaps doesn’t actually pose a material threat to anything we care about all that much.
Helen Toner: Yeah, I think there’s something to that. Something I also worry about a little is there being too much attention paid to how to stop or reverse this trend, as opposed to figuring out whether that will even be possible. And if it won’t be possible to stop, if the trend is just here to stay, then the question is how to build a new international equilibrium that is still acceptable for US interests, one that includes a more powerful and richer China. You know, for the vast majority of US history, the US has not been a global hegemon with no rival superpower. That’s really just the last 30 years or so. And so I think thinking about US interests, as opposed to US supremacy, would better serve US interests, and that’s something that I would love to see become more a part of the conversation. I think it’s a little bit heretical still to try and have discussions about-
Robert Wiblin: Accommodation.
Helen Toner: Yeah. I’m not sure I love that term, but about looking realistically at what is the situation we’re going to have to deal with, and how can we make the best of that situation?
Robert Wiblin: Something I’ve been worrying about over the last year or two: there are lots of things that Trump is doing that I’m not a huge fan of, but I wonder whether switching the US-China relationship from primarily one of trade and economic ties, with a sense, to some extent, of collaboration, and perhaps a hopeful message that they’re going to work together to create a 21st century that’s good for both sides, into a more antagonistic relationship, both from a military and strategic point of view and from a trade point of view, could be one of the biggest negative long-term effects that Trump has, just by reducing the sense of common goodwill between those countries. Is that something that concerns you as well?
Helen Toner: I’m not sure. I mean, honestly, I think that change was probably inevitable. I would take issue more with the way that Trump has gone about it than with the fact that it has shifted to that kind of relationship. Specifically, I think there was a real missed opportunity to take stock, looking back on the ’80s, ’90s, and early 2000s, when there was this process of China gradually integrating into the world economy and into international political arrangements. Some things there have gone well, and some things have not gone so well.
Helen Toner: I think there could have been a real moment of taking stock, looking back, saying, “What is working, what is not working? How is China behaving fairly and as a responsible global actor, and how is it not?” And then thinking about how to put international pressure on China to come into a more reasonable position, from the perspective of other countries. And I think it’s really unfortunate that while Trump has made something like that shift, he’s made it very much from the perspective of America first, America versus China, America winning, which is not very compelling for any of our allies or partners, and potentially directly fuels China in being able to … it sort of becomes like a cheerleading contest instead of working towards a rules-based global order.
Robert Wiblin: You said earlier that you thought it would be better for the US to promote a kind of values-based message when it’s talking about China and the things that it doesn’t like about what China’s doing. How would that actually work from a strategic point of view? Can’t China just be like, “Well, you talk about democracy, you talk about civil liberties, but we just don’t care”?
Helen Toner: Yeah. I mean, I think the trade issue is actually a decent example for that, where you can talk about having equal access and talk about the ways in which US firms are restricted in China as setting up an unfair playing field, and saying, “Look, if you want to be part of this global economy, you need to treat our firms the same way that we treat your firms. You can’t just suddenly block a company from working on your territory.” Or talking about the Great Firewall, for example, from the perspective of global systems, as opposed to the perspective of the US interest. I think that that just provides a much firmer footing to make these criticisms.
Helen Toner: China’s favorite thing to do is to say, “Oh, well, we’re different. We have our own civilization, we have our own history. Our own rules apply to us.” And I think if the US’s response to that is, “Well, no, the American way is better,” it just doesn’t land at all, right? Whereas if the response is, “No, actually these are principles that should apply everywhere, and here’s why, and here’s how you’re not applying them,” I think that can be a much more compelling message.
Robert Wiblin: Do you have any views on the tit-for-tat retaliation there’s been between the US and China? The US tries to block Huawei; China pushes out American companies, gives them a hard time, imposes tariffs. Do you look at this in the newspaper in the morning and think, “Oh no, this is terrible”? Or are you just like, “This is something we can work around. It’s overblown”?
Helen Toner: I think the biggest way in which it’s harmful is that it devalues national security concerns. If the administration one day says, “Oh hey, UK, Germany, you can’t use Huawei in your networks, it’s really a security threat. We’re really concerned about this. We’re looking out for you. We’re your allies. You really need to pay attention. This is dangerous.” And then the next day says, “Oh, you know, maybe we could just make some arrangement about Huawei as part of our trade war,” like it’s no big deal, that just massively undermines our ability to make claims about security threats and have them be credible. And I think that kind of mixing of trade interests and security interests has been really damaging.
Robert Wiblin: Do you think Huawei actually is a security threat? Do you want to comment on that, or is it beyond your pay grade?
Helen Toner: I don’t know enough about it. I don’t know enough about the telecom situation. My understanding is the issue is less are they a security threat and more is there any possible alternative, given how embedded they already are in 4G systems and the fact that there isn’t really a good competitor to them who can as cost effectively deliver really high quality networks. But again, not an expert.
Robert Wiblin: We recommend becoming a China specialist as one of our priority career paths. Do you think that is a potentially very high-impact thing to do? Would you agree that it should maybe be on the short list of most interesting career paths that we have on the site?
Helen Toner: I definitely think that learning about China and becoming familiar with China is very valuable. I don’t love the framing of “China specialist”, because I think it makes it sound as though you can just kind of specialize in China and know about China and have that be your thing. So I guess I would more advocate for doing something more specific. I certainly am not a China specialist, but I think it was really valuable that I gained some understanding of the country and that I continue to try and keep track of what is going on there, and learning the language was definitely helpful in that as well. So I think I would say learning about China is super valuable. Not sure that I totally understand the idea of China specialist as a career track in itself.
Robert Wiblin: I guess with that profile we’re kind of trying to group together a whole bunch of different paths. One of the challenges with it, and with suggesting it to people, is that it can be very hard to know what the next step is. It’s not a specific thing, it’s just a very broad class of things you could learn about. Do you have any suggestions for someone who wanted to try to get leverage in their career by studying an important topic, like an emerging country such as China? What kinds of ways could they go about gaining at least some understanding, some amateur or possibly specialist understanding of China, that would be useful in their career?
Helen Toner: If you know what country you want to learn about, for example China, by far the best thing to do, I think, is to go and study there. Ideally, it usually works best to do some prep work in your home country, learning about the history and politics and learning some of the language. But then I think there is no substitute for going to the place and enrolling in a language program, or enrolling in a … best of all is if you can get your language to the point where you can actually study substantively in country, which my Chinese was not good enough to do.
Helen Toner: And then the extent to which you want to do that for a whole degree program or a really long period of time, versus doing it for a shorter period of time, I think depends on whether you want to … I shouldn’t be too rude about the term “China specialist”. There are roles which are based entirely around understanding China, but I think you probably want to be choosing an angle, whether it be Chinese politics, Chinese economy, Chinese military policy, and then specializing in that as well.
Robert Wiblin: Are there any particularly good universities to study at in China, or programs in the West where you study China, that are worth doing?
Helen Toner: I’m not sure that I have any great tips that aren’t fairly obvious. Two well-known master’s programs in China, in Beijing, are the Yenching Academy and the Schwarzman Scholars program, where you go and spend one year and earn a master’s degree. The two have slightly different reputations. Schwarzman is known for being a little bit insular, so I think if you go there you want to be really proactive about getting involved in the community outside the college. But they can both be a great launching pad for spending time in China, if you’re able to make use of the resources the program makes available to you.
Helen Toner: In terms of programs in the West, I know that Middlebury College has an intensive Chinese language program with a good reputation, where you take a language pledge and only speak Chinese for the period that you’re there. But I’m not familiar enough with Chinese studies programs to make recommendations, unfortunately, again because I’m not a China studies person.
Robert Wiblin: I’m slightly grilling you for things you maybe only have a tenuous understanding of. Do you think that even having kind of the amateur level of understanding of China that you have is potentially a boost for your career in DC or in other governments around the world?
Helen Toner: Yeah, I mean, I think the cynical answer is yes.
Robert Wiblin: Because everyone else knows even less?
Helen Toner: Yeah, essentially. It’s pretty similar to the situation with machine learning, where I have an advanced beginner level of knowledge, and that’s often just more than most other people in the room, and so it’s useful in that way.
Robert Wiblin: You talked about learning Chinese. Just quickly, how far did you manage to get in nine months of intensive Chinese study?
Helen Toner: I’d actually been teaching myself for two or three years before I went over there. When I arrived I had kind of a decent vocabulary in theory, but was very under-practiced in interacting with people in person. My urge was always to start a conversation, and then they would say something and I would want to sort of pause them for a minute and go away and think about what they’d said, and then take another minute to formulate what I was going to say, and then go back and continue the conversation. Not the best way to connect with people.
Robert Wiblin: Well, it’s just helping you to not make deep connections there.
Helen Toner: Great for the clearance. Luckily, that started to wear off after a month or two. And also, I was doing 20 hours of Chinese study per week, so I was pretty pleased with where I got by the end of the year. I was in a place where I could have most basic day-to-day conversations relatively well. I wouldn’t sound like a genius or anything, but I could convey meaning on many different topics. Professionally, it was a little bit more of a struggle. I went to a couple of conferences and was kind of able to sit in on conversations between Chinese participants, say pretty basic things, and roughly get the gist of what was going on. But certainly not give a talk or anything like that.
Robert Wiblin: Usually I’m a little bit of a skeptic of learning languages, because it seems like 90% of people who start learning a language give it up before they actually have enough ability that they can gain any value from it. Would you recommend that people start Chinese, or would you discourage them a little bit so that really only the people who are most motivated even start out on that journey?
Helen Toner: I think maybe the thing I would say is that in my experience of learning languages, by far the most value comes when you can actually get in country already having a foundation under your belt. You don’t want to go to the country and start learning there, because then you’ll be learning how to say hello, goodbye, how are you, and wasting that time, when you could really easily have done that beforehand. So if you’re interested in learning a language, think about whether there’s a time when you’re going to be able to spend at least six months, ideally longer, in the country. Ideally, if I’d been focusing on Chinese, I would have spent at least two years there, immersed in it, really cementing my skills.
Helen Toner: Because I think a lot of people learn languages only in the classroom environment, so they can repeat stock phrases or answer stock questions, but they never get to the point where they’re really conversing and really feeling at home in the language. I think you get there by spending time in country. So I guess my litmus test would be: if you want to learn this language, do you think it’s likely that in the next few years you’ll be able to spend a lot of time in country really cementing it?
Robert Wiblin: Can people get long-term tourist visas or jobs in China? If they can’t get into a grad program there, how can they find an excuse to be there and talk to people?
Helen Toner: There are Chinese language programs, so what I did was a semester-by-semester program where there are basically no admission requirements. As long as you can pay like $2,000 per semester or something like that, you can get a student visa. You can stay, I think, for as many semesters as you really want. So that’s pretty straightforward. I believe there’s some kind of 10-year visa specifically for Americans. I think it might be a business visa. I think you have to leave the country often. I believe it’s for people who are making frequent trips, so I’m not sure that would be better than just doing a student visa.
Robert Wiblin: Can you still go there as an English teacher? I heard this was a popular track in the past.
Helen Toner: Yeah, there definitely seem to be plenty of people who do that. I believe the working visa conditions have changed recently, but I’m not sure what the situation is for English teachers.
Robert Wiblin: Okay. We’re coming up on time. You recently moved to DC. I guess it was like six or 12 months ago?
Helen Toner: That’s right, about six months ago.
Robert Wiblin: I’ve heard pretty bad things about DC as a place to live. What’s its nickname, like America’s armpit or something like that? For the horrific weather that it has. It’s very muggy. How have you found living in DC? Is that something that you could recommend, at least tentatively, to people?
Helen Toner: Yeah, I’ve really enjoyed it so far. I mean, I love medium-density housing; that’s a random nerdy interest of mine. And DC is much better on that front than either San Francisco or Beijing, the last two places I lived. And I don’t know, people really complain about the weather, but after living in California where there are no storms, DC’s thunderstorms are incredible. Several times a week you’ll just get this amazing downpour with thunder and lightning. So if you’re a fan of thunderstorms, DC is your place.
Robert Wiblin: I spent a couple of years studying in Canberra, Australia’s own somewhat artificial capital city, and it has some downsides, but one thing it did have going for it was lots of really smart young people who are into politics and policy and economics and kind of shared my interests. I guess DC has that going for it as well.
Helen Toner: Yeah, definitely. And another big thing that is really noticeable is the number of free institutions. The Smithsonians are all free. You can just go in and out, and it’s really lovely to have that. And similarly free concerts and free performances, which I feel like really changes how you interact with a city if you can just kind of show up and take part in stuff without planning or paying. It’s really nice.
Robert Wiblin: People are probably constantly coming into and out of DC, like living there for a while and then leaving. Does that make the social scene a bit more in flux, such that it’s easy to break into social networks, or does it potentially make it alienating, because no one’s there long enough to really form deep friendships?
Helen Toner: Definitely, the stereotype is that everyone treats every potential opportunity to make new friends as a networking opportunity and is always asking what you do and trying to figure out what you could give them. That hasn’t been my experience. I’ve met some really lovely people and started to form some really nice friendships. So I don’t know, I think it’s a little bit what you make it.
Robert Wiblin: It’s good to hear that DC is working out. Maybe we can check back in a couple of years and see how you and CSET are developing there.
Helen Toner: Sounds great.
Robert Wiblin: My guest today has been Helen Toner. Thanks so much for coming on the podcast, Helen.
Helen Toner: Thanks, Rob.
Robert Wiblin: We have some other episodes on this topic:
31 – Prof Dafoe on defusing the political & economic risks posed by existing AI capabilities
54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms
1 – Miles Brundage on the world’s desperate need for AI strategists and policy experts
On related topics we have:
57 – Tom Kalil on how to do the most good in government
44 – Dr Paul Christiano on how we’ll hand the future off to AI, & solving the alignment problem
There’s also our article ‘The case for building expertise to work on US AI policy, and how to do it’ which we’ll link to in the show notes.
Finally, the 80,000 Hours job board currently lists 70 jobs relating to AI strategy & governance for you to browse and consider applying for. So go have at it, folks.
The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining, talk to you in a week or two.