Podcast: The world desperately needs AI strategists. Here’s how to become one.

If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from prematurely deploying such a technology in order to avoid being beaten to it by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?

Questions like this are the domain of AI policy experts.

We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.

It complements our article outlining the importance of positively shaping artificial intelligence. If you are considering a career in artificial intelligence safety, both are essential reading.

I interviewed Miles to dig deeper into his advice. We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far.

The audio, summary and full transcript are below.

To listen on your phone, just subscribe to the ‘80,000 Hours Podcast’ (RSS) wherever you listen to podcasts. That way you can listen to it sped up and get alerts about future episodes.

Want to work on AI policy? We want to help.

We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on AI safety, apply for our free coaching service.

Apply for coaching

Summary

The current problem

  • There has been large growth in the amount of technical work being done to ensure that artificial intelligence is designed in safe ways, but relatively little increase in efforts to figure out i) how to ensure organisations developing AI cooperate in a safe way, and ii) how, if at all, we want governments to respond.
  • There is a lack of research into how to incentivise the different groups developing AI to focus on safety, and avoid rushing to deploy an AI prematurely for strategic advantage. Fortunately, “there is a high concentration of computing power and talent and skill in a fairly small number of organizations”, which means only a few people need to be encouraged to work together.
  • Countries may well want to develop artificial intelligence for national security purposes, which creates the risk of premature deployment. To make that less likely “it would be beneficial for countries to have some experts in house in their governments who actually know something about AI and are able to deal with crises as they arise – if they arise – and to be able to think carefully about the impacts of AI on the labor force, for example.”
  • The Asilomar Principles provide a guide to the outcomes desired from the development of AI, but in Miles’ view “there’s somewhat of a gap between the level of abstraction of the principles and concrete steps that people can take. It’s not totally clear what any individual can do to stop an arms race, for example, but I think it’s a step in the right direction to know where you’re headed, and then figuring out exactly what to do about that is the next step.”

Career options

  • There’s likely to be substantial growth in the number of AI policy and strategy roles available in the next few years, which might reduce how competitive the roles are to get.
  • A strong option for someone with technical skills is the American Association for the Advancement of Science’s Science and Technology Policy Fellowship program. The AAAS “essentially puts people in rotations in organizations like Congress and the executive branch, where they can draw on their technical experience, and someone with a background in AI would probably end up in an AI-relevant organization.”
  • Miles notes “if one casts a fairly large net in terms of what AI issues one wants to work on, there’s a fairly large range of organizations. It’s not just the Future of Humanity Institute. There’s also the Center for the Study of Existential Risk, the Leverhulme Center for the Future of Intelligence, the Tech Policy Lab at the University of Washington, and various faculty programs. Various academic programs around the country, in the US, and around the world are looking to hire people, for example, as postdocs or a faculty member in areas related to AI and policy if you’re someone with an academic background. There are probably a fair number of policy positions at tech companies that aren’t necessarily specific to AI, but that would lead to getting valuable experience in the broad area of tech policy.”
  • Researchers at Google Brain, OpenAI and DeepMind are thinking about these problems and trying to develop solutions. For example, people at “DeepMind in particular have been very influential in calling for more AI safety work and they’ve been publishing fairly early on this matter relative to other organizations.” The benefit of being in organisations actively developing AI is to have more direct exposure to what’s happening on the ground and problems that are actually appearing with current technology. However, getting jobs in these organisations is competitive.
  • A particularly promising vacancy to fill is “the policy researcher position at DeepMind. They’re currently hiring for someone to specifically look at AI policy. I think the case is pretty clear that that would be a good place to work and a good role to have if you’re interested in influencing the conversation around AI policy and being exposed to the latest developments. If you’d be a good candidate for that and you’re interested in that, you should definitely apply.”
  • Working in government “you can play an important role in framing these issues and in convening discussions.” For example “the future of artificial intelligence report put out by the White House Office of Science and Technology Policy last year was the result of four workshops that brought together experts in a wide variety of fields.” There is also a new “AI Caucus to organize the discussion of members of Congress on issues related to AI with particular focus on issues related to the future of work.”

How to prepare

  • If you can’t get the position you most want right away, try “working in adjacent areas … This is not the sort of area where people have been working on AI policy for decades and it’s hard to break in for that reason. It’s more just that it’s still a field that’s growing. I don’t think that that growth will stop.”
  • When studying, “if you have a policy background, you might want to focus on learning more about AI, and if you have a technical background, you might want to focus more on learning about policy.”
  • While many roles require technical skills, “there are a lot of roles that can be played by people who are just good at distilling the literature on a particular topic. For example, if you’re interested in understanding the role that AI could play in authoritarian states, then you don’t necessarily have to have any technical background to read up on the literature around surveillance and coercion and things like that. I think there’s a lot of room for good synthesizers and people who are curious and interested to dip in areas they’re unfamiliar with. … I think it’s a pretty broad range of possible skill sets that would be useful.”
  • If you want to network, “all of the AI conferences are open to anyone who is capable of paying for the trip and the ticket. Some examples of big AI conferences are NIPS, ICML, IJCAI, AAAI, and so forth. NIPS is probably the biggest machine learning conference right now with an emphasis on deep learning.”
  • Working in the military, intelligence services or foreign policy “could be very useful. Looking at arms control and foreign policy, both from an academic perspective and from a practitioner perspective, I think, is very valuable, and it’s something that’s not our strong suit in the AI safety and policy community right now.” However, “it would depend on the particular role, so probably a role higher up in the chain and more on the policy and strategy sides of things, as opposed to the more operational or logistical or just execution side of things would be more likely to give you an opportunity to affect those sorts of things.”
  • Miles concludes by noting that “AI safety was in a very nebulous stage of development a few years ago, and it took the work of Nick Bostrom in Superintelligence and Stuart Russell in giving a lot of talks and writing op-eds to call more attention to it and give it more legitimacy, and then subsequent work was done to refine the issue and develop research agendas by people, including the authors of the “Concrete Problems in AI Safety” paper. Now we have a lot of postdocs and graduate students working specifically on this.
    We have people in industry specifically working on well defined problems in this space, and we have the opportunity to make a similar transition in AI policy over the next few years. We’re [moving] from fairly high level desiderata and problem framings to specific proposals and formal models and good white papers and so forth over the next few years. It’s an area that would benefit from a lot of people’s expertise. I would definitely encourage people who think they might have some relevant expertise to seriously consider it.”

Full transcript

Robert Wiblin: Hi, I’m Rob, Director of Research at 80,000 Hours. Today I’m speaking with Miles Brundage, research fellow at the University of Oxford’s Future of Humanity Institute. He studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do.

Miles’s research has been supported by the National Science Foundation, the Bipartisan Policy Center, and the Future of Life Institute. Thanks for making time to talk to us today, Miles.

Miles Brundage: [00:00:30] Thanks for having me.

Robert Wiblin: First up, maybe just tell us a bit about your background and what questions you’re looking into at the moment.

Miles Brundage: Sure. I used to work on energy policy when I was living in Washington DC, and I’ve gradually moved into AI policy over the past few years, because it seems like it is more of a neglected area and could have a very large impact. I recently started at the Future of Humanity Institute and I’m also concurrently finishing up my PhD in human and social dimensions of science [00:01:00] and technology at Arizona State University. In both capacities, I am interested in what sorts of methods would be useful for thinking about AI in a rigorous way, and particularly the uncertainty of possible futures surrounding AI. I have a few more specific interests such as openness in artificial intelligence and the risks related to bad actors.

Robert Wiblin: Are you trying to predict what’s gonna happen and when and what sort of effects it will have?

Miles Brundage: [00:01:30] Not necessarily predict per se, but at least understand what’s plausible. I think that it’s very difficult to predict with any high confidence what’s gonna happen and when, but understanding, for example, what sorts of actions people could carry out with AI systems today or in the foreseeable future in different domains like cybersecurity and information operations, production of fake news automatically [00:02:00] and autonomous drones and so forth. I think understanding the security landscape of those sorts of things doesn’t necessarily require a very detailed forecast of when exactly something will occur so much as understanding what’s technologically possible given near term trends. Then you don’t necessarily have to say this is when this particular accident or malicious behavior will occur, but just that these are the sorts of risks that we need to be prepared for.

Robert Wiblin: Does that bring you into contact with [00:02:30] organizations that are developing AI, like Google or OpenAI?

Miles Brundage: Yeah, absolutely. Those organizations have an interest in making sure that AI is as beneficial as possible, and they’re keenly aware of the fact that they can be misused and that there might be accident risks associated with them. For example, I held a workshop fairly recently on the connection between bad actors and AI and what sorts of bad actors we might be concerned [00:03:00] about in this space, and we had some participation from some of those groups.

Robert Wiblin: What kinds of bad actors are you thinking of?

Miles Brundage: You can look at it from a number of different perspectives. Depending on the domain that you’re interested in, you might end up with different sorts of actors. In the case of physical harm associated with AI, you might look at autonomous weapons and the weaponization of consumer drones, and in that case it’s something that would be [00:03:30] particularly appealing to state actors as well as terrorists, and we’re already seeing the weaponization of consumer drones, though without much autonomy if any, in the case of ISIS overseas. There are cases like that where we can foresee that particular groups when given more advanced capabilities such as higher baseline levels of autonomy in consumer drones, would be able to and would like to carry out damage on a higher [00:04:00] scale.

In other cases, it’s less clear. For example, there are concerns around mass surveillance, and it’s often not totally clear whether one should be more concerned about states or corporations, depending on what sorts of risk you’re concerned about. For example, corporations currently have a lot of data on people, and there have been a lot of concerns raised about the use of AI to make decisions that have critical impacts on [00:04:30] people’s lives, such as denying loans and court decisions and so forth that are being informed by often not very transparent algorithms. Those are some concerns, but there’s also a potentially different class of concerns around authoritarian states using AI for oppression.

I think there’s a range of risks, but generally speaking there’s going to be a steady increase in [00:05:00] the floor of the skill level as these AI technologies diffuse. That is, there will be more and more capabilities available to people at the bottom of the scale, that is individuals as well as people with more access to computing power, money, and data at the higher end. It’s not totally clear how to think about that because on the one hand you might expect that the biggest actors are the most concerning because they have the most skill and resources, but on the other hand they also [00:05:30] are sometimes, in the case of democratic governments, held accountable to citizens, and in the case of corporations held accountable to stakeholders. You might also think that you would be more concerned about individual rogue actors. That’s an issue I’m still trying to think through in the context of the report that I’m writing based on that workshop.

Robert Wiblin: People often distinguish between the short term risks from AI, like self-driving cars not working properly or algorithmic [00:06:00] bias, and then the longer term concerns of what we’re gonna do once artificial intelligence is as smart as humans or potentially much, much more intelligent. Which one of those do you spend more time working on?

Miles Brundage: I try to focus on issues that span different time horizons, so I think that AI and security is something that we’re already seeing some early instances of today. It’s already being used for detection of cyber threats, for example, and the [00:06:30] production of vulnerabilities in software. There’s a lot of research going on, an example being the recent DARPA Cyber Grand Challenge, where there’s automated hacking and automated defense. There’s already stuff happening on the front of AI and security as well as the economics of AI and issues surrounding privacy, but it’s not totally clear whether the issues in the future will just be straightforward extensions of [00:07:00] those or whether there will be qualitatively different risks. I think it’s important to think about what’s happening today and imagine how it might progressively develop into a more extreme future, as well as to think about possible discontinuities when AI progresses more quickly.

Robert Wiblin: I should say at this point that we have a profile up on our website about the potential upsides and downsides of artificial intelligence, where we go through all of the basics. [00:07:30] Miles, you have a forthcoming guide to work on AI policy and strategy that we’re gonna link to. Here, we’re gonna try to go beyond that. If you find yourself a little bit confused, then potentially just go on to read one of those documents first.

Which policy or strategy questions do you see as most important for us to answer in order to ensure that the development of artificial intelligence goes well rather than in a bad way?

Miles Brundage: I think there are a lot of questions, and some of the ones I was talking about earlier surrounding security [00:08:00] and preventing the weaponization of AI by dangerous actors is one. There are also other issues that have been widely discussed in the media and by academics, such as accountability of algorithms, transparency, the economic impacts of AI and so forth, but one that I think is particularly important, particularly over the long term, this is a case where there might be a need [00:08:30] to think about possible discontinuities, is coordination surrounding AI safety. There’s been a lot of attention called to AI safety in the past few years and there’s starting to be a lot of concrete and successful research on the problem of avoiding certain AI safety failure modes, like wireheading and value misalignment and so forth, but there’s been less attention paid to how do you actually incentivize people to act on the best practices and the good theoretical frameworks [00:09:00] that are developed, assuming that they will be developed.

That’s a potentially big problem if there’s a competitive situation between companies or between countries or both and there’s an incentive to skimp on those safety measures. There’s been a little bit of work, for example, by Armstrong et al at the Future of Humanity Institute looking at arms race type scenarios involving AI where there would be an incentive to skimp because you’re concerned about losing the lead in an [00:09:30] AI race. We don’t really know what the extent will be, but it might turn out that there are significant trade offs between safety and performance of AI systems. One might be tempted to, in order to gain an advantage whether economic or military or intelligence-wise, ramp up the capabilities of an AI system by adding more hardware, adding more data, adding more sensors and effectors and so forth, but that might actually be dangerous if you don’t understand the full [00:10:00] behavioral envelope of the system, so to speak. And how to constrain human actors from doing that is a very difficult problem.

I, along with others at the Future of Humanity Institute and other different organizations have started thinking about what is an incentive compatible mechanism to ensure that people do the right thing in that case. That is to say, we don’t want to ask people to do something that’s totally against their interests if [00:10:30] they actually do have an interest in developing these systems and protecting themselves, militarily or otherwise, but we also want to ensure that they’re constrained in some sort of way. One example would be, this is not necessarily a fully fleshed out example, but just to illustrate the sort of thing I’m talking about, would be some sort of arrangement between AI developers such that in order to gain access to the latest breakthroughs or the latest computing power, you would need to submit to some sort of [00:11:00] safety monitoring and adhere to certain best practices.

That would create an incentive if you want to be on the cutting edge to participate in those safety protocols. Again, that’s not a fully worked out example, but that’s an example of the class of things that I would like to see more of. Developing specific proposals for how to ensure that the right thing that’s being developed on the technical front actually gets implemented.

Robert Wiblin: [00:11:30] Do you look much at, say, what governments should be doing in this area? Whether there’s regulations that we should be putting in place or at least preparing to put in place in future?

Miles Brundage: Yeah. I think that that ties in with what I was saying a little bit, in that one might envision that as AI capabilities develop, that it’ll be increasingly seen as a matter of national security that a country be on the leading edge. I think it’s not super clear how [00:12:00] to navigate that yet. I think there’s some low hanging fruit. Clearly, in my opinion, it would be beneficial for countries to have some experts in house in their governments who actually know something about AI and are able to deal with crises as they arise, if they arise, and to be able to think carefully about the impacts of AI on the labor force, for example.

I think there’s some low hanging fruit and some policies that have been developed [00:12:30] for dealing with some of the near term issues, but for the longer term issues, it’s not super clear that we want to rush into a situation where governments are leading this, if that would turn out to only accentuate the arms race dynamics that we should be trying to avoid if AI is seen even more as a national security issue in an unhelpful way. I think it’s incumbent on those who are thinking about the long term policy issues around AI to develop a more positive [00:13:00] proposal that involves not just the US government getting involved for the purpose of accelerating American AI capabilities, but a more collaborative approach.

Robert Wiblin: I suppose with previous dangerous technologies like nuclear weapons or chemical or biological weapons, there’s been agreements to try to slow down their development and deployment. Is that something that could potentially happen here at the international level?

Miles Brundage: I think it’s possible [00:13:30] to imagine some sort of slowing down among a small number of actors, and we’re fortunate that there is a high concentration of computing power and talent and skill in a fairly small number of organizations. There’s pretty much no chance of some random person out competing the top organizations in AI in a surprising way. It’s possible that with some of the top companies and [00:14:00] nonprofits as well as countries, they could coordinate in some sort of way. Not to postpone AI across society for a long period of time, but at least to be cautious in the later stages of development and to allow time for mutual vetting of safety procedures and things like that. I think it’s possible to imagine some sort of coordination, but the technical and the political factors interact to a large extent.

To the extent that we actually have good safety measures that [00:14:30] are efficient and that don’t introduce a lot of overhead, computational or otherwise, into these systems, then we’ll be in a better place to actually get people to coordinate because it won’t be imposing a lot of costs on them. Likewise, to the extent that we’re able to coordinate better, we’ll be in a better position to actually get people to implement those mechanisms if they do impose some performance penalty.

Robert Wiblin: Can you think of any major bits of progress we’ve made on AI strategy [00:15:00] questions over the last couple of years?

Miles Brundage: The example I mentioned of the paper by Armstrong et al called Racing to the Precipice is one example, and it presented sort of a stark version of the arms race scenario, that I think there are reasons to be more optimistic than are presented in that paper because they look at only a certain set of assumptions, as with any model. More generally, I think there’s been progress in [00:15:30] the development of principles and criteria for good policies in recent years. For example, with the Asilomar conference and the set of Asilomar AI principles, I think that was a good step towards developing some shared understanding around the need to avoid arms races and concerns over AI weaponization and so forth.
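The race-to-the-bottom dynamic Miles describes here can be made concrete with a toy game-theory sketch. To be clear, this is an invented illustration, not the actual model from Racing to the Precipice: the payoff values, the discrete grid of safety levels, and the assumption that capability is simply one minus safety investment are all made-up simplifications, chosen only to show the qualitative effect of adding more competitors to a race.

```python
PRIZE = 1.0      # toy value of winning the race (assumed)
DISASTER = -2.0  # toy cost to every team if an unsafe winner causes a catastrophe (assumed)

def expected_payoffs(safety_levels):
    """Expected payoff for each team, given every team's safety investment in [0, 1].

    Toy assumptions: capability = 1 - safety, the most capable team(s) win and
    split the prize, and the winner's deployment causes a disaster (hurting
    everyone) with probability 1 - (winner's safety).
    """
    capability = [1 - s for s in safety_levels]
    best = max(capability)
    winners = [i for i, c in enumerate(capability) if c == best]
    p_disaster = 1 - safety_levels[winners[0]]
    payoffs = []
    for i in range(len(safety_levels)):
        win_share = PRIZE / len(winners) if i in winners else 0.0
        payoffs.append((1 - p_disaster) * win_share + p_disaster * DISASTER)
    return payoffs

def best_symmetric_safety(n_teams, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return the highest symmetric safety level no single team wants to deviate from."""
    for s in sorted(grid, reverse=True):  # prefer the safest stable profile
        base = expected_payoffs([s] * n_teams)[0]
        # stable if no unilateral deviation by one team improves its payoff
        stable = all(
            expected_payoffs([d] + [s] * (n_teams - 1))[0] <= base + 1e-9
            for d in grid
        )
        if stable:
            return s
    return None
```

With these particular numbers, a two-team race can sustain full safety investment as a stable profile (`best_symmetric_safety(2)` returns `1.0`), but a ten-team race cannot (`best_symmetric_safety(10)` returns `0.75`): once the prize is split ten ways, shaving safety to win outright becomes tempting. That mirrors the paper's qualitative finding that more competitors tend to erode safety, though the real model is richer, incorporating uncertainty about capabilities and enmity between teams.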

I still think that there’s a lot of room for progress to be made. There was a paper [00:16:00] put out by the Future of Humanity Institute by Nick Bostrom, Carrick Flynn, and Allan Dafoe recently that looks at policy desiderata for the development of machine superintelligence. There’s a lot of good material there, but I think they would be the first to admit that we’re still at the high level principles stage of developing these policies. We have a better understanding of what we want to avoid than we did a few years ago, and what a good policy proposal [00:16:30] would look like, but we don’t yet have anything super actionable.

I think the situation is a bit analogous to where AI safety was a few years ago, where there was starting to be an articulation of what the problem was with the book Superintelligence and various other publications. People were starting to take the problem seriously and started to have a vocabulary for what a solution would look like with value alignment and [00:17:00] other terms being coined, but there wasn’t yet any concrete research agenda. Subsequently, there have been a lot of concrete research agendas, as well as technical progress on some parts of those research agendas with work by OpenAI and DeepMind and others leading the way. I think we’re potentially in a similarly exciting phase on the AI policy front where we have a decent understanding of what the problem is. [00:17:30] With the example of avoiding arms races being forefront in my mind, at least, but we don’t have clear models and case studies that we can point to as ways forward. I think that’s the next step, is moving into more concrete proposals and trying to balance some of the trade offs that have been identified in recent papers.

Robert Wiblin: It seems like a lot of the latest thinking in this area isn’t [00:18:00] written up in public yet, partly because people don’t want to publicize their views at this point or just because it takes a long time to get papers published. What do you think’s the best way for someone who’s interested in working in the area to get up to speed on the issues?

Miles Brundage: I think, again, it’s somewhat analogous to the situation with AI safety a few years ago. I think the book Superintelligence was a big step forward on that front in terms of having a single reference to point people to. [00:18:30] It’s less clear if there’s one single reference to point people to on AI policy issues, though I’m hoping that the career guide that I’m working on will be somewhat useful in that regard. Another resource that comes to mind is a syllabus for a class on the global politics of AI developed by Allan Dafoe at Yale and the Future of Humanity Institute. That’s a very detailed list of resources. I’m sure there’ll be a link [00:19:00] provided for this interview somewhere.

Robert Wiblin: Yeah.

Miles Brundage: That’s another set of resources. It’s essentially a long list of both formal academic publications as well as things in the more gray literature, such as blog posts, which is, I think, the sort of thing that you’re referring to as things that aren’t necessarily written up academically, but are somewhat accessible. Even there, there’s still some things like [00:19:30] Google Docs that are used internally and so forth. If, based on reading those sorts of things, this is clearly something you’re interested in, then the obvious next step is just to get in touch with people who are working on these issues and indicate what your interests are. There might be things that aren’t available online that they can point you to.

Robert Wiblin: You mentioned the Asilomar statement of principles. Do you just want to describe that?

Miles Brundage: Sure. Two [00:20:00] years ago, there was a conference on beneficial AI held in Puerto Rico. That led to an open letter on AI, and subsequently some investment by Elon Musk in the Future of Life Institute, which supported a lot of grants in this space. I think that that led to a lot of attention to the issue and there were thousands of people, including a lot of AI scientists, who signed onto that letter, and then subsequently [00:20:30] the Asilomar principles, which were developed at a conference two years later, this year at Asilomar in California, developed a more specific set of proposals for what sorts of things AI scientists should be thinking about. Not just the fact that there should be more research, but also things like capability caution, so being attentive to the fact that we don’t know for sure what the upper limits of AI capabilities [00:21:00] will ultimately be, and the things I mentioned about avoiding arms races and being concerned about AI weaponization more broadly.

I think that’s a good example of developing a consensus view on what we want to see and what we don’t want to see. We want to see AI benefiting society as a whole, and we don’t want to see it leading to the accruing of benefits to a small number of people or to large scale war [00:21:30] or anything like that. I think it’s encouraging to see not just that, but also the development of other sets of principles through the IEEE and their effort on ethically aligned AI design.

Robert Wiblin: I read those principles and they seem quite strong on paper. Do you think people are likely to follow through on them?

Miles Brundage: I think there’s somewhat of a gap between the level of abstraction [00:22:00] of the principles and concrete steps that people can take. For example, it’s not totally clear what any individual can do to stop an arms race, for example, but I think it’s a step in the right direction to know where you’re headed, and then figuring out exactly what to do about that is the next step.

Robert Wiblin: In your guide, you suggest that people could potentially work on AI strategy and policy questions at places like Google DeepMind [00:22:30] or OpenAI, where artificial intelligence is actually being developed. What concretely could you see people doing at places like that if they took a job there?

Miles Brundage: To some extent, people at those organizations are doing something fairly similar to what I and my colleagues are doing in academia, which is thinking about what the problems are and trying to develop solutions. I think the benefit of being on that side of things is to have [00:23:00] more direct exposure to what’s happening on the ground, so to speak, in AI development, and I think that can be useful for developing a set of what sorts of problems are actually cropping up as a result of the development of capabilities that actually exist as opposed to just ones in the future.

I think it’s also notable that people at these organizations have been very active in the efforts that I’ve mentioned, such as raising attention to AI safety, which was an initiative that [00:23:30] involved people at Google Brain and Open AI and the concrete problems in the AI safety paper, and people at DeepMind have been very influential in calling for more AI safety work and they’ve been publishing fairly early on this matter relative to other organizations. I think there’s a lot of reason to think that these organizations are playing a positive role and I think it would be a good place to be involved in thought leadership on [00:24:00] these topics, as well as doing direct research.

Robert Wiblin: How hard is it to get a job at a place like that? I would imagine they only have a few people working on AI strategy or policy. Are these extremely competitive roles?

Miles Brundage: I would say they’re pretty competitive, and I think that’s a general phenomenon in anything related to AI these days, whether on the policy side or the technical side. I think that this is a growth area for [00:24:30] sure. If one develops expertise in this topic and has something to contribute, then I think there will ultimately be opportunities available to you in the next few years. Again, I want to draw an analogy to AI safety, where there was a lot of concern about whether there would be jobs in AI safety three years or so ago, and I think that deterred some grad students from working in this space. Now the situation seems to look a bit better, in that there’s a fair number of advertisements for [00:25:00] postdocs and people being hired at top labs to work on these issues. I think we’ll probably see something pretty similar – not necessarily it becoming uncompetitive, but a little bit of slackening over time as there’s a need to build larger teams to work on these topics.

Robert Wiblin: Another path you’ve talked about is going into politics or policy roles, for example as a Congressional staffer [00:25:30] or perhaps at a think tank like the Brookings Institution or the National Science Foundation, or potentially just going into party politics in general. What useful things do you envisage people could do there?

Miles Brundage: I think in the same way that working at an AI lab would give you a better sense of what’s practical on the technical front, working for Congress or Parliament or a political party would give you a better sense of what’s practical on the political front. In addition to [00:26:00] developing a better intuition for the system that you’re dealing with and what the constraints are, I think it’s also potentially a good way to influence what’s actually done by those bodies. For example, in the US Congress, an AI caucus was recently announced to organize the discussion of members of Congress on issues related to AI, with particular focus on issues related to the [00:26:30] future of work. I think over time we’ll see more discussion of the longer term sorts of issues that we’ve been talking about in those fora, and it would be useful to have people who are concerned about making sure that the right thing is ultimately done working in those places, able to act on the best current understanding developed by researchers and practitioners.

Robert Wiblin: I think a big risk of going into politics or policy [00:27:00] in general would be that your career progresses, but you’re not specifically in one of the places that ends up having a say in how these things go. You just end up working in a Congressional committee that turns out not to be that relevant. Are there any roles that you can take early on, or what can you do to position yourself well so that you don’t just get sidelined in the end?

Miles Brundage: I think if you intend to work in politics, then you have to be somewhat opportunistic about [00:27:30] jobs and the ebb and flow of political opportunities. For example, if one Congressional committee is declining in its relevance to AI, then one should consider jumping ship to somewhere more relevant. I don’t think that taking one particular job will lock you in forever, but building up some political capital and some human capital in that area could be useful, [00:28:00] even if you ultimately want to switch to a different organization.

I would push back a little bit on the idea of not being relevant just because you’re not at an organization where people are developing the latest AI. As I mentioned before, I think that’s a very exciting opportunity, and along with academia, it’s a great place to work on these issues. But governments still play an important role in framing these issues [00:28:30] and in convening discussions. An example of this is the Preparing for the Future of Artificial Intelligence report put out by the White House Office of Science and Technology Policy last year, which was the result of four workshops that brought together experts in a wide variety of areas: law, economics, AI technology, and safety. I think you still have an opportunity to [00:29:00] frame the discussion and move the ball forward, even if you’re not working in the latest labs.

Robert Wiblin: What would be some of the best places to apply to in that area?

Miles Brundage: It’s a bit tricky to answer that in general. It depends on factors like what your citizenship is and what particular policy issues you’re interested in, but some fairly robust recommendations would be to [00:29:30] look … For people with technical backgrounds, one area to look at would be AAAS fellowships. The American Association for the Advancement of Science has a fellowship program where they essentially place people in rotations in organizations like Congress and the executive branch, where they can draw on their technical [00:30:00] experience, and someone with a background in AI would probably end up in an AI-relevant organization. I’ve heard a lot of good things about that being a good way to develop career capital, experience, and an understanding of how the political system works, but it’s not necessarily going to put you in the exact right location for solving AI policy problems in the next few months.

I think it’s really hard to say where that would be or even if there is such a place because I think we’re still, [00:30:30] despite all the progress that’s being made, in a relatively early stage. There isn’t even yet a nomination for the Director of the Office of Science and Technology Policy in the White House. Otherwise, I would say that that is a good place to go. Likewise, we don’t really know what the agenda of the Trump Administration is on the AI front, and there’s a lot of discussion, [00:31:00] but not a lot of institution building in governments at the moment. It’s hard for me to answer that in general, but I think that getting experience is a pretty robust thing to do.

Robert Wiblin: A situation that I encounter reasonably often is I meet someone who’s really smart, who is very interested in this topic, who might be able to make a great contribution. At the moment, there aren’t that many groups that are hiring for these roles. [00:31:30] You’re at the Future of Humanity Institute, and I guess there’s also the Future of Life Institute, but there aren’t that many places that someone can potentially apply. What advice should I give someone in that situation? Should they continue studying, perhaps, or building expertise so that they’re in a better position to apply in the future when the number of positions grows?

Miles Brundage: Again, it’s hard to answer that in general. It depends on what background the person has and what sorts [00:32:00] of jobs they’re interested in. I think if one casts a fairly large net in terms of what AI issues one wants to work on, there’s a fairly large range of organizations. It’s not just FHI. There’s also the Center for the Study of Existential Risk, the Leverhulme Centre for the Future of Intelligence, the Tech Policy Lab at the University of Washington, and various [00:32:30] academic programs in the US and around the world that are looking to hire people – for example, as postdocs or faculty members in areas related to AI and policy, if you’re someone with an academic background. There are probably also a fair number of policy positions at tech companies that aren’t necessarily specific to AI, but that would lead to valuable experience in the broad area of tech policy.

To [00:33:00] generalize a bit, I would recommend casting a fairly large net and working in adjacent areas if you can’t immediately work in the exact right place. This is not the sort of area where people have been working on AI policy for decades and it’s hard to break in for that reason. It’s more just that it’s still a field that’s growing. I don’t think that that growth will stop. I think there will be more opportunities in the future, but if you can’t find the right opportunity [00:33:30] now, then talk to people in the field about what sorts of opportunities are on the horizon and try and work in adjacent fields. That might be one thing to consider.

Robert Wiblin: What do you think would be the biggest challenges for someone starting out, trying to start a career in this area?

Miles Brundage: Depending on one’s background, I think the main challenge would be boning up on areas of weakness. For example, if you have a policy background, you might want to focus on [00:34:00] learning more about AI, and if you have a technical background, you might want to focus more on learning about policy. I think one of the challenges is what you mentioned before, about there not being a super clearly accessible literature on the topic, and I think there’s some effort going into addressing that with the career guide and the bibliography that I mentioned.

I’m not really sure that there’s any specific [00:34:30] set of pitfalls that I would recommend that people avoid besides not neglecting the areas in which they’re weak. Don’t try and rest on your laurels in, say, technical areas or policy areas, because it’s a fairly interdisciplinary area, and you would need to think about what sorts of disciplinary perspectives would be beneficial to solving the problem you’re interested in, and not just what is your background, though of course you want to draw on your strengths.

Robert Wiblin: [00:35:00] Because it’s a field that is in its fairly early stages and is growing quickly, it’d be a good fit for someone who was really able to set their own direction and meet people and potentially attract funding to do what they want to do, someone who’s gonna create the opportunities that they want to go into.

Miles Brundage: Yeah, it’s an area that’s evolving and I think a lot of the organizations that exist today and I have mentioned as [00:35:30] good places to work and so forth didn’t exist five years ago.

Robert Wiblin: Some of them didn’t exist one year ago.

Miles Brundage: Yeah, some of them didn’t exist one year ago. It’s an area in flux, and I think as with politics, you should be opportunistic about thinking about what’s the right place for you to be in. Yeah, I think it’s a very exciting area to work in and I think that a lot of people will find that it’s an area that would benefit from [00:36:00] their background.

Robert Wiblin: If you had someone who could go into any organization or role and be a good fit, what kind of person would you think would be most valuable, out of everything?

Miles Brundage: Most valuable person. I don’t really have an answer for that.

Robert Wiblin: You don’t have an answer for that.

Miles Brundage: Yeah.

Robert Wiblin: Is there any, say, vacancy out there that you just wish that someone could fill it?

Miles Brundage: One vacancy that comes to mind is for the policy researcher [00:36:30] position at DeepMind. They’re currently hiring for someone to specifically look at AI policy. I think the case is pretty clear that that would be a good place to work and a good role to have if you’re interested in influencing the conversation around AI policy and being exposed to the latest developments. Yeah, if you’d be a good candidate for that and you’re interested in that, you should definitely apply.

Robert Wiblin: Sometimes I find people who want to work on AI and policy because they don’t see themselves as [00:37:00] quite cut out for technical research, perhaps because their math skills aren’t quite good enough. Is that a sensible way to go, or is the AI strategy work in some ways harder, just because it’s less clear and less precise than technical work?

Miles Brundage: I wouldn’t say it’s harder or easier. It’s just different. Even on the technical side, there’s a lot of fuzziness. AI safety wasn’t very crisply defined a few years ago, and it’s starting to move more in the direction [00:37:30] of technical rigor. I think the same thing will happen to AI policy to some extent, but there will also always be some element of relationship building and qualitative analysis and so forth. That certainly is not exactly the same as solving technical problems. Yes, I do think it would make sense to choose different areas to work on based on your background, but I wouldn’t say that working [00:38:00] on the safety problem is totally technical and the policy problem is totally non-technical, though certainly there’s some correlation there.

Robert Wiblin: What kind of level of technical understanding do you have to get to in order to be able to make a useful contribution?

Miles Brundage: I’m not sure that you need any specific level in order to make some contribution. For example, there are a lot of roles that can be played by people who are just good at distilling the literature on a particular topic. For example, if you’re interested [00:38:30] in understanding the role that AI could play in authoritarian states, then you don’t necessarily have to have any technical background to read up on the literature around surveillance and coercion and things like that. I think there’s a lot of room for good synthesizers and people who are curious and willing to dip into areas they’re unfamiliar with.

I think there’s also a fairly high ceiling for what would be useful. I think it would also be [00:39:00] good to have people with very strong technical backgrounds thinking about the nitty gritty game theory issues related to arms races and so forth. I think it’s a pretty broad range of possible skill sets that would be useful.
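The arms-race modelling Miles alludes to can be made concrete with a toy example. As a purely illustrative sketch – the two-lab framing and all payoff numbers below are hypothetical, not anything from the interview – a development race is sometimes modelled as a prisoner’s-dilemma-style game, where each lab chooses between investing in safety and rushing to deploy:

```python
# A toy two-lab "race to deploy" game with made-up payoffs.
# Each lab chooses "safe" (invest in safety) or "rush" (deploy early).
# Mutual safety is best collectively, but rushing unilaterally is tempting:
# the classic prisoner's dilemma structure.

PAYOFFS = {
    # (lab_a_choice, lab_b_choice): (payoff_a, payoff_b)
    ("safe", "safe"): (3, 3),   # both take precautions: good outcome
    ("safe", "rush"): (0, 4),   # B grabs a lead while A pays safety costs
    ("rush", "safe"): (4, 0),
    ("rush", "rush"): (1, 1),   # race dynamics: worse for everyone
}

OPTIONS = ["safe", "rush"]

def best_response(other_choice, player):
    """The choice maximising this player's payoff, holding the other fixed."""
    def my_payoff(choice):
        key = (choice, other_choice) if player == 0 else (other_choice, choice)
        return PAYOFFS[key][player]
    return max(OPTIONS, key=my_payoff)

def pure_nash_equilibria():
    """Strategy profiles where neither lab gains by deviating unilaterally."""
    return [
        (a, b)
        for a in OPTIONS
        for b in OPTIONS
        if best_response(b, 0) == a and best_response(a, 1) == b
    ]

print(pure_nash_equilibria())  # → [('rush', 'rush')]
```

With these hypothetical payoffs, mutual rushing is the only pure-strategy equilibrium even though mutual safety is better for both labs – the coordination problem that research on race dynamics, of the kind discussed above, aims to address with better incentives and agreements.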

Robert Wiblin: You’re thinking that if you can speculate about what an AI might be capable of doing in the future, then if you know economics or social science, you can think about what implications that would have [00:39:30] for politics and that kind of thing, even if you don’t understand specifically how the AI would work.

Miles Brundage: Yeah. Ideally, you know a little bit about each of those things, but you don’t need to be an expert in everything. It’s impossible to be an expert in everything. It’s not feasible to learn about every single discipline, of which I list several in the policy career guide that you mentioned. Clearly, you cannot be a deep expert in [00:40:00] technical AI, economics, sociology, political science, etc., but knowing a little bit, and being able to collaborate with people with complementary skill sets, is also very valuable. I think a lot of the most exciting work in the next few years will involve collaboration between people on the technical side and on the policy side.

Robert Wiblin: How can someone tell which kind of paths they’re most suited for?

Miles Brundage: I don’t think that there’s any clear algorithm for [00:40:30] this, but talking to people in the area and reading some of the literature and thinking about where you think the gaps are, what areas you think that you could shed light on given your background is definitely something you should do. You should look at what sorts of job opportunities are available and cast a fairly broad net because you don’t necessarily know what’s going to strike your interest until you stumble upon that opportunity, and networking is also super helpful. [00:41:00] I think not just in terms of finding out about job opportunities and figuring out where there’s a nice fit, but also finding out about different organizational cultures and where you might fit in is also useful. I don’t think there’s a single algorithm, but networking and reading more of the literature and looking at job postings are all good things to do.

Robert Wiblin: What kinds of places can people network? Are there any events that are open to people who don’t yet have roles?

Miles Brundage: [00:41:30] All of the AI conferences are open to anyone who is capable of paying for the trip and the ticket. Some examples of big AI conferences are NIPS, ICML, IJCAI, AAAI, and so forth. NIPS is probably the biggest [00:42:00] machine learning conference right now, with an emphasis on deep learning. ICML is also a very large one. Actually, I’m not sure which one of those is bigger. I haven’t been to ICML, but I’ve been to NIPS a few times. ICLR is also a deep learning specific conference. I think going to conferences like that, and specifically to the workshops and symposia relevant to policy questions that often happen at these conferences, is worthwhile.

For example, at NIPS last year, there was [00:42:30] a symposium on AI and the law. There are opportunities like that to network, find out more about how people in that area are thinking, and meet people who have similar interests. That’s just on the AI side. There are also more policy-oriented conferences, like the Governance of Emerging Technologies conference, the We [00:43:00] Robot conference series, and probably a few others that would be of interest to people who are either on the technical side and want to move toward thinking about policy, or who are on the policy side but want to specifically seek out people interested in AI. Those would be good places to look.

Robert Wiblin: Are any of the organizations involved open enough that someone who’s interested in the area can drop by and get to know people?

Miles Brundage: Yeah. I think [00:43:30] particularly on the academic side, though to some extent also at corporations, there’s a fair amount of openness. For example, at FHI, we host a lot of visitors who just want to learn more about our work. The same is true of other organizations I’m familiar with.

Robert Wiblin: It sounded like in your guide, you thought the best undergraduate major was something combining a technical subject, like maths or [00:44:00] computer science, with a more social science topic like politics or economics. Assuming that most people listening to this have already chosen their major, what should people do at the postgraduate level? Should they do a PhD or just go and work directly at Google if they can?

Miles Brundage: I’ve probably said this a million times, but I don’t think there’s a clear answer for everyone. [00:44:30] Certainly it doesn’t hurt to have an advanced degree. Of course there are opportunity costs, but there are certainly more job opportunities available for someone who has a PhD. For example, I don’t know if it’s a hard requirement, but it would certainly be a benefit for people applying to a fair number of industry jobs to have an advanced degree, as well as in academia. It’s hard to avoid the fact that for academic jobs, as much as [00:45:00] we would like to hire people solely based on their demonstrated skill, it’s often also the case that there are university policies surrounding what sorts of degrees you need for different roles and pay grades and so forth.

There’s definitely value to having a graduate degree, despite the non-trivial opportunity cost. As with all these things, it depends on someone’s background and whether you have a clear opportunity to make an impact, and [00:45:30] you could perhaps get some of the best of both worlds by also taking classes online, or enrolling at a local university as a part-time PhD or master’s student. There might be ways to have your cake and eat it too, but it would depend on the particular person.

Robert Wiblin: I guess the most obvious options at the postgraduate level are things like machine learning or computer science or economics or I guess public policy? Is that useful?

Miles Brundage: [00:46:00] Just to give a particular example, at Oxford we have a handful of people who are interested in AI safety and AI policy, as well as broader issues related to the future. Just at the organization that I’m at, FHI, there are people who either are currently enrolled or soon will be enrolled in programs including mine, which is human and social dimensions of science and technology, as well as machine learning, cybersecurity, and zoology. There’s [00:46:30] a wide range of possible areas one could study and still find an advisor who is suitable and interested in supporting interesting work.

Robert Wiblin: Some people worry that doing work in this area could be harmful if not done well. For example, it could be that, as you were saying earlier, regulation in this area is done poorly and is actually harmful overall, or that bringing greater attention to the issue reinforces the arms race dynamic, or, tentatively, that having people [00:47:00] get involved who don’t have enough technical expertise brings the area into disrepute. How seriously do you take those kinds of concerns? Should people be cautious about jumping into the area if they don’t feel like they’re really on top of things?

Miles Brundage: I think that is a reasonable concern, and there is, in fact, some risk of discrediting yourself or your cause by being too alarmist, or whatever the particular [00:47:30] charge might be. On the other hand, there’s a lot of noise in the system no matter what. Whatever you do in the near term, there will be Terminator pictures and people who dismiss AI safety and all sorts of other things you can’t really control. There are also risks of not acting – not being able to help out when there’s an important problem and you have something to contribute. I would say if that’s something that concerns you, then talk to people about [00:48:00] your beliefs, and look at surveys of experts to see what the reasonable distribution of views is among people who have studied the topic more. I don’t think it’s an all or nothing sort of thing where you either have to stay ignorant or be a super expert. I think there’s a middle ground where you develop your confidence little by little as you go, and try to remain as modest as possible in areas where you have little expertise.

Robert Wiblin: [00:48:30] I suppose you could always track the things that people like you or Nick Bostrom are saying and how you speak about the issues, so that you don’t come across as alarmist and you have a measured and precise tone.

Miles Brundage: Right, right.

Robert Wiblin: You mentioned earlier that AI could potentially be used in military applications, and that’s one way that things could go badly. Given that, do you see much value in people going into military or intelligence roles with the hope of limiting this in the future?

Miles Brundage: Possibly. [00:49:00] It’s really hard to say. It would depend on the particular role, so probably a role higher up in the chain and more on the policy and strategy sides of things, as opposed to the more operational or logistical or just execution side of things would be more likely to give you an opportunity to affect those sorts of things. There might also be benefits to that beyond just having a direct impact on the issue while in that [00:49:30] role. You could also then develop valuable connections and know the right people to influence from the outside down the road or have a better understanding of the dynamics of the problem and what needs to be done to solve it. I definitely wouldn’t dismiss it, but I also wouldn’t necessarily recommend it across the board as the best thing to do.

Robert Wiblin: I guess the intelligence services and military are just such enormous bureaucracies, it’s just unlikely [00:50:00] perhaps that you would be in the right place at the right time, or have enough discretion in those roles to be able to make a difference. As you say, it could help you to build up your career capital so you can get other good, useful roles in the future, and you know how the system works, so you could potentially influence it from outside and talk to the right people.

Miles Brundage: Yeah, exactly. Again, it would depend on the particular person and what other options they have, but it’s certainly worth considering.

Robert Wiblin: What about foreign policy? Could it be useful to go into the Department of State or something like that and work on arms [00:50:30] control agreements? Even just current arms control agreements, so you understand how they work and can think about how they might work for AI in future.

Miles Brundage: I think that could be very useful. Looking at arms control and foreign policy, both from an academic perspective and from a practitioner perspective, I think, is very valuable, and it’s something that’s not our strong suit in the AI safety and policy community right now. More could be done on that front, but I think some [00:51:00] of the same caveats apply as to the military and intelligence side of things, that the State Department, for example, is a very large bureaucracy.

Robert Wiblin: How do you feel about more general attempts to make the world better, such that when AI is developed, things are more likely to go well? For example, just improving the quality of government in general, trying to get the right kinds of people elected, or improving our ability to forecast the future across the board. I guess one worry would be that that’s just not [00:51:30] sufficiently leveraged on this specific problem, and if you think AI is really likely to be the key issue, then you want to work on that specifically. Maybe if you could improve our ability to forecast technological changes in the future across the board, then that could be a better option. Also, you just have a lot more options if you’re open to broader ways of improving the world.

Miles Brundage: I don’t think there’s a clear decision for everyone, but both at the group level and at the individual level, it often makes sense to take a portfolio [00:52:00] approach. There are people at FHI who are also affiliated with the Centre for Effective Altruism and involved in building the EA community, like Owen Cotton-Barratt and Toby Ord. There are people who straddle that boundary and look for opportunities to benefit both sides, and there are also people who focus on one thing or the other. I don’t think there’s a clear right answer, but certainly the sorts of trade-offs you mention sound reasonable.

On the one hand, one would want [00:52:30] the world to be in a better place – for there to be more people in various important decision-making roles who are thinking carefully about making good decisions and benefiting future people. It’s quite uncertain what impact you could have on AI, and likewise, even if you work directly on AI, it’s not totally clear that that’s always the best thing you should do. I have a slight bias towards thinking that direct work needs [00:53:00] more attention at the moment, but I think that you should take that with a grain of salt, because I would like more people to help solve the specific problems I’m working on.

Robert Wiblin: So we should potentially continue trying to get people involved in broader work, rather than all just becoming AI researchers right away.

Miles Brundage: Right.

Robert Wiblin: That’s all of my questions. Was there anything you wanted to add? Any inspiring message?

Miles Brundage: Yeah, I’ll just circle back on what I was saying earlier, [00:53:30] which is that AI safety was in a very nebulous stage of development a few years ago, and it took the work of Nick Bostrom in Superintelligence, and Stuart Russell in giving a lot of talks and writing op-eds, to call more attention to it and give it more legitimacy. Subsequent work was then done to refine the issue and develop research agendas, by people including the authors of the “Concrete Problems in AI Safety” [00:54:00] paper. Now we have a lot of postdocs and graduate students working specifically on this.

We have people in industry specifically working on well-defined problems in this space, and we have the opportunity to make a similar transition in AI policy – moving from fairly high-level desiderata and problem framings to specific proposals, formal models, and good white papers [00:54:30] over the next few years. It’s an area that would benefit from a lot of people’s expertise. I would definitely encourage people who think they might have some relevant expertise to seriously consider it.

Robert Wiblin: I’ll just reiterate that. Since I wrote our problem profile on positively shaping artificial intelligence, we’ve heard from a lot of people who think that this is the most neglected area within AI safety. I’m very keen to get more people working on these problems, and more organizations hiring for them. [00:55:00] If it’s something that you’re interested in, I think you should definitely be looking into it more. We’ll put up links to guides where you can find out more and meet the right people.

Miles Brundage: Great, thanks for having me Robert.

Robert Wiblin: Yeah, thanks so much and have a great day, and I look forward to talking in future.

Miles Brundage: All right. Thanks. Bye.


Author: Robert Wiblin

Rob studied both genetics and economics at the Australian National University (ANU), graduating top of his class and being named Young Alumnus of the Year in 2015.

He worked as a research economist in various Australian Government agencies, and then moved to the UK to work at the Centre for Effective Altruism, first as Research Director, then Executive Director, then Research Director for 80,000 Hours.

He was founding board Secretary for Animal Charity Evaluators and is a member of the World Economic Forum’s Global Shapers Community.