Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s episode is with three people working at OpenAI who want to discuss how AI development teams can best coordinate to avoid racing to deploy artificial intelligence too quickly and what role, if any, government ought to play in this.
Before that I just wanted to pull in my colleague Niel Bowerman to say hi and talk about an article he’s written – hey Niel.
Niel Bowerman: Hey Rob, thanks for having me on the show.
Robert Wiblin: My pleasure. This episode is likely to be of interest to most listeners, even if they can’t see themselves working on something to do with AI themselves.
But for those who can, I wanted to make sure they know about your new article called “The case for building expertise to work on US AI policy, and how to do it”. What might readers find in there?
Niel Bowerman: Yeah so AI policy is an issue that’s been discussed on the podcast a couple of times, and a whole bunch by 80,000 Hours. But, surprisingly often, I get people coming to me and asking… 1) “Why? I don’t really get why the US government is such an important place to go and work if what you want to do is improve the long term outcomes of AI” and 2) “How do you even get into these careers?”
And so, I wrote this guide, or article, to essentially make the case for why I think building expertise to go and work on AI policy is really important. And we give a bunch of arguments for as well as a bunch of arguments against — why this might not be a good idea, why it’s maybe a risky career move. And then we go into this question of “how do you even get into these careers?”
So just a bunch of master's programmes, a bunch of jobs that you can take early on when you're starting out. But ultimately what this article is doing is saying that if you have the right combination of technical expertise, social skills, a willingness to work in multi-stakeholder policy environments, and the ability to operate in slow-moving bureaucracies, and you're excited to work on shaping the future of AI policy, then this might be one of the most impactful things you can do with your career over the coming decade.
Robert Wiblin: Great thanks for that Niel – I’ll bring you back along with Michelle Hutchinson at the end of the episode to offer some quick reactions to the interview.
But first we have to listen to it, so without any further ado, here’s Amanda, Miles and Jack from OpenAI.
Robert Wiblin: Today I’m speaking with Amanda Askell, Miles Brundage, and Jack Clark.
Since her last appearance on the podcast last September, Amanda has joined OpenAI as a Research Scientist working on AI policy. She completed a PhD in philosophy at NYU, one of the world’s top philosophy grad schools, with a thesis focused on Infinite Ethics. Before that, she did a BPhil at Oxford University, and she blogs at rationalreflection.net.
Miles had the honour of being the first ever guest on the podcast, and now also works as a Research Scientist on OpenAI’s policy team. Previously, he was a Research Fellow at the University of Oxford’s Future of Humanity Institute (where he remains a Research Associate). He’s also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University.
Jack works as the Policy Director at OpenAI. He was previously the world’s only neural network reporter working at Bloomberg, is the creator of the weekly newsletter Import AI, which is read by a large fraction of the AI industry, and until today, he had never been on the 80,000 Hours Podcast even once.
Robert Wiblin: Thanks for coming on the show again, Amanda.
Amanda Askell: Thanks for inviting me.
Robert Wiblin: Thanks for coming on the show again, Miles.
Miles Brundage: Great to be here. Thanks.
Robert Wiblin: And Jack, welcome on for the first time.
Jack Clark: Thank you very much.
Robert Wiblin: I hope to get into talking about how people can actually pursue careers in AI policy, like you all are, and also some of the latest developments in the field, which are pretty interesting. But first, can each of you just quickly tell me, what are you guys working on, and why do you think it’s really important work? Maybe Amanda first.
Amanda Askell: Yeah. I’ve been focusing recently on this notion of AI development races between different developers. I think that a lot of the dialogue around that has focused on these highly adversarial race scenarios where people are talking about things like arms races. I basically think that the situation is kind of more complex than that, and it’s important that we acknowledge that even if there’s a development race, it doesn’t have to be highly adversarial. That’s my main focus. And another area that I’m thinking about a lot is the kind of intersection between policy questions and questions in safety.
Miles Brundage: Yeah. There are a couple of things that I’m thinking about these days, broadly in the bucket of making sure that AI goes well and that OpenAI does what it can to set the right sorts of norms around AI development. Today we had a big blog post, which was a step in this direction in the context of disclosure norms and publishing norms, where we were very transparent about our concerns around the malicious use of a certain technology. In addition to that, I’m also thinking about some of the same issues Amanda mentioned, around cooperation and what the sort of technical and social mechanisms are that we might draw on to build trust between different countries and companies.
Jack Clark: And I spend a good chunk of my time thinking about the sorts of interventions that Amanda and Miles and other members of the team can make, which will be most effective. I do that by spending a lot of time in Washington every month, what I call the happiest place on Earth. I also go and spend time in other major cities close to other sort of reasonably large governments as well.
Jack Clark: I think that one of the challenges for AI policy is being able to do significant research that deals with many of the open questions in this domain, which are also well calibrated to what governments are trying to think about and trying to actively work on today, and ideally trying to make sure that that research is aligned with the sorts of bills, legislation, or projects that governments are thinking about doing in this domain, so we can work collaboratively.
Intro to AI policy
Robert Wiblin: Maybe Jack, we’ve already had two episodes broadly describing the problem of AI policy, one with Miles and one with Allan Dafoe. But do you want to just quickly, for people who haven’t heard those, describe what the issue is with artificial intelligence and how we ought to approach it as a society and, I guess, as a government?
Jack Clark: The good thing about AI is that it applies everywhere. And this is also the extremely bad thing about AI as a policy challenge, because AI’s effect on policy is happening in two major areas. One, it’s happening as an augmentation of existing policy problems. So you think about fairness in the criminal justice system. Well, that’s an area where AI is today having an effect and accentuating the sharp edges of those problems. Similarly, you know, AI and insurance with regard to discrimination, debates about inequality are influenced by the effects of AI on actors in the economic marketplace, and so on and so forth. And then at a meta level, the question of AI policy to me is what new questions need to be worked on, or what existing policy things need to be dramatically reframed?
Jack Clark: So if you think about issues like controls on AI in the same way that you would think about controls on previous transformative technologies like nuclear technology or so on, it’s clear that AI has very different rules and very different traits, which means that the challenge there is different. So a lot of AI policy right now is about discovering those areas where there needs to be new work on defining the questions so we can go and change things.
Significant changes over the last year or two
Robert Wiblin: The last episode we had on this topic I recorded about a year ago, maybe even a bit longer than that, and it’s a field that’s changing incredibly fast, because it kind of really only emerged in its own right a few years ago. Maybe could each of you in turn describe what the most significant changes have been over the last year or two?
Miles Brundage: Sure. Definitely, there’s been a lot of mainstreaming of AI in public discourse and AI policy and AI ethics as areas of discussion within the research community. I would say that it’s sort of been continuous with what happened in previous years. You know, in around 2015 there was the first FLI Conference and the Open Letter on Robust and Beneficial AI. So a lot of these ideas around sort of social responsibility in the AI community had been percolating for a while, but they’ve been more mainstream in terms of conferences and researcher conversations, and in the case of our blog post today, sort of concrete decisions taken by AI labs as these issues have gotten more clearly connected to the real world and AI has gotten more impactful.
Jack Clark: I’d say from my perspective that the politicization of AI, the realization among people taking part in AI that it is a political technology that has political effects, has been very significant. We’ve seen that in work by employees at AI organizations like Google, Amazon, and Microsoft to push back on things like AI being used in drones in the case of Google, or AI and facial recognition in the case of Microsoft and Amazon. And that’s happened alongside politicians realizing AI is important and is something they should legislate about. So you have Ben Sasse, a Republican senator here in America, who has introduced a bill called the Malicious Deep Fake Prohibition Act, which is about stopping people using synthetic images for bad purposes.
Jack Clark: I think that the fact AI has arrived as a cause of legislative concern at the same time that AI employees and practitioners are realizing that they are political agents in this regard and have the ability to condition the legislative conversation is quite significant. And I expect that next year and the year after, we’re going to see very significant changes, especially among western countries, as they realize that there’s a unique political dynamic at play here that means that it’s not just like a normal industry in your country, it’s something different.
Amanda Askell: I think some of the biggest changes I’ve seen have mainly been in a move from a pure problem framing to something more like a focus on a greater number of potential solutions and mechanisms for solving those problems, which I think is a very good change to see. Previously, I think there was a lot of pessimism around AI development among some people, and now we’re seeing really important ideas get out there, like the idea of greater collaboration and cooperation, ways in which we can just ensure that the right amount of resources go into things like AI safety work and ensuring that systems are safe and beneficial. I think that one good thing is that there’s perhaps a little bit more optimism as a result of the fact that we’re now focusing more on mechanisms and solutions than just on trying to identify the key problems.
Robert Wiblin: Yeah, do you want to elaborate on that? Has there been a change in people’s sense of what the most important questions in this field are and what people are looking into in more detail?
Miles Brundage: I’ve definitely noticed a growing familiarity/agreement with the idea that there’s some sort of collective action problem here, not necessarily convergence on a very concrete framing, but I think some of the ideas in, for example, the book Superintelligence and more recently sort of more prosaic versions of these arms race concerns and the autonomous weapons and other contexts have caused people to think about, oh, maybe we need to find a way to coordinate. But that is not a very crisp consensus, and views vary a lot on exactly what the prospects are for coordination.
Jack Clark: Here in America in 2016 there was a presidential election, and it led to us having the current administration, and that generated a lot of interest from AI practitioners about how AI technology is used, because suddenly you had an administration come in which had political goals that frequently conflict with the political values of AI researchers themselves. And I think that that has been in a way helpful, because it’s helped frame the AI problem of multi-use or omni-use or dual-use technology away from purely military terms and into this broader context of: oh, if we build AI stuff, other people can apply it in different ways. Who are these people, how might they apply it, and what steps can we as developers take to ensure that if they do get the chance to apply this stuff, they apply it in a good context? I mean, that context led to OpenAI adopting a different release strategy with some language AI work, which we’ve been talking with you about today. And I think that it’s going to change how most AI developers approach these questions of release in the future, which I’m excited to see.
Are we still in the disentanglement phase?
Robert Wiblin: So what kinds of things are people spending most of their time working on these days? I think a year or two ago, people were saying that AI strategy and policy was kind of in a disentanglement phase, where what we really needed was like people who could figure out what are the most important questions to be focusing on, and that’s kind of a skill in itself. Do you think we’re still in the disentanglement phase if we ever were, or has it become clear like exactly what we need to be doing?
Miles Brundage: I think opinions vary on that question. I personally am sort of bullish on a particular framing of the problem around a sort of collective action and trust and think that there are pretty tangible research problems in that area. But others might sort of disagree with that framing or find it ill-specified or have a totally different problem framing. So I think there’s both further disentangling going on and sort of more granular research agendas for particular framings.
Amanda Askell: Yeah, I think one thing worth noting here is that there’s kind of not just one central problem but a collection of problems, and so you can have different rates of progress on each of them. So some questions might be things like how do you distribute the benefits of AI going into the future, which is a bit of a different question from things like how do you prevent adversarialism between different AI developers? And I think that some of those are more developed than others. I would probably class myself more on the disentangling end of things, but I think this is probably because I have a kind of deep love of conceptual clarity. And when you get a new research area like this, you’re having to discover what the different problems are, what the solution space is, and what even the relevant concepts are to use here. And so I think that this has in fact been disentangled in some areas more than others, but there’s still a lot of work to be done there.
Jack Clark: Just doubling down on what Miles and Amanda said, there are definitely known problems now that there’s convergence on working on, like the problem of multiple AI organizations needing to be able to collaborate increasingly closely with each other and exchange information. I think that everyone agrees that that’s a shared problem of concern now and deserves its own investigation. So I think that it’s positive that we have some known things to work on, but the issue with a lot of this AI policy stuff is that over time the number of actors is sort of changing, which conditions the types of questions, and the level of entanglement or disentanglement is conditioned by the growth in the field over time. So actually, every month you’ll see a new statement about AI from a government or a billionaire or a company, and you sort of have to look at your sheet of paper on which you have your grand AI policy plan, and you need to slightly redraw it to account for those different actors in the space.
Research vs. action
Robert Wiblin: How much is the field of AI policy still in the phase of just doing research and figuring out what should be done, versus actually trying to change things in the real world, like try to get organizations to change their behavior or get the government to implement particular policies?
Miles Brundage: I would say that there are multiple worlds of AI policy and multiple senses of AI policy. And the world that is of interest to your listeners might be different from the way that it’s seen by corporate executives or whatever. A lot of people doing quote unquote “policy” are sort of in information dissemination mode. They’re trying to get policymakers up to speed on what AI is, prevent them from doing crazy things, answer questions from the public, and think about press coverage and stuff like that. So there are many things that fall under the heading of policy that aren’t necessarily focused on the long term or focused on AGI or focused on optimizing for the broad interests of humanity. So I think it’s important to draw the lines in the right way.
Miles Brundage: But even if you go further and say AI policy research, that’s still a pretty broad area. I think most people who are doing AI policy research are fairly zoomed into a particular domain of application, like either autonomous cars or predictive policing or something like that, or a slightly higher level category like law enforcement technology or something like that. So I think it’s not clear yet what the synthesis of these communities will be and what an optimal distribution would be, but currently I see fairly disconnected communities having kind of different conversations.
Amanda Askell: Yeah. I think that in terms of room for growth, I would say that if people are interested in working either on the kind of more action-oriented side of AI policy or on the research side of AI policy, there’s a huge amount of room for growth in both. And I also think that they go kind of hand in hand in some cases. If you’re doing research into AI policy, very often you’re going to want to add in certain actionable steps that people can take on the basis of that research. And I think that’s really important, because there can be something a bit disheartening about reading something fairly abstract and then not being told how to respond to a problem or things that can actually be done. It’s kind of excellent when you have people in the right positions to be able to say yes, here’s a direct output of this that I could in fact do. So I second what Miles said, but would also say if people are interested in kind of one or the other, then there are huge amounts of room for both.
Jack Clark: I’d say that there’s huge room for translators, and I describe myself as that. Miles and Amanda are producing a lot of fundamental ideas that will be inherent to AI policy, and they’re also from time to time going and talking to policymakers or other parties about their research. I spend maybe half my time just going and talking to people and trying to translate, not just our ideas, but general ideas about technical trends in AI or impacts of AI to policymakers. And what I’ve discovered is that the traditional playbook for policy is to have someone who speaks policy, who’s kind of like a lobbyist or an ex-politician, and they talk to someone who speaks tech, who may be at the company’s home office or home base. And as a consequence, neither side is as informed as they could be.
Jack Clark: The tech person that speaks tech doesn’t have a great idea of how the sausage gets made in Washington or Brussels or whatever, and the policy person who speaks policy doesn’t really have a deep appreciation for the technology and specifically for technology’s trajectory and likely areas of impact. And I found that just by being able to go into the room and say, “I’m here to talk about this tech, and I’m here to talk about the areas it may go over the next four to five years,” has been very helpful for a lot of policy people, because they think over that timeline, but they rarely get people giving them a coherent technical description of what’s going to happen.
Predicting AI capabilities over the next 5 to 20 years
Robert Wiblin: I want to come back to learning about OpenAI’s strategy for making AI go well later on. But first, I’m very curious to get your views on what people ought to expect about what capabilities AI is likely to develop over the next five, 10, 15, 20 years. Some eyebrows being raised here. It’s a classic difficult question, but just, okay, go, Miles.
Miles Brundage: The reason I’m raising my eyebrows is that, first of all, this is very up my alley. I’m interested in this sort of question, but it’s also very difficult. For example, we had a report on the malicious use of AI last year, where we had these semi-concrete, semi-abstract scenarios, which we said were plausible within five to 10 years. Some of them had to do with generation of text. I would say we classified this broad area of more human-like creation of media as a potential source of threat. But we didn’t know exactly how quickly the technical progress would occur, that NLP would have this big jump in performance compared to, say, images.
Miles Brundage: I mean, arguably there’s been substantial progress in images, but that we’re sort of catching up now in the language domain. So I think it’s hard to be very confident about those sorts of things, and in part that’s because we don’t have the right infrastructure, so we’re sort of flying in the dark about what is the most likely misuse of this language model? We don’t have good ground truth on what people are doing with crappier versions. So I think there’s a lot of room for improvement, both in terms of actually making grounded technical forecasts, as well as sort of building an infrastructure to map how these technologies are actually used.
Robert Wiblin: Just to frame the question a little bit. I follow this as kind of an amateur person with some interest in it, and I guess there’s been various posts that maybe alarm me a little bit, but I’m not sure how to read them. So, OpenAI put out a blog post describing how it seems like there’s been about a 300,000-fold increase in the amount of computation that goes into building the state of the art ML models since 2012. I guess just last week we got this news about DeepMind producing an ML program, AlphaStar, that’s extremely good at playing StarCraft and is now beating the best players or very close to doing it, which seemed like it was going to be quite challenging just a year ago.
Robert Wiblin: And then just today, OpenAI has released this blog post describing a new method of producing kind of natural-sounding text: paragraphs, basically essays, written by ML programs, as far as I understand, that seem at least some of the time quite convincing. It’s almost as though a human wrote them, although they don’t have perhaps a great grasp of the concepts, and they’re not saying anything terribly sensible. It’s just very hard for me to read this and get any sense of: is this going faster than we thought it would? How difficult are these tasks in reality? Does anyone know the answer to these questions?
Jack Clark: One thing that’s become clear is that there’s an interplay between the kind of complexity of the task you’re trying to do and also how many tasks you are doing. So we’ve moved from this regime of evaluating single-purpose AI systems against single benchmarks, to usually single AI systems against multiple benchmarks. And you’ve seen this in reinforcement learning, where we’ve started to test single agents on multiple games or suites of games. You’ve seen this in language modeling, where in the case of what we released today, we’re testing that on like 10 or 11 different things. You see this in other large-scale systems, even AlphaZero, which is playing chess and shogi and Go.
Jack Clark: And so the fact that we’ve moved to sort of multiple evaluations of single systems should itself tell us that there’s been a significant growth in the underlying complexity of these things. They now have symptoms which need to be evaluated, symptoms of some kind of like air quotes “cognition”, which is very weird and different to a specific point-in-time task. I think the other way that I think about it is that humans are incredibly bad at modeling really big growth curves, and so when we see this growth of compute by like 300,000X in six years and see that it correlates to many of these machine learning results, which have been surprising, like machine translation, Dota 2, AlphaGo, the original DQN algorithm on Atari, it makes me think that our ability to predict any further than the next three years is actually somewhat limited.
Amanda Askell: Yeah, I mean I think I would second that. And I would also say that people can look at some of these results and maybe be alarmed or, you know, just see something like a kind of upward trend, but it’s also worth noting that sometimes key difficulties are very hard to predict as well, so not only areas in which you’re going to see more progress and areas in which you will have more data, for example, but just areas in which you suddenly see technical difficulties that you didn’t anticipate. And it’s worth bearing that in mind. Yeah, I think this is a reason at least to be a bit more measured in one’s response to these results.
Miles Brundage: Just to add one point: it’s important to distinguish things that we can be reasonably confident about, or could be more confident about, like the structural properties of AI as a technology to be governed. The idea that once you’ve trained a system, it’s easy to produce copies of it. That has some social implications and implications for how you release things, like our decision today. Things like the fact that the upper limit on the speed of these systems is much higher than humans. You can see that with the case of GPT-2 in our announcement today. What’s impressive is both that it’s producing these coherent samples, and also that it can do it at a superhuman rate and scale. So I think that we have to think not just about what is the sort of waterline of capabilities, but also what’s the scale-up from those to social impact, in terms of speed, quantity, et cetera?
Jack Clark: I’d like to just reiterate kind of what Miles said and note that there are hard problems which we know are definitely going to be here forever, like how do you release increasingly powerful systems while being confident that they aren’t going to be able to cause harm? That’s a long-term kind of safety problem, and it’s also a short-term real policy question in the case of today’s text generation systems or things like facial recognition systems.
Robert Wiblin: So how much do you kind of focus on what needs to be done to make sure AI goes well in the next couple of years versus the next couple of decades? It seems like there are different timelines you might focus on when deciding what this community ought to be working on.
Jack Clark: Just from the OpenAI perspective, our stuff, our activities are designed to be robust to the long term and ideally as a second order effect should help the short term. So you know, one of our initiatives is going and talking about the need for better methods to measure and assess AI, and we want a broader number of people to be doing that, not just AI organizations but specific government agencies, third party researchers, academics. That’s something where if we did it today, it would just improve debates and decision making about a number of near-term policy questions. But fundamentally what it’s doing is it’s building capacity for having a global community of people that think about measuring AI progress, which we think is a prerequisite for sensible policy with regard to long-term powerful systems.
Miles Brundage: Yeah. I think there is some sort of irreducible uncertainty about how much the challenges we’re facing today will translate into future ones. But as Jack said, we should be very mindful of sort of locking in the right or wrong set of institutions and norms and debate. So that’s something I worry about, is sort of maybe we don’t have to solve everything in the next year or two, but we do want to at least do some damage control and prevent people from locking into an AI arms race mentality, for example.
Amanda Askell: Yeah, I think it’s tricky, because in most cases there are robust interventions you can make that work pretty well regardless of whether you’re thinking about the long term or the short term. The key worry is going to be cases where there’s an inconsistency between what you would do now if you were expecting to face a challenging result in three months versus a challenging result that’s going to happen over the course of a year. I think for the most part there’s often not a tension there. You want to do things like build the kinds of institutions and responses that are great now and great going forward into the future. I do think in many ways, and this is an opinion that I’d be interested to know whether other people disagree with, that challenges that come quickly and in a way that you didn’t anticipate are much more difficult than ones that you have a lot of time to respond to and build institutions around.
Amanda Askell: And so in some ways you can think that actually we will have a lot of time to deal with some of these problems, and we can slowly build the institutions that we think are good for managing them, so around things like text generation. But it can be worth just doing work that assumes that you won’t necessarily have that long to think about things, just because in that case it can be really hard to spin up a response really quickly unless you have in fact been anticipating that possibility. So sometimes we can end up working on things that are focused on: well, what happens if we discover in three months that there’s going to be this really important result that’s going to have massive policy implications? And ideally you’re like, well, we’ve already been thinking about that for the last year, so that’s great.
Jack Clark: Yeah. And we to some extent already do this. The long term good version of the world you want is you want to have formal processes for coordinating between different labs. That’s going to obviously take a while to build. While we’re trying to build that, we’re also creating the super hacky version of that, which works today, of informal relationships between us and people at other labs, basically because we will get surprised, and we will need to draw on things that have the shape of the sort of institutions we want to build in the long term and which function in a similar way today, but which are the Wright Brothers held together with Scotch Tape equivalent of the big jet engine that we’re sort of driving towards.
Robot hands and computer games
Robert Wiblin: Jack, if I understood you correctly, you were saying that it’s interesting that we now have kind of reinforcement learning algorithms that can accomplish multiple quite different outputs. And I’m interested, it seems like you’re using the same reinforcement learning algorithm here at OpenAI to both train a hand in how to pick things up and manipulate objects, as well as to win at this game, Dota 2, that is, Defense of the Ancients 2. It’s a game kind of like StarCraft II, as far as I know. That’s kind of surprising to me that you would use the same underlying system for that, but perhaps that just shows my naivete about this technology. What’s the story there?
Jack Clark: I’ll tell you the cartoonish explanation of why a robot hand and a computer game are just the same problem, and maybe that will shed some light on this. So in Dota 2, you have to control a team of several different people. You need to move them around a map, and you need to attack an enemy. With the case of a robot hand, you need to hold an object, move your fingers, and rotate it to the desired position. So what do those things have to do with each other? Well actually, they have weirdly large amounts of stuff in common. Your hand has 20 or 30 different joints in it. At the same time, the number of actions that you can take at any one point in time in Dota 2 is 10 to 20 main actions, plus you can select your specific movement.
Jack Clark: And in the same way that when you’re rotating an object in your hand, it’s partially observable. You’re aware of where the object makes contact with your hand, and of your sense of its friction, but you aren’t aware of the shape of the entire object from a sensory perspective. There are bits which are occluded to you, bits you can’t feel. It’s the same in Dota 2, where you are not able to see the whole map; you’re only able to see the bits where the enemy comes into contact with you or where you’ve explored. Counterintuitively, you end up in a place where, from a computational perspective, these are remarkably similar problems. And the truth is that many problems in life are similar at root when it comes to compute. We’ve just lacked generalizable software systems that we can attach to those problems, systems that can basically interpret the different inputs and compress them down to the same computational problem, which we then solve.
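Jack’s point about compressing a robot hand and a game down to “the same computational problem” is essentially what a shared environment interface does. Here is a minimal sketch in the spirit of OpenAI Gym’s reset/step API; the `HandEnv` and `GameEnv` classes and their dimensions are made-up stand-ins for illustration, not OpenAI’s actual code:

```python
import numpy as np

class Environment:
    """A minimal common interface, in the spirit of OpenAI Gym's Env API.

    Both a simulated robot hand and a game like Dota 2 can be exposed
    through the same reset/step loop: the agent only ever sees an
    observation vector and emits an action vector.
    """
    observation_size: int
    action_size: int

    def reset(self) -> np.ndarray:
        raise NotImplementedError

    def step(self, action: np.ndarray):
        raise NotImplementedError  # returns (observation, reward, done)

class HandEnv(Environment):
    # Toy stand-in: ~20 joint angles observed, ~20 joint torques as actions.
    observation_size, action_size = 20, 20

    def reset(self):
        return np.zeros(self.observation_size)

    def step(self, action):
        # A real simulator would integrate physics; here we just echo
        # the clipped action back as the next joint-angle observation.
        obs = np.clip(action, -1, 1)
        return obs, 0.0, False

class GameEnv(Environment):
    # Toy stand-in: a partial view of the map in, a small action vector out.
    observation_size, action_size = 128, 16

    def reset(self):
        return np.zeros(self.observation_size)

    def step(self, action):
        # A real game would reveal only the explored parts of the map.
        obs = np.zeros(self.observation_size)
        return obs, 0.0, False

def run_random_policy(env: Environment, steps: int = 5) -> float:
    """The same agent loop drives either environment unchanged."""
    total_reward = 0.0
    obs = env.reset()
    for _ in range(steps):
        action = np.random.uniform(-1, 1, env.action_size)
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            obs = env.reset()
    return total_reward
```

The point is that `run_random_policy` never needs to know which problem it is solving; once both tasks fit behind this interface, one learning algorithm can be pointed at either.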
Jack Clark: We used an algorithm called Proximal Policy Optimization, PPO, which is a fairly robust algorithm. What we mean by robust is really just that you can throw it at loads of different contexts and you don’t need to worry too much about tuning it. It will sort of do okay initially. I think that speaks to a huge challenge in AI policy: we are going to continue to invent things like PPO. We are going to continue to do things like train an increasingly large general language model, and whenever we do these things we’re going to enable vast amounts of uses, some of which we can’t predict.
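For readers curious what makes PPO “robust”, the heart of it is the clipped surrogate objective from the PPO paper (Schulman et al., 2017). This is a minimal illustrative sketch of that one formula, not OpenAI’s training code:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """PPO's clipped surrogate objective (Schulman et al., 2017).

    ratio:      pi_new(a|s) / pi_old(a|s) for sampled actions
    advantage:  advantage estimates for those actions
    epsilon:    clip range; updates that move the policy further than
                this gain no extra objective value, which is part of
                why PPO is forgiving about tuning across tasks.
    """
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - epsilon, 1 + epsilon) * advantage
    # Take the pessimistic (minimum) value, so a large policy jump
    # never looks artificially good to the optimizer.
    return np.minimum(unclipped, clipped).mean()
```

With epsilon = 0.2, a probability ratio of 1.5 on a positive advantage is credited only as 1.2, so there is no incentive to push the policy further than the clip range in a single update.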
Miles Brundage: Yeah, so I’ll just comment a bit more concretely in the context of language models. I think it’s a particularly tricky case there. If you read the blog post and the research paper, it’s clear that the strength of the system comes from this fairly unsupervised, or very unsupervised, learning process on huge amounts of diverse data. So it’s hard to maintain the strength of that system while having a more controllable, say, single-topic system that can only do one thing. We don’t yet know of a pipeline that will result in a fairly narrow but competent natural language system that doesn’t have potential for misuse, but we now seem to know how to make a generic one that does have potential for misuse.
Robert Wiblin: So if in fact these tasks are subtly more similar than they might appear, perhaps it’s less interesting that a very similar learning algorithm can learn to do both of them? Because you might worry that, if most tasks are more similar than they seem, then you might expect more rapid progress, because we could have one underlying learning process that just learns to do practically everything that humans do. But maybe it just happens that hands and computer games are similar.
Jack Clark: I think that this is a general rule that will come true over time. The story of the last few years has been increasingly robust algorithms that are more resilient to the context changing around them. And if you step out from individual areas like reinforcement learning and look across supervised learning, RL and unsupervised learning, you see this trend across all of it. I should note that it wasn’t very easy to get this to work on both; we’re incredibly excited it did. It was a humbling experience for OpenAI to work on real robotic hardware. I would recommend everyone who has calibrated intuitions about AI timelines spend some time doing stuff with real robots, and it will probably … how should I put this? … further calibrate your intuitions in quite a humbling way.
Malicious AI Report
Robert Wiblin: All right, let’s push on to the malicious AI report. So last year, in February, you released this report on the malicious use of AI. I think it had 26 authors, and I think at least Miles and Jack were on there. Maybe Amanda had some input as well. Yeah, I guess you had four high level recommendations that we might be able to go through. But maybe do you just want to summarize what the key message was? It sounds like today’s release of a model that generates natural language definitely plays into this, or is an example of a risky application of AI.
Miles Brundage: Yeah, so I’m not sure that this is exactly how we were thinking about it at the time, but in retrospect I think the best way to think about it is that the malicious use report framed the general topic at a high level of abstraction and pointed to a lot of key variables and structural factors, like the scalability of AI, that might give one some reason to be worried about this stuff. But there was irreducible uncertainty, or possibly somewhat reducible but still substantial uncertainty, about what the most promising defenses were and what the most worrying threats were. And I think now we have a much richer sense of what the threat landscape looks like in the case of language, and I think over time that’s how we’ll follow up on the report: by diving deeper.
Miles Brundage: We sort of framed this high level search problem of find a way to deal with dual-use AI and now we know a little bit more about what the levers and options are in one context, but I think the broader issue still remains.
Jack Clark: And one of the recommendations of the report was that AI organizations look into publication norms, and different methods of doing different types of publication. So today, with this language model, we’re releasing a research paper. We’re not releasing the dataset. We’re releasing the small model, not the large model. So we are trying to run almost a responsible experiment in this domain, of the kind recommended by the malicious use report. We broadly think that lots of the recommendations in that report need more evidence generated around them, and so we’d be excited to see other organizations also do this and create more case studies that we can then learn from.
Amanda Askell: Yeah, I think one of the useful things is that we have used this as a kind of reason to make sure we’re kind of evaluating the potential for misuse of our own systems. And I think this is helpful both because it means that we end up using these as essentially case studies in how to do this well, and then get feedback on that and try to make sure that we are doing so responsibly. Which might seem trivial from the outside, but I also think it’s really easy for people who are building things with the intention of doing good, which is the case with almost all ML researchers, to not think about the ways in which someone who wanted to misuse the system could misuse the system. And so I think the fact that we are starting to do that kind of evaluation is important.
Amanda Askell: And I think also, ideally, the more that other people do this, the more case studies we end up getting on dual use of systems and how to respond to those concerns, including feedback that we get on publication norms, for example.
Four high level recommendations
Robert Wiblin: The four kind of high level recommendations … or what was it. I guess I’ll read bits of them. So number one was, “Policymakers should collaborate closely with technical researchers to investigate, prevent and mitigate potential malicious uses of AI.” Two was, “Researchers and engineers in AI should take the dual-use nature of their work seriously.” Three is, “Best practices should be identified in research areas with more mature methods for addressing dual-use concerns.” Then four is, “Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.” Reading that, it all feels very high level.
Robert Wiblin: It’s like: who should be doing what? I guess that’s a little bit the response that people have.
Miles Brundage: Yeah, so I think it was somewhat high level on purpose, or by necessity, because we had 26 authors across multiple institutions. But I think there was also some inevitable abstraction, because there are a bunch of known unknowns about what the most worrying concerns are, which we had limited information about at the time. So I think it was kind of inevitable that there would be some learning process and that some of our recommendations would miss the mark. So concretely, we now have a better understanding of what a concrete experiment in a different approach to openness looks like, and we’ll be following closely what researchers’ reactions are, whether others reproduce our results, and if so how quickly, and whether they publish.
Miles Brundage: So there’s a bunch of information that we’ll be getting in this particular context, but I think more generally there are other domains that we have even less information about.
Jack Clark: Yeah, and I could maybe tell you a little sort of cartoon story for how I think of this. So one of these recommendations is about policymakers being better able to kind of assess and mitigate for malicious uses of AI. So how do we get there? Well I think that means that technical experts need to help produce tools to let policymakers assess malicious uses of AI or unsafe uses of AI. You could imagine an organization like OpenAI coming up with some metrics that relate to the safety of a given system, trying to work with a multi-stakeholder group like, say, the Partnership on AI. Having that group or a subgroup within it think about the safety measures that OpenAI has proposed, and if they end up agreeing that those are good measures, you could then go to policymakers and say it’s not just one organization.
Jack Clark: It’s this subset of an 80-person membership of PAI that has said you should consider using this technical metric when thinking about safety. So we can think about actual discrete sets of work that people can do here now, which I think is new, and I’m excited to have us all figure out what those should be, because there’s clearly a lot of stuff that needs to get done.
Amanda Askell: I think one thing that is useful to note on the question of the abstraction here is that it can actually be good in many cases to have fairly abstract recommendations when you’re looking at a potentially extremely broad domain. It doesn’t make a lot of sense to give really specific recommendations of the form “don’t release your model”, because in lots of cases, if we just had that as a norm across the field, you’d expect it to be pretty harmful. So you have to do a lot of things on a case-by-case basis, which means that one of the first things you end up doing is giving kind of abstract recommendations and principles. Then you look at specific cases and ask: well, what precisely can we do in this case to get the right balance between, say, openness and making sure that we’re preventing malicious use, and then use that as a case study going forward.
Amanda Askell: So in many ways I want to both defend the abstraction and note the importance of not giving hyper-specific policy recommendations when it comes to what is an extremely broad range of potential innovations and events. That’s probably why it’s good to keep things abstract and high level when the domain is extremely broad.
Miles Brundage: And often it’s the case that the optimal action to take depends on the actions of others, so you probably shouldn’t specify everything in advance. So for example, our decision on the language model release might have been different in a world in which we knew that someone else already had a hundred X bigger one and was about to release it.
Consequences for next couple of years?
Robert Wiblin: Are there any … yeah, any malicious uses of AI that you think we should anticipate occurring over the next couple of years, and are there any like concrete things that we should be thinking about doing now to protect ourselves against those changes?
Jack Clark: I guess I get to be Dr. Doom here. I think that for some of these malicious uses of AI, talking about them is itself a safety and policy challenge. You know, we think about the topic of information hazards here at OpenAI, and what that means is: when you’re talking about some research, or even hypothetical research, are you at risk of saying something that could differentially accelerate some actor, or group of actors, towards developing AGI or something unsafe? So that’s why I’m going to be a little cagey in my responses, but I do have a couple of examples for you.
Jack Clark: I think that the intersection of drones and increasing amounts of autonomy via pre-trained models is going to be an area of huge policy concern, and I’m inspired to say that because we have observed that, in the field of asymmetric warfare, groups have used drones to go and do new types of warfare because they let them access a new type of military capability, which is I can go and cause you trouble at distance. Attribution to me becomes harder. I have control of a mobile platform, and I can drop munitions from it. We’re obviously concerned about what happens when those mobile platforms that can drop munitions gain autonomy.
Jack Clark: And that will be an area where real questions about publication norms will emerge, I would expect very quickly after the first case of that happening. I think our work on language is us trying to experiment in a domain where you’re not talking about people’s lives being at risk. You are talking about severe effects, to be sure, but you’re a little ahead of where the rubber really hits the road. The other way we can expect these malicious uses to show up, I think, is just in poisoning public spaces. I don’t have to be that smart to make it difficult for you and me to have a conversation; I just need to be incoherent and to never stop talking.
Jack Clark: Which is actually relatively easy to do, and I think that when people start to do that, that’s the point when governments are going to start to think about speech as being human or AI-driven, which will raise its own malicious uses and sort of legal questions.
How much capability does this really add?
Robert Wiblin: Yeah. I guess, if I can play skeptic for a minute: when I read that report, and when I think about this in general, I find that it’s easy to whip myself into a lather of worry about all of these new potential threats. But then sometimes when I think about it, I’m like, “We can kind of already do this stuff.” With drones, for example, you could shoot people from the drone, but for one, governments can already do that. And also, ever since we’ve had sniper rifles, it’s been fairly easy to shoot someone from a long way away and very hard to get caught.
Robert Wiblin: There was that spate of sniper attacks around D.C. back in 2002, where people got into the back of a car and started shooting people from far away with a sniper rifle, and it was extremely hard to catch them. So this is something that people could already do. Is the drone adding all that much there? There’s also hacking, or breaking into systems. I think we already believe that basically all the major governments in the world have hacked the electricity grids of most of the other major governments, and would shut them off, or try to, if they ended up in a war with them.
Robert Wiblin: So in a sense, it’s like: how much capability is this really adding? I think even during the Syrian Civil War, there were vigilante groups that had pretty substantial cyber war capabilities. And for example, people also worry about AI being used in phishing attacks and things like that. Phishing people is already so unbelievably easy it’s kind of hilarious, and everybody needs to get U2F keys to protect themselves against that already, without adding AI into the picture. Yeah, do you sometimes share the sense that people can be a bit hysterical about things that are not so different from what we can already do?
Miles Brundage: For sure. And some people accuse us of being hysterical with the malicious use report. I think time will tell who was or wasn’t hysterical or had their heads in the sand, or other characterizations, but I think one point that I’ll sort of reiterate from earlier is that it’s important to distinguish sort of the capabilities of the system on some human scale or some other scale versus the structural properties of the technology, like scale and speed. So I think just because humans are already doing it doesn’t mean that it won’t change the economics of crime or the economics of information if you made it much easier to do it.
Jack Clark: Well, I think we know that when stuff gets fast and/or cheap, the dynamics change. Miles just alluded to that, and I think that if we imagine a world where phishing via AI is a hundred times cheaper than phishing via a person, or generating disinformation via a human is a hundred times more expensive than using an AI, then you’d expect to see the types of people using this technology change. To your point about sniper rifles: yes, but I think the drone case is more compelling. You’ve been able to buy the ability to attack people at distance for a long time via things like sniper rifles, except those are somewhat controlled. Then drones arrive, and now I have the ability to attack people at distance much, much more cheaply.
Jack Clark: It’s also much faster for me to acquire like drones and use them against people than it is for me to acquire loads of sniper rifles. And so there you saw that a change in the speed of deployment and also the cost of deployment meaningfully altered the behavior of actors, and also meaningfully altered military responses. So if I’m a military now, and I’m sending soldiers into an area where I’m dealing with sort of asymmetric war-minded people, I have to have soldiers who are protected against small drones carrying grenades, so now I need to outfit them with different things.
Jack Clark: So actually you see that tools are always used to litigate, like, the economics of war, and it may not seem like a big deal that you’re just changing the tool, even though the capability remains the same, but if you look at what it does to the costs and incentives of the different actors, those changes can have really significant second order effects.
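The cost argument above can be made concrete with a toy back-of-the-envelope model. The dollar figures below are purely hypothetical, chosen only to show how a hundredfold cost drop translates into a hundredfold increase in attack volume for a fixed budget:

```python
def affordable_attacks(budget_cents: int, cost_per_attack_cents: int) -> int:
    """How many attacks an actor with a fixed budget can mount."""
    return budget_cents // cost_per_attack_cents

# Hypothetical numbers purely for illustration: a hand-crafted phishing
# email at $10.00 of labor versus a machine-generated one at $0.10.
human_cost, ai_cost = 1000, 10      # cost per attack, in cents
budget = 100_000                    # a $1,000 budget, in cents
print(affordable_attacks(budget, human_cost))  # 100
print(affordable_attacks(budget, ai_cost))     # 10000
```

The capability per attack is unchanged; only the economics move, which is exactly the “costs and incentives of the different actors” point.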
Amanda Askell: Yeah, so when we think, “technically we can do some action now, so why should we be worried about things in the future?”, one thing that’s important to think about is what it is that prevents more of these actions from occurring. Often it’s things like: well, it can be done, but it’s not trivial to do. If you make something a little bit less trivial to do, you see a large reduction in the number of people that do it. Another thing is that we’ve built up pretty secure institutions to prevent people from behaving badly; we’ve created a set of incentives around it. Those are just a couple of examples of mechanisms that mean that, for things we’re technically capable of doing, we don’t see massive misuse of them. So one thing you should be concerned about, I think, are cases where those mechanisms could break down if you see technological advances of certain types. The move from “it is possible to commit a successful phishing attack” to “it is trivial: if you just want some money you can do it instantly” might, in my mind, make a huge difference to the amount of this that we see. Similarly, think about things like the institutions that we have.
Amanda Askell: So like legal institutions around these issues and how responsive and well-adapted they are to some of these problems. And if we think that we don’t anticipate current institutions actually being able to deal with it well, then that’s another reason to be really worried that as you make these things easier and more trivial, you’d just see like a lot more of it in the future.
Robert Wiblin: Yeah, I’ve read some articles lately about this drone issue, because apparently like ISIS was using them in the war in Syria. I guess I was left a little bit confused about why it’s so hard to design countermeasures against them. You’d think you could just create like counter-drones that you just say go and crash into that drone and pull it out of the sky, and like how can that be so much harder than designing the attack drones in the first place. Because I’m very curious to hear whether there’s like any ideas out there for countermeasures for the kind of things that we’re worried about in the next few years that are already being developed, or people are already trying to get policy implemented.
Miles Brundage: There are a lot of interesting countermeasures. They vary in terms of scalability and cost and so forth. More generally, I’m not an expert on the state of the art in countermeasures, but for a publication on this general issue, there’s a good paper by Ben Garfinkel and Allan Dafoe on how the offense-defense balance might scale over time as we automate both sides, and I think that’s very relevant here.
Jack Clark: So one dynamic that I think is important is that many ways to both attack with AI and defend against AI involve spending some amount of resources on compute. And so there’s this underlying dynamic: yes, we may have technical countermeasures, but it’s unclear how many of them can be defeated by the attacker simply having a bigger computer. And I think the extent to which AI is offense-dominant or defense-dominant, depending on the underlying computational resources of the actor, will have a big bearing on AI grand strategy. And it’s not clear how we get better information about this.
Robert Wiblin: One malicious use of AI that stuck with me from the interview with Allan Dafoe was the potential for China to use it to stabilize authoritarian rule through massively scaled surveillance that’s very cheap, tracking a lot of information about every citizen and keeping tabs on them, such that it’s very hard to engage in any kind of civil disobedience. And I guess you just raised the issue here of it poisoning politics, by allowing garbled speech at such a huge scale that it’s hard for humans even to speak to one another, at least online. Yeah, do you have any thoughts on how that situation, AI’s influence on politics, is progressing?
Jack Clark: One of the clear truths is that AI augments other stuff. You know, AI isn’t really a thing in itself. Maybe in the long term, if we have powerful AGI-style systems, it will become that way, but for now a lot of AI takes place in the form of a discrete capability that you layer over other parts of the world. Therefore, AI is uniquely powerful in the context of political systems where you can dovetail your political system and structure into your technology substrate as a society. And that’s something which in a market system you’re going to have more trouble with, because markets may not allocate technical resources to your specific political will.
Jack Clark: There may be confounding factors: it just doesn’t make money, people don’t like it, or, in the case of Project Maven, the employees don’t want to build it. All of these problems. In a different regime, one where government and tech move a lot more together because they’re naturally bound up through a whole bunch of things, AI is going to make you more effective along those lines. One of the challenges that we’re going to deal with in the West is that we have a certain political system here which doesn’t seem to get accelerated that much by AI, whereas more centralized, control-based systems do seem to get accelerated by it.
Jack Clark: I think there’s an open question as to whether that’s like a risk that societies that don’t have that capability need to think about, or perhaps an opportunity to think about how might we structure ourselves with AI when the more advanced AI systems we need to sort of make our government better arrive. It’s a good and weird question.
Amanda Askell: Yeah. Just to agree with Jack on some of this, I think one thing you want to think about is the ways in which our current institutions are already a little bit robust to some of these problems, and the ways in which they aren’t. I think one reason why people have been so concerned about the possibility of generated news or generated political speech is that in some ways our system isn’t as robust to that as we might think, and I think we’ve seen that: people are allowed to post articles to their social news feeds, and that can just be undermined, because it’s not something we have safeguards against.
Amanda Askell: And that’s going to just vary across different states, essentially. So in some states you do in fact have some safeguards against like malicious kind of use of speech in political campaigns, and in many states you have similar mechanisms to prevent like massive surveillance in ways that could be problematic. So I think that it’s important to kind of look at this as a problem of like … in some ways that these are problems that every state has and not just something like centralized states versus the West, for example. So yeah, just kind of worth noting, I think.
Robert Wiblin: People have raised this concern about AI influencing politics a lot with the 2016 presidential election. Having looked into it a little, I’m not convinced that it had all that much impact on how people voted at the end of the day, but I suppose I’m left in this awkward position of saying it didn’t have much impact but I’m really worried that it will in future, because we’re just seeing the tip of the iceberg here, just the very beginning of what’s possible. I think another one like this is that a lot of people are worried about AI or technology causing mass unemployment, and I don’t see very much evidence at all that any unemployment we’re seeing today is mostly driven by technological improvements.
Robert Wiblin: But I am like very concerned that in the longer term … like it’s quite possible that like almost everyone will be out of a job, because AI and machines will be able to do everything that we can do better. Yeah, do you have any comments on this? Have you looked into either of those questions?
Miles Brundage: Yeah. So first of all, there are several bets on this topic, some resolved and some outstanding. For example, Tim Hwang, Rebecca Crootof and a few other relevant experts have been featured in IEEE Spectrum magazine debating these topics, and making bets about whether it would have a big impact in 2018 and in 2020. And I think it’s hard to come to a strong conclusion about these things, because you could interpret the evidence in various ways. You could say, oh, well, it didn’t happen this time, but that’s because they’re saving their special sauce for next time.
Miles Brundage: So sort of an unfalsifiable perspective. So I think that’s why we need these conversations to be more grounded in what’s actually happening and sort of build that infrastructure.
Amanda Askell: Yeah, I think in the case of unemployment it’s just an extremely difficult question to answer, because a lot of it varies by how responsive the market is, for example. So if you see like the automation of like one small field, does this basically have very little impact on unemployment because people can just get jobs in other fields, and you’ve got general economic growth as a result of that? This seems kind of plausible to me. In many cases it’s a bit harder to anticipate what would happen if you saw this happen more rapidly and across a broader range of fields. I also think that one question people have started to ask that is really important is who this affects.
Amanda Askell: So for example, could automation have a really negative effect on people in developing countries? So not just thinking within states, but between states. It’s very hard to predict based on what we’ve currently seen, and I can understand why someone would be optimistic based on that, but I also think there are reasons to think that with, say, rapid automation of larger fields, we might see changes that we couldn’t have anticipated based purely on looking at some very specific type of factory work being automated.
Jack Clark: I think a good example here is when we got the first websites for uploading and sharing videos. I think everyone thought: great, here’s a way to waste time or to better inform myself about the world. And we did get that. What we also got was a system whereby we plugged the incentives of advertising and clickbait into creating content for three-to-five-year-old children, to essentially hack their brains. No one decided to go and make content to hack children’s brains. We just built a system that intersected with a market in a way that created enough incentive for that stuff to be made. And I think that highlights how it’s really, really tricky to correctly anticipate where it’s going to hurt you.
Jack Clark: And I think some of the mindset which I want us as an organization, and other AI researchers more generally, to adopt is that it’s helpful to imagine the positive uses of your stuff as well as the negative uses. And yes, it’s likely that many of your predicted negative uses are not going to be the ones that matter. Like in the case of deepfakes: in the short term, maybe concerns about them being used in politics were overblown, but the concerns about them being used to target and harass women who had been in relationships with skeezy men, who then make deepfaked porn of those women to embarrass them after a relationship … maybe those were underblown.
Jack Clark: Because that’s caused real human harm. We just don’t talk about it, because it doesn’t fit with a big narrative like politics. It just fits to what an individual’s life has become like. But we’re now in a world where an individual has to deal with this, especially if they’re female, as an attack vector. And that just adds sort of cognitive load to their life and has all of these effects that we can’t quite predict the outcomes of.
Why should we focus on the long-term?
Robert Wiblin: Yeah, this is a criticism I guess that some people would make of this whole field: that it’s just so hard to anticipate what’s going to happen, even in a couple of years’ time, that when it comes to AI policy we should really be focusing on fixing the problems that are occurring right now, or that we think might happen in the next few months, rather than trying to look further ahead. What do you think of that?
Jack Clark: I’m going to politely completely disagree with you, and make a point that Miles also made earlier, and Amanda has been making, which is that there are these larger problems that we know to be true. We know that with increasingly transformative technology, you need the ability to coordinate between increasingly large numbers of actors to allow that technology to make its way into the world stably. That’s not going to stop being a problem of concern, and it’s not going to stop being a problem that gets more important over time. Of course we can’t say we need to all work together now because we have a specific technical assumption that will come true in eight years. That would be totally absurd. But I don’t think anyone serious is kind of proposing that.
Jack Clark: They’re saying we have the general contours of some problems. We accept that the details may change, but there’s no way the problems change unless you fix all of like human emotion and fallibility in the short-term, which is unlikely to happen.
Amanda Askell: Yeah. In many ways I think I just don’t see the projects as being inconsistent, and there’s just room to do work on both. So sometimes I don’t like it when it’s kind of pitted against each other. Like I’m really glad that people are working on these immediate problems, and in many ways I think when you’re working on problems that are … or you’re trying to think about long-term problems, the issues that are identified in these like immediate-term problems can often be the kind of seeds of things that you are generalizing, both in terms of concerns and in terms of solutions. So if you see something like deepfakes making women’s lives terrible, you can think about things like well what are the mechanisms that we would usually use or we would want to see in place to prevent that from happening?
Amanda Askell: Are those generally good mechanisms that could in fact help with like future problems of that sort? So I think it’s tricky because I’m just … yeah, I think it’s more like reiterating the point of short-term work and long-term work are not inconsistent, and in many ways very complementary to one another.
Robert Wiblin: So I guess … we talked about the malicious use of AI report, which kept our focus a bit on the kinds of things we might expect in the next five or ten years. But I guess most people, maybe all of us here, are mostly concerned about this because we think AI's going to have really transformative impacts in the longer term, and focusing on what's happening in the shorter term is kind of a way into affecting the longer term. Do any of you want to comment on the relationship between these two issues?
Miles Brundage: Yeah. I think the distinction is super overblown, and I mean I'm guilty of having propagated this short-term/long-term distinction, among other places, in an 80,000 Hours article a while back. But I think there are a bunch of good arguments that have been made for why there are at least some points of agreement between these different communities' concerns: around the right publication norms, what role governments should play, how we avoid collective action problems, and so forth. So first of all they're structurally similar things, and secondly they plausibly involve the same exact actors and possibly the same exact sort of policy steps.
Miles Brundage: Like sort of setting up connections between AI labs and managing compute, possibly. So there are these levers that I think are like pretty generic, and I think a lot of this distinction between short and long-term is sort of antiquated based on overly confident views of AI sort of not being a big deal until it suddenly is a huge deal. And I think we’re getting increasing amounts of evidence that we’re in a more sort of gradual world.
Robert Wiblin: Because it seems to me that on the technical side, people have really started to think that the problems we have with AI not doing what we want today are just smaller cases of the broader problem that it's very hard to instruct ML algorithms to do the things that you really want them to do: basically it's the same problem at a different level of scale, and a different level of power that the algorithm has. Is the same thing seeming true on the policy and strategy side? Is there some kind of convergence on the view that these are all just the same issues, and they're just going to get bigger and bigger?
Miles Brundage: I mean I should caveat this by saying that there is uncertainty about how connected these things are and how we address the near-term things will affect how connected they are to the long-term things, so I don’t think there’s like a crisp fact of the matter. But my general direction of change over the past several years has been thinking that it’s one-ish issue.
Amanda Askell: Yeah, I think it is another area where you might just see differences across domains. So I think that it’s certainly true that you’re seeing a lot of issues that do generalize, and there’s a question of also how different they are as you increase capabilities. So you know, I think the example you might have been alluding to there is something like kind of goal misspecification. You know, so you have a social media site that is just optimizing for people clicking on ads, for example. This can be done without any kind of like malicious intention, it just ends up being, you know …
Amanda Askell: Or similarly with like kind of other goals that it turns out are not in fact the things that make the people on the social media site happy, or just continually looking at the social media site rather than doing your work or going and meeting up with your friends, et cetera. And the idea there is it can be really easy to not realize that you’re targeting the wrong goal.
And then if you scale the capabilities of that, I think the concern becomes much larger because suddenly you have a situation where just slight goal misspecification can actually have pretty radical results. So you know, people have used examples that are like, imagine a system that is monitoring or controlling the whole of the US power grid. Suddenly just accidentally misspecifying the goal of that system can be really harmful. I think this is true in some domains, but we should also anticipate the possibility of asymmetries in others.
Amanda Askell: In general it means that I don't want the message to be something like: people should not worry so much about long-term issues, because if we just focus on the short-term problems, that will naturally result in a solution to those long-term issues. Because you could see, for example, an increase in the speed of development in a domain that you didn't expect. And if that's the case, then merely having done case studies of the immediate impact of your system won't prepare you for the implications and the sort of actions you need to take when that happens.
Robert Wiblin: Let's move on to talking about OpenAI's strategy for making AI safe in general, and making sure that humanity approaches the deployment of AI in a smart way. Just wondering, at a high level, what is the approach that OpenAI is taking to make it all go well?
Jack Clark: What is our approach to definitely make sure that unpredictable, increasingly powerful advanced technology that everyone uses will benefit everyone and nothing will go wrong: an easy question. Thank you very much for asking us. I'll give an overview and I'm sure that Miles or Amanda may have a couple of interpretations as well. As my response should indicate, we have a few ideas, but we're not claiming that we know the answer. This is definitely a domain where there are more questions than answers. OpenAI has three main areas: capabilities, safety and policy. And these are all quite intentional. You know, safety is about the long-term alignment of systems, and investigations into how to assure safety from the perspective of a person interacting with the system. You know, how can I know I'm not being deceived?
Jack Clark: Various things like that, but also from the point of view of a system designer: how can I design a system that won't do unsafe stuff? Policy is about how do we make sure that it goes well at the institutional level, but also how do we make sure that OpenAI has enough constraints placed upon it internally to do the right thing, and how do we take the ideas that come out of safety and come out of capabilities and integrate them, not only with ideas relevant to a policy domain, like this experiment we're doing at the moment on publication norms with regard to language models, but also how do we go and tell people like military organizations about safety?
Jack Clark: Because though we do not want to enable military organizations in terms of their capability, we know that they’re going to develop capability and we want that capability to be safe or else none of us get to like live in AGI world cause we all die before then, which would be unfortunate and in my case I wouldn’t like that to happen.
Jack Clark: The key idea of capabilities is that a lot of these systems are empirical in nature. What I mean by that is, a priori, you can't offer great guarantees about how they will behave. You can't offer really solid guarantees of what capabilities they will and won't have. In the case of our language model, we trained a big model with a single purpose: predict the next word in the sentence. And then when we analyzed the model we discovered, "Oh, it can do summarization." If you just write a load of text, put "TL;DR:" after it and ask for a completion, it will give you a summary.
Jack Clark: Similarly, it can do like English and French translation and other things. We found that out by training a thing and then going and looking at it and sort of prodding it. And so if you’re in the world where your way to understand future capabilities for long-term powerful systems is one where you need to like poke and prod them, you really want safety and policy to be integrated into the poking and prodding of what could be some kind of proto-super intelligence. So that would be the rough notion for how OpenAI makes sure this goes well and makes sure that OpenAI is a responsible actor.
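The "TL;DR:" trick Jack describes can be sketched as a simple prompt pattern. Here `complete` is a hypothetical stand-in for any text-completion function, not OpenAI's actual API; this only illustrates the idea that a next-word predictor can be steered toward summarization by the cue it is given.

```python
# Sketch of the "TL;DR:" prompting trick: the model only ever predicts
# the next words, but appending "TL;DR:" to a passage steers those next
# words toward a summary. `complete` is a hypothetical stand-in for any
# text-completion function.

def make_tldr_prompt(passage: str) -> str:
    """Append the summarization cue to a passage."""
    return passage.rstrip() + "\nTL;DR:"

def summarize(passage: str, complete) -> str:
    """Return the first line the completion function predicts after 'TL;DR:'."""
    continuation = complete(make_tldr_prompt(passage))
    return continuation.strip().splitlines()[0]
```

With a real model, `complete` would wrap a call that returns the model's continuation of the prompt; the translation behavior Jack mentions would presumably use an analogous cue, though the exact prompt isn't specified here.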
Amanda Askell: Yeah, I think I would second that and agree that one of the ways that we’re trying to tackle this is by heavily integrating these three fields and in many ways I’m kind of sympathetic to people who think it’s unfortunate that we have terms like safety and possibly also terms like policy because in almost any other discipline, this would just be part of the task of building the thing. And so in some ways I think that we’re just trying to exemplify this norm of when you’re building AI systems, you’re trying to build things that help people and are beneficial, and that means that at kind of every stage of development you should be thinking through both the social implications that your system has if you were to release it and what forms you release it, and also safety implications that it has in making sure that you have a way of verifying that your system is not going to do unintended harm.
Amanda Askell: I think Stuart Russell has a quote on this, where he says: we don't call it building bridges that don't fall down, we just call it building bridges. And so I think it's really important to try and bring these three fields together. And we're doing that, and obviously we're also just doing further work in AI safety, which is hopefully going to be useful, and AI policy work that's also hopefully going to be useful, both within OpenAI and beyond.
Miles Brundage: In terms of like super high level framings of the problem, I sometimes think of it in terms of “figure out what steps different actors need to take and then figure out how to get them to take those steps”. And I think a lot of safety and ethical issues fall into the first bucket and then a lot of game-theoretic and economic and legal issues fall into the second. But that’s obviously a very rough rubric.
Robert Wiblin: What is the ideal vision of like how AI progresses and OpenAI’s role in making it go well over a period of decades?
Jack Clark: I guess my story here is that we help a bunch of other AI organizations coordinate and figure out what information they need to share with each other, and also the processes for sharing increasingly sensitive information with each other. While doing that, we continue to stay at the leading edge in terms of capabilities and safety and policy. And then we're able to use what we've learned either to ensure that we ourselves, as a main actor, coordinate with others to build a safe system and disperse the benefits to humanity. Or, if for whatever reason, given the dynamics of the landscape, we are not the main actor here, we are able to help that main actor make more correct decisions than if we had not existed.
Miles Brundage: Yeah, so I don’t have a very concrete preferred scenario. And partly because I think as I was saying earlier, it depends what actions others take. I mean maybe you could say, okay, here’s a globally optimal scenario. But I think it’s more useful pragmatically for us to have a sort of self-centered view of: what does this mean for us? What actions do we take given this global picture? And I think from that perspective it’s more important to be robust than optimal. And so I think I’m less interested in working backwards from a perfect solution and more interested in what are the steps along the way that we could take to marginally move things in the direction of greater trust between actors, greater awareness of relevant safety techniques, et cetera. So on the margin what can be done at every point in time.
Amanda Askell: Yeah, I think I would agree with that. There are just so many levers to make sure that things go well here: there are private companies, there are governments, there are already existing institutions like law. And working to improve those, to make sure that at each step of the way they're responding to technological improvements, is really important. I do think that in some ways, I felt like the question was something like, what is a really good way of this going, or something like that. And I do want to-
Robert Wiblin: Tell me a beautiful story. I think it's going to go well.
Amanda Askell: Tell you a beautiful story. In some ways I just think that it's easy to be overly pessimistic, and I don't want people to be. I'm very optimistic and excited about technological development. And I think if it's done well, it can be extremely beneficial. We have a lot of huge problems in the world right now that I think advanced technologies could really help with. I'm actually optimistic about it really improving things like reducing global poverty and improving health outcomes. I would love to see increasing amounts of this being used to cure diseases that we couldn't cure before, et cetera.
Amanda Askell: I think the kind of beautiful story is one where it’s like, we’ll take a lot of the problems that we currently have that we could just solve if we could put more time into them, for example, and then have a system that can in fact just like process more information and can in fact put more time into it and can like take in a bunch of medical images and can give you a really accurate diagnosis. And I’m like, that’s an exciting world to me. I think a world where you have pretty robustly safe systems doing these things could in fact be like a really wonderful world in which we really solve many outstanding problems. So maybe I’m giving a really optimistic view of the future of, you know, we have no poverty and we’re all very healthy and happy.
How to avoid making things worse
Robert Wiblin: At 80,000 hours we think of AI policy and strategy as one of the areas where it’s unusually easy perhaps to cause harm, to make things worse by saying or doing the wrong thing. What are some of the potential ways you think OpenAI could make things worse and how do you try to anticipate that and avoid that happening?
Jack Clark: We can make people race on capabilities. An inherent challenge that we have, but I think most AI people have, is that we get to create futures ahead of other organizations and other actors like governments, and we get to see those futures and see the upsides and downsides and then what we communicate about that will have huge effects on what these people choose to do. And it’s a domain where you get very little information if what you did was really bad, ’cause really bad in this world usually corresponds to a classified budget massively growing in size. That’s necessarily something that is hard to get evidence about from where I’m sitting. I think that’s one.
Jack Clark: I think the other is that you could misjudge the types of coordination actions that the community actually wants to take in practice. You could contrive a load of coordination activities which everyone sort of goes along with up until the point when you get to hard decisions, and then those coordination mechanisms might have some flaw which would not have been clear to most people in the community. The moment it becomes clear, everyone defaults to less communication with each other and less coordination. I think those are two of the things. But I'm curious to hear what you think, Amanda?
Amanda Askell: Yeah. I think the first point that you make was one that I was like quite focused on or interested in where, business as usual, in a lot of domains, you have lots of competing actors and if you’re just like an additional actor in that space, a worry that you might have is just that you increase the chance that people are going to try to develop capabilities faster because the idea is that the goal is to sort of outcompete other people who are within the same domain or producing similar systems. I think that one way that we can try and mitigate that a little bit is by spreading a view of this entire discipline as one where we often have shared goals with other organizations. I don’t think the goal is something like “have your organization be the first to do some specific task”, rather, there’s this shared goal of creating really good advanced technologies.
Amanda Askell: That means that you shouldn’t necessarily see any other actors as competitors but rather like similarly working towards that goal. And so yeah, it’s difficult, where I’m like, the potential for increasing the chance of racing is something that does worry me and I like hope that we can help mitigate that by kind of really focusing on that kind of mindset. On the second point, I do think that another way that you can end up doing more harm without expecting to is failing to design these mechanisms that actually work when push comes to shove or just failing to anticipate scenarios where actually these mechanisms break down. And there are lots of scenarios that I can think of where the things that we’re building, you know, might just end up kind of not working. And if you haven’t thought of sufficiently many scenarios, that’s like one example.
Amanda Askell: I think another key thing in policy that it's important to think about is basically how good the mechanisms or actions you are recommending are across a wide variety of possible outcomes. It's really easy for people to think through something like, well, what is the perfect outcome? And then to work backwards from that, and to build mechanisms that are what they see as the clearest path to the perfect outcome. When in reality, because there's so much uncertainty and so many ways that things can go, you instead have to think about all of these distributions of outcomes, and things that do pretty well in most of those worlds, rather than what would get us to the perfect world if things happen as we think they will. And that's a way in which you can end up recommending brittle mechanisms that do fail. I think that's another thing.
Amanda Askell: Then a final way that any organization can end up harming others or doing unintentional harm is just by making a mistake in a judgment call. You know, we are thinking about things like publication norms with the recent language work, and there just isn't a huge template for what you do here. It's really easy to unintentionally make mistakes, or just make the slightly wrong call on what you do or what you release. There's not necessarily a lot you can do about that if you're trying to be as responsible as possible, but it is a possibility here that people have to be aware of.
Miles Brundage: Just one follow up point on there not being a blueprint. I think that’s like a super important and underappreciated point that I think a lot of people say, “Okay, well why don’t you just do this thing, like they did in nuclear policy or whatever.” And I think there’s a ton of value of using analogies for inspiration and to make you realize a variable that you hadn’t considered or to get you to think more creatively about what’s possible. But I think that you quickly run into limits in terms of what you can get from these analogies in any particular case.
Miles Brundage: I think one way in which we probably erred in the malicious use of AI report is thinking that we had more to learn from other fields. It’s not to say that there isn’t something to learn but you quickly reach diminishing returns and have to make a context-based decision about in this particular domain what are the misuse risks and what are the relative capabilities of different actors and so forth. So it’s not clear that an influenza virus case study from like five years ago tells us that much.
Robert Wiblin: I imagine that quite a number of listeners might end up going into this field. Do you want to comment on how cautious it's appropriate for people to be? You seem to be pointing out that people working in AI strategy and policy are generally quite cautious about what they publish, especially I guess at this early stage, 'cause you don't want to frame things incorrectly. Should other young people entering this area generally be very cautious, and always be trying to get other people to check what they're doing?
Jack Clark: I think that one thing that people in this field don’t do enough of is calibrate their model for what they shouldn’t say against some of the constituents that they really care about. It’s quite common that I see people presume a level of attention, competence and awareness in government, which I know does not exist for some governments and it conditions the model that people have. My experience has been going and talking to people. Like with this language model, we had a lot of questions of what the government reaction would be. So we just talked to a bunch of people and connected to a bunch of governments and asked them for opinions about it in a way that did not leak information about the precise techniques of the model, but let them experience it.
Jack Clark: And I think that that gave us a better calibration as to where we thought the threat was. Now obviously we could have got this horribly wrong. I may get out of this podcast to find some very exciting email that makes me turn a pale shade of white, hopefully not. And I think that being cautious is sensible when you aren't situated in the world, but everyone can get situated, because everyone knows someone who knows someone who's involved in a government or an intelligence agency or something like that, and can kind of ask some questions.
Amanda Askell: Yes. I think in one sense being cautious is checking with other people that the work that you're doing is good and useful, and in that sense I think it's good to be cautious. In fact, most people just should be cautious. I think another sense of being cautious here that I want to highlight is something like, "Well, should I just not go into this field, because look at all of the potential for harm that I could do? What if I could go into this other field where it's much more guaranteed that I will do some good, even if I won't do as much good?" And I think that if you think that you're going to have valuable contributions, and you're going to be able to identify if and when anything that you're doing is harmful, then it's better to just go with the kind of expected impact that you have, even though that can be psychologically difficult.
Amanda Askell: It's really easy for people to just adopt this kind of harm-avoidance strategy. And it's worth realizing that that strategy can be ineffective, so you just have a lower impact than you wanted to have, and in some cases it can even be negative. In many cases, saying something, even though it has some potential for harm and ideally a large potential for good, can be better than saying nothing. Saying nothing can often actually be itself harmful, and I think people can sort of forget that when they're thinking about how cautious to be, both in terms of what they're saying and the work that they're producing, and also just with their careers generally and what they want to do. Be careful. But yeah, I think that would be my general advice.
Miles Brundage: One general comment on risk aversion and putting out work that's not finished: I think that some people in the EA community are a bit overly risk averse when it comes to sharing their views on these topics, both in terms of talking about AI timelines and scenarios and stuff like that. I think often people overestimate how controversial or important their view is; that's one point. And the other thing is risk aversion in the context of publishing. There are a lot of people moving into the AI policy area, and not all of them have the same goals and quality standards as we do.
Miles Brundage: That doesn’t mean that we should lower ourselves to their level, but I think that raises questions about what the optimal explore-exploit ratio is in the AI community or in the portion of the AI policy community that’s concerned with the long term, because we don’t want to never publish anything and then have the conversation totally be dominated by people with low intellectual standards. But nor do we want to put out bad work. So I don’t know exactly how to address that.
Robert Wiblin: Especially people who are concerned about the long term of AI are becoming fairly cautious about what they publish. And I suppose this is true both in the policy and strategy crowd, and increasingly among the technical crowd as well. People are cautious about what code they're publishing because they're worried about how it might be misused. And I guess we're seeing that today: OpenAI doesn't want to publish the full code for this algorithm for producing seemingly realistic text, at least not yet. I guess you want to think about it more before you put it out, 'cause you can't withdraw it. Amanda, do you want to comment on the trade-off between making things too secret, versus everyone running their mouth and being too dangerous?
Amanda Askell: Yeah. I think this is an area where it's really easy to see the potential harms from some publication or something that you're working on, or just some thoughts that you have, and to think that the thing to do is just to close up and say nothing, and that that's going to be the best way to go about things. I do think that there's a danger here of making it seem like this is a field or a domain that's shrouded in secrecy, or where lots of things are happening behind closed doors that we don't know about, when that may not actually be the case.
Amanda Askell: Lots of the problems that we're dealing with and that we're working on are just out in the open. People are talking about them, and it's completely fine for that to be the case. So basically the problem that you have is trying to find the balance between these two things: being responsible about releasing information that you think could be used maliciously, but also not saying absolutely nothing, in a way that can also be quite harmful.
Amanda Askell: It's really important that this is a field that is credible and trustworthy and honest, and where people have some faith that you are making a genuine effort to evaluate the kinds of things that you are saying. And they sort of trust the underlying mechanisms to be ones where you understand the value of openness, and are weighing that against these other considerations, rather than thinking that you're an actor, or a person, who's just unwilling to say anything, or unwilling to say anything honest.
Amanda Askell: That would be extremely harmful, and so yes, striking that balance is really hard, but it's also really important, relative to doing something like completely shutting down, being overly cautious, and saying nothing.
Jack Clark: When we think about publication norms, one of the things I think about, and I don’t know if this is that widely known about me, but I spent many years as a professional investigative journalist. When you do that type of journalism, you have this philosophy of find the thing that no one talks about publicly, but everyone talks about privately and publish some kind of story that relates to that thing. That’s just the way that you do the job. I think that it’s weirdly similar in policy for certain things.
Jack Clark: Something that I’ve been hearing among policymakers for a long time now is, in public, tech companies say everything’s great, lah, lah, lah, lah, lah. Aren’t we having a good time? And then in private they’ll say to regulators, we really need actual legislation around say facial recognition, because we are selling all of this stuff and we know that it’s being used to do things that make our employees uncomfortable.
Jack Clark: It’s difficult for us to restrict ourselves. We would like there to be a conversation about norms and potential regulations that wasn’t just us having it. And I think this is just a general case in AI where you talk to AI researchers privately and if they work with very, very large models, they’ll say, “Yes from time to time I find this stuff a little perturbing, or from time to time I think about how good this stuff is getting, and I wonder about the implications.”
Jack Clark: In public there’s been a lot of challenges associated with talking about these downsides because you don’t want to cause another AI winter. You don’t want to be seen as being Chicken Little, and saying “the sky is falling” when it may not be and you don’t want to, if you’re a corporate researcher, run afoul of your company’s communication sort of policies, which will typically encourage you or force you to avoid talking about downsides.
Jack Clark: With our work here on language models, and on publication norms in general, the idea is to go and get that conversation that we know is happening privately and force it into a public domain, by doing something that invites those people to debate it, and invites, frankly, everyone who has different opinions here to now have a case study that they can talk about. And frankly, I would be excited if all that came out of this was the whole community having a discussion and saying we were ever so slightly too extreme in this instance.
Jack Clark: That would actually be a good thing because it would have helped calibrate the whole community around what it really thinks and would give us loads of evidence. Now my expectation is what might happen is people will talk about what we’ve done and then when they do their own releases, will maybe be able to do their own forms of release experiment while pointing to us as the person that sort of went first and maybe we de-risked that for them.
Amanda Askell: I think also improving the conversation around publication norms, so that it’s no longer one where either you’re completely in favor of everything being open source, or you’re completely closed and you don’t see any of the benefits of openness. I think it’s about showing that we as an organization are sensitive to all of the upsides of openness in research: it pushes forward scientific boundaries, it gets more people into the field, it allows people to rerun your experiments. We’re really sensitive to the fact that there are lots of extreme benefits to being really open with your research.
Amanda Askell: And then you just have to weigh that against the potential misuse, unintended side effects, or bad social impacts of what you’re doing. Ideally you move the conversation to one where it’s not that you have to be completely for or against complete openness, but one where there are just pros and cons, considerations for and against, and it’s fine to have a position that’s somewhere in the middle and to favor something like responsible publication. Finding out exactly where the sweet spot is there is quite difficult, but I think important.
Robert Wiblin: My impression from outside is that AI, as a technical field, is extremely in favor of publishing results and sharing code so that other people can replicate what you do. Is this one of the first cases where people have published a result and not released the code that would allow people to replicate it? I guess this sounds like perhaps you are trying to set an example that will encourage other people to think more carefully about this in future?
Miles Brundage: This is not the first case in which people haven’t published all of their results and model and code and so forth. What’s different is that the decision was A) made explicitly on the basis of these misuse considerations, and B) communicated in a transparent way that was aimed at fostering debate. So it isn’t that no one has ever worried about the social consequences of publishing before, but we took the additional step of trying to establish it as an explicit norm at AI labs.
Jack Clark: Yeah. The way I think of it is that we built a lawn mower. We know that some percentage of the time this lawn mower drops a bit of oil on the lawn which you don’t want to happen. Now, most lawn mower manufacturers would not lead their press strategy with: we’ve made a slightly leaky mower. That’s sort of what we did here, and I think the idea is to see what happens if you talk to the press about that aspect, ’cause we know that they think that aspect is undercovered. So if we can go over to them and say, we know that you think this aspect is undercovered, here’s a way for you to talk about it and here’s a way for you to talk to a character that’s now animating that issue for you, maybe we can get a better discussion to come out the other side. That’s the hypothesis.
Amanda Askell: Yeah. And I think that one thing that’s worth noting is it’s important to be as honest as you can be in this domain and just in life in general. I think here, honesty is what we’ve kind of aimed for in that we’re saying like we don’t feel comfortable releasing the model, but we’re telling you that. I think that’s also important. It’s not something where we’re trying to actively deceive people or we’re trying to be more closed. I think one way in which you can be honest, is just telling people what your intentions are, why you’re thinking about it and how you’re thinking about it, so that even if they disagree, they can see your decision process roughly and why you’re doing what you’re doing and I think that’s important.
Robert Wiblin: I read in one of the documents on your website, possibly it was the charter, that you are open to basically closing things up if that seems like the safe approach; that we might want to get to a future where most AI labs are just not publishing most of the techniques they have available, because they’re concerned about how they’re going to get applied. What do you think are the odds that that would be a good future? One where, on the technical side, it’s mostly only experts at kind of vetted labs who are being given the cutting-edge techniques?
Jack Clark: I think you’re going to have a chunk of AI research that goes a bit quiet just because it has to. There’s a good reason today why it’s relatively hard to read decent papers about gain-of-function research, right? It’s relatively hard for me to work out how to make a flu virus 10 times worse. And I think there’s a good reason for that. You don’t want it to be easy for people to acquire knowledge like that. The challenge of AI is going to be drawing that box around the stuff that you want to be kind of a bit more private.
Jack Clark: That needs to be the smallest possible box or else you kind of take out chunks of science and the shared progress with it. I think some of the ways to get more information here is to have more organizations just do things like this where we actually tag it publicly as, we’re doing this release thing and part of why we’re doing it is to see how people react and get more evidence. The other thing is to have governments be slightly more involved in thinking about this in the same way that governments got quite involved in stuff like gain-of-function research, stuff like CRISPR, stuff like nuclear technology, because they saw it as being critical and having potentially critical knock-on effects. You can expect similar communities of shared concern to form here.
Preventing arms races
Robert Wiblin: Okay. I want to talk for a little bit about arms races and how to prevent them. I recently read an article from the Center for a New American Security called “Understanding China’s AI Strategy”, which seemed to indicate both that China is quite serious about investing heavily in reaching the cutting edge of machine learning, but also that some people in the Chinese government were making noises that they were quite concerned about arms races between China and other countries or other labs. Where do we stand in terms of arms races between countries or different AI labs? Are people managing to coordinate to keep it under control and keep safety a significant focus?
Miles Brundage: One view I have, which is not widely shared, is that there is not nearly as much “arms race” behavior as one might expect. First of all, what do I mean by that? I mean that, for example, if the US were trying to be as competitive as possible in AI, it would be doing something about its immigration system. But it’s not, because it can’t, because there are political constraints. I think that’s a sign that not all actors are unified, coherent utility maximizers, and that it’s important not to overgeneralize about what different actors are doing or thinking.
Amanda Askell: Yeah. And I think one thing that worries me is when people take the analogy of arms races between states and apply it over to private developers. At the moment, a lot of AI development is not happening at the state level; it’s happening at private companies.
Amanda Askell: And I think people can take this model of kind of an adversarial race between different states over militaristic AI and just say, “Well something very similar must be happening in the private domain.”
Amanda Askell: When actually, development races happen all the time. You get kind of R&D races happening within a given domain. When people are making similar systems, these races don’t have to be races to the bottom. People have worried about the prospect of something like a race to the bottom on safety with AI, where people don’t try and constrain their development in a way that ensures that the system is like safe and beneficial.
Amanda Askell: And it’s just very unclear to me that that’s how an AI development race in the private domain will or should play out. I think instead you can have collaborative norms between private developers that ensure that, although you’re all racing to develop a given system, you mutually make sure that the systems you’re developing have a really good impact on the world, and that you’ve tested them to show that they’re safe and secure.
Amanda Askell: And a lot of races in the private domain have that structure. People generally don’t race to build a plane as quickly as possible. They have extremely rigorous safety standards on what they build. Even though you’ve got different airline companies competing with one another, you’re not seeing something like a terrible race to the bottom on safety in those domains.
Amanda Askell: I don’t see strong reasons to be pessimistic about that occurring in the AI domain. So, I’m really interested in this worry, but I do think that in many ways, we can’t take this analogy of historical arms races and necessarily apply it either to military AI and states or to like private AI developers, in the way that people seem to sometimes kind of quickly do or quickly think that we can.
Robert Wiblin: Amanda, it seems like one of your main research focuses is races and, I guess, the game theory of arms races. Do you want to talk about what you’re looking into and what findings there are? Though I guess it’s early days.
Amanda Askell: Yeah. So, I think both Miles and I have been thinking about this for a while. I think that it can be very easy to look at a development race and think that you’re looking at something highly adversarial, where both parties just want to win at kind of any cost, where they’re willing to make any trade off.
Amanda Askell: Game theory can be kind of useful here. It can sometimes be a little bit idealizing, but you can learn some really important lessons there, of the form:
Amanda Askell: Hey, look at this payoff structure that is the one that you’re kind of assuming when you assume that a race is adversarial like that. I can make really mild adjustments to that payoff structure, and I still get a race, but I get one that’s much more collaborative, and where the outcome that I expect is much, much better.
Amanda Askell: And so you might superficially look at scenario and think, “Oh gosh, that looks really adversarial.” But when I actually look at the nature of the payoffs, I realize, “Oh, I should actually just expect for people to work together to make sure that systems are safe here.”
Amanda Askell: And so, I’ve been trying to identify some of the key properties that races have that would mean they are more collaborative, and also some of the key properties of the world. Like, can people bargain, or can they communicate with one another? And I think some of the features that I anticipate occurring in AI development are really conducive towards collaborative races, more than adversarial races.
Amanda Askell: So, examples are: if a technology has high upsides that are shared, that is something that decreases adversarialism. If the downsides of developing something that’s unsafe are also very high and shared, that’s another feature that increases the chance that you would have collaborative races.
Amanda Askell: And I think we see a lot of these features in AI development, and also we have a lot of the mechanisms that make collaboration easier. So even if you were to worry that something was going to be adversarial, it can actually end up being the case that you have mechanisms just to make sure that that doesn’t happen. And often these mechanisms have been successful in other domains. So, I don’t see a reason to not also be optimistic about trying to apply them here.
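The payoff-structure point Amanda is making can be illustrated with a toy model. The sketch below is a hypothetical illustration only: the 2x2 framing and the specific payoff numbers are assumptions for exposition, not anything from the conversation. It compares a prisoner’s-dilemma-style race, where cutting corners on safety dominates, with a mildly adjusted “stag hunt” version in which mutual investment in safety becomes a stable equilibrium:

```python
# Toy 2x2 development-race games (illustrative numbers only).
# Strategies: "C" = invest in safety, "D" = cut corners.
# payoffs[(row, col)] = (row player's payoff, column player's payoff)

def nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2x2 game."""
    strategies = ["C", "D"]
    equilibria = []
    for r in strategies:
        for c in strategies:
            # Neither player can gain by unilaterally switching strategy.
            row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0]
                           for alt in strategies)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1]
                           for alt in strategies)
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# "Adversarial" race: cutting corners strictly dominates,
# so the only equilibrium is mutual corner-cutting.
adversarial = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}

# Mild adjustment: a shared downside from unsafe systems lowers the
# payoff to unilateral corner-cutting (4 -> 2), turning the game into
# a stag hunt where mutual safety investment is also an equilibrium.
collaborative = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 2),
    ("D", "C"): (2, 0), ("D", "D"): (1, 1),
}

print(nash_equilibria(adversarial))    # only mutual corner-cutting
print(nash_equilibria(collaborative))  # mutual safety is now an equilibrium too
```

The only change between the two games is the payoff to unilaterally cutting corners, which is the kind of “really mild adjustment” described above: the situation still looks like a race, but mutual safety investment becomes self-enforcing.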
Robert Wiblin: What kinds of mechanisms are you thinking of?
Miles Brundage: Yeah, so we’re actually running a workshop in April on this topic, trying to take a broader perspective than just AI. While there are AI-specific issues when thinking about arms races, cooperation, and so forth, it’s also important to look at lessons from, say, nuclear arms control, where they’ve invested a ton of money, time, and expertise into coming up with ways to build trust between adversarial parties and work out monitoring systems, interviews, export controls, satellites, and so forth.
Miles Brundage: I think we’re at an early stage in terms of understanding what the analogous tools would be in the context of AI, but broadly, I think there are a few buckets. There are software-related tools: interpretability, verification, encryption to allow decentralized access and deal with privacy issues, et cetera.
Miles Brundage: Then there’s sort of a hardware bucket of monitoring computing capacity, and who has access to the most powerful inputs which might be computing power itself, or things like data, and so forth.
Miles Brundage: Then finally, there’s institutional tools and mechanisms that involve building trust between individuals, as well as organizations, setting up incentives so that people, even if they can’t see exactly what’s going on in the latest AI system, they are confident that the incentives of the people at the organization are aligned in the right sort of way.
Amanda Askell: The way I think about it is, that there are sort of at least a couple of types of interventions here. On the one hand, there are ways of increasing trust between different parties because if you’re more confident that other people are going to act well, this just gives you a greater incentive to act well.
Amanda Askell: I guess another set of interventions actually just change the payoff structure in a way that makes it more conducive to cooperation. Those are a little bit harder to have any kind of influence over, but I think those are the kinds of things that you see happening when instead of just having trust building between different organizations, you have for example, an ombudsperson that you can go to.
Amanda Askell: Or, if you have a regulation that just says you can’t, in fact, release a system until you’ve established to this degree that is safe and secure. And so, I think we’re focusing a lot on increasing confidence that everyone is going to coordinate to avoid really bad outcomes, but also there are probably other interventions that are going to arise in the future that are actually just trying to change the pay-off structure in a way that really prevents it from being in anyone’s interest to develop unsafe technology.
Robert Wiblin: Yeah. So as I understand it, OpenAI has written into its charter that if there’s another organization that’s getting close to developing a very advanced AI then you won’t race with them. You’ll just try to basically join them and help them to succeed, which I guess is like a mechanism that you have to try to avoid spurring competition and an arms race.
Robert Wiblin: Do you think that that is enough to make sure that OpenAI doesn’t encourage competitive behavior, and are there other concrete kinds of steps or concrete things that need to happen in order to just make arms races less likely?
Jack Clark: I think there’s a shared field of information hazards and evaluating releases in themselves that I’d expect to lead to a community of shared concern forming between OpenAI and other AI research organizations.
Jack Clark: So, I’m pretty optimistic that we can get to a world where we can host a workshop in a year or two on issues x, y, z in AI research. How do we optimize publication for scientific progression while maximizing safety? Like I feel like that’s just a community that can be formed and those are the sorts of concrete things we can do along the way to help us.
Jack Clark: It’s hard to talk about super specific steps with regard to something with such uncertain timelines, but the other thing you can imagine us doing, which the policy team already is beginning to do here, is creating real controls over the organization of OpenAI itself that actually apply.
Jack Clark: They apply within the bureaucracy. They’re integrated up into board level decisions. They placed constraints on other teams. I think we view that and getting that to work as a research project in itself, because if we can’t get a hundred or 120 people to agree on certain constraints being placed on their own private development, that would be poor evidence for us being able to get the entire world to do that.
Amanda Askell: I think there are other mechanisms that will hopefully help with this. I think generally having organizations that are pretty mission focused, where their mission is good and is about creating beneficial AI is extremely helpful because it means that as someone at that organization, you don’t necessarily see your role as being just promoting the interests of the organization, but rather promoting the interests of its mission generally.
Amanda Askell: That means working with other organizations and people at other organizations in so far as it pushes forward like the mission of the organization.
Robert Wiblin: That’s going to be a little bit hard if you have, say, a public company whose goal is to make a lot of money. It’s kind of written into their charter that their goal isn’t just to benefit everyone. Although, I suppose, perhaps they could have a vote of the board to say, “No, actually, we’re going to have a different goal for this project.”
Amanda Askell: Ideally, often, these things won’t necessarily conflict. If your goal is to make sure that systems are safe and beneficial, it’s generally in the interest of most developers to ensure that that’s the case.
Amanda Askell: So, I like to think that when it comes to ensuring positive outcomes, collaboration between people and groups that is conducive to that is basically going to be in the interest of all organizations involved. Maybe that’s overly optimistic, but that’s my view.
Robert Wiblin: Do you think also about the kind of global cooperation, international cooperation and how to avoid militarization of AI? Or, is that kind of getting a bit beyond the remit of OpenAI?
Miles Brundage: It’s definitely in the remit. I mean, we want to make sure that AI benefits all of humanity. In terms of what the right units or level of analysis is, is it countries, or companies or public-private partnerships or individuals?
Miles Brundage: I think that’s sort of uncertain and partially something we can influence. Like, if we wanted the government to be more involved in a certain way, then developed a consensus around it, we could do that. So, I think it’s important to be aware that who the actors are is something we can influence, as one point.
Miles Brundage: Second point to consider, is that we don’t really know to what extent the mechanisms of trust building will vary depending on who the actors are. So at this stage, I have for the purpose of planning my research, I’m trying to keep as much option value as possible in terms of, am I thinking about militaries interacting with militaries, or militaries interacting with companies and so forth.
Miles Brundage: And look at it more from a mechanism perspective. I think that’s sort of why my and Amanda’s research is complementary, because Amanda is thinking more in terms of interests and incentives, and I’m thinking about tools that could potentially intervene on those incentives, and I think it’s like not obvious in advance of how those map onto each other.
Militaries and AI
Jack Clark: Can I say something counterintuitive and odd, which might make Miles or Amanda, or both of them, slightly mad at me?
Jack Clark: I think that we don’t talk enough about militaries and AI. We talk about militaries and AI in terms of value judgments about what we do or don’t want militaries to do with AI. I’m interested in organizations like OpenAI and others talking about how when militaries eventually decide to do something, the AI community is in a position to make whatever it is that those militaries do safe, in a way that actually makes sense to the militaries.
Jack Clark: I think that we are currently potentially underinvesting in that, out of a reasonable hypothesis that if we are going to talk to militaries, there’s a lot of information that could leak over to them, and there’s a lot of information hazard there.
Jack Clark: I recognize those concerns, but I think if we basically just never talked to the militaries, essentially treat the militaries like an adversary in their own right, then when the time comes that the U.S. and China are creating highly autonomous drone swarms in the South China Sea.
Jack Clark: And we have certain ideas around safety that we really think they should embed into those platforms, they may not listen to us, and that would actually be uniquely destabilizing and maybe one of the places where these arms race dynamics could rapidly crystallize. So, I think it’s important for people to bear in mind that there are already kind of stigmas emerging in the AI community about what is and isn’t acceptable speech with regard to AI policy and AI actors.
Jack Clark: We should just be cognizant of that and try to talk about all of the different actors inclusively.
Robert Wiblin: I know that some people are a little bit freaked out that getting AI military systems in use could make states more willing to engage in attacks because it would mean that it’s only kind of machines that are getting destroyed, or equipment that’s getting destroyed, rather than people dying.
Robert Wiblin: Do you think that’s a realistic possibility? To be honest, that doesn’t sound so intuitive to me, ’cause I don’t think that’s the main reason that states don’t attack one another. But...
Jack Clark: I mean, more people die because AK-47s exist. I think there’s actually good evidence that if you make stuff to hurt other people, really, really, really, really cheap, more people end up using it. I think that that to some extent is separate to these specific worries about AI.
Jack Clark: Like for me, what’s called a lethal autonomous weapon, it sits on a spectrum from today’s technology and extends into the future. There isn’t really a hard and fast point at which it becomes an AI weapon, as opposed to just a military device that’s got iteratively better and more independent.
Jack Clark: By some definition, landmines are kind of like a lethal autonomous weapons in a sense that they’re autonomous, lethal, and they’re a weapon, but they’re not AI, are they?
Jack Clark: And so, I think that because we do not want to hurt other people, I think the issue of militaries using AI is kind of very hard to think clearly about, because your main thoughts are “That’s just going to hurt other people.” And I think it sort of breaks your ability to think in that domain. I don’t have a super constructed point to make here because it’s broken my own thinking to think in this domain.
Amanda Askell: For my part, I just think that a lot of these things are just empirical questions to some degree. There are lots of areas where people can work here and one of them is trying to assess how these things are going to influence military outcomes in potentially unanticipated ways.
Amanda Askell: Like, what does the effect of uncertainty have on likeliness of going to war? How much do you expect future wars to be ones in which it’s easier to verify whether one side will win or not? And does that expectation decrease the amount of war that you have? Is it the case that if you expect to have fewer casualties because you have more autonomous weapons, that you’ll see like a reduction in the amounts of wars that you have?
Amanda Askell: Because this is in fact currently something that massively prevents countries from going to war. I think these are kind of open questions that one has to just kind of approach, looking at the current evidence that we have. I think what Jack said is also like very true, but for this, my kind of overall summary is like it is an open question.
Robert Wiblin: Ultimately, our goal is to get lots of great colleagues for you three; I guess, either at OpenAI or in this broader AI policy and strategy ecosystem. So, we want to spend, yeah, the last perhaps 40 minutes that we have here trying to basically offer advice to our listeners who might be able to usefully contribute to this whole field.
Jack Clark: I’m going to make a very specific plug, which is incredibly biased. So, I’ll lay out the bias and I’ll describe the plug. So, the bias is, in my spare time I write an AI newsletter called Import AI. So, I am biased towards newsletters being useful.
Jack Clark: So, there’s my bias. The plug is, but we’ve seen a number of people start to write policy-specific AI newsletters. There’s a newsletter on European AI from Charlotte Stix. There’s a newsletter on Indian AI from someone at the Berkman Institute at Harvard that’s just started.
Jack Clark: My colleague, Matthew van der Merwe, who I believe is at the Future of Humanity Institute, writes a policy section within my newsletter, Import AI.
Jack Clark: I know that any congressional staffer, any staffer for any politician in any country I’ve been to, has actually made mention of needing more materials to read to get them into AI and AI policy. And I think this is a very high leverage area, where if you are interested in AI policy, just trying to produce something that’s useful to those people, which summarizes AI and its relevance to policy within a specific tightly scoped domain will not only give you the ability to calibrate your own thinking and generate evidence, but it will allow you to make friends with the very people you may want to work with.
Jack Clark: I think it’s unbelievably useful and high leverage and everyone should do this more.
Robert Wiblin: So, the cutting edge is email.
Jack Clark: In the glorious AI future, the cutting edge is text based emails that have no images in them. Yes.
Robert Wiblin: I guess maybe you’ll be able to get your AI systems to produce these summaries rather than having to write them yourself.
Jack Clark: We’ve tried that, on the newsletter. The bad issue here is that it’s entirely wrong, so I think that makes it meaningfully worse than what I write. But there may be a crossover point.
Amanda Askell: Yeah, it’s not clear how useful fictional newsletters are. If you want to write your own fictional newsletter-
Miles Brundage: [crosstalk 01:50:39] on Earth Two.
Jack Clark: I would read the Earth Two newsletter. The Alt-AI newsletter, and it’s just parallel... we should do this. We should do parallel-universe reports.
Amanda Askell: And just combine it with existing results. So it’s like the unicorns have now developed AI.
Robert Wiblin: As you can see, everyone, you should join this field ’cause they have great banter.
Robert Wiblin: So, Amanda, last time I spoke to you, which I guess was not even six months ago, you were just finishing your Ph.D. in philosophy, doing infinite ethics, and now you’re pretty much at the center of this field of research.
Robert Wiblin: Do you want to talk a bit about your journey into getting into, AI strategy and how you’re finding it, and how you did the transition?
Amanda Askell: Yeah. So, I had been working on some questions relating to AI policy before I finished my Ph.D. because it was an area of interest. I then took a kind of opportunity to do a project at OpenAI on these topics and use that as time to sort of learn a lot more about this space, and about related fields, and then hopefully, did some useful work. And, hopefully will do further useful work in the future.
Amanda Askell: I basically found that a lot of the problems here are extremely important. They require a certain willingness to investigate multiple fields, because there isn’t an existing literature that’s huge and can really hold your hand through everything.
Amanda Askell: And so I saw that as something that was like quite interesting and exciting, because I just like learning about new things but it does mean that for a lot of people, I think it can be a little bit overwhelming. I think honestly takes a long time before it stops being kind of overwhelming.
Amanda Askell: The amount of additional information you need to understand and know to make progress in some of these questions. It can feel kind of like you’re trying to quickly get a Ph.D. in seven different subjects, because they’re all relevant to what you’re doing.
Amanda Askell: So, you’re like, “Yeah, I need to know game theory, I need to understand international relations, I need to know how businesses work. I need to understand technical AI.”
Amanda Askell: And so that’s the challenge, and I suppose like that’s mainly what I’vebeen doing, is both working in this area and also trying to like acquire those seven Ph.Ds.
Jack Clark: And to add to Amanda’s point, we do have questions here like, “Who’s the best economist who knows about game theory and international relations we could get to read our paper?” Or I’ll come back from Washington and be like, “Who’s the best person we can talk to about CRISPR, or biomedicine, or the horsepox virus?” You get the picture.
Jack Clark: It’s such an interesting domain because the problems that AI involve basically crystallize unsolved problems from a whole bunch of other fields. So we need to map the examples of other fields and then maybe by solving some of these problems in the AI domain, we’ll develop solutions that can be ported back out to some of those fields that we’ve taken the problems from.
Jack Clark: I think that in itself is an exciting idea to me also.
Robert Wiblin: So, let’s work a little bit chronologically through someone’s career here. If someone is fairly young, do you have any advice for what they should be doing early on in their career in order to prepare? What should they be studying or reading, and who should they be meeting?
Miles Brundage: I mean, this is sort of a high-level suggestion, which is that I think people should not be as risk averse as they often are when it comes to reaching out to people and applying to places when there aren’t formal job openings and so forth.
Miles Brundage: I think generally it’s sort of a fast moving field and like reaching out to people independent of whether you think they’re too senior or whatever is something that I think is very fruitful and I’ve had a lot of fruitful conversations with relatively junior people who just reach out to me by email.
Miles Brundage: And I think more people should do that. Well, I mean, I don’t know how scalable it is for a bunch of people that email me but people should do similar things.
Amanda Askell: Yeah. I think one question is something like, “What should people be doing generally in terms of studying and getting to know things?” And that is tricky here, because I think there are lots of useful fields, and in some ways that’s great.
Amanda Askell: It means that perhaps, you can contribute to this field from a myriad of different backgrounds. I do think that, you know, doing a little bit of technical work in computer science is always helpful. And some of the other fields that we’ve talked about are also helpful.
Amanda Askell: But if I look at people who have worked in this field, often it starts by them simply finding a problem that is really interesting to them, that is pretty relevant, and just writing something on it, for example, forming a view on that.
Amanda Askell: And then, I think reaching out to people can be really fruitful because you’ve already shown both interest and roughly like what you can do, in terms of the types of research that you’re engaged with. So, I think I would actually and so far that seems to be the way that a lot of people are entering the fields.
Amanda Askell: So it’s maybe worth considering just actually doing some work: see if you like it, see if you feel like you’re good at it, and see if other people like that work.
Jack Clark: I’d give a special vote to just trying to show thinking with regard to some of the material output by AI organizations. That could be reading some of the blogs from, say, Microsoft calling for federal regulators to look at facial recognition. Or it could be looking at Google’s governance of AI paper, or the stuff that we and other institutions put out about the malicious uses of AI, and then responding to it.
Jack Clark: Because none of those documents or blogs is entirely correct. They have points which you can disagree with, and I think the easiest way for us as a team to better understand how to advise people, and to know where they might fit in, is to get a sense of their ability to synthesize stuff in this domain and identify the logical inconsistencies, because that seems so important.
Jack Clark: I think that people underestimate how valuable it can be to produce an example of their own thinking. Because in this domain, we are all overstretched, there are way too many problems to work on, and we have enough trouble just scaling our own hiring processes to keep up with inbounds; so, you can make it easier on people like us and others, by producing one or two kind of opinionated outputs in the domain that you want to work in.
Robert Wiblin: What skills, or temperament or abilities are required to be a good AI policy or strategy expert and usefully contribute here?
Jack Clark: I think humility is pretty useful, because one of the natures of policy is that you get to talk to people who’ve done huge amounts of things, or who represent very, very large numbers of people. You have to recognize that your concerns may seem very, very important to you, but this person may also be wondering about how they keep 50 million people fed, or something.
Jack Clark: So how do you talk to these people in a way that feels reasonable to them? That seems like a challenge in itself. I think it’s also really valuable to have a good synthesizing brain. If you like looking at multiple bits of information in multiple domains and bringing it all together to develop some kind of theory of the world or theory of change, I think you’ll do better in AI as a consequence.
Miles Brundage: First, I just want to echo what Amanda was saying about the value of doing some thinking and coming to people with opinions and reactions of your own. I’m not sure what the optimal way is to balance that with the other point I made earlier about not being too risk averse, but obviously there are some trade-offs.
Miles Brundage: I think generally, people should not be afraid to reach out and get feedback and circulate ideas, but obviously, if you’re reaching out to an academic and they’ve published some stuff you might read eventually, you should read it before you email them.
Miles Brundage: I’m not sure there’s a well-written list of those sorts of norms anywhere, but someone should write one.
Miles Brundage: Other thoughts: I think generally surrounding yourself with people who are better in some way, or more knowledgeable in some way, is super useful. At OpenAI, for example, it can be true that everyone within the organization is exceeded by someone else in some dimension.
Miles Brundage: There’s sort of no one who strictly dominates everyone else in terms of knowledge. I think that’s what the mix of AI policy stuff is like, and it’s sort of necessary given that it’s a messy interdisciplinary problem.
Robert Wiblin: So this can be a somewhat uncomfortable kind of question, but how do people tell if they’re smart enough, basically? And are there any red flags that people can notice that would suggest that they’re just not going to be such a good fit for the field?
Jack Clark: I’ve always found that a good way to tell if I understand something is if I can go up to an expert in that domain and ask them a relevant and sophisticated question. That usually suggests to me that I’ve read enough of the literature and internalized it enough that I can ask them something that means something.
Jack Clark: I actually knew that I really wanted to get more into AI after I ran into Ilya Sutskever, who’s the co-founder of OpenAI, at some press dinner in San Francisco in 2014.
Jack Clark: I asked him how he thought Neural Turing Machines and other differentiable memory systems for AI components may or may not scale. And he kind of went, “Who are you?” and like stared at me for a bit. And I was a journalist at the time, but that told me that I had understood it sufficiently to ask a person like Ilya, who is a domain expert, a question that they thought was actually relevant.
Jack Clark: Right? And I think that’s a tell. So, coming back to my point earlier about having people go and look at outputs, like Google’s governance of AI paper, the malicious uses of AI report, and other stuff, and find the point where they have questions or perhaps disagree, that would be really helpful.
Jack Clark: Because if you can produce an output that comes out of a reasonable disagreement with something in the literature, it probably means you’ve understood it. If you’ve read the literature top to bottom and you can’t find any points where you personally disagree, it feels unlikely that you’ve understood it sufficiently well.
Amanda Askell: Yeah, so I think we’ve also been focusing a lot on research positions here, and I do think that as the field grows, there’s going to be room for more positions of different sorts.
Amanda Askell: And so one thing that is worth knowing is, if your skill set is, for example, that you’re really good at communicating, and really good at synthesizing the latest innovations and talking about them to a public audience, that’s probably going to be a skill set that’s really useful.
Amanda Askell: So don’t necessarily rule yourself out if you’re like, “Well, actually I don’t enjoy that kind of research,” or “I simply don’t do it.” I think there’s room for other roles here, hopefully in the future.
Amanda Askell: I do think that just trying it is a good idea, and then getting feedback, and hopefully you’ll be surrounded by the kind of people who will give you honest feedback, even if they’re like, “Actually, it seems like you’re not suited to this kind of problem.”
Amanda Askell: Don’t take that as an insult. These problems are of a very particular kind: they involve a lot of moving parts, and I find a lot of that quite difficult. You can be amazing at research in one area, and then just find that you really struggle in another area.
Amanda Askell: So try it, get feedback, and don’t worry too much if it turns out that this isn’t what you want to work in, I guess.
Miles Brundage: Just a quick comment on different levels of expertise in different areas. I think there’s no perfect solution to, like, should I spend my time thinking about policy stuff, or figuring out the state of the art, or implementing deep learning models, or trying out someone else’s better model?
Miles Brundage: I don’t think there’s a clear answer for what the optimal use of your time is, but some heuristics I think about are that, first of all, it’s important to distinguish different types of expertise. So, there’s a sociologist named Harry Collins who distinguishes interactional expertise from contributory expertise.
Miles Brundage: What Jack was talking about earlier, getting Ilya to be like, “What? You actually know what you’re talking about,” that’s interactional expertise: being able to sort of pass the Turing test at a conference as someone from that area.
Miles Brundage: In my experience, I have incrementally been trying to pass that test better and better by going to dozens of AI conferences and reading a lot over the years. But there’s no definitive end point, unless I were to decide I wanted to be an actual technical researcher, which would take me into contributory expertise, the mode where you’re actually advancing the state of the art.
Miles Brundage: I think it’s important to think in terms of those distinctions and different thresholds of expertise. So in some cases you might want to have expertise for signaling reasons, or for networking reasons, and in another case it might be because it’s actually useful for your work.
Miles Brundage: I think those are different motivations and it’s important to be clear about that.
Robert Wiblin: This isn’t a huge field at the moment, or at least there aren’t many jobs that are specifically focused on AI policy. So are there any positions that are natural stepping stones that people can take between what they’re doing now and ultimately entering the field specifically?
Jack Clark: I think that project manager type positions are probably useful here. That’s certainly a need that we’ve observed we likely have here at OpenAI, and I think it’s the same elsewhere. For policy, you’re going to be putting events together. You’re also going to be creating processes, like information hazard or safety reviews, that need to actually be run within an org and need to work properly.
Jack Clark: And that’s the sort of skill where, you can have a lot of interest in AI, you can have worked in a completely different field, but you can work as a project manager and self-educate during that and perhaps then make your transition to a research assistant role or something like that.
Jack Clark: I’m quite bullish on that sort of role being a good path in because, as Miles said, and I think Amanda’s observed this too, I’m a big believer in learning by osmosis. Just getting yourself into the community, so that you can be surrounded by those conversations, is kind of half the challenge.
Robert Wiblin: How important is it to have a technical background, do you think?
Jack Clark: I have a degree in creative writing, so I don’t know. I’m not a good person to ask, because I’ve self-educated in this domain, which was admittedly much harder than having all of the equipment. I would have done it quicker if I had a slightly different background where I’d invested slightly earlier in my career in a compressed base of stats and math, rather than having to self-teach. But I’m pretty confident you can self-teach to a reasonable level of competency.
Amanda Askell: I think the thing that’s important to have is excitement about and interest in the field and the work that’s being done. One thing that is quite noticeable is that all of us have self-educated in this in our spare time, in some cases on an ongoing basis, actually engaging in trying to build our own things, even though we’re not technical researchers. So yeah, I think you definitely want to have that. You don’t want to be unexcited about AI and go into AI policy.
Miles Brundage: Yeah, Jack mentioned communication and translation as a big part of what policy people do, and I think that’s really true. Just to speak from my own experience, I found that having a lot of experience working in DC with policymakers, and being forced to compress the key takeaways of various areas into layman’s terms, was very useful preparatory experience. So was academic research, which requires writing for different audiences, both technical papers and broader communication like blogging and tweeting. All of those things, I think, are useful for calibrating how you express things to different audiences.
Robert Wiblin: Yeah, I want to qualify that a bit: all three of you are researchers at what is kind of a think tank focused on this area. And I suppose there’s a larger attack surface on this problem, which would include going into government, which might require quite different skills, or even trying to become a politician or something like that. Do any of you want to comment on the broader picture, so we don’t lose that while we’re just talking about the roles that you’re closest to?
Miles Brundage: Well, everyone needs communication skills, even if you’re just working in a lab. The three of us need to communicate with each other. So I think no one can get away with not being able to think or communicate clearly. Everyone needs to work on that, but Jack can comment more on who needs what.
Jack Clark: As far as I can work out, a universal requirement is being able to work on multiple things in parallel, because policy is so fundamentally about the world around you. It could be the legislative environment around you. It could be the mood of the public. In our case, it could be the specific ideas that seem increasingly relevant or less relevant due to technical progress. But the world will just decide to kill your idea one day, and it won’t care that you’ve worked really hard on it. A legislator will not like the bill, so the bill disappears. The public will suddenly be exposed to a news story that biases them against your political position, and so on. So you need to have that grit and willingness to take a bunch of bets and not get too upset when some of those bets inevitably disappear or fail.
Biggest bottlenecks to progress
Robert Wiblin: What do you think of as the biggest bottlenecks to progress in the field? Are there kind of clear bottlenecks, or is it just we need to progress across the board?
Jack Clark: I think it’d be helpful to have more institutions. We are an organization trying to create this field, along with our peers at places like DeepMind and other AI developers. There are also people coming into it from more traditional policy environments and/or traditional think tanks. But what we sort of lack are big coordinating institutions. Partnership on AI may scale to do this eventually; it’s at a quite promising stage now, but it should be one among many. What I’d really like is to have big institutions focused on AI policy that are somewhat academic in flavor and also quite linked to governments in a bunch of geographies. That seems like it would make coordination easier, make it easier for us to get calibrated, and provide a more natural route into AI policy for people who are coming up through academia especially, because then they could just divert into this.
Amanda Askell: I hope that, increasingly, there’s going to be more material for people who are interested in the field as it grows, because one of the key issues is that, at the moment, if you go into any other field, if you study mathematics or anything else at university, there’s a set curriculum that really guides you through it initially. That is an excellent way of getting people to come through into the fields that require those skills, whereas at the moment there isn’t such a pipeline for people working in AI policy. I hope that people who are currently working on this will basically lay that groundwork going forward.
Amanda Askell: My hope would be that, before too long, it won’t in fact be hard to learn about this as a field on its own, because there will just be material that you can look at and get kind of an education in it. So I think the lack of that material is making things a little bit harder at the moment.
Robert Wiblin: I guess it seemed like a couple years ago, one of the key skills that was needed was kind of the audacity to go in and tackle questions that are somewhat poorly formed, where it’s not clear even what an answer would be. To what extent is that still the case, do you think, if it ever was?
Amanda Askell: I think it’s becoming less true as problems become identified. You still kind of want that skill, in the sense that a useful skill is being able to point out when you think that things are poorly specified. You shouldn’t expect a field to have everything in order, or to be identifying exactly the right problems or the right solutions, so you do still want a little bit of that. But ideally, I also want it to be the case that people don’t have to have that skill going forward.
Ideal most influential jobs
Robert Wiblin: Pushing on from preparation to actual jobs: this is perhaps a question that could be uncomfortable, but what are the ideal, most influential roles within this entire field, where having someone excellent really does improve the prospects?
Jack Clark: Can it be a role that does not exist yet?
Robert Wiblin: Yes.
Jack Clark: Okay. The UK has the Government Office of AI, which is actually located, if I recall correctly, within the Cabinet Office, so it’s pretty empowered and pretty connected. Some countries have their own minister of AI. In the same way that heads of state today have a science and technology advisor, I think if AI continues to grow in importance and significance, we could imagine an AI advisor being its own role: here’s exactly what you, Mr. or Ms. Politician, need to think about with regard to AI. That seems like it could be uniquely high leverage, especially as we think about things like the dynamics of competition between nations, and these people needing to take highly consequential decisions which have a race component. You probably want there to be a person in the room who says, “Ooh, that’s not such a good idea.” I think that role should exist and probably will.
Robert Wiblin: Yeah. Are there any other vacancies that exist at the moment that you might want to highlight as things that people should seriously consider applying for? I mean, what positions are available at OpenAI, for example?
Jack Clark: We’re currently hiring for research assistants and research scientists. The research assistant role is designed for people who have significant experience in other domains and want to transition into AI policy; it gives them the chance to work with our research scientists on defined projects while developing their own subject matter expertise, to empower them to eventually be their own researchers. And research scientists are people who are going to carry out a specific research endeavor within AI as a whole. We’ve tried to create these two roles. We’re probably going to create a projects manager role as well, because as I said, we’ve felt that need, and that will be, of the three, the most accessible for people from outside of AI. It will be the most friendly to people with the broadest range of experience.
Robert Wiblin: Can people without much relevant experience just apply for these roles and potentially get a position, or does it have to be a more gradual transition than that, where they kind of prove themselves through what they’ve written?
Miles Brundage: We’re not yet at the point where we have a super coherent career trajectory, but we’re starting to get there, in the sense that there are now, in a way that there weren’t before, opportunities to get a PhD funded to work on actually relevant AI policy topics that you might be interested in, at FHI at Oxford, for example. Likewise, there are starting to be internships, research assistantships, and various junior positions that could scale up.
Miles Brundage: But in terms of what is the optimal trajectory, I think it would probably be irresponsible for us to be too prescriptive at this point, because there’s uncertainties around timing of these issues and how much lock-in there will be, where to go on the explore-exploit spectrum, and how grad school might fit into that and so forth. I think it’s kind of hard to give super generic advice.
Robert Wiblin: Given that quite a lot of material in this area isn’t yet published, is there any easy way for people to get up to speed on what’s known? And maybe is there a conference at this point for this field yet? Might there be one in a year or two?
Jack Clark: There should be. I don’t know what Miles and Amanda think. It would be so much simpler if there was one place we could all go and hang out and talk. I currently feel like we are composing communities out of other bigger ones, but there’s definitely enough interest here that we can create our own shared thing. I would love for people, especially some of the listeners and readers of this, to think about what such a conference might look like, how it could be made most useful, and how you could create really good incentives for not just OpenAI but Google, and also bodies like the National Institute of Standards and Technology, NIST, in the US, to all support it. Because there’s definitely room, and I’m sure that there’s something which could be quite different to current conferences. This could be a good way into AI policy for the right event coordinator or manager, if they can think up what this should look like.
Robert Wiblin: Do you want to pitch working at OpenAI specifically, I guess, while we’re here? What would you say to someone who’s listening who’s thinking, “Oh yeah, maybe I should plan to work at OpenAI in future”?
Miles Brundage: If you’re interested in making sure that AI goes well, and you roughly subscribe to the simplified model I gave before, of figure out the right thing to do and get people to do it, then I think OpenAI is a somewhat unique organization, in that it is large enough to be significant in the AI technical world, and it has enough credibility to take actions that people actually notice and debate. But it’s not so large as to be un-steerable and un-influenceable. So it’s a place where you can have an actual impact on the norms in the AI community.
Amanda Askell: I think it’s also an extremely mission-driven organization, and so one thing I’d recommend that people do is kind of take a look at the charter, for example, and if these are the sorts of values that they agree with, then I think this can be a really excellent place to work. If people have questions about this, they can also reach out to learn more.
Our US AI policy careers article
Robert Wiblin: You might’ve seen, I think a week or two ago, we released a fairly lengthy article about US AI policy careers, written by Niel Bowerman, which I think several of you have either read or edited or commented on. Did you have any reactions to that that you want to share? Anything that you think is particularly important for people to note from it, or where it might have gone wrong?
Miles Brundage: I basically agreed with the high-level point that it could have very high expected utility, but it could also be a disaster and a huge waste of time. I think people should be very mindful of their comparative advantage and timing and so forth if they’re thinking about going into government, because there’s a pretty good chance of no impact, but also potentially some chance of huge impact.
Jack Clark: I think from my point of view it may underestimate the value of being highly skilled in very specific government domains like defense. I didn’t see quite as much discussion of this specifically in there, and I’d just like to emphasize that defense and intelligence are an area where government will be doing stuff around AI, an area where a lot of the AI research community’s concerns center, but also an area where you’re working in an environment that has underinvested in technology for many years and is determined to race to close that gap. That actually means there’s a lot of opportunity for AI policy people to go and be advocates for safety, or for coordination, or other things that these organizations might not consider. I’d just like to say there should be more emphasis in the community on working on that, however much it may conflict with the personal moral or ethical values of people who may be making that choice.
Importance of developing a network
Robert Wiblin: One of the things that quite a few people have emphasized to us is the importance of developing a network of people who are kind of experts in the area of policy and perhaps building a network in DC specifically, if you want to actually get your ideas implemented. Do any of you have any comments on that, on how you might meet the relevant people and the importance of having credibility and a good reputation and knowing people one-on-one?
Jack Clark: I’ve met quite a substantial number of people in policy, not through my OpenAI affiliation, but through the fact that they subscribed to my newsletter. And they have invited me in because of the newsletter, which is interesting, because it suggests that they have found it sufficiently useful that it’s transferred some credibility to me as its main author, and that’s allowed me to get in. It’s also increased my belief that just producing stuff designed to be useful for your target audience, in this case policy people, is a really good action to take, to let you get evidence about this and figure out who to meet and how to meet them.
Jack Clark: I think that if you’re being introduced to people in policy land, it’s very helpful to have a good title or qualification or an institutional affiliation. That goes through phases of career maturity: if you’re established, you’ll have a good job title. If you’re really, really smart and good at school, you might have a really good PhD or an amazing degree. But the other thing you can do is go and get an affiliation with a think tank, or with an institution that grants you that imprint of legitimacy and lets you get in the room. Those are easier to get than the others, and frequently just require you to show interest and aptitude in the area.
Robert Wiblin: Yeah, one thing that can hold people back from wanting to enter an area is not knowing what nearby career opportunities are available if it doesn’t work out. That is, not being sure what they can go on to do afterwards if they decide to leave the area. Do any of you have any comments on that, on how risky a career move it is to be doing what you’re doing, both in terms of what you could do instead if you don’t like it, and what you could do afterwards?
Miles Brundage: Yeah, first, I think it’s not clear what the best-choice dream job should be for a particular person, and it’s not clear that these are going to be super acute trade-offs where there’s one unreachable job and none of the others are any good. That’s one sort of assumption to question: what’s the tier? Which might be the wrong way of thinking about it, but let’s say there are top-tier jobs. One way in which an effort to reach those top-tier jobs could fail gracefully is if you end up getting the mid-tier or bottom-tier jobs. But I think that’s, again, a misleading way of thinking about it, because the tiers are not super clear, and there are different functional roles that people can play. The most impactful thing for one person could be at a different organization than for another person. So it’s less obvious to me what the default pathway and escape plan should be for different people; it’s more dependent on the particular skills and interests of the person, what their opportunity costs are, and how they think of these tiers.
Amanda Askell: Yeah, it’s a difficult question to answer, because it’s going to vary so much from person to person. I think in general, you’re going to learn a lot getting into a field like this, and often that’s extremely useful, even if you end up moving on to other things. You’ll just have acquired this really excellent skillset, but for some people that might not be sufficient to outweigh the risks, depending on what career path they’re on. I think I’m generally encouraging of people to be risk-neutral when it comes to their careers, because at least in my case, I think it’s very easy just to be very risk-averse, but take into account all of the information you have about your own personal circumstances.
Amanda Askell: Ideally, this shouldn’t be risky, because it should be the case that people who work in this area gain a huge amount of information that is extremely valuable, regardless of what they go on to do. But a lot of people are going to have better information about their own circumstances and how true that is for them.
Jack Clark: As someone who kind of closely observed the cloud computing boom and the big data boom, which at least in the technology industry were kind of huge deals that involved billions and billions of dollars in spending, just changing to radically different things, those had their own issues at the time. To go and do cloud computing was to run against all of the received wisdom about how to make things secure and high-performant. And yet, people did it, and it seems to have mostly been good for them, because they got to work in a controversial, rapidly growing sector of technology, as Amanda said, gained huge amounts of information about it, and then they have a skillset which basically implies they’re good at adapting to rapid change, which seems just like a general skill to have in the world and which the world values. So my belief is that if you get into it, and you spend at least a year or two doing stuff, especially near a technical organization, it should just kind of improve your CV, and in the worst case, mildly improve it.
OpenAI vs. work in government
Robert Wiblin: If someone was choosing between working at OpenAI and going to work in the US government in some form, what would you say to them to help them decide?
Jack Clark: I think, somewhat self-destructively, I’d mostly encourage them to go into the US government. One of the things to think about is that OpenAI already has a reasonable level of alignment as an organization around big, hard decisions, which both gives us more latitude in what we do as the policy team and means some of the challenges have been dealt with, letting us think about the hard problems. The US government doesn’t have this. It doesn’t have alignment, it doesn’t really have opinions, and it has a very low level of knowledge in general about this. And one thing that all AI research organizations are looking for is more people in government that they can go and interact with, where they don’t have to do quite as much translation. So my belief is that if we had more people inside the US government, hundreds more, it would have a dramatic effect on our ability to be effective.
Pros and cons of working at OpenAI
Robert Wiblin: Yeah, what’s enjoyable about what you all do, and what is not enjoyable about it?
Miles Brundage: One thing I like about my job is that there’s a mix of exploration and exploitation: sometimes I just think about big stuff and read semi-related books, and other times it’s a coordination meeting on the upcoming language model release, or responding to people’s comments on Twitter about whether we should have published this thing. I think it’s good to have that mix. Another thing I like about OpenAI is being close to the cutting edge. Not doing it myself (going back to the contributory thing, I’m not trying to push the state of the art of AI), but I get earlier access to some of the best systems, and, for example, got to interact with the GPT-2 system a lot over the past few months and produce some of the samples that are in the blog post. So that was a lot of fun.
Amanda Askell: Yeah, I think the thing that I find fun is being able to do research where you have this visceral sense that it’s doing something good in the world: feeling like you’re producing something that may actually have a positive impact, and also exploring the ways in which it might be positive, or the things that you can do on the basis of it. With previous research, I think it can be really hard to go through that process without feeling like it isn’t going to have any impact or push things forward. That’s really exciting and really fun. Being at the cutting edge of technical research is extremely exciting. It’s lovely having people around who are just willing to give you their time and help you and talk through questions that you have.
Amanda Askell: The research itself is quite hard, and I think that you have to be thinking about lots of different things and lots of different variables and lots of different scenarios, in a way that with some traditional research questions you don’t. You have to try and develop a really good model of lots of different agents and lots of different organizations. I think that can be tricky. I also think that with respect to something that was asked before, it can be the case that you don’t want to have a bad impact, and I think there is something a bit more stressful about going into a field where you feel like if you did something badly, it could actually have a negative impact, versus a field where you’re like, “Well, even if I do my job really terribly, it will basically have zero impact on the world.” You know, that is a transition that is tricky and some people will find it quite difficult.
Jack Clark: I’d say some of the not fun stuff is that policy by nature involves controversial or difficult conversations, and I think that you need to be comfortable with the fact that you’re going to get into conversations where people express a lot of felt emotions. Frankly, this is going to be happening today, and has been happening during this conversation, where we as an organization have taken a decision about publication with respect to one of our results, and it’s elicited a lot of opinions from people. And some of these people are expressing the emotions they feel, along with their opinions, which are all valid. And our job is to respond to those people’s valid feelings and talk about our position, but on a personal level, I can find that to be a bit draining or a bit of a downer sometimes, because you’re just … people are like super mad, because you’ve done a thing. And you’re going to absorb that.
Jack Clark: The other kind of not fun thing, and maybe I’ve done this quite a lot, is that ultimately, when you really believe something is correct in policy, you may need to go and get the rest of the org to believe that, and that’s going to require you to advocate for a position that other people may not have considered. And so you have to be willing to go and do that side of things as well. Because policy is partially qualitative, in a technical organization you’re going to need to work hard to make it seem like a reasonable input to the decision-making of people who are primarily quantitative in their world model. And again, that is something you need to be willing to invest time into doing. It’s very satisfying when it works, and frequently it does, but it’s a thing.
Jack Clark: As for stuff that’s fun, I mean, where else do you get to work on something that is kind of a telescope into the future of computation? Computation has kind of changed civilization multiple times in just a few decades. And being able to work at an organization where you don’t just know what the state of the art is, but you have a suggestion as to what the future milestone results might be, I think gives you a pretty cool calibration as to what the whole future of existence is going to look like, which I personally find fun.
Robert Wiblin: Jack, I ran across your LinkedIn profile in prep for this interview, and I noticed that in the area where it has description, it just says, “Things are going to get weird.” I’m curious to know, what are you driving at with that?
Jack Clark: The first time I wrote that description was when I was obsessed with the idea that we would move to photonics for a lot of on-chip communication, and that’s actually starting to happen now. Then the second time, because you know, you update LinkedIn, and you choose whether to keep things in, the second time I wrote it was when I was really convinced that we were about to get memristors, which was wrong. And then I think the third time I updated it, it was because I was convinced we were going to get really big cloud computing installations, which I think was true. And then I kept it again when I started to think about AI, because I have this bet on AI.
Jack Clark: What I mean is that there’s this sort of East Asian kind of curse of “May you live in interesting times,” and I mean a sort of cheerful variant of this. “Things are going to get weird” is a statement of fact about the world, and it’s also a suggestion that we should be willing to consider ideas that might seem crazy or might seem weird in other times. But maybe they’re relevant now.
Robert Wiblin: A surprising amount of discussion in this area seems to happen via Twitter. There’s a lot of back and forth in that. Two of you, at least, have been tweeting away during this conversation, I guess reacting to people’s reactions to what you just posted today. Miles, I think you have two satirical accounts related to your Twitter account. One is Brundage Bot, an ML-based imitation.
Miles Brundage: One is satirical. That’s Bored Miles Brundage, and it’s from a friend. Brundage Bot is a Twitter bot that is based on a neural network that is based on my tweets, that tries to predict which papers I’ll tweet. And it’s actually a useful utility, because I don’t have time to be as thorough as I used to be.
Robert Wiblin: And I guess, Jack, you tweet quite a lot, and I guess this is another way that someone can gain control over people’s attention by providing a useful service, and then you also get to inject your opinions in there. I guess it’s a little bit like running a podcast, I suppose. But yeah, how valuable do you think it might be potentially for our listeners to get Twitter accounts and engage in policy discourse in this way? And perhaps Amanda, do you have any comment on why you don’t spend hours every day on Twitter?
Jack Clark: I mean, I’m comfortable with the fact that it’s hacked my brain. Let’s be clear about that. I think it is useful for people to at least follow along on what people are saying, because an uncommonly large amount of the primary technical researchers use Twitter to announce results, talk to each other, and exchange ideas. And I think if you better understand how those people kind of talk, and how they think, and what they feel, you’re going to have an easier time having conversations where you sound like a person from the same community as them, rather than someone coming in from outside and sort of offering opinions that they may not feel are valid.
Amanda Askell: And in defense of myself, I think interestingly, when you said that, my first thought was, “Yeah, I guess the problem is that these things just haven’t hacked my brain yet.” The reason I’m not more active on Twitter or Facebook or other things is just kind of like I don’t think about them. And I think I compartmentalize a lot of things in my life, so I might be inclined to … if Twitter were just a nice thing that I could check the latest results on, like once on a Friday, that would be perfect for me. That’s what I want. I want to not have to think about it for the rest of the week when I’m doing other things.
Jack Clark: Your version of Twitter is a magazine, a weekly magazine.
Amanda Askell: Yes, or just looking at arXiv updates once a week or something. And then it’s just kind of like, “Well, I could send out this one tweet,” but yeah, I don’t know. Maybe I’m a good test case for social media. They can try and do better or something.
Robert Wiblin: Well, the organization is OpenAI, and my guests today have been Amanda, Miles, and Jack. Thanks for all joining and doing this experimental four-person episode.
Jack Clark: Thanks very much for letting us do this quad ep.
Amanda Askell: Yeah, thanks for having us.
Miles Brundage: Thanks for having us. It was a lot of fun.
Post episode chat
Robert Wiblin: All right, folks. Some reactions to all of that. I’ve got Niel Bowerman, our AI policy specialist, and Michelle Hutchinson, our head of advising. How are you doing, guys?
Michelle Hutchinson: Great, thanks. Good to be here.
Niel Bowerman: Yeah.
The reaction to OpenAI’s release of GPT-2
Robert Wiblin: So while that episode was being recorded, the guests were freaking out a little bit about the reaction that people were having on Twitter to OpenAI’s release of GPT-2, their language-producing model, and I guess in particular their decision not to release the full trained model.
Robert Wiblin: Hey listeners. Rob here, just interjecting with a quick explanation of GPT-2:
It’s a language model with 1.5 billion parameters (apparently that’s a lot), and it has a simple objective: predict the next word, given all of the previous words within some text.
It was trained on a dataset created by scraping content from across the internet — they used outbound links from Reddit, links that received at least 3 karma, an indicator for whether people found the links interesting.
The diversity of the dataset allows the model to create a wide range of eerily human-like passages.
You can check out the link in the show notes to read a news story on the discovery of unicorns, a new ending to The Lord of The Rings, a rant on how bad recycling is for the world, and a speech from a re-animated JFK — all written by GPT-2.
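To make that “predict the next word, given all of the previous words” objective concrete, here’s a toy sketch — nothing like GPT-2’s actual architecture, just a simple bigram model invented for illustration, which predicts the next word from raw co-occurrence counts rather than 1.5 billion learned parameters:

```python
from collections import Counter, defaultdict

# GPT-2's training objective in miniature: given the preceding context,
# predict the next word. A real transformer learns this over billions of
# parameters; this toy bigram model uses only counts of word pairs.

def train_bigram(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word):
    """Return the most likely next word after prev_word, or None if unseen."""
    followers = counts.get(prev_word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

A language model like GPT-2 does the same kind of prediction, but conditions on the entire preceding context rather than just the previous word, which is what lets it produce coherent multi-paragraph text.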
Back to the chat.
Robert Wiblin: I’ve heard you guys know a little bit more about this than I do, and have some opinions about it. So what was actually going on there, and what do you think of their decisions?
Michelle Hutchinson: Yes, so this was a bit of a break in how this research is usually done, and people didn’t necessarily all take kindly to it. So I’ve been looking forward actually to hearing Niel’s take on how this is going to affect the practice of other researchers and organizations in the field.
Niel Bowerman: Yeah, so what was sort of interesting, as you saw this Twitter storm play out across the Internet over the three or four days following the release, was that we saw two broad themes emerging. One was a debate of openness and transparency versus caution and safety, where you had, on the one side, some people saying, “Hey! Science should be open, results should be reproducible.” And on the other side, people saying, “Hey, this does seem maybe a little premature, but they were trying to start a conversation here.” And it seems useful that we’ve got groups experimenting with, “What do you do when you have a powerful technology and you don’t want to release it full-blown into the public?”
Niel Bowerman: So that was sort of one side of the debate that was happening. And then the other one was just … as you may be aware, AI researchers tend to not be very happy about how AI research is portrayed in the media. And having a whole bunch of headlines that say, “This algorithm was too dangerous to release,” just fuels a whole bunch of kind of hysterical media articles. And a bunch of people were upset about this fueling of that media narrative. And so Toby Shevlane at the Future of Humanity Institute did a bunch of analysis on this, and pulled these different themes out. And I think in the end, people did recognize that there was a use in doing the release the way they did. But maybe one criticism that could be leveled is that they could’ve made it more reproducible — for example, by giving a couple of other groups access to the full code and full data, so that those groups could try to reproduce it. That way we wouldn’t have this problem with reproducibility, which is something people strive for in science.
Robert Wiblin: I guess just to me, naively, I don’t understand why you would expect that an organization that came up with this very complicated, potentially powerful algorithm would be expected by default to release it, even if they think it’s bad for the world, even if they’re concerned about how it might be used. Isn’t it their prerogative to decide whether it’s a good idea to release their own invention? But it seems like that’s very much not the culture in AI research.
Niel Bowerman: Yeah, so I think the arguments … were not making the claim that it’s not their prerogative. I think some people were arguing that what they were releasing was not that powerful. Humans have been able to write artificial text on behalf of other people or other organizations for a long time. All this does is make it easier to do it at scale. So that was one line of argument. And the other line of argument being, “Yes, there are reasons for not releasing things but also we need to respect reproducibility in science, and OpenAI didn’t go far enough to respect the importance of reproducibility in science by making this reproducible by other groups.”
Michelle Hutchinson: I heard from some people that this was thought to maybe be a bit of a PR stunt, which maybe follows from your first point, that this isn’t that novel. But it seems a pretty surprising way of getting PR, to release a thing that leads to a lot of headlines saying the research we’re doing is potentially really dangerous. Does that strike you as a plausible objection?
Niel Bowerman: I think when you have a model of people as being purely self-interested, then you try and find the closest plausible explanation for why you’d do a thing like this. And it did get a bunch more PR than it maybe would’ve otherwise. But yeah, I think a much more plausible alternative is that they were actually concerned about the direction these technologies are heading, and wanted to start a conversation about how to do releases in cases where these technologies are more powerful and might have negative societal implications.
Robert Wiblin: It sounds like they expect to come up with algorithms that they’re more and more worried about how they might be applied. At some point, they have to draw a line and say, “Well, this is the point at which we’re going to say, ‘We’re not going to release the full thing. And we’re going to start a conversation about the risks.'” And maybe they did that too early. Maybe they should’ve waited until they had a more dangerous looking algorithm. But yeah, I guess they decided where they did.
Niel Bowerman: Personally, I thought it was a reasonable point in time to start the conversation. I personally was pretty shocked and surprised by the human-like quality of the text these algorithms are producing.
Robert Wiblin: Yeah, me too. It’s uncanny.
Niel Bowerman: Yeah. For example, Amanda Askell wrote a two-paragraph Facebook post announcing the results. And I didn’t notice, but the entire second paragraph was generated by the algorithm. And it was only when someone pointed this out to me that I actually realized this was just a whole bunch of algorithmically generated text that Amanda had just slipped into a Facebook post without me realizing.
Michelle Hutchinson: So how do you think this is actually going to affect other organizations doing similar things? ‘Cause on the one hand, it seems like they’ve really led the way, and made it possible for other groups to follow suit. And then, on the other hand, they had this Twitter backlash, which could easily make other organizations worry about doing the same thing.
Niel Bowerman: I don’t know for sure. My hope is that they’ve slightly stretched the Overton window, and it’s going to be easier for groups — say Microsoft, who were worried about their facial recognition technologies — to have those sorts of conversations out in the open, now that OpenAI have sort of started that conversation.
Robert Wiblin: What’s the reason that scientists value reproducibility in this case so much? Is it that they’re worried that the results might be fabricated? ‘Cause it’s not as if you’re doing a lab experiment, where maybe you messed it up and you got the wrong result. ‘Cause if it’s able to spit out this text, either you’re defrauding them, or it did. I guess possibly, you could’ve chosen really bad examples where it’s able to perform much more easily, and you want to test it on a wider range of situations.
Niel Bowerman: Yeah, so a lot of the examples used in the release were cherry-picked.
Robert Wiblin: I see.
Niel Bowerman: And OpenAI sort of recognized this. But they also wanted to demonstrate the power of their system, and so I thought it was reasonable to have done. In each case, they tell you how many examples they had to go through before they found one that was as good as the one that they published. And so you might worry about this cherry-picked nature of it. But a second thing is just that science is built on reproducibility — the idea that other groups and other people are going to be able to take this result, and then build the next brick on top of it. And so if you can’t take those results and build the next brick on them, then you sort of have to go back and do that work all over again, which slows down the progress of science.
Robert Wiblin: I can understand that point: if you’re forcing other people to reinvent the wheel, it’s going to slow things down a lot. There’s a difference, I think, between having open data, where you release all of the results, and releasing the technology itself. If you’re trying to make a claim about what actually works, then you should have to release all the data that would allow someone to prove it or not. You’re saying you need replication in those instances — you need other people to be able to do the experiment again — and in this case, they’re potentially denying them the ability to do that. So perhaps it could be that this algorithm is much less powerful than they’re portraying. Though it seems they could’ve had a middle ground where they say, “Well, you send us a bit of text, and we’ll tell you what it spits out.” Do you know if they did that?
Niel Bowerman: So they did that with certain people. So for example, certain media outlets got access to the model in that sort of input-output context. But I think the worry about releasing that to the entire world, in say the way Google did for DeepDream, is that you could see people using it for malicious uses with that sort of level of release. So I think that’s why they didn’t want to do a public release of an interface to let people put in text and see what the system generated.
Jack’s critique of our US AI policy article
Robert Wiblin: So Niel, what did you think of Jack Clark’s criticism that your article doesn’t have much to say about defense and intelligence careers?
Niel Bowerman: So I basically agree with Jack that in the US AI policy context, really, a lot of the ballgame is in the national security context. That’s where we’re going to see a lot of precedent and use of these technologies, and where I think policy on a bunch of these issues is going to be worked out. And so I’m really excited to see people going into those areas and focusing in that domain. And I think this is actually, in some ways, different from how it works in other countries. So for example, in the UK, the Ministry of Defence has less power in government as a whole. It has a smaller budget, less of a mandate. And so you see these conversations happening more in, say, the Government Office for Science, or the Cabinet Office — places like that. And so in the UK, it seems more reasonable for people to be scaling up in, say, emerging tech policy. Whereas in the US context, I’m really excited to see people doubling down on these national security contexts and national security applications of AI.
Working in government
Robert Wiblin: So one of the things that Jack said that surprised me was that he’s more excited, sounded like, to have people go and work in government, in the executive branch or something like that, than to have them come and work at OpenAI, which is probably not what I expected him to say. Do you agree with that, and what do you think of the reasoning?
Niel Bowerman: Yeah, I thought Jack made this interesting argument about how OpenAI has a lot of people in it that are thinking very long-term about the societal implications of AI. Whereas in government, sort of by nature of the election cycle and the pressures on policy makers, there’s less space for long-termist thinking. And so having people that are thinking about AI from a long-term perspective go into government is a real opportunity to have a positive impact in the world. And so I kind of agree with Jack on that, that on the margin, I think a lot more people should be going into government and thinking about AI from a long-term perspective than is currently happening.
Michelle Hutchinson: There are also just so many more roles here than there are at the very few nonprofits that work in this area. And one of the reasons why it might seem less clear that 80,000 Hours is really positive about this is that it’s harder for us to properly map the space than it is for some of these smaller organizations. But that doesn’t mean we’re not actually just really excited for people to go into a whole bunch of these different areas — and in fact, we feel that it has great information value for other people who are potentially thinking along similar lines.
Niel Bowerman: Yeah, so one of the things we’ve been trying to do on the 80,000 Hours job board is to put up a lot more policy-related roles. And these are both roles that will have a lot of impact right now — for example, roles at the Center for Security and Emerging Technology, which has just been set up at Georgetown — but also roles that allow you to build career capital and advance to more senior policy roles in the future. So do go to the job board and check out the whole host of policy roles that we’ve recently been putting on there.
Robert Wiblin: Yeah, the jobs on there are very exciting. I guess often senior positions. I imagine somebody might look at them and think, “Well, I really need to be focusing on this career capital stage. How do I get my foot in the door? How do I build myself up to be capable of getting those roles?” Did you guys have anything to add on that?
Michelle Hutchinson: Yeah, I think you’re right that a lot of jobs on the job board are going to be hard for more junior people to get. And often the first step in this process is going to look like getting a further degree — a master’s, or even a PhD — in any number of different fields. So it might be law or international relations. It could be a master’s in security studies, like you could do at Georgetown or Johns Hopkins, or public policy. It could even be a master’s or a PhD in machine learning, if you really want to go into AI policy.
Niel Bowerman: Yeah, what these master’s degrees can do is set you up really well for an entry-level policy role. So some of the best routes, we think, are the Presidential Management Fellowship, if you have a master’s degree; or, if you have a PhD in a STEM subject, the AAAS Fellowship is really an extraordinary way to go right into the heart of government and be dealing with issues that are important and topical right from day one. There are also places like TechCongress that allow people with, say, three to five years of experience in big tech companies to go into Congress and apply their expertise there, as well as a whole host of other opportunities that you can check out on the job board.
Michelle Hutchinson: I think one of the things to remember about these kinds of roles is that often they’re pretty difficult to get. You’re going to have to start probably by doing internships, by doing a lot of networking. So one benefit of going to somewhere like Georgetown for your master’s is that you’ll be in DC already, and so be able to meet a lot of useful people. And so this is going to look like making quite a lot of applications, trying to talk to a lot of people. But we ultimately think that it’s really worth it.
Writing content for specific audiences
Robert Wiblin: A bit of career advice that Jack gave that I’m very sympathetic to, and that I guess I followed earlier in my career, was just trying to write content that’s of interest to the people that you’re interested in working with in the future. Did you have any ideas about how to actually go about that in practice, kind of focusing on this particular audience?
Michelle Hutchinson: Yeah, I thought that seemed like a really great piece of advice from Jack. But I can imagine it seeming quite intimidating to people, to immediately think of starting a newsletter. So we were thinking a bit earlier about how one could start off more slowly in this, and thinking that one thing someone might want to do is just write a one-off blog post about something they’re particularly interested in in this kind of area, and start off by sharing it on maybe their own feed, on Facebook, or on the Effective Altruism forum, to start getting some discussion going on it. And then if you find that that’s something that you’re interested in doing, you could start your own blog, and then move on from there to a newsletter.
Niel Bowerman: One of the real skill sets here is in synthesizing the whole hosepipe of information that’s coming out of the AI policy world and boiling it down to something that’s very digestible, not just for policy makers, but for other people like me and you who are trying to get up to speed in this field and are just overwhelmed by the amount of information out there. One of the things that someone I’ve been chatting to recently, John Croxton, is doing is, every single day, going and pulling all of the AI policy news that has happened, taking out key quotes, and putting up links to them. And this just makes it way easier for someone like me to get a sense of what’s been going on in the world, because someone’s already done this synthesis for me.
Niel Bowerman: And providing those sorts of services to policy makers through newsletters, or Google Docs, whatever it might be, is really valuable. We don’t yet have, as far as I know, a good DC-focused AI policy newsletter. I think that’s a thing someone could do. We don’t yet have a Westminster- and Whitehall-focused AI policy newsletter, and I think that would be a really interesting thing for someone to do. I think there’s a lot of opportunity for people to jump in here and take on these projects. It is a big commitment — doing a newsletter is a day-a-week type commitment — so you’ve really got to be sure that this is a thing you want to do. And you probably want to do a bunch of scaling up and learning before you dive into it. But ultimately, writing a newsletter is very much a way to get noticed. As Jack mentioned, he’s had all sorts of meetings in DC that were organized because of his newsletter. So as a first step in your career, I think it’s an excellent thing to do, maybe while you’re at university, or in an early job.
Michelle Hutchinson: Yeah, I agree.
Robert Wiblin: Do you think you have to be really on the ball, or I guess quite informed already, to say which quotes to pull out, or what the important news that’s relevant to people actually is? Is that maybe harder than it at first looks?
Niel Bowerman: I think there are two different skills here. One is essentially just, “What happened?” And I think almost anyone can do that. And then there’s a second, much more challenging skill of, “What does this mean?” And that’s more of a skill that you’ll develop over time. You can start out trying to do that by bouncing ideas off friends, and then, as you become more confident that your analysis is actually good analysis, you can start sharing it and getting feedback on it on the wider Internet. And I think ultimately, that will become a real skill set.
Michelle Hutchinson: I’m not sure I agree with your first point, that almost anyone could do the summarizing. ‘Cause I think what’s really lacking in these kinds of spaces is easy-to-digest information. And so actually, you do have to be pretty good at synthesizing a lot of information and then summarizing it really concisely. And so it’s for someone who really likes writing, likes writing relatively fast, and thinks pretty analytically. But having said that, I do think it’s pretty accessible for people who have an interest in those areas.
Robert Wiblin: So just quickly, to finish: what was most memorable about the interview? What stuck out from it, and what do you hope people might keep in mind?
Michelle Hutchinson: I think I was particularly interested in Miles talking about how he had changed his mind — from thinking there was this distinction between long-term AI safety issues and shorter-term ones, to thinking that these issues are just deeply interwoven, and that building AI that’s fundamentally safe and aimed at the things that we most care about is a better way of thinking about things. I think that’s going to be fairly useful for our coachees to keep in mind.
Niel Bowerman: Yeah, for me, one of the things that I really liked, and that was reinforced throughout the interview in a number of different places, was this idea that trust building between all of the different groups involved in AI development and AI policy, and even the general public, is essentially central to this project of ensuring that AI continues to be safe and beneficial to all of society as it becomes more and more powerful. Thinking of this as a problem of, essentially, how do you build trust between groups as diverse as the Chinese government, Silicon Valley, and the US Department of Defense, reframes some of these questions from feeling like really abstract, technical questions into questions that I find a lot easier to relate to, of just, “How do we get these different people to trust each other?” And even if it just starts off with small one-on-one relationships between different people, that can grow into larger structures of trust. Seeing this reflected in everything from Miles talking about “trust, but verify” through to these ideas around scientific collaborations between the US and China: this thread of trust almost weaves through everything that was talked about today. And I thought that was a really interesting way of thinking about the problem.
Robert Wiblin: All right, well thanks for making time to join this post-episode chat, guys.
Michelle Hutchinson: Thanks, Rob.
Niel Bowerman: Thanks so much for inviting me on the show.