#58 – Pushmeet Kohli on DeepMind’s plan to make AI systems robust & reliable, why it’s a core issue in AI design, and how to succeed at AI research
By Robert Wiblin and Keiran Harris · Published June 3rd, 2019
On this page:
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Introduction
- 3.2 Pushmeet's current work
- 3.3 Concrete problems machine learning can help with
- 3.4 DeepMind's research in the near-term
- 3.5 DeepMind's different approaches
- 3.6 Long-term AGI safety research
- 3.7 How should we conceptualize ML progress?
- 3.8 Machine learning generally and the robustness problem are not different from each other
- 3.9 Biggest misunderstandings about AI safety and reliability
- 3.10 What can we learn from software safety?
- 3.11 Are there actually a lot of disagreements within the field?
- 3.12 Forecasting AI development
- 3.13 Career advice
- 3.14 Pushmeet's career
- 4 Learn more
- 5 Related episodes
When you’re building a bridge, responsibility for making sure it won’t fall over isn’t handed over to a few ‘bridge not falling down engineers’. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.
When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.
Far from being an overhead on the ‘real’ work, it’s an essential part of making AI systems work in any sense. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.
Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.
With the goal of designing systems that reliably do what we want, DeepMind have recently published work on important technical challenges for the ML community.
For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an ‘adversary’ that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.
In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:
- DeepMind’s work on the protein folding problem
- Parallels between ML problems and past challenges in software development and computer security
- How can you analyse the thinking of a neural network?
- Unique challenges faced by DeepMind’s technical AGI safety team
- How do you communicate with a non-human intelligence?
- How should we conceptualize ML progress?
- What are the biggest misunderstandings about AI safety and reliability?
- Are there actually a lot of disagreements within the field?
- The difficulty of forecasting AI development
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
As an addendum to the episode, we caught up with some members of the DeepMind team to learn more about roles at the organization beyond research and engineering, and how these contribute to the broader mission of developing AI for positive social impact.
A broad sketch of the kinds of roles listed on the DeepMind website may be helpful for listeners:
- Program Managers keep the research team moving forward in a coordinated way, enabling and accelerating research.
- The Ethics & Society team explores the real-world impacts of AI, from both an ethics research and policy perspective.
- The Public Engagement & Communications team thinks about how to communicate about AI and its implications, engaging with audiences ranging from the AI community to the media to the broader public.
- The Recruitment team focuses on building out the team in all of these areas, as well as research and engineering, bringing together the diverse and multidisciplinary group of people required to fulfill DeepMind’s ambitious mission.
There are many more listed opportunities across other teams, from Legal to People & Culture to the Office of the CEO, where our listeners may like to get involved.
They invite applicants from a wide range of backgrounds and skill sets, so interested listeners should take a look at their open positions.
Highlights
If you think about the history of software development, people started off by developing software systems by programming them by hand and sort of specifying exactly how the system should behave. We have now entered an era where we see that instead of specifying how something should be done, we should specify what should be done. For example, this whole paradigm of supervised learning where we show examples to the machine or to the computer, that for this input you should provide this output and for this input you should provide that output. You’re telling the machine what you expect it to do rather than how it should do it.
It has to figure out the best way to do it. But part of the challenge is that this description of what you want it to do is never complete, it’s only partial. This is a partial specification of the behavior that we expect from the machine. So now you have trained this machine with this partial specification, how do you verify that it has really captured what you wanted it to capture, and not just memorized what you just told it? That’s the key question of generalization, does it generalize? Does it behave consistently with what I had in mind when telling it, when giving it 10 correct examples? That is the fundamental challenge that all of machine learning is tackling at the moment.
Suppose you are trying to test a particular individual, you are interviewing them, you ask them a few questions and then you use the answers and how they performed on those questions to get a good sense of who they are. In some sense you are able to do that because you have some expectation of how people think, because you yourself are human.
But when you are reasoning about some other intelligence — like a bird — then it becomes trickier. Even though we might share the same evolutionary building blocks for reasoning and so on, the behavior is different. So that comes to the question now, if there’s a neural network in front of you and you are asking it questions, you can’t make the same assumptions that you were making with the human, and that’s what we see. In ImageNet you ask a human, “What is the label of this image?” And even experts are not able to identify all the different labels because there are a lot of different categories and there are subtle differences. A neural network would basically give you a very high accuracy, yet if you slightly perturb that image, suddenly it will basically tell you that a school bus is an ostrich.
We are trying to go beyond that simple, traditional approach of taking a few questions and asking those few questions. What we are thinking about is, can we reason about the overall neural network’s behavior? Can we formally analyze it and we see what kinds of answers it can give, and in which cases does the answer change?
Optimization is used in a general context for various different problems in operations research and game theory, whatever. Optimization is key to what we do. Optimization is a fundamental technique that we use in safety work to improve how our systems conform to specifications. We always are optimizing the performance of our systems, not to produce specific labels, but to conform to the more general problem or to conform to these general properties that we expect. Not to the simple properties of, well for this input it should be this output. That’s a simple property, but you’re going to have more sophisticated properties, and in traditional machine learning you are going to optimize consistency with those simple properties or reduce your loss or empirical risk, and in our case, we are reducing our loss or reducing the risk of inconsistency with the specifications.
Articles, books, and other media discussed in the show
DeepMind
- DeepMind’s Blog
- DeepMind’s Safety Blog on Medium
- Current vacancies
- Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification by Pushmeet Kohli, Krishnamurthy (Dj) Dvijotham, Jonathan Uesato, and Sven Gowal
- Designing agent incentives to avoid side effects by Victoria Krakovna (DeepMind), Ramana Kumar (DeepMind), Laurent Orseau (DeepMind), and Alexander Turner (Oregon State University)
- Understanding Agent Incentives with Causal Influence Diagrams by Tom Everitt
- Scalable agent alignment via reward modeling by Jan Leike
- Building safe artificial intelligence: specification, robustness, and assurance by Pedro A. Ortega, Vishal Maini, and the DeepMind safety team
- AlphaFold: Using AI for scientific discovery by Andrew Senior, John Jumper, and Demis Hassabis
- The Story of AlphaGo
- AlphaZero: Shedding new light on the grand games of chess, shogi and Go
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- DeepMind fold: How one scientist coped when AI beat him at his life’s work
Everything else
- Concrete Problems in AI Safety by Amodei et al.
- Pushmeet’s thesis: Minimizing Dynamic and Higher Order Energy Functions using Graph Cuts
- The Long-Term Future of Artificial Intelligence by Stuart Russell
Transcript
Table of Contents
- 1 Introduction
- 2 Pushmeet’s current work
- 3 Concrete problems machine learning can help with
- 4 DeepMind’s research in the near-term
- 5 DeepMind’s different approaches
- 6 Long-term AGI safety research
- 7 How should we conceptualize ML progress?
- 8 Machine learning generally and the robustness problem are not different from each other
- 9 Biggest misunderstandings about AI safety and reliability
- 10 What can we learn from software safety?
- 11 Are there actually a lot of disagreements within the field?
- 12 Forecasting AI development
- 13 Career advice
- 14 Pushmeet’s career
Introduction
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today, I’m speaking with a research scientist at DeepMind, which is probably the most advanced developer of machine learning systems around today.
As you may know, DeepMind was the outfit that first beat top Go players with a system called AlphaGo back in 2016.
Since then it has developed another ML system called AlphaZero, which can learn to play chess at the very highest level, with just a day of self-play on DeepMind’s processors.
And, more recently, DeepMind has been working on AlphaStar, which now plays StarCraft 2 at the level of the world’s top professionals.
DeepMind itself says that its aim is to ‘solve intelligence’ and develop a general artificial intelligence that can reason about and help to solve any problem.
All of this is very impressive and exciting, but regular listeners will know that I worry about how we can ensure that AI systems continue to achieve outcomes that their designers are pleased with as they become more general reasoners, and are given progressively more autonomy to intervene in an incredibly complicated world.
Naturally, that is something DeepMind takes a big interest in and has been hiring researchers to work on.
If DeepMind succeeds at their mission, the products that emerge from their research could end up making choices everywhere across society, and even having more influence over the direction of Earth-originating life than flesh and blood humans.
With any new technology as powerful as this, it’s essential that we look ahead and devise ways to make it as robust and reliable as possible. And fortunately that’s just what today’s guest is trying to do.
Alright, here’s Pushmeet.
Robert Wiblin: Today, I’m speaking with Pushmeet Kohli. For the last two years, Pushmeet has been a principal scientist and research leader at DeepMind. Before joining DeepMind, he was a partner scientist and director of research at Microsoft Research and before that he was a postdoctoral associate at Trinity Hall in Cambridge. He’s been an author on about 300 papers which have between them been cited at least 22,000 times. So thanks for coming on the podcast, Pushmeet.
Pushmeet Kohli: Thanks.
Robert Wiblin: I mentioned that I was going to be interviewing you on social media and I have to say I’ve never gotten so many enthusiastic question submissions from the audience. Unfortunately for all of you, we’re only going to be able to get to a fraction of those.
Pushmeet’s current work
Robert Wiblin: I expect we’ll get to cover how listeners might be able to contribute to the development of AI that consistently improves the world, but first, as always, what are you working on at DeepMind and why do you think it’s really important work?
Pushmeet Kohli: I joined DeepMind, as you just mentioned, two years back. In the past I’ve worked in a variety of different disciplines like machine learning, game theory, information retrieval, and computer vision. When I first came to DeepMind, I realized that the amount of work that is happening at DeepMind is at quite a different order from what it is at other institutions. I quickly realized that making sure that the powerful techniques that we are building are stress tested and are robust and can be safely deployed in the real world is a topic of extreme importance.
Pushmeet Kohli: All the founders of DeepMind actually were very supportive of this particular view, like Demis, Shane, Moose, they’re all very clear on this point that we really need to deploy AI and machine learning techniques safely. So that became the focus of my initial work, which is making sure that machine learning techniques are robust and safe when we deploy them in the real world.
Pushmeet Kohli: More recently, I was also put in charge of the science program at DeepMind, where the idea is we want to use techniques from AI and machine learning to accelerate progress in scientific disciplines. Science, we think, is a source of great challenges as well as great opportunity, and it’s one of the key tools that humanity can use to solve some of the key challenges that we’re facing. So our AI for science program aims to do just that.
Robert Wiblin: Yeah. So you’re the research leader on these two different projects, AI for science as well as the secure and robust AI team. Maybe you could tell us a little bit more about like what exactly each of those teams does and how do you balance your time with so many responsibilities?
Pushmeet Kohli: Let me start with the safe and robust AI team. The idea behind this team was to make sure that all the systems that we’re developing and the tools and techniques that the machine learning community is developing can be properly stress tested, and we can check consistency with the properties that we would expect these tools to have. And if these tools are not behaving consistently with our specifications, how can we encourage them to behave or conform to the expectations of society? Finally, how can we formally verify that their behavior is consistent, not just based on some statistical argument, but a formal mathematical argument where we can prove that these techniques will conform to the properties we expect.
Robert Wiblin: So the statistical approach is kind of sampling and being like, well, most of the time it seems like it falls within these parameters, so that’s fine, whereas the formal one will be like proving that it can’t fall outside particular bounds?
Pushmeet Kohli: Yes, absolutely.
Robert Wiblin: Talk a little about the AI for science project and maybe like what things really excite you or what are the potential outputs from these projects that you think could really improve the world?
Pushmeet Kohli: Science is a very broad area and it is one of the key topics which gives us a way to understand the world that we live in and even who we are. In terms of the topics, we have no constraint on topics. We are looking for problems in the general area of science, whether it’s biology, whether it’s physics, whether it’s chemistry, where machine learning can help, and not just that machine learning can help, but a way of doing machine learning where you have a dedicated team which works with conviction towards a very challenging problem could help.
Pushmeet Kohli: So, if a problem can be solved by using machine learning, off-the-shelf type of machine learning techniques by some PhD student or some postdoc, then that might not be a good project for us because we are in this unique position where we have some of the best and most talented machine learning researchers and we have the ability to galvanize these people towards one very ambitious goal. So we look for projects where that approach really can make a difference.
Concrete problems machine learning can help with
Robert Wiblin: Yeah, so what are some of the concrete problems that you think machine learning can help with?
Pushmeet Kohli: One of the problems that we have already spoken about is our work on protein structure determination. So you know about proteins, that they are the building blocks of all of life, and everything about our own bodies is informed by how proteins interact with each other. They are like the machines, they are these nanomachines that are operating our whole body. We see the effects of it, but these micro-machines are actually what make us work.
Pushmeet Kohli: So understanding how they work has been a key challenge for the scientific community. One aspect of that challenge is if you have a protein, which you have specified as a sequence, can you figure out what would be its structure? Because in many cases, the structure actually informs what kind of work that protein does, which other proteins it will bind to, whether it will basically interact with other agents, and so on. This has been a longstanding problem in proteomics: how do you infer the structure of proteins? There are people who have spent their PhDs in trying to find a structure of one protein. So, it’s an incredibly hard and challenging problem. We took it on because we thought if we can make progress in this area it can have a very dramatic effect on the community. So this is an example of one of the problems that we tend to look at in the science team.
Robert Wiblin: Yeah. So if we managed to solve the protein folding problem, I guess that helps a lot with designing medicines that would have to interact with any proteins that are folding because then you know their shape and then you can potentially play with them.
Pushmeet Kohli: As I mentioned, protein structure informs protein functionality. That is the hypothesis, and in many cases it does. Then in terms of protein functionality, it has implications for antibody design, drug design, various different very challenging problems that different scientific disciplines have been trying to tackle.
DeepMind’s research in the near-term
Robert Wiblin: If listeners in five or 10 years’ time found themselves saying, “Wow, DeepMind made this amazing product that has made my life better.” What do you think that would most plausibly be? Maybe it already is like just improving maps, or improving lots of services that we use online indirectly.
Pushmeet Kohli: I think the way DeepMind thinks about this issue is in abstraction. Like intelligence is an abstraction and in some sense, it’s the ability to solve many different tasks. That informs how DeepMind is structured and how DeepMind operates. We are not looking at one specific task. Of course, we need tasks to ground the progress that we’re making on intelligence, but we’re working on this overall enablement technology which can enable a lot of different tasks. So we don’t evaluate ourselves on, what did we do on this particular task? But we generally evaluate ourselves on what technologies did we develop and what did they enable?
Robert Wiblin: So you’re trying to do more fundamental research into general intelligence, or intelligence at a broader level rather than just single applications?
Pushmeet Kohli: Absolutely, but at the same time I’d have to say that tasks are extremely important because they ground us, they tell us, they inform us how much progress we are making on this very challenging problem.
Robert Wiblin: Otherwise you get disconnected?
Pushmeet Kohli: Yeah, exactly.
Robert Wiblin: On the safe and robust AI team, what are some of the problems with current or near future AI systems that researchers are hoping to fix?
Pushmeet Kohli: Yeah, I think this is something that the machine learning community as a whole is sort of thinking about. If you think about the history of software development, people started off by developing software systems by programming them by hand and sort of specifying exactly how the system should behave. We have now entered an era where we see that instead of specifying how something should be done, we should specify what should be done. For example, this whole paradigm of supervised learning where we show examples to the machine or to the computer, that for this input you should provide this output, and for this input you should provide that output. You’re telling the machine what you expect it to do rather than how it should do it.
Robert Wiblin: Then it’s meant to figure out itself the best way to do it?
Pushmeet Kohli: It has to figure out the best way to do it. But part of the challenge is that this description of what you want it to do is never complete, it’s only partial. This is a partial specification of the behavior that we expect from the machine. So now you have trained this machine with this partial specification, how do you verify that it has really captured what you wanted it to capture, and not just memorized what you just told it? That’s the key question of generalization, does it generalize? Does it behave consistently with what I had in mind when telling it, when giving it 10 correct examples? That is the fundamental challenge that all of machine learning is tackling at the moment.
Robert Wiblin: Yeah. How big a problem do you think this is? I know there’s a range of views within ML and outside of ML. I guess some people think this is a problem like any other and we’ll just fix it as we go, whereas other people are more alarmed thinking, no, this is like a really fundamental issue that needs a lot of attention. Do you have any views on that?
Pushmeet Kohli: I think machine learning people have thought about it; it’s not as if it’s a new problem, generalization has been studied. The question of generalization has been studied ever since the beginning of machine learning. Like what are the inductive biases? What will machines learn? The question becomes much more challenging when you put it in the context of the complexity of the systems that we’re developing today, because the systems that we’re developing today are not simple linear classifiers, they are not simple SVMs. They are much more complicated nonlinear systems with variable compute and like a lot of different degrees of freedom. To analyze exactly how this particular model behaves or generalizes, and which specifications it would be consistent with, is a new type of challenge. So in some ways, it’s the same challenge that we have always been looking at, but in other ways it’s a completely different part of the spectrum.
Robert Wiblin: So I guess the concern might be that as ML models have to interact with the complexity of the real human world in trying to actually act and improve things, there’s a lot more ways for them to act out of like how you expected them to than when they’re just playing chess, where it’s like a much more constrained environment?
Pushmeet Kohli: Yes, absolutely. If you think about software systems, if you are thinking about a software system for, I don’t know, like you wrote a program in BASIC, or your first program in C++, and you don’t really care about what it did, like when you start it. But if that program gets installed in, say an airplane-
Robert Wiblin: Or the electricity grid.
Pushmeet Kohli: Or the electricity grid. You should care. So, even the software industry has considered this problem. There’s a long history, remember what used to happen with Windows, the Blue Screen of Death. It was quite common, it was a real technical challenge. Microsoft at that point in time was dealing with a lot of different … Was building a framework which could interact with various different devices, and it was a challenge to be able to do that robustly. The last two or three decades of work that has happened in formal verification, in testing software systems, has come to the point that we now expect the failure rate of these operating systems to be extremely small. It’s not as common as we used to encounter in the 80s and 90s.
Robert Wiblin: Is there a concern that when Windows 98 had a problem, it would have the Blue Screen of Death and stop, whereas it’s possible that machine learning algorithms, when they have a problem, they just boldly go ahead and do things that you didn’t intend and maybe you don’t notice until later?
Pushmeet Kohli: That’s a problem that happens with software systems generally. Termination analysis, for example, is an incredibly hard problem. How can you verify that a method will terminate? So if your software system doesn’t halt, it’s still a big problem, it doesn’t go away. So in some sense I don’t think you should have that distinction between normal software systems and machine learning systems. I think it is the same problem, it’s just that software systems are also being deployed in mission critical domains. Machine learning systems are beginning to be deployed in mission critical domains. Software systems are complex and machine learning systems are also complex, but in a different way. So I think the complexity is very different and the scale is very different. The underlying problem is the same, but the types of challenges that appear are different. As machine learning and AI techniques are deployed in many different domains, these challenges will become even more critical.
Robert Wiblin: Is there any way to explain, in plain language, the approaches to AI robustness that your team is working on?
Pushmeet Kohli: We’ve mentioned this particular view, that when we try to test someone, what we do is basically ask them a bunch of questions. Suppose you are trying to test a particular individual: you are interviewing them, you ask them a few questions and then you use the answers and how they performed on those questions to get a good sense of who they are. Then in some sense you are able to do that because you have some expectation of how people behave, because like you and I are both humans.
Robert Wiblin: Because we experience it with humans.
Pushmeet Kohli: Exactly, and because you yourself are human.
Robert Wiblin: Right. Okay.
Pushmeet Kohli: But when you are reasoning about some other intelligence then-
Robert Wiblin: Like a bird.
Pushmeet Kohli: Like a bird, then it becomes trickier. Even though we might share the same evolutionary building blocks for reasoning and so on, the behavior is different. So that comes to the question now, if there’s a neural network in front of you and you are asking it questions, you can’t make the same assumptions that you were making with the human, and that’s what we see. In ImageNet you ask a human, “What is the label of this image?” and even experts are not able to identify all the different labels because there are a lot of different categories and there are subtle differences. A neural network would basically give you a very high accuracy, yet if you slightly perturb that image then suddenly it will basically tell you that a school bus is an ostrich.
Pushmeet Kohli: So what we are trying to do is basically go beyond that simple approach of taking a few questions, the traditional view of taking a few questions and asking those few questions. What we are thinking about is, can we reason about the overall neural network’s behavior? Can we formally analyze it and can we see what kind of answers it can give and in which cases does the answer change?
Robert Wiblin: That makes total sense. I guess you’re trying to more formally analyze what is the envelope or what’s the range of behavior that a machine learning system can engage in beyond just sampling within the normal range of questions that you might give?
Pushmeet Kohli: Yeah. The traditional approach would be that you take a particular input and then what you do is basically you take that input and then you see how that input leads to activations in the neural network and eventually the neural network gives you an answer. Then you can see the path that the input took through the neural network to reach that answer. What we are doing, is we are saying, we are not going to ask you one question, we are going to ask you a space of questions and now we’re going to see what is the response to that space of questions all throughout the network. So in some sense, we are asking the neural network an infinite number of questions at the same time.
Robert Wiblin: How do you do that?
Pushmeet Kohli: So the way-
Robert Wiblin: A lot of compute?
Pushmeet Kohli: If you were to do it naively, you would spend all the compute in the Universe and still not be able to verify even a very small ImageNet network, or even an MNIST network. How we do it is by basically saying, let’s try to encapsulate or let’s try to represent that space compactly, not by those infinite points, but by certain geometries which allow us to capture that space in some low complexity. So if you are trying to bound … If you think about all the points in this particular room, there are an infinite number of them, but they’re bounded by just these four walls and the ceiling and the floor. So just these six equations of these planes bound all the infinite things that are inside this room.
Robert Wiblin: So you kind of try to shrink the full space into a lower dimensionality? Is that the idea, or?
Pushmeet Kohli: No. We are operating in the same dimensionality, we are just representing that space compactly. We’re using fewer things to represent that space and now we are going to say, how is the space of questions? So we have now infinite questions in this particular space, but the space itself is only represented by, I don’t know, like eight equations or 10 equations. But there are infinite questions that live in that space and now we are going to see how the neural network answers these infinite questions rather than just one question.
Robert Wiblin: And then it becomes tractable once you’ve defined it that way.
Pushmeet Kohli: Exactly.
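To make the picture above concrete, here is a minimal sketch of this kind of bound propagation, written in plain NumPy. Everything in it (the tiny two-layer network, its random weights, and the perturbation radius) is invented for illustration rather than taken from DeepMind's systems; the point is only to show how a box containing infinitely many inputs, described by a handful of inequalities, can be pushed through affine layers and ReLUs to give guaranteed bounds on the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a tiny two-layer ReLU classifier (4 inputs, 3 classes).
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def affine_bounds(lo, hi, W, b):
    """Propagate an axis-aligned box of inputs through y = W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def output_bounds(lo, hi):
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone, so bounds pass through
    return affine_bounds(lo, hi, W2, b2)

x = rng.normal(size=4)   # one nominal input
eps = 0.05               # the "space of questions": every input within eps of x

logits = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
target = int(np.argmax(logits))

out_lo, out_hi = output_bounds(x - eps, x + eps)
# If the target logit's lower bound beats every other logit's upper bound,
# the prediction provably cannot change anywhere inside the box.
verified = all(out_lo[target] > out_hi[c] for c in range(3) if c != target)
print(f"prediction {target} provably stable over the whole box: {verified}")
```

With random weights the check will often fail; the interesting part is that when it succeeds, it covers every one of the infinitely many inputs in the box at once, which is what distinguishes this from sampling a few test questions.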
DeepMind’s different approaches
Robert Wiblin: I was reading a blog post that you published, I think it was a month ago, Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification, it’s great that you have this safety research blog. I think it’s on Medium, we’ll stick up a link to it so people can check it out, there’s a lot of great posts on there. It sounded like you don’t only take that approach, you also try to like actively seek out, I think, the specific niche cases where the system might act completely differently from what you intended?
Pushmeet Kohli: Exactly. That’s like adaptive testing. So people who have given the SAT or GRE would … There is some sort of adaptive testing. You answer one question and then the question that you are asked depends on the answer that you gave. So this adaptive way of questioning also is much more efficient than just, I’ll prepopulate the 10 questions and you have to answer the 10 questions. If I can choose which questions I’m going to ask you, depending on the answers that you’ve given me, then I’m more powerful in finding places where you might be inconsistent or you might give the wrong answer.
Robert Wiblin: So you pose a question to it and then you get the answer, or you get a range of answers, then you choose the worst one and then you move from there, and then apply a bunch of similar questions that are even harder, and then take the worst answer from that and keep going until you just find the most perverse outcome that you can search for?
Pushmeet Kohli: Yeah, at a simple level, this is how the technique works, but in some cases you might not even know that in the first answer. For example, you ask a car to drive from point A to point B. So what is the answer there? The answer is basically you look at how the car is driving from point A to point B and just the behavior of how the car drives from point A to point B, gives you a lot about how the car is probably reasoning. So there is no perverse thing that it did, it basically did it correctly, but it gave you a lot of insights as to what might be that policy it’s thinking about and that informs your next question, rather than you selecting, going from point A to point B, point C to point D and so on. So the next point that you decide is informed by you observing how the actual car drove itself.
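As a rough illustration of this adaptive, adversarial style of evaluation, here is a toy sketch in NumPy. The "model" and the greedy random search are stand-ins invented for the example; in practice one would use gradient-based attacks or a learned adversary, but the structure is the same: each new question is chosen based on how the system answered the previous ones, homing in on the worst case.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_score(x):
    """Stand-in for a trained system's score for the 'correct' behaviour at input x."""
    return np.cos(x[0]) + 0.5 * np.sin(3 * x[1]) - 0.1 * x @ x

def worst_case_search(x0, eps, steps=500, step_size=0.02):
    """Greedy random search for the input inside the eps-box around x0 that gets
    the lowest score, i.e. the hardest question we can find for the system."""
    worst, worst_score = x0.copy(), model_score(x0)
    for _ in range(steps):
        # Propose the next question based on the worst answer found so far.
        candidate = worst + rng.uniform(-step_size, step_size, size=x0.shape)
        candidate = np.clip(candidate, x0 - eps, x0 + eps)  # stay inside the allowed set
        score = model_score(candidate)
        if score < worst_score:
            worst, worst_score = candidate, score
    return worst, worst_score

x0 = np.array([0.3, -0.2])
worst, score = worst_case_search(x0, eps=0.1)
print(f"score at the nominal input: {model_score(x0):.3f}, worst found nearby: {score:.3f}")
```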
Robert Wiblin: How confident do you feel about these methods? You’re like, yeah, we’re killing it, we’re going to solve this problem, it’s just a matter of improving these techniques.
Pushmeet Kohli: Yeah. If the answer is obvious, then it’s not a good question for DeepMind to answer. So DeepMind is, in some sense, in a unique position, where we are working in this ecosystem where there is academic research, there is industrial applied research and then there is AI fundamental research and we have a unique strength in the sense of how we are structured, the conviction with which we go for problems and so on. So in some sense, that always forces us to ask the question, is this the most challenging problem that we can work on and is this the best way we can contribute to the community?
Robert Wiblin: So you want to push the envelope?
Pushmeet Kohli: Yes.
Long-term AGI safety research
Robert Wiblin: There’s another group at DeepMind, called the technical AGI safety team, are you familiar with what they do and how it’s different from your own work?
Pushmeet Kohli: The technical AGI safety team reasons about AGI, where it talks about what are the issues that might come up as intelligent systems become more and more powerful. As systems become extremely powerful, you get these questions of, does the system align with my incentives? There are other safety issues that come in at that end of the spectrum. So the machine learning safety team and the technical AGI safety team are two sister teams which work very, very closely with each other. We share the techniques, but the problems that we look at sit at different points of the spectrum: the technical AGI safety team is looking at problems which are extremely hard but are going to come up in a few years’ time, while the machine learning safety team is looking, maybe sometimes using the same techniques, at problems which are happening today.
Robert Wiblin: Yeah. So what is the relationship between this like near-term AI alignment and the longer term like artificial general intelligence issues? Do you think that one is just naturally going to merge into the other over time as the AI systems that we have get more powerful?
Pushmeet Kohli: One would hope so, because it’s all about … The fundamental issues are quite similar, so if you think about value alignment or specification consistency, we talk about these things in different ways in the near-term and long-term safety regimes, but at the basic level the problems are quite similar.
Robert Wiblin: Yeah, it’s interesting. I feel like just two years ago, I heard a lot of people say that the short-term issues with ML systems are quite different from the long-term issues and working on the problems that we have now won’t necessarily help with the long-term issues, but that view seems to have become much less common, much less fashionable. Have you noticed that as well or is that just the people that I know?
Pushmeet Kohli: Yeah, I think there is a gradual realization that many of the problems are shared. Now, of course the long-term AGI safety research has some unique problems, which the short-term doesn’t have.
Robert Wiblin: Yeah. What are some of those?
Pushmeet Kohli: One of the things is when we talk about specification at the moment, when we think about deploying machine learning systems, we’re talking about it in a very specific domain. So the specification language, the way we express what we want the machine to do can be constrained. The language needed for specifying what the behavior should be can be limited, but when you talk about an AGI, an AGI basically can solve any problem on the planet.
Robert Wiblin: In principle.
Pushmeet Kohli: In principle. So then what is the language in which you specify it?
Robert Wiblin: How do you communicate?
Pushmeet Kohli: How do you communicate to that very powerful agent. It is a unique challenge.
Robert Wiblin: Do you think we’ll end up having to use human language? Because that’s like the most like high bandwidth method of communication that we have with other people, maybe it’s the highest bandwidth method we have with an AI system as well?
Pushmeet Kohli: It goes back to the question that our intelligence evolved in a particular way and in some sense there’s a good coupling between human language and human intelligence. So what comes first? What came first? But there is some notion that human intelligence is able to deal with concepts expressed in human language. Now, a very powerful intelligence, a different intelligence might have its own language, might have its own concepts, might have its own abstractions, but in order for us to communicate with it, we’ll need to either build a translator between those two languages or somehow try to make sure that the intelligence that we’re building conforms to or is very similar to a human language, so that it can understand the same abstractions and concepts and the properties that we expect of such systems.
Robert Wiblin: Are there any other differences between the work that you’re doing and the work of the AGI, or any different challenges that the AGI team faces that are worth highlighting?
Pushmeet Kohli: There are. At a very high level, the problems are quite similar, but the practical machine learning systems throw up a number of different sorts of issues. Some I think are shared, like the questions of privacy and security, and these are questions that arise in both contexts. I don’t think there are many problems which are different. The approaches and the problem instances might be different, but at the basic level, at the abstract level, the problems are quite similar.
How should we conceptualize ML progress?
Robert Wiblin: Thinking about another division between different kinds of work. I think one framing that a lot of people have about AI safety and reliability is that there are some people who are working on capabilities, like making AI more powerful, and then there’s other people, perhaps like you, who are working on reliability and safety and alignment and all of that. I guess the extreme version of this view is like, well, the capabilities people are creating the problem and then you’re cleaning it up, you’re fixing it up and making it better and solving the issues that are arising. Then possibly on that view, you think, well, working on capabilities is like possibly even harmful? It’s not clear how helpful that is? I think this is a view that was more common in the past and has also been fading, but do you have any comments on whether that’s a good way for people to conceptualize the whole deal with ML progress?
Pushmeet Kohli: That’s a very interesting question. A lot of people see ML safety work as a sort of tax. That you have to pay this tax to make sure that you are doing things correctly. Safety is something that is necessary, you have to do it, but you’re not driven by it. I don’t see it that way in some sense. So it’s not … I don’t think basically an organization which is doing safety work is paying a tax. In fact, it is to its advantage. How do we explain this?
Pushmeet Kohli: Suppose you and me took on a unique mission, and the mission was that we’re going to drive a car around the planet. One approach is you sit in your car seat and you drive off without putting on your seatbelt. The other approach would be, you put your seatbelt on. If we were just going from point A to point B, maybe one kilometer, the probability of an accident is so low that you might actually reach the destination without putting the seatbelt on. But if your destination is so far that we are circumnavigating the whole world, then if we think about the probability of who reaches the end, it will be the person who put on the seatbelt.
Robert Wiblin: Makes a meaningful difference.
Pushmeet Kohli: Exactly. So it’s about enablement, it’s not a sort of a tax, it’s basically enabling the creation and development of these technologies.
Robert Wiblin: Can’t remember who, but someone gave me this analogy to bridge building where they’re like, we don’t have bridge builders and then bridge safety people who are completely separate from them, but this is like, it’s not a bridge unless it doesn’t fall down, there’s no like anti-falling-down bridge specialists. That’s just part of it. I guess you’re saying it’s not meaningful to talk about building good ML systems without them reliably doing what you want. That’s like an absolutely core part of how you design it in the first place.
Pushmeet Kohli: Yeah, absolutely.
Robert Wiblin: Is there any steel man of this position, that like well maybe we don’t want to speed up like some sort of capabilities research, we might prefer to delay that until we’ve done more of the work that you’ve done? Perhaps we want to put extra resources into this alignment work, or reliability work but as early as possible.
Pushmeet Kohli: Yeah, I think the answer to that question actually is very contextual. In certain contexts, people have already made that case, that when you try to deploy machine learning systems in very safety-critical domains, you ought to understand what is the behavior of the system. In other cases where you’re trying to do some experimentation or the proof of concept and so on, it’s fine to be able to do that sort of stuff for fun and so on and to see what are the limits of things. But I think there is a spectrum and the answer is contextual, there is no clear answer.
Robert Wiblin: Yeah. Is it generally the case that work that improves like AI capabilities or allows it to do new applications or have better insights also increases safety as you go or also increases alignment as you go? Because that’s just like part and parcel of improving algorithms?
Pushmeet Kohli: Yeah, absolutely. In some sense if you think about machine learning, what is machine learning, sort of … I think of machine learning as a translation service. Machine learning is a translation service where you bring it some specification of what behavior you want out of your system and it translates it into a system which claims to have those properties. So, what is inside that box, inside that translation system? It has various different inductive biases. They’re either in the form of regularizers or different types of machine learning models, which have different inductive biases, and different optimization techniques and so forth. But all of those techniques are essentially trying to solve this translation problem: converting your specification, which could be input-output examples or just the input examples, in the case of supervised or unsupervised learning, or could be interactions with the world, and translating it into a classifier or a policy depending on the problem type.
Machine learning generally and the robustness problem are not different from each other
Robert Wiblin: I imagine there are some listeners out there who know quite a bit about ML and are considering careers in ML, and they would think that their passion is doing the kind of work that you’re doing or the work that the AGI team is doing. Imagining that they couldn’t actually find … There’s only so many people doing this alignment, reliability work. Imagine that they couldn’t get one of those positions but they could get some other general ML role to improve their skills, but then they’re nervous because ultimately what they really want to do is the thing that you’re doing. What advice might you give them? Would you just say dive in and you’ll be able to … Well, like in any role you can find some reliability thing to contribute, or at least be able to change into a more reliability-focused role later on?
Pushmeet Kohli: Machine learning generally and the robustness problem are not different from each other. In some sense every machine learning practitioner should be thinking about the question of generalization. Does my system generalize? Is my system robust? These are problems that not-
Robert Wiblin: Shows up everywhere.
Pushmeet Kohli: This is everywhere. The key advice that I would give to people is, when they approach the problem they should not approach it from the perspective of, well, if I take this particular input, apply this tool, I get this output. But think of it as to why that happens, or what are we after and how will we get that? Instead of, well, here is a very systematic view of what needs to be done; not how we should work, but what are we after?
Robert Wiblin: Yeah. Are there any ML projects or research agendas that don’t carry the label, safety, that you think like are going to be especially useful for safety and reliability in the long term? People wouldn’t think of it as an especially like reliability focused project, but it turns out that actually it is like, it’s going to have a huge influence on that potentially. Any of them which stand out?
Pushmeet Kohli: I think anything to do with optimization. So, optimization is the general area. It’s not tied to robustness or it’s not tied to even machine learning. Optimization is used in a general context for various different problems in operations research and game theory, whatever. Optimization is key to what we do. Optimization is a fundamental technique that we use in safety work to improve how our systems conform to specifications. We always are optimizing the performance of our systems, not to produce specific labels, but to conform to the more general problem or to conform to these general properties that we expect. Not to the simple properties of, well for this input it should be this output. That’s a simple property, but you can have more sophisticated properties and in traditional machine learning you are trying to optimize consistency with those simple properties or reduce your loss or empirical risk and in our case, we are reducing our loss or reducing the risk of inconsistency with the specifications.
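A minimal sketch of what that can look like in practice, with everything below (the data, the "specification" that the model must be non-decreasing, and the penalty weight) invented for illustration: the training objective is the usual empirical loss plus a hinge term that is zero whenever the specification holds, so the optimizer is pushed toward spec-consistent models rather than just label-consistent ones.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data where the empirical trend is slightly negative, but suppose the
# specification (from domain knowledge) is that the model must be non-decreasing.
x = rng.uniform(0, 1, 100)
y = -0.2 * x + 0.3 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0
lam, lr = 10.0, 0.02   # weight on the specification penalty, learning rate

for step in range(5000):
    err = w * x + b - y
    grad_w = 2 * np.mean(err * x)   # gradient of the ordinary task loss (mean squared error)
    grad_b = 2 * np.mean(err)
    viol = max(0.0, -w)             # hinge: positive only when the spec w >= 0 is violated
    grad_w += lam * 2 * viol * (-1.0 if viol > 0 else 0.0)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"slope with spec penalty: {w:.3f}; plain least-squares slope: {np.polyfit(x, y, 1)[0]:.3f}")
```

The penalized fit ends up with a slope close to zero rather than the negative slope the raw data would suggest, which is the trade-off Pushmeet describes: you are reducing the risk of inconsistency with the specification, not just the loss on the examples.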
Biggest misunderstandings about AI safety and reliability
Robert Wiblin: What do you think the general public or listeners, like what would be their biggest misunderstandings about AI safety and reliability?
Pushmeet Kohli: That’s a very hard question.
Robert Wiblin: Because there’s a lot of them or just how it depends on who it is?
Pushmeet Kohli: Yeah, it depends on who it is. I think one interesting thing that people need to think about is in the same way that we don’t expect every human to be the same, we shouldn’t expect that every machine learning system is the same. The second thing is that when we test our machine learning systems and then we make claims about, well, our model performs very well on a particular data set, say ImageNet or some other data set for, I don’t know, speech recognition or … You are solving that data set. Solving that data set does not imply that you have solved that problem. There’s a difference between solving a benchmark, or getting a high performance on the benchmark versus solving the problem.
Pushmeet Kohli: Then the key question is, if solving a benchmark does not imply solving a particular problem, then what does? That is a question where I think a lot more needs to be done, because that is the fundamental problem of what do we want or what do we expect out of systems? When and how do we articulate that? What does it mean to solve an image classification problem or some other problem? That is where the general public need to think about what is it that they’re after and what do they expect out of these systems?
What can we learn from software safety?
Robert Wiblin: Before you were talking about this analogy between just general software debugging and security and robustness of AI. Do you want to expand on that analogy and what things we can learn from software safety?
Pushmeet Kohli: There are certain things that we can learn. First of all, even though it is incredibly hard, progress can be made. We knew that the halting problem was undecidable, yet we now have very good tools for termination analysis. That doesn’t mean that we’ve solved the halting problem, yet it means that for certain instances we can show that things will terminate, and programs do not just hang all the time and they sometimes do-
Robert Wiblin: Less than they used to.
Pushmeet Kohli: Less than they used to. So even though some problems appear incredibly challenging when you first look at them technically, over time you make progress and you find ways to somehow approximate what we’re after. So I think that is a good thing to learn from the software reliability issue. The other thing to learn is when you think about defects in software. There are those defects, and it’s not enough to say that, oh, there is a defect, but nobody will find it. That is something that we should learn, because there are always people who will find it.
Robert Wiblin: Because they’re actively trying or just because there’s so many people using something that eventually they run into it?
Pushmeet Kohli: For both reasons. So there is nature versus adversary. Nature will find your bug or the adversary will find your bug and they will both use it and you will incur a cost for both cases. So you have to think about how do you want to make sure that your machine learning system is robust to nature and to the adversary.
Robert Wiblin: So you think people systematically underestimate how likely it is that the problems they know are in their software or their AI system will actually materialize and create problems?
Pushmeet Kohli: Yeah. I think, nobody starts off by saying, oh, I should write this software, which can be hacked by some hacker and use it to steal some information. Everyone, every software engineer is trying to make sure that their program does what it says on the tape. But still-
Robert Wiblin: It’s hard.
Pushmeet Kohli: But it’s hard. Even after decades of work, we still see that defects or certain bugs in machine learning systems, or in normal software systems, are sometimes exploited by people who can then use them for various different purposes.
Robert Wiblin: Yeah, definitely. It seems like computer security is just an unsolved problem and like a severe ongoing problem. I guess … Would you take from that analogy like we could have just like many years where we’re going to have issues with ML not doing what we want and it’s just going to be like a lot of real slog for potentially a decade or two before we can fix it up.
Pushmeet Kohli: In some sense, one thing that we have to take is we have to take it seriously. The second thing that we have to take in is that we have to learn from history. There is already a lot of work that has happened. The third, most optimistic, thing that we can take is that, yes, the systems that we are building are extremely complex, but they’re also simple in other ways. There’s simplicity in basically the building blocks, and that simplicity should help us actually do much better than the traditional software systems, which are messy in their own way.
Robert Wiblin: Can you explain how they’re simpler?
Pushmeet Kohli: We were talking about this whole idea of not asking the system one question but asking infinite questions. That technique of asking the machine infinite questions, or reasoning about how it is going to perform not on just one particular input but on a set of inputs, or a space of inputs, is called abstract interpretation in the software analysis and software verification community. When I was talking to you about it in the context of neural networks: because our operators are simpler in some sense, there are these neurons which behave in a specific way, we can capture what transformations they are going to do to the input. While in a traditional program there are so many different types of operators with very different behaviors and so on. So you can do it there as well, but it’s slightly more complicated.
Robert Wiblin: Yeah. That leads into my next question, which was, is it going to be possible to formally verify safety performance on the ML systems that we want to use?
Pushmeet Kohli: I think a more pertinent question is, would it be possible to specify what we want out of the system, because at the end of the day you can only verify what you can specify. I think technically there is nothing, of course this is a very hard problem, but fundamentally we have solved hard search problems and challenging optimization problems and so on. So it is something that we can work towards, but a more critical problem is specifying what do we want to verify? What do we want to formally verify? At the moment we verify, is my function consistent with the input-output examples, that I gave the machine learning system and that’s very easy. You can take all the inputs in the training set, you can compute the outputs and then check whether the outputs are the same or not. That’s a very simple thing. No rocket science needed.
Pushmeet Kohli: Now, you can have a more sophisticated specification saying, well, if I perturb the input in some way or transform the input, and I expect the output to not change or change in a specific way, is it true? That’s a harder question, and we are showing that we can make progress on it. But what other types of specifications or what other type of behavior or what kind of rich questions might people want to ask in the future? That is a more challenging problem to think about.
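As a toy illustration of the difference between these two kinds of specification, here is a sketch (with a made-up stand-in model) of checking, first, consistency with the training examples, and second, invariance of the prediction under small perturbations. The second check is done here by sampling, which can only ever refute the specification, never prove it; proving it is what the verification work described earlier is for.

```python
import numpy as np

rng = np.random.default_rng(3)

W = np.array([[0.8, -0.3], [-0.2, 0.9]])   # a fixed made-up linear "model"

def model(x):
    return int(np.argmax(W @ x))

# Specification 1: consistency with the labelled examples (the easy check).
train_x = rng.normal(size=(50, 2))
train_y = [model(x) for x in train_x]       # pretend these are the labels we were given
spec1_ok = all(model(x) == y for x, y in zip(train_x, train_y))

# Specification 2: the prediction should not change under small perturbations.
# Random sampling can find a counterexample, but passing the check proves nothing.
def spec2_ok(x, eps=0.01, trials=1000):
    y = model(x)
    return all(model(x + rng.uniform(-eps, eps, size=2)) == y for _ in range(trials))

print(spec1_ok, all(spec2_ok(x) for x in train_x[:10]))
```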
Robert Wiblin: Interesting. So then relative to other people you think it’s going to be figuring out what we want to verify that’s harder rather than the verification process itself?
Pushmeet Kohli: Yeah, like how do you specify what is the task? Like a task is not a data set.
Robert Wiblin: How do you? Do you have any thoughts on that?
Pushmeet Kohli: Yes. I think this is something that … It goes into like how this whole idea of, it’s a very philosophical thing, how do we specify tasks? When we talk about tasks, we talk about in human language. I can describe a task to you and because we share some notion of certain concepts, I can tell you, well, we should try to detect whether a car passes by and what is a car, a car has something which has four wheels and something, and can drive itself and so on. And a child with a scooter, which also has four wheels goes past and you say, “Oh that’s a car.” You say, “No, that’s not a car.” The car is slightly different, bigger, basically people can sit inside it and so on. I’m describing the task of detecting what is a car in these human concepts that I believe that you and I share a common understanding of.
Pushmeet Kohli: That’s a key assumption that I’ve made. Will I be able to communicate with the machine in those same concepts? Does the machine understand those concepts? This is a key question that we have to think about. At the moment we’re just saying, “this input, this output; this input, that output”. That is a very poor form of teaching. If you’re trying to teach an intelligent system, just showing it examples is a very poor form of teaching. There’s something much richer: when we talk about solving a task, we talk in human language and human concepts.
Robert Wiblin: It seems like you might think it would be reliability-enhancing to have better natural language processing, that that’s going to be disproportionately useful?
Pushmeet Kohli: Natural language processing would be useful, but the grounding problem of does a machine really understand-
Robert Wiblin: The concepts, or is it just pretending, or is it just aping it?
Pushmeet Kohli: Exactly.
Robert Wiblin: Interesting. So is there a particular subset of language research that’s trying to check whether the concepts underlying the words are actually there?
Pushmeet Kohli: Absolutely. That, I think, is a key question, and many people are thinking about it.
Robert Wiblin: How do we even check that?
Pushmeet Kohli: So, yeah, that’s why we’re all here.
Robert Wiblin: Yeah, interesting. I suppose you change the environment, change the question, and see whether it has understood the concept and is able to transfer it, and so on?
Pushmeet Kohli: Exactly. That is a particular form of generalization testing where you are testing generalization under interventions. So you intervene and then you say, “Oh, now can you do it?” In some sense you are testing generalization.
Robert Wiblin: Forgive my ignorance, but can you ever check whether a system understands the concepts by actually looking at the parameters in the neural net, or something like that? Or is that just beyond us? It’s like how I can’t understand you by checking your neural connections, because we don’t even understand what that would mean.
Pushmeet Kohli: It depends on the concept. If I can analytically describe a concept in terms of an equation, then I can do something very interesting: I can say, here is an equation, and now I will try to find consistency between that equation and how the neural network operates. But if I cannot even analytically describe what I’m after, then how will I verify it?
Robert Wiblin: Yeah. Okay. So if you designed a system to, say, do a particular piece of arithmetic, then we know how to search for that, but for something like the concept of a cat, not really?
Pushmeet Kohli: Yeah. So now the question is basically, how should we change the specifications? What should a specification language look like, in which people can analytically describe the things they’re after?
Robert Wiblin: Are there any other parallels between robustness of AI and software debugging and security that you want to highlight?
Pushmeet Kohli: I think there are so many parallels it’s difficult to choose. Software testing has gone through its own evolution: static analysis and dynamic analysis. In static analysis you look at just the software and try to reason about it without actually executing it; in dynamic analysis you actually execute it. We do both kinds of things in machine learning. In some cases we test: we actually run the model to see how it is performing. In other cases we just look at the model structure and say, well, I know it will be translation invariant, because it’s a ConvNet, and a convolutional network gives us translation invariance, so I don’t even need to run it to show that. So there are different types of reasoning that you can do.
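As a toy illustration of the “dynamic analysis” side of that parallel, the sketch below actually runs a tiny made-up model, a circular convolution followed by a ReLU and global average pooling, on shifted copies of an input and checks that the pooled feature does not change. The “static” argument would reach the same conclusion without running anything, just from the structure of the operations.

```python
import numpy as np

def circular_conv1d(x, kernel):
    """Cross-correlation of a 1-D signal with wrap-around boundary conditions."""
    n = len(x)
    return np.array([sum(kernel[j] * x[(i + j) % n] for j in range(len(kernel)))
                     for i in range(n)])

def pooled_feature(x, kernel):
    """Convolution, then ReLU, then global average pooling (shift-invariant by construction)."""
    return np.maximum(circular_conv1d(x, kernel), 0.0).mean()

rng = np.random.default_rng(0)
x, kernel = rng.normal(size=32), rng.normal(size=3)
# "Dynamic analysis": actually execute the model on every shifted input and compare.
assert all(np.isclose(pooled_feature(x, kernel),
                      pooled_feature(np.roll(x, s), kernel)) for s in range(32))
print("pooled feature is unchanged by every circular shift of the input")
```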
Are there actually a lot of disagreements within the field?
Robert Wiblin: Let’s talk about predicting the future, which apparently is pretty tricky. How forecastable do you think progress in machine learning is and what do you think causes people to disagree so much?
Pushmeet Kohli: I don’t disagree with many people, so … I tend to think that people are talking about the same thing, but from slightly different perspectives. In some sense, I don’t find that there are so many disagreements, even when people try to portray that there are disagreements, sometimes if you really look deep inside the arguments they’re talking about the same thing.
Robert Wiblin: I guess I was thinking that there have been surveys asking people in ML and related fields, “When do you think we’ll have an AI system that can do most human tasks at a human level?”, and the answers range from five years away, to 100 years, 200 years, never, it’s impossible. It’s all over the map, and that leaves someone like me totally agnostic about it.
Pushmeet Kohli: This is a very good example of what I mean by people are answering different questions.
Robert Wiblin: Yeah. Okay, so you think it’s like question interpretation is driving a lot of this?
Pushmeet Kohli: Exactly. So what does it mean to be good at human-level tasks? It gets interpreted in different ways.
Robert Wiblin: Interesting. So you think if you could get all of those people answering the survey in the same room to like hash out exactly what they mean, then the answers or the timelines or the forecasts would-
Pushmeet Kohli: Definitely, it would change. Of course, some people would still have different biases; some would have more information, some less, but I think the variance would definitely decrease.
Forecasting AI development
Robert Wiblin: I guess, do you want to comment on what you think AI systems might be able to do in five or 10 years that would be interesting to people? Someone submitted this question: “What’s the least impressive accomplishment that you’re very confident won’t be able to be done within the next two years?”
Pushmeet Kohli: I don’t know. Again, it’s a very subjective question as to “What is the least impressive”. Somebody might say, “Well, changing a baby’s diaper would be a very sort of …” It’s something that everyone can do.
Robert Wiblin: It’s prosaic. On the other hand, very hard.
Pushmeet Kohli: Who would trust a robotic system with their six-month-old baby? So the challenge is getting that level of trust in another intelligence, and we are talking about another intelligence. We can’t make the assumptions that you and I regularly make about each other: in some sense we are the same, we have so many similarities in our DNA that we are going to think similarly and act similarly, and we have the same requirements. You eat, I eat; you need to breathe air. When you’re thinking about a different intelligence you can’t make those assumptions. Getting to that level of trust is a very difficult thing to do.
Robert Wiblin: One attitude I hear about, in terms of AI forecasting, is from people who read the news and say, “My God, these systems are now killing it at Go and at chess, now it’s playing StarCraft II with these amazing strategies. Now we’ve got ML systems that can write uncanny essays that look kind of like they were written by a human. This is amazing, there’s so much progress.” Then I’ve heard other people say, “Well, if you put this on a plot and map out the actual progress, it just looks linear, and we’ve thrown a lot more people at it; there are a lot more people working in ML than there were 10 years ago.”
Robert Wiblin: Yet in some sense it seems like it’s just linear progress in terms of the challenges that ML can meet. Do you have any thoughts on that? Is ML progress impressive at the moment, or is it just what you would expect? Do we have to throw more people at it because it’s getting harder and harder to make incremental progress?
Pushmeet Kohli: My PhD thesis was titled “Minimizing Dynamic and Higher Order Energy Functions using Graph Cuts”. It was some particular topic in function optimization and so on. When I was citing papers in my thesis, I could find maybe 50 or 60 very relevant papers, and they ranged from the 1970s to the late 90s. If you do a similar analysis for the type of work we are doing now, the field is growing exponentially. The type of things we are able to do is changing rapidly, and we are not in the same context, because of the technologies we now have for research: there was no Google Scholar at that point in time, people had to go to libraries and actually look through journals to see which relevant papers had been published.
Pushmeet Kohli: These days, by the time you have reached a conference, the paper is already old and two or three iterations have already happened. So there are a lot more people working on the domain, and there are advances being made. Do we think we can solve the problem completely? It depends on the definition of the problem. Yes, we can solve benchmarks very, very quickly. The rate of progress we are making on benchmarks is amazing, but I think the real difficulty will lie in the problem definition. How do we define the problem?
Robert Wiblin: Can you flesh out what you mean by that? The question is what we really want it to do, or…?
Pushmeet Kohli: Or how do we measure progress?
Robert Wiblin: Okay.
Pushmeet Kohli: Right. We were talking about how do you specify a task and then the question is how do you specify an associated metric?
Robert Wiblin: To be a little bit facetious: well, I want an ML system that can do my job, that can do whatever I do. Is that… that’s too vague, I guess.
Pushmeet Kohli: Yeah. Exactly. If I think –
Robert Wiblin: That’s the mistake that people make.
Pushmeet Kohli: Yes. Once you try to formalize it, it becomes very interesting, because then you can actually measure it, and in some cases machine learning systems are miles ahead, while in other places they’re far behind. Take transcription, for example. We now have machine learning systems that can transcribe very fast and quite accurately. But if the transcription quality really mattered, say you are discussing something at the UN and there could be a war if something were not transcribed correctly, you wouldn’t use it. Those are the places where your interpretation of the task and your interpretation of the metric become extremely important.
Robert Wiblin: You keep returning to this issue that we have to specify exactly what we want, properly. Is that something you think other people in ML maybe don’t fully appreciate the importance of, how essential it is?
Pushmeet Kohli: I think people do. Most of the work in machine learning is about regularization, generalization and inductive biases: what inductive biases certain regularizers or certain model architectures have, and so on. People think deeply about these issues. You have some data points, but how do you interpolate between those points? What is the behaviour of the system between those points, or outside them, away from them? Everyone thinks about that key question of generalization, but we were thinking about it in an abstract, low-dimensional world where those dimensions sometimes had no meaning. Now suddenly machine learning is in the real world, where all these things have meaning and have implications. Failures of generalization along different dimensions have different implications. It’s about coming to grips with the realization that whatever you do will have implications: it’s not just about the generalization bound you can prove, it’s that the generalization bound has an influence on society, on how something will happen in the future.
Career advice
Robert Wiblin: Cool, let’s talk about some advice for the audience, so we can get people helping you at DeepMind or working on other ML projects. If there were a promising ML PhD student who for some reason just couldn’t work at DeepMind, what other places would you be excited to hear they were going to?
Pushmeet Kohli: I think there is a lot of machine learning research happening across the board, in academia, in various industrial research labs, as well as labs like OpenAI. There is a generally healthy ecosystem of AI research, and there is no one optimal place for everyone. There are different roles, and every organization is contributing in its own right. Some people really want to impact, I don’t know, a specific application; it’s good for them to work on that particular application and think about the questions we were talking about: how do you actually specify what success means for it? Other people could say, “I’ll look at it at a broader level and think about what the language should be in which we define specifications.” So there’s a whole ecosystem, and it’s important for people to work at places which allow them to think about the problem, which give them room to develop themselves and learn about the area rather than just apply something known.
Robert Wiblin: On that, how do you think academia compares to industry? You were briefly doing a postdoc at Cambridge before you went to Microsoft?
Pushmeet Kohli: I’ve supervised a number of PhDs in the past, and I think academia and industrial research both have their unique strengths. In academia you have a very rich environment where you are exposed to a number of different ways of thinking, and you have time to reflect on your own philosophy, your own way of thinking about things. It gives you a lot of freedom and time to build a philosophy. Compare that with DeepMind. DeepMind also gives you some room to grow and think about your philosophy, but its unique strength is that you’re working with 20 or 30 different people together, so collaboration is key. If your goal is to teach people, or to supervise a number of different students, then taking up a good academic teaching or research role at a university is a very good option for you. But if you want to work on a very difficult problem alongside peers, then DeepMind becomes a very good place for you.
Robert Wiblin: I imagine that you have some role in hiring or recruitment for the two teams that you’re involved with. What reservations do people potentially have about coming and working on those teams at DeepMind, and what do you say to them? One might be that they don’t want to move to London; I was guessing that could sometimes be a sticking point for people.
Pushmeet Kohli: Yeah. Sometimes people have family constraints, and of course they want to be in certain geographies. DeepMind is also quite flexible in terms of geography, but at the same time we have to make sure that projects have critical mass, because that’s how we operate: we operate in teams on bigger, focused projects. So it’s important for us to have critical mass in our teams; you can’t have one person on one side of the planet working on the same project as the rest of the team on the other side. If there’s a critical mass of people working on a project in a particular geography, that makes sense, but sometimes that doesn’t work out for some people.
Robert Wiblin: Yeah. I just moved to London and I’m really loving it a couple of months in, so if that’s anyone’s reservation, send me an email and I can tell you how great London is. Another possible reservation some people might have is that they don’t want to work in such large teams, or in such a team-oriented environment, if they’re used to small groups or individual research?
Pushmeet Kohli: Yeah, I think that’s totally fair. People have different expectations and different preferences for the type of work they want to do. If you are an algorithmic researcher who wants to spend their time proving results about certain algorithms, you can do that anywhere. Of course, you would get feedback from peers and it would be valuable, but you can get that while still travelling, and you can still collaborate with some people at DeepMind. So it’s not necessary to be part of a very big team. But if you want to, say, build the next AlphaGo or solve something really big, then you need to be part of a big team. Those kinds of researchers are really attracted by the mission and the execution strategy of DeepMind.
Robert Wiblin: Other than I guess passion for DeepMind’s mission, what kind of listeners would be best suited to working here?
Pushmeet Kohli: One of the most important attributes, in my opinion is the willingness and the hunger for learning, because we are constantly learning.
Robert Wiblin: Is that because the technology’s advancing quite quickly that you just have to always be learning new methods?
Pushmeet Kohli: Exactly. There are certain roles in which you say, well, I went to university, I learned this stuff, and now I’m going to apply this stuff. DeepMind is a place where you are constantly learning, because we are continuously changing. We’re making progress in our understanding of what’s possible, so we’re constantly learning about new techniques, new algorithms, new results, new approaches. You are a lifelong student. So if you are comfortable with that, if you really like to grow continuously in terms of your knowledge base, then DeepMind is a very good place for you.
Robert Wiblin: Are there any particularly exciting projects or roles at DeepMind that listeners should be aware of, that maybe they should apply for now or keep in mind for the future?
Pushmeet Kohli: I think DeepMind is hiring across the board: in technical teams, in research teams, in engineering roles, and in communications, which is important not just for making clear to the wider world what we are doing, but also for understanding how people perceive tasks. This whole question of what we are after: one single researcher cannot come up with the definition of what a task means. You have to communicate with people to understand what they are really after in a particular problem.
Robert Wiblin: Well, yeah, obviously we’ll stick up a link to DeepMind’s vacancies page so people can find out what’s on offer at least at the point that the interview goes out.
Pushmeet’s career
Robert Wiblin: Let’s talk a little bit about your own career, since you were a rising star at Microsoft and are now a rising star at DeepMind as well. How did you advance up the hierarchy in machine learning so quickly? Especially starting from India, where you had to go overseas and build your reputation somewhere you didn’t grow up.
Pushmeet Kohli: I have been extremely lucky in getting some very, very good mentors and very good colleagues. I grew up in India and did computer science in my undergrad, and it just so happened that I had a very good teacher for my automata theory course, which got me really interested in formal methods and so on. I did some research during my undergrad years, and that led me to Microsoft Research in Seattle, where I was working with one of the best teams in formal methods in the world. I don’t think many undergraduates would even dream of working there, so I was extremely lucky to intern with that team, and then I spent a lot of time in that group.
Robert Wiblin: They sought you out when you were doing a PhD in India, I think; Microsoft emailed you and tried to get you into this internship. Is that normal?
Pushmeet Kohli: I was not doing my PhD, I was doing an undergrad.
Robert Wiblin: What?
Pushmeet Kohli: Apparently, at that point in time Microsoft had started a research lab in India, and as part of that initiative they had, I think, an internship program where they would ask different computer science departments across the country to nominate students; then they would interview these students and take four or five from the whole country. One fine afternoon, I got this email from a research scientist at Microsoft Research in Redmond saying, “Your department has nominated you for an internship position in Seattle, in Redmond.”
Pushmeet Kohli: First of all, I did not know that they had nominated me. You just get this email out of the blue and think, “What is this about?” Then they said, “Can you meet me in seven or eight hours?” That’s 1:00 or 2:00 AM India time, because the interview was happening on Seattle time. So, half asleep, I sat through this interview call. They asked me what I was working on and I told them about some of the things I’d done. I had just written a technical paper on some of that work, so I forwarded it to them as well. Then a few weeks later I got a letter saying, “You should come to Seattle to do an internship.”
Pushmeet Kohli: Yeah, it was a very strange experience.
Robert Wiblin: I guess, what can we learn from this? Get your supervisors to put you forward for things, or-
Pushmeet Kohli: At that point in time I had no plans to leave India. My idea was that I was going to finish my undergrad studies and stay in India; I wanted to be close to my family and so on. Then they asked me to do this and I said, “Okay, this sounds like a great learning opportunity, so I’ll go.” It’s important to take that initiative and sometimes leap into the unknown, because you don’t know. At that point in time it didn’t make any sense for me to leave; I should have finished my undergrad and taken up a full-time job, but here I was taking an internship in a research lab.
Pushmeet Kohli: At that point in time I had no intention of doing a PhD, but I went to that research lab, did the internship, and then they convinced me that I needed to do a PhD. Then one of the researchers at Microsoft Research was moving to academia as a professor and offered me a PhD position. So in fact I did not even apply for the PhD position, and somehow I was enrolled in a PhD program. Sometimes these things happen, and you just have to go and do it, roll with it.
Robert Wiblin: Is it the case that today, people who are doing well in ML or CS degrees, whether in the US or UK or India, that they’re getting sought out by organizations like Microsoft or Google, to be like headhunted in a sense? Or was that just something that was happening at that particular era?
Pushmeet Kohli: I think organizations are constantly looking for the best people. People are what make organizations; an organization is not a particular building or room or computer, it’s the people. And organizations are constantly looking out for the right individuals. So it doesn’t matter whether you are at MIT or Berkeley or at some random university in some random country, or even whether you have had any computer science education at all. If you look at the problem from the right perspective, show that you are making a contribution, and are thinking about the problem in a deep way, then people will seek you out.
Robert Wiblin: Seems like you advanced up the hierarchy in Microsoft and Google, pretty quickly. What do you think makes someone, perhaps you, a really productive researcher?
Pushmeet Kohli: The most important thing is to always be a student. Keep learning. Part of the learning process is sharing knowledge. When you share knowledge and you collaborate with people, and you talk to people, you learn a lot.
Robert Wiblin: Is there a lot of socializing at DeepMind?
Pushmeet Kohli: You can call it socializing, but it’s more about collaboration. Be passionate about what other people are working on, try to learn what they’re working on, and try to see if there are any insights you have that might help them achieve their mission. I think that is the best way of learning. If you can contribute to someone’s success, that is the best way to learn from them, to earn their respect, and to actually contribute to the organization. So it’s a constant thirst for learning about what people are doing and contributing to it.
Robert Wiblin: Yeah, speaking of constantly learning, I think, your original background is in software verification and formal methods. Was it hard to make the transition into machine learning?
Pushmeet Kohli: I did my undergrad in computer science, then worked in this research group on formal verification, then did my PhD in discrete optimization, applying it to inference in Markov random fields and other machine learning models for computer vision. In fact I did my PhD in a computer vision group, where I was developing methods for efficient inference in the more sophisticated models that were all the rage at that time.
Pushmeet Kohli: Then I moved to Microsoft Research, and one of the first projects I did there was in computer graphics. For a long, long time I was working in computer graphics, 3D reconstruction, and these kinds of things. At some point Microsoft worked on Kinect and its human pose estimation system. That was, I think, the first time I started thinking very deeply about discriminative learning and high-capacity machine learning models. I was a Bayesian from my PhD upbringing, but I first encountered these discriminative machine learning projects at Microsoft, and over time I naturally wanted to combine the two approaches.
Pushmeet Kohli: I did some projects in probabilistic programming, and alongside this I worked with a collaborator on game theory, so I’ve done quite a bit of work on game theory and on applications of machine learning in information retrieval. I wanted to learn about a lot of different views, and once you have worked on these problem areas you get insights from various aspects of machine learning. Then I finally came to more formal machine learning. It was not very difficult at all, because having worked on applications you already knew what the problems were. In some sense you have a very big advantage because you know what the issues are: the data sets are always biased, so what generalization do you need, how would you get it, what are the hacks people use to get it, and can you formalise those?
Pushmeet Kohli: So in some sense it becomes much easier, because you have already been in the trenches and understand the issues involved; then, coming back to deep learning proper, thinking about what needs to be done comes very naturally to you.
Robert Wiblin: As I mentioned earlier, not everyone who wants to help with AI safety, AI alignment, reliability and so on has what it takes to be a researcher. I certainly know I wouldn’t. What other ways are there that people can potentially help at DeepMind or collaborate with DeepMind: communications, program management, recruitment? Are there other kinds of supporting roles?
Pushmeet Kohli: All the roles that you mentioned are extremely important. Everyone at DeepMind, from the program managers to the AI ethics group to the communications group, is playing an extremely important and necessary role. These are not optional roles; all of them are necessary. We were talking about specifications, about trying to understand from people what they mean by a task. At a fundamental level that is a communication problem: you are trying to work out what it is that people are after, what they want. So DeepMind is very holistic in that sense. We are not just a bunch of people working on optimization or deep learning or reinforcement learning; there are people coming from various different backgrounds who are looking at the whole problem very holistically.
Robert Wiblin: Can you describe some concrete ways that those roles can help with alignment and safety in particular for someone who might be a bit skeptical about that?
Pushmeet Kohli: Think about ethics, and the whole ethical frameworks that have been built up in the literature. A machine learning researcher, an optimization researcher, or one of the best students in the world who has just come out of a reinforcement learning role might not know about the ethical implications of how research is done, the biases that are in data sets, or even what the expectations from society are. Similarly, legal experts know what regulations you need to conform with as a responsible organization. So all these different types of people play a very important role in ultimately shaping the research program and its deployment.
Robert Wiblin: We’ve covered machine learning career advice and study advice on the show before, so we don’t have to go over all of it again, but do you have any unusual views about underrated ways people can prepare themselves to do useful ML research, underrated places to work, or ways to build up their skills?
Pushmeet Kohli: I think it’s working on a real problem: trying to actually solve it, and then stress-testing the learned model to break it. That’s a great way of getting insight into what the system has learned and what it has not learned.
Robert Wiblin: So you’re in favor of concreteness, of trying to solve an actual problem rather than staying in the abstract?
Pushmeet Kohli: Yeah. You have to do both, but I think this is quite an underrated approach to actually try to build something, even if it is a very simple thing, and see if you can break it.
Robert Wiblin: Break it how?
Pushmeet Kohli: Like make it behave-
Robert Wiblin: Behave wrong?
Pushmeet Kohli: Behave wrong.
Robert Wiblin: Oh, interesting, okay.
Pushmeet Kohli: Then try to understand why it behaved wrongly.
Robert Wiblin: Oh, interesting. So if you’re interested in working on alignment, you want to find ways that AI becomes unaligned by accident and actually explore that. Do you have any concrete advice on how people can do that?
Pushmeet Kohli: Just take any problem. You can take machine learning competitions on Kaggle, or even some of the very simple toy data sets or benchmarks people have; say, MNIST is a very simple data set and benchmark. You can play with MNIST and see what kinds of things you can do to the images such that the classifier stops recognising them, even though a human would say, “Oh yeah, that is a four. Why are you not saying it is a four?” and the model says, “No, I don’t know, it’s a one.”
Robert Wiblin: Is it possible for people at home to create adversarial examples, to do their own optimizing for the failure of ML systems?
Pushmeet Kohli: You can create your own adversarial example by hand. You can just draw a picture and say, “Try detecting this four. This is a perfectly valid four: I could ask 20 people and they would tell me it’s a four, and I want to create a four which you will not accept.”
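If you want to automate that search rather than drawing by hand, a standard starting point is a one-step “fast gradient sign” attack. The sketch below is for a plain linear softmax classifier, where the input gradient has a closed form; the weights W and b, the image x, and the label are all assumptions, e.g. a simple model you trained yourself on MNIST, and none of this is specific to DeepMind’s tooling.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_attack(W, b, x, true_label, epsilon=0.15):
    """One-step fast-gradient-sign perturbation of a single input.

    W: (num_classes, num_pixels) weights of a linear softmax classifier
    b: (num_classes,) biases
    x: flattened image with pixel values in [0, 1]
    For this linear model, the gradient of the cross-entropy loss with
    respect to the input is W.T @ (softmax(W @ x + b) - one_hot(true_label)).
    """
    p = softmax(W @ x + b)
    target = np.zeros_like(p)
    target[true_label] = 1.0
    grad = W.T @ (p - target)                       # d(loss) / d(input)
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Usage (W, b, x, true_label are hypothetical: for instance a linear MNIST
# model you trained yourself and one flattened 28x28 test image):
# x_adv = fgsm_attack(W, b, x, true_label)
# print(np.argmax(W @ x + b), "->", np.argmax(W @ x_adv + b))
```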
Robert Wiblin: That makes sense, yeah. I guess if someone had been doing that, you’d potentially be more interested in hiring them; they’ve got the right mindset, since they’re throwing themselves into it. Cool. To what extent do you think people can pick up ML or AI knowledge by doing data science jobs, where it’s kind of incidentally useful and they learn as they go, as opposed to formally studying in a PhD? Do you have any comments on whether people should definitely do PhDs, and who should and shouldn’t?
Pushmeet Kohli: Well, I don’t think PhDs are at all necessary. They’re a mechanism, and that mechanism allows you to build up certain types of competencies: you get a lot of time, and you are forced to think individually. But that can be done in many other contexts as well. Some people want that kind of structure, and a PhD requires a lot of self-discipline; for many people PhDs don’t work out, because they are very open-ended, and without any specific structure you might not know what to do. For other people, it gives an overall framework under which you can explore different ideas and take your career further. But it is not necessary. Even in a data science role, you can be asking the right questions: why is the system working the way it is working, how can we break it, and what is the real problem?
Pushmeet Kohli: Those are the questions that you can try to answer and think about in any context, whether it’s a PhD or a job.
Robert Wiblin: Do you have any advice on what people can do to stand out, such that you or other people at DeepMind would be more excited to hire them? Perhaps someone who already knows a fair amount of ML.
Pushmeet Kohli: I think the most important thing a person can do is really think about problems which are extremely important but which other people are not thinking about. Problem selection is, in my view, one of the most important things in research. Once you select the problem, yes, there are a lot of different techniques, and sometimes you have to invent techniques to solve a particular problem, but selecting the right problem is a very important skill to have. So think about what the right problem is: what are the questions people are not asking today but will be asking in two years, five years, or 10 years time? Thinking about it in that fashion will-
Robert Wiblin: Make you stand out.
Pushmeet Kohli: Make you stand out and be unique.
Robert Wiblin: Because it’s so difficult, so that’s why it’s impressive.
Pushmeet Kohli: And you will be ahead of the curve.
Robert Wiblin: What’s the best reason not to pursue a career in ML, or I suppose in alignment and robustness specifically? What are the biggest downsides, if any?
Pushmeet Kohli: I think at the end of the day everyone wants to contribute to the world; you want to be relevant. People have different unique strengths, and if you can leverage your unique strengths and channel them in a different role, that’s completely fine. People are motivated by different things, and working on machine learning is not the only way to channel what you want to do and what you want to achieve.
Robert Wiblin: So it’s a question of personal fit to some extent?
Pushmeet Kohli: Yes.
Robert Wiblin: What do you think are the most impressive accomplishments so far that have come out of DeepMind? Is it something like StarCraft II, the flashy stuff that the media covers and that I’m impressed by, or are there more subtle things from a technical point of view? What impresses people inside DeepMind?
Pushmeet Kohli: You’re asking me to pick my favorite car, and the only problem is that the cars keep changing every week, and the cars you would be aware of have long been superseded. We are constantly saying, “Oh yeah, I really liked this stuff”, and the next day you turn up, it’s all new, and we say, “I like this new stuff!”
Robert Wiblin: Yeah, interesting. So internally it seems like things are changing a lot. Like the methods are just constantly evolving.
Pushmeet Kohli: Absolutely. Yeah. We should be surprised if that is not happening.
Robert Wiblin: Are there any approaches to robustness that haven’t been written up yet that your teams are experimenting with?
Pushmeet Kohli: This whole idea of asking an infinite number of questions is a very challenging area. In simple models it’s hard, but you can manage it. But what does it look like in the context of reinforcement learning, of policies, of sequence-to-sequence models, across various types of models and applications? These are all very interesting areas that we are currently looking into, and hopefully we’ll find something new.
Robert Wiblin: So there’s the stereotype of software engineers and computer science people perhaps lacking some of the soft skills necessary to work in an office in big teams, as people do at DeepMind and, I guess, at lots of other software companies. Do you have any ideas for how people in computer science can improve their soft skills: teamwork, the ability to explain things, and so on?
Pushmeet Kohli: I think the best way to learn these kinds of skills is on the job. If you really want to build them, go and talk to someone and try to help them. If you genuinely try to help someone succeed, they will be interested in communicating with you. So even though there might be barriers and it might be hard, you will have encouragement from at least one person on the other end, provided you can convince them that you are indeed there to help them. From that notion of altruism you can gain a lot, because indirectly you are learning how to communicate and learning a completely new field; the amount you gain might be even larger than what you have contributed. So definitely reach out to people and try to help them.
Robert Wiblin: Are there any last things you want to say to someone who’s maybe on the fence, thinking, “Maybe I’m going to go do this research, but I’m not quite sure”? What gets you excited in the morning to come in and do this research?
Pushmeet Kohli: It’s about communication. Communication is hard. Even among humans, we constantly misunderstand each other; there are so many misunderstandings in the world, the world is very polarized and so on, because people are looking at things from different perspectives. Everyone is right in their own view, so it’s important for us to solve that communication problem, and in some sense that’s what we are doing in machine learning: we are building a communication engine with which we can translate our wishes and expectations to silicon-based machines. Can you express what-
Robert Wiblin: What you really think and want.
Pushmeet Kohli: Yes.
Robert Wiblin: I guess some people are worried about AI because it’s so different from people, but your angle is that it’s a more extreme version of the problems people face working with one another, communicating and coordinating among themselves. That’s interesting. Okay. Final question, I suppose somewhat whimsically. Imagine that things go really well and ML systems are able to do most of the work that humans do, and you’re out of a job because they’re better than you at what you do. Do you think you’d keep working hard for mental stimulation, or do you think you’d just go on holiday, throw a big party and try to just have fun?
Pushmeet Kohli: There are so many YouTube lectures and books that are on my read list or watch lists that I think it will last me a lifetime, even if I start from today.
Robert Wiblin: Yeah. Okay. So it’s kind of intermediate fun learning, lots of podcasts and books to get through?
Pushmeet Kohli: Yep.
Robert Wiblin: Cool. All right. My guest today has been Pushmeet Kohli. Thanks so much for coming on the podcast, Pushmeet.
Pushmeet Kohli: Thank you.
Robert Wiblin: If you’d like to hear some other, and sometimes conflicting perspectives on AI reliability, the episodes to head to next are, in my suggested order:
No.44 – Dr Paul Christiano on how we’ll hand the future off to AI, & solving the alignment problem
No.3 – Dr Dario Amodei on OpenAI and how AI will change the world for good and ill
No.47 – Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles
No.23 – How to actually become an AI alignment researcher, according to Dr Jan Leike
The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining, talk to you in a week or two.
Related episodes