#40 – How well can we actually predict the future? Katja Grace on why expert opinion isn’t a great guide to AI’s impact and how to do better
By Robert Wiblin and Keiran Harris · Published August 21st, 2018
Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?
Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.
Note: Katja’s organisation AI Impacts is currently hiring part- and full-time researchers.
There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.
But there are also many things we’re now able to predict confidently — like the climate of Oxford in five years — that we no longer give ourselves much credit for.
Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.
One controversial debate surrounds the idea of an intelligence explosion; how likely is it that there will be a sudden jump in AI capability?
And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?
A significant historical example was the development of nuclear weapons. Over thousands of years, the energy density of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.
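To make that metric concrete, here’s a minimal sketch of how one might score a jump as “years of progress at previous rates”. The function and the numbers below are purely illustrative, not AI Impacts’ actual code or data, and it assumes progress is roughly exponential, so rates are compared in log space.

```python
# Minimal sketch of the "years of progress at previous rates" metric.
# The function and numbers are illustrative, not AI Impacts' actual code.
import math

def discontinuity_size(t0, v0, t1, v1, t2, v2):
    """How many years of progress at the historical rate does the jump
    from (t1, v1) to (t2, v2) represent, given an earlier point (t0, v0)?
    Assumes roughly exponential progress, so rates are taken in log space."""
    past_rate = (math.log(v1) - math.log(v0)) / (t1 - t0)  # log units per year
    jump = math.log(v2) - math.log(v1)
    return jump / past_rate

# Made-up numbers loosely inspired by the explosives example: slow growth
# for centuries, then a several-hundredfold jump within a few years.
print(round(discontinuity_size(700, 1.0, 1940, 4.0, 1945, 2000.0)))
# -> roughly 5,600 "years of progress" in one step
```

Whether to measure progress on a log or a linear scale is itself a judgment call; for metrics like energy density that improve multiplicatively, the log-space version is the more natural reading of “previous rates”.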
Katja likes to compare our efforts to predict AI with those to predict climate change. While both are major problems (though Katja and 80,000 Hours have argued that we should prioritise AI safety), only climate change has prompted hundreds of millions of dollars of prediction research.
That neglect creates a high impact opportunity, and Katja believes that talented researchers should strongly consider following her path.
Some promising research questions include:
- What’s the relationship between brain size and intelligence?
- How frequently, and when, do technological trends undergo discontinuous progress?
- What’s the explanation for humans’ radical success over other apes?
- What are the best arguments for a local, fast takeoff?
In today’s interview we also discuss:
- Why is AI Impacts one of the most important projects in the world?
- How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
- How does writing an academic paper differ from posting a summary online?
- When will unaided machines be able to produce better and cheaper work than humans for every possible task?
- What’s one of the most likely jobs to be automated soon?
- Are people always just predicting the same timelines for new technologies?
- How do AGI researchers differ from other AI researchers in their predictions?
- What are attitudes to safety research like within ML? Are there regional differences?
- Are there any other types of experts we ought to talk to on this topic?
- How much should we believe experts generally?
- How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
- How quickly has the processing capacity for machine learning problems been increasing?
- What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
- What are the best arguments for and against discontinuous development of AI?
- Comparing our predictions of climate change and AI development
- How should we measure human capacity to predict generally?
- How have things changed in the AI landscape over the last 5 years?
- How likely is an intelligence explosion?
- What should we expect from an economy dominated by AI?
- Should people focus specifically on the early timeline scenarios even if they consider them unlikely?
- How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours podcast is produced by Keiran Harris.
Highlights
There are basically two kinds of things people talk about, where one of them is something like human level AI and one of them is something like vastly superhuman AI. I think when people talk about human level AI, they are often vague on the details that might matter a lot. For instance, there’s AI that can do what a human can do but for like a billion dollars an hour. It’s different from AI that can do what a human does at the price of a human, and often people are ambiguous about which one they’re talking about. Also, like, are we thinking about physical tasks as well? Like does the robotics have to be ready?
Directly asking [machine learning researchers] when high level AI is gonna appear is probably pretty uninformative. I probably still think if it was very close, I would still expect to get a bunch more close answers. But yeah, I think we should heavily discount the possibility … Like I don’t think we should ask them when they think it is and then take that as our main guess. I think it’s a small amount of evidence. But I think there might be good ways to use AI experts in combination with other things to come up with good estimates – there might be better forecasting methods and that sort of thing.
Often when people think that there will be a discontinuity in AI progress, they implicitly have some theory about it. Because it’s sort of an algorithm and maybe it’s likely to be a very simple one or something. So, we can say okay, are things that are algorithms more likely to undergo fast progress? So, we usually measure these things in terms of like how long would it have taken to have this amount of progress at the usual rates. Nuclear weapons were six thousand years of previous rates in like one go, so that’s big.
The next biggest one we could find was high temperature superconductors. Where they underwent maybe like a hundred years of previous progress. So, I think this was people that are discovering different materials that could be superconductors. They hadn’t really realized that there’s a whole different class of things that could be superconductors. And I think they might have like sort of had some theory that ruled it out, then they came across this class and suddenly things went very fast.
So, I think it’s interesting that both of these are sort of like discovering a new thing in nature.
Suppose you’re a company and maybe you’re mining coal and you make some AI that cares about mining coal. Maybe it sort of knows about human values enough to not do anything terrible in the next ten years or something, but overall let’s say it’s like a bunch of sort of agents who are smarter than humans and better than humans in every way, but they just care a lot about mining coal. I expect in the long run for them to basically accrue resources and decision making and control over things and so on, ’cause they’re basically better than us in every way. And in the long run that would lead us toward just trying to mine a lot of coal and not doing anything that humans would have cared about, which you know, might be fine if they’re the right kind of creatures who really get a lot of pleasure from the coal mine or something.
But you also might imagine that they’re not even conscious or anything, but the consciousness thing doesn’t really matter for what will happen in the world. Like, they might still be very good at taking control of things. I guess it seems similar to what happened with, say, pre-human, you know, chimp-like species and so on. If they had a choice about whether to let humans start existing, it seems like it was probably a bad idea for them, even if they could maybe kill a particular human or something. They quickly lost control of the situation ’cause we were just better at everything.
Articles, books, and other media discussed in the show
- Work from AI Impacts:
- Katja’s blog
- Overcoming Bias
- When will AI exceed human performance? Evidence from AI experts by Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans
- New Scientist article on the paper
- Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation by Katja Grace
- Intelligence Explosion Microeconomics by Eliezer Yudkowsky
- AI and Compute by OpenAI
- Google Duplex Demo
- Superintelligence by Nick Bostrom
- Superintelligence reading group
- Superforecasting: The Art and Science of Prediction by Philip Tetlock
- Humans are still substantially better than AI at playing Angry Birds as of mid-2018.
Attitudes to safety research within ML:
- No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity by Oren Etzioni
- Yes, We Are Worried About the Existential Risk of Artificial Intelligence by Allan Dafoe and Stuart Russell
- An AI researcher who thinks safety will be easy to deal with: Ethical Guidelines for A Superintelligence by Ernest Davis
- And a response from someone who disagrees: Davis on AI capability and motivation by Robbie Bensinger
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
At the start of last episode I gave some advice on apps I use to listen to articles, podcasts and books on my phone. Some people wondered if that was an ad. Crazy though it is, I was just trying to help out with some useful recommendations.
The show now has over 10,000 subscribers, and episodes are getting around that number of downloads in their first month now. That’s not a bad result after one year.
I would really personally appreciate it if you could tell your friends about the show so we can continue reaching people who’d enjoy listening and might be able to act on what they learn.
And if you particularly enjoy an episode why not share it on social media? You’ll be doing me a favour, and most importantly everyone will think you’re hella smart.
Regular listeners to the show will likely be interested in a series of articles we have coming out over the next year, which we’re calling our ‘advanced career guide’. The first is up on the site now and gives our candid views on which 11 career paths we think are most impactful. If you want to get all of those as they come out, you can subscribe to our research newsletter at 80000hours.org/newsletter/ .
In today’s episode I talk with Katja Grace. Katja completed a Bachelor of Science at the Australian National University, writing her thesis on anthropic reasoning under the supervision of philosopher David Chalmers. She then started a PhD in Philosophy at Carnegie Mellon University but left to work on forecasting the future of artificial intelligence technologies. She now leads AI Impacts, an organization which tries to forecast when AI systems will achieve particular capabilities, and the impacts we should expect that to have. She blogs at Meteuphoric, is a research associate at the Future of Humanity Institute at Oxford University, and happens to have been my housemate all the way back in 2010.
Here’s Katja.
—
So thanks for coming on the podcast Katja.
Katja Grace: It’s a pleasure to be here.
Robert Wiblin: We plan to talk about whether more people should work on the kind of technological forecasting that you’re doing, and how promptly people ought to wash their dishes when they both live in the same house. But first, what does AI Impacts actually do?
Katja Grace: Well, I think there are two things to say about what it does. One way of framing is that it’s a research organization trying to answer big questions about the future of artificial intelligence. So, is there going to be human level intelligence at some point? What’s gonna happen then? Is the world gonna go crazy? Is it gonna be business as usual but faster and better or something? Are humans gonna go extinct? What kind of AI is likely to be that good? Is it not gonna be agents at all? Is it going to be something else? We’re interested in these high level questions, but we’re mostly answering much lower level questions that will hopefully shed light on those, like for instance what do current hardware trajectories look like? Like how much hardware is a human brain equivalent to? That sort of thing.
The other thing that AI Impacts is, is a website that has a whole bunch of pages that each have a topic, for instance like “How much hardware is the human brain equivalent to,” where all of the considerations that we know of are written up nicely, hopefully, so that an audience that doesn’t know about this area that much can read them.
Robert Wiblin: So why do you feel that the work that AI Impacts does is one of the most important projects in the world?
Katja Grace: Well I think that AI risk is one of the most important problems, though I won’t go into the details of that here. We can talk about that some other time. And I think the details of it are not very well known. Like, overall I don’t think we understand very well what will happen. It could be that one day some sort of super AI takes over the world, or it could be that there are a whole bunch of non-agent AI systems over a long time doing something strange, or it could be that there are various weapons. We don’t have a very good understanding of this, and I think in general, if you’re facing a really giant problem, having a better understanding of it than no understanding, or than a very poor understanding, is quite useful. And I think at the moment, where we are, there are just a lot of really tractable projects that you could do to have a better understanding of this, which I think would help direct AI safety efforts, and efforts to improve policy and governance and that sort of thing, toward projects that are more useful for dealing with whatever the real world is actually like.
Yeah I think there’s a lot that could be done and as I say, there are a very small number of people working on it.
Robert Wiblin: So, what is AI Impacts? How many people work on it and how long have you been running and how do you operate?
Katja Grace: So it’s about two full-time-equivalent people, and it’s been around … for about three years.
Robert Wiblin: Okay so it’s a pretty niche organization.
Katja Grace: Ah yeah.
Robert Wiblin: And I guess your funding is pretty small and is everyone based in Berkeley, California?
Katja Grace: The two of us who work on it most are based in Berkeley and then we have some contractors who are usually elsewhere.
Robert Wiblin: How do you guys choose what questions to look into and how do you actually go about answering these things?
Katja Grace: Well we have a giant research agenda somewhere that is sort of like a hierarchical list. So, for instance, the top question is perhaps: what are AI timelines like? And then we sort of brainstorm the different ways you might be able to get any evidence about this. So like, “Well you could ask experts, or you could try and work out what hardware timelines are like and what software timelines are like and how important they are and relate them somehow, or you can try to look at overall capability timelines.” I think we had some other things.
And for each of these things, you can sort of think about how you could make progress on that question. So then we end up with a bunch of concrete projects that are pretty far down the hierarchy, and I think my colleague, Tegan McCaslin, is currently working on figuring out how smart pigeons are. I forget how that was related to anything, but yeah, it can get detailed. And so we sort of have this list of things and we try to prioritize them somewhat, based on a mixture of how well answering a particular question will at least let us have a sort of first guess at some higher level answer that’s important, and just how well positioned anyone working at the moment is to answer the question, and other things like that.
Robert Wiblin: So this area’s gotten pretty hot lately, but you were into it several years before people started talking about it everywhere. How did you end up in this field?
Katja Grace: I guess early on I was interested in giving all of my money to poor people in Africa. And I decided maybe that wasn’t the best thing to do for making the world better. So I decided I should do the best thing, I guess. I guess I’ve been interested in doing “the best thing” since early teenager-hood or something. At some point I think you actually introduced me to the blog Overcoming Bias, where some people were talking about AI. And so I wrote to Eliezer Yudkowsky, who was at Overcoming Bias at the time, being like, “I want to save the world. Do you think you can save the world like this? Do you want my help?” It turned out nobody wanted my help at that point. Given I didn’t know much about signaling then, there was really no evidence that I could be helpful in any way. So I eventually went to visit anyway because I was on holiday in America, and became friendly with some of the people there, and was invited to live in someone’s garage and think about this stuff, and it seemed like a good deal.
Robert Wiblin: So when was that?
Katja Grace: 2008, I went to America.
Robert Wiblin: Okay.
Katja Grace: For the first time.
Robert Wiblin: And that’s when you first kind of got seriously interested in this prediction of AI?
Katja Grace: It’s when I first got interested in AI risk as potentially the best way to make the world better. And I guess prediction of it, in particular, I probably got interested in like 2014 or something like that.
Robert Wiblin: Yeah, what made you think that that was a really important thing to work on?
Katja Grace: I think I can’t remember the exact way that things went.
Robert Wiblin: You’re being unreasonably reasonable here, Katja. People always invent some story that explains why they’re doing what they’re doing.
Katja Grace: Right, I mean I can come up with a story now, in retrospect, for why I think it was a good thing to do.
Robert Wiblin: Sure! Yeah.
Katja Grace: So I think AI is very likely to change the world in lots of different ways. I think it’s sort of hard to imagine AI not making a big difference to the world in like the next century or two at least. And probably sooner. And so I think how that goes is going to make a big difference. But I also think it means that other things that might have seemed important are much less likely to be important, like for instance if I thought improving education of gifted children was important, I just think any progress that we would’ve made toward that is likely to be obsolete once there’s AI that’s better than us at lots of things.
Robert Wiblin: Obsolete because the AI would be able to do the work that those smart humans would be doing? Or-
Katja Grace: Partly that.
Robert Wiblin: Because they would be able to figure out how to improve education better?
Katja Grace: Yeah, both of those things. Like probably the humans won’t be employed in the long run, doing thinking work, and also, even if you wanted to educate them well, there’ll probably be better ways of improving such things. Yeah, so AI seems like it’s likely to be a big deal and affect lots of things. I also think my sort of mainline prediction for how it goes includes a lot of big risks. I think extinction risks are, in general, the biggest problem in the world, and I think extinction risks from AI are probably the most plausible and soon-seeming ones to me.
Robert Wiblin: Okay, so we’ll return to some of this big picture stuff later on. But first, to kind of guide people through why we’re worried about artificial intelligence in general and what’s kind of the baseline forecast that we could make given what we know now, we’ll talk about this paper that you published last year, which the Altmetric Attention Score ranked as the 16th most discussed paper of 2017. So, it was a very hot topic. It’s called “When Will AI Exceed Human Performance?: Evidence From AI Experts.” We’ll hold out on the audience and cover the method before we cover the conclusions. What did you try to do in this paper?
Katja Grace: Well we tried to talk to a bunch of people who’re publishing in machine learning, in good conferences, so basically central machine learning researchers, and I guess we tried to ask them about all kinds of things that we were curious about. So we asked when they thought basically human level AI would happen. But also a whole bunch of things about how important they thought safety was, if they think the world will be destroyed, which inputs they think are important to AI progress happening, a whole bunch of narrow AI near-term predictions. So like, when will AI be able to write a new Taylor Swift song better than Taylor Swift can? When will Taylor Swift be obsoleted? Yeah.
Robert Wiblin: So what are the different levels of AI that people talk about? Because you sometimes hear, you know, human level AI, or an AI that can do all the work that humans can do better and more cheaply, and then there’s super-intelligence that just vastly outclasses humans. Sounds like you were asking about an AI that can do all the things humans do equally well?
Katja Grace: I think that there are basically two kinds of things people talk about, where one of them is something like human level AI and one of them is something like vastly superhuman AI. I think when people talk about human level AI, they are often vague on the details that might matter a lot. For instance, there’s AI that can do what a human can do but for like a billion dollars an hour. It’s different from AI that can do what a human does at the price of a human, and often people are ambiguous about which one they’re talking about. Also, like, are we thinking about physical tasks as well? Like does the robotics have to be ready?
So I guess when we were writing the survey, we probably spent like half a day thinking about what the definition of this should be exactly, and ended up asking about … Well, I guess we asked about two different definitions, basically, which we thought should be quite similar, but we got very different answers for them. One of them was meant to be quite similar to the past surveys and was, “When will AI be able to do every task that a human can do at least as well as the best human at doing that task,” not like an average human, because that was a previous ambiguity where people were like, “Oh, we’ll have human level AI but it won’t be able to do AI research or anything because, you know, humans can’t usually do AI research, they have to be an AI researcher.”
Robert Wiblin: Hmm.
Katja Grace: And so we asked about the sort of best human performance, and I think there’s also a question of like, “Does it need to be one machine that can do all of these things or does it have to be that for any task there is some machine that can do it as well as a human?”
Robert Wiblin: Or that you can develop one if you tried.
Katja Grace: Yeah. So I forget the exact details here, but I think we asked about machines can do this thing. Somehow.
Robert Wiblin: So, how did people get your survey and what was the response rate like?
Katja Grace: We surveyed roughly 1,600 people, and 21% of them responded, so that’s 352 people.
Robert Wiblin: So how did you produce this list? Did you have to look all across the internet for every ML researcher? Maybe you made a list of people invited to a conference?
Katja Grace: Yeah, it was everyone who published at NIPS or ICML, two big machine learning conferences, in 2015, so we were doing this in 2016. And I guess both of those conferences have all of their papers online, so we basically just went through the papers and got their email addresses from them.
Robert Wiblin: Did you do any fancy stuff, like kind of randomizing the order of the questions, or giving people different questions or like pre-committing to do a particular style of analysis ahead of time?
Katja Grace: We did all of those things and more.
Robert Wiblin: What’s the more?
Katja Grace: The thing that I thought was best, actually, was we basically made a survey with all the questions and we ran a bunch of interview versions of it, where we sat down with a particular person and got them to answer it in front of us. And then after every question, we’d be like, “So, what did you think that question meant, then? Why did you write that?” And doing that, we discovered some things that I guess had been in previous surveys and people had just assumed were well understood, where people were just completely missing the point of the question essentially. I think there was a question about how soon there would be superintelligence after human level intelligence or something, and people were just not noticing that it said “superintelligence” rather than human level.
Robert Wiblin: Oh wow.
Katja Grace: And so we did a bunch of rounds of that, adjusting the questions and trying again. Which makes me feel better about the questions being understood.
Robert Wiblin: Okay, so I’ve heard that running surveys can be a colossal pain in the ass, basically. That, you know, to get quite simple results it can take an awful lot of time and an awful lot of followup. So was this a huge pain in the ass?
Katja Grace: Yes. Probably the biggest pain in the ass of any project I’ve ever done, I must say.
Robert Wiblin: Yeah, specifically how was it so bad?
Katja Grace: I guess the effort was spread out over more than a year, I think, so it was just an ongoing background thing. I feel like the writing-the-paper aspect of it was actually much more annoying than I expected. Maybe it’s partly that I’m not usually in the habit of writing papers, but I found it much more arduous than writing up a page about it on AI Impacts. It’s a bit mysterious to me why that is.
Robert Wiblin: Is it because it’s a different style that you’re less familiar with writing in, or is it perhaps because it feels so important that it’s easy to get very anxious and sensitive about everything you’re writing when you think it’s a paper?
Katja Grace: There’s probably some of both of those and I guess for AI Impacts, since it’s sort of my thing, I can just be like, “Okay, I’ve decided the writing should be like this,” and it’s no big deal to anyone else. I guess I don’t usually work so closely in collaboration with other people. I guess collaborating is somewhat harder than not collaborating.
Robert Wiblin: Right, because everyone wants to have their say on what it should be exactly.
Katja Grace: Yeah, or just like if I write a thing on AI Impacts, it’s not necessarily gonna bother anyone. Like I’m not really doing it on anyone else’s behalf, whereas I think where it seems like you might be doing a bad job of something, and it matters to someone else, I think it’s harder. I guess both of us were on Overcoming Bias before, and I think I found it harder to write on Overcoming Bias than on my own blog, because you know, someone else’s thing is at stake.
Robert Wiblin: Yeah. That would make sense. So, when you are putting together this survey in 2015 and ’16, how reliable did you expect the answers from these machine learning experts to be? Did you think they were in a good position to predict when AI would be able to do various different things?
Katja Grace: I think it was a fairly open question, how reliable they would be. I mean, one thing that’s been clear for a long time, and evident in many surveys of AI experts, is that they give a fairly wide variety of different answers to the same question, which suggests any given person is likely incorrect. However, in many cases, as in predicting how many jellybeans there are in a jar, you can put lots of peoples’ projections together and get a better answer. So I think it was unclear ahead of time whether the timing of AI things was like that.
Robert Wiblin: Or something that they really had knowledge about.
Katja Grace: Yeah. Where everyone has sort of a noisy estimate and if you look at the average, maybe it’s good. I think I would’ve expected them to be more reliable on something like, “If AI was gonna happen in five years, they would be noticing,” or something. And so maybe I wouldn’t expect them to know exactly which year it’s gonna happen, or definitely I wouldn’t expect them to know that. But I might expect them to be able to tell the difference between in three years and in 50 years, or something. So if they’re all saying in 50 years, that’s like some evidence against three years.
Robert Wiblin: So let’s get to the results from the survey. You asked about high level machine intelligence, which is achieved when unaided machines can accomplish every task better and more cheaply than human workers. So when did ML researchers think we would be able to get there?
Katja Grace: It’s a bit complicated because we asked that in several different ways and combined the results, complicatedly, but our final result was 45 years. However, we also asked them a very similar question, we thought fairly similar question, about when all occupations would be fully automatable, that is, for any occupation, machines could be built to carry out the tasks better and more cheaply than human workers. So not necessarily that they were automated, but just that it would be possible to, without spending too much time and effort on it. And for that question it was 120 years, even though if you put these questions side by side, people generally agree that automating all current human occupations should be a subset of automating all tasks that humans can do.
Robert Wiblin: So they got it around the wrong way and also off by like a factor of almost three?
Katja Grace: Yeah. It seems like they’re answering very similar questions very differently based on exactly how it’s framed. The occupations one was also different in that we got them to think of some particular occupations ahead of time. Like when do you think all parts of being a surgeon will be automated? When do you think all parts of being an AI researcher will be automated? What do you think is an occupation that will be late to be entirely automated? So we sort of gave them a step by step process that led into that; it had a few more steps that I didn’t mention, though.
Robert Wiblin: Interesting. Okay, so hold on. So this 45 years and 120 years, they were kind of the median response people gave? The point where they said it was 50-50 that we’ll have high level machine intelligence?
Katja Grace: It’s something like that, except that we also divided the people again. So for each of these questions, half of them were asked: in 20 years or 40 years, what do you think the chance will be? I don’t remember exactly what the numbers of years were, but for three different numbers of years, what is the probability that it will have happened by that year? And the other half were given probabilities and asked in what year there would be that probability. Like in what year will there be a 10% chance of this having happened, and in what year will there be a 50% chance? So these numbers are sort of the median: if you turn all of these estimates into distributions and then take the median, it’s that number.
Robert Wiblin: Okay, that makes sense. So in one case, you went from the year to the probability, and in the other case you asked them to go from the probability to the year.
Katja Grace: Yeah.
Robert Wiblin: And they get quite different answers?
Katja Grace: They gave consistently different answers. I forget exactly how much they were off by, but I think it was something like 10 years. We also did this for the narrow task questions that I mentioned earlier, like when will AI be able to build things out of Legos according to instructions? And we also gave the same questions to Mechanical Turk workers. And across lots of different questions and different groups of people, most of the time they thought that the distribution was earlier if they were given probabilities and asked in what years those would be reached, rather than the other way around.
Robert Wiblin: Okay, so we’ve got two slightly odd things here. One is that if you ask about when everything’s gonna be automated or when all human jobs will be automated, they say it’s gonna take way longer than having an AI that can do all things that humans can do but more cheaply or just as well or better.
Katja Grace: Right.
Robert Wiblin: And then you’ve also got this oddness where if people are given the probability and asked for the year, they predict it will happen sooner than if they’re given the year and asked for the probability.
Katja Grace: Right.
Robert Wiblin: Interesting. So when you give these 45 and 120 year figures, with the probability-to-year versus year-to-probability thing, do you just kind of take the average of the two of those answers? Because you gave half the people one and half the other?
Katja Grace: Something like that, but it’s a bit tricky to average them because we have sort of different points in the distributions from both. So what we did was turn them all into the distributions that were most likely given the three points that we had for each person. Like, we were making some assumptions about what their distributions might look like, and then we have these overall distributions and we can take the median overall distribution.
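To make the aggregation step concrete, here is a minimal sketch of one way to do what Katja describes: fit each respondent’s three (year, probability) answers to a distribution, then combine the fitted curves and read off an aggregate median. The gamma family, the made-up answers, and the pointwise averaging of CDFs are all illustrative assumptions, not the paper’s actual procedure.

```python
# Sketch of the aggregation described above; assumptions noted inline.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def fit_gamma_cdf(years, probs):
    """Least-squares fit of a gamma CDF through three (year, probability)
    points. The gamma family is an assumption, chosen for flexibility."""
    popt, _ = curve_fit(
        lambda t, shape, scale: gamma.cdf(t, shape, scale=scale),
        years, probs, p0=[2.0, 30.0], bounds=(1e-6, np.inf),
    )
    return popt

# Two made-up respondents, one from each question framing.
respondents = [
    ([10, 25, 50], [0.10, 0.50, 0.90]),  # fixed years -> gave probabilities
    ([8, 20, 45], [0.10, 0.50, 0.90]),   # fixed probabilities -> gave years
]

grid = np.linspace(0.1, 300, 3000)  # years from now
cdfs = []
for years, probs in respondents:
    shape, scale = fit_gamma_cdf(np.array(years), np.array(probs))
    cdfs.append(gamma.cdf(grid, shape, scale=scale))

mean_cdf = np.mean(cdfs, axis=0)               # combine the distributions
median_years = grid[np.searchsorted(mean_cdf, 0.5)]  # aggregate 50% point
print(f"aggregate median: about {median_years:.0f} years from now")
```

Averaging CDFs pointwise is only one way to combine respondents; as Katja notes, the paper’s headline numbers come from a somewhat more involved median-based procedure.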
Robert Wiblin: Okay. So do you have an explanation for the first peculiarity? That you got to get 45 years in one case and 120 in another case for two questions that seem very similar?
Katja Grace: I don’t have a firm idea of this, but I think there are several plausible explanations. Like one is, in the occupations case, we just asked them to think about concrete things in a lot more detail than in the other one. So maybe when they think of all tasks, they’re mostly thinking of things you can do in five minutes that don’t involve anything else, and once you’re thinking of an occupation, you’re like, “Oh, well, in order to be a surgeon maybe you need to have some sort of high level thing going on over many hours that you’re doing,” and maybe that wasn’t the thing they thought of as tasks. So that’s one kind of thing.
I guess we also interviewed some AI researchers early on and asked them about some things to do with what we were going to ask, ultimately. And one of them said he thought that people were understanding the questions quite differently, and he suggested that we ask this question about occupations because he thought we would get a very different answer. So Bayes points for him. And I think his thought about what was going wrong was that people were understanding human level to mean the level of a basic human without any particular skills. So, they were saying, “Human level in 20 years, but can do AI research in 60 years,” or something. And that was sort of making sense to them because that’s beyond human level.
So, it’s possible that he’s right about that. Though we also tried to make it clearer in our question that that was definitely not what it was about, I think. But people don’t read questions that carefully.
Robert Wiblin: Yeah. Do you know how long roughly they spent filling out these surveys? Is it possible that they’re barely giving it any thought?
Katja Grace: I think it seems likely that they’re barely giving it any thought. I can’t remember how long it actually took them. I think we were aiming at the end to take about 12 minutes.
Robert Wiblin: Okay, so that was 12 minutes to answer 10, 20, 30 questions, something like that?
Katja Grace: Something like that. It was somewhat complicated by different people getting randomly different questions, and some of the questions asking for three different probabilities for four different things or something, where it’s kind of one question with lots of parts. But yeah, I think they can’t have spent very long on each.
Robert Wiblin: Yeah, that makes sense. So, what about the fact that they gave shorter timelines to the development of AI when they’re asked about the probability and then asked of the year rather than the other way around?
Katja Grace: Yeah, I think we’re pretty unsure why that was. My own speculation, which I’m not very confident in, is that people would basically like to give low probabilities all of the time if they possibly can. So if you give them different years and ask for the probability, then they just give low probabilities for all of the years. Whereas if you give them some high probabilities and they have to figure out something to do with them, then they’re like, “Well let’s put it really far out, like in 50 years or something.” Whereas if you had given them 50 years, they would’ve given a low probability.
Robert Wiblin: Yeah that’s interesting. So just to be clear, each person only got one of those two directions, right? They didn’t do both of them.
Katja Grace: Correct. Each person had to give three answers but they were all of the same type in a given question.
Robert Wiblin: Yeah. So I guess does this show that it’s super important to do this cross-check so we ask questions different ways and then see whether people get radically different answers?
Katja Grace: I think so, yeah. Which I guess we suspected a little bit ahead of time which was why we did that. But yeah.
Robert Wiblin: Is this the first time that one of these surveys of machine learning researchers on when AI will appear has done these kinds of cross-checks?
Katja Grace: So I guess the previous surveys that I know of were probably not of machine learning researchers per se. Each one is sort of a different demographic. I can’t think of any other ones that had that much cross-checking, and I think also an interesting thing is that the past ones basically always gave probabilities and asked in which year those would be reached, which suggests that past surveys were maybe saying AI would come sooner than the average of these, or something. And I guess they were all asking the question about when AI will be able to do all tasks, rather than occupations. So of the four ways we asked here, the past surveys were basically asking in the most optimistic-possible, or soonest-possible, way.
Robert Wiblin: Yeah. So, you know, coming out of this, this part of the survey, what do you think now about whether machine learning researchers kind of have any wisdom to share about when high level AI is going to appear?
Katja Grace: I think that directly asking them when high level AI is gonna appear is probably pretty uninformative. I still think if it was very close, I would expect to get a bunch more close answers. But yeah, I think we should heavily discount it … Like I don’t think we should ask them when they think it is and then take that as our main guess. I think it’s like a small amount of evidence. But I think there might be good ways to use AI experts in combination with other things to come up with good estimates, like there might be better forecasting methods and that sort of thing.
Robert Wiblin: I guess you could, just to begin with, you get them to actually spend some time thinking about it and then trying to form a consistent view inside their own head.
Katja Grace: Yeah, we actually asked them whether they had done that, but we haven’t done anything with that information yet.
Robert Wiblin: Did you know what fraction of them did that?
Katja Grace: I don’t remember, but I think it was relatively high compared to my guess.
Robert Wiblin: You figure they might have been overstating how much thought they put into it?
Katja Grace: I’m not sure. I could just be wrong about how much they think about this, yeah.
Robert Wiblin: So there’s a bunch of other interesting findings. I guess we should take them all with a pinch of salt, but at least the kind of differences between the answers that people gave were often interesting. Even if the answers aren’t right on average, you can still see how people differed.
Katja Grace: I guess for other questions, we tried to ask more about things they would actually know about. I mean, I think they are experts on AI; it’s just that questions like what the social consequences of this will be, and when it’s happening, are not closely related to their expertise.
Robert Wiblin: Yeah, so let’s go through a bunch of those. One thing I noticed in the first figure is you had these curves drawn out, these probability distribution curves, of when high level AI would appear, for a bunch of different individual people in the survey. And it seemed like there were a number who seemed to think that it was almost 100% likely to happen in like 10 or 15 years, does that suggest that they were misunderstanding the question?
Katja Grace: My guess is that they’re not. I think that there’s a subsection who are very optimistic.
Robert Wiblin: Hmm.
Katja Grace: Yeah.
Robert Wiblin: I guess fortunately they get somewhat washed out in the median estimates, right? Because I know they’re also kind of counterbalanced by people who said there’s virtually no chance of this happening even in a hundred years.
Katja Grace: Yeah.
Robert Wiblin: So you got kind of some, like-
Katja Grace: It’s a very, very open question, it seems like. We also asked them how much they thought their own views disagreed with those of the typical AI researcher. Actually, I think the most popular answer was “little”, which surprised me. I don’t think they realize how much disagreement there is.
Robert Wiblin: Interesting, so they were just all over the shop, from believing it’s certainly gonna happen very soon, to there being almost no chance it will happen even in a century’s time, and they all thought everyone agreed with them.
Katja Grace: Well they at least thought that they agreed with a typical person.
Robert Wiblin: Okay, so they all thought they were close to the middle?
Katja Grace: I guess. I haven’t looked in detail at whether the ones that think they’re close to the middle are more likely to actually be close to the middle, but at least, yeah, it seemed like their overall view was that they pretty much agreed. So I think part of the value of this kind of thing is that even if the forecasts aren’t very good, it’s nice to have a baseline of what people think so that people can then talk about it more. Like, know that other people are thinking this might happen too.
Robert Wiblin: Yeah. Okay, so we’re pretty skeptical of their answers about the long term development of AI and the things that are quite a long distance from what machine learning is capable of doing now. But I guess we might put more stock in their views about what things are gonna happen in the next five or 10 years and where there’s basically already projects working on these things?
Katja Grace: Yeah.
Robert Wiblin: So did you want to talk about some of the things that they thought would be most likely to happen, happen soon?
Katja Grace: I guess for a whole bunch of the narrow things we asked them about, they thought they would happen in like the next 10 years. So I think the very soonest one was play Angry Birds at human level, and there’s an annual Angry Birds contest that last I looked was getting close. There’s folding laundry, playing StarCraft, there were various translation ones, assembling Legos based on Lego instructions, which involves reading the instructions and doing the manual thing. Playing all Atari games, reading text out loud, writing a high school essay, explaining your own actions in a game as well as being able to play the game well.
Robert Wiblin: That’s something that machine learning algorithms can’t really do very well now at all; they can’t really explain why they’re making the choices that they are.
Katja Grace: I don’t know much about it. But yeah, I think that’s right. So I think that one was a little over 10 years, but the ones I mentioned earlier I think were all less than 10 years away. So part of the hope here is, even if we’re not sure how much to trust these predictions is that in 10 years we will know how it went at least.
Robert Wiblin: Ah okay, so you’re saying we won’t have to wait that long to see whether they were just way off on these predictions?
Katja Grace: Right, I guess within the next five years we can know how they’re doing on Angry Birds and the World Series of Poker and some others here.
Robert Wiblin: Given that you put that out in 2016, we’re already basically two years in, so yeah, maybe even next year we’ll find out whether they were right about Angry Birds.
Katja Grace: Yeah.
Robert Wiblin: So what’s one of the first actual jobs that they were suggesting might be automatable at reasonable cost?
Katja Grace: I think truck driver, but we really didn’t ask them about what they think are gonna be really early jobs. We asked them about four specific jobs, which I think were truck driver, retail salesperson, surgeon and AI researcher, and then we asked them for things that they think would be very far off. And I guess retail salesperson was the next one, at a little under 15 years.
Robert Wiblin: So I noticed there was a telephone banking operator which-
Katja Grace: Oh yeah. This chart actually mixes together the occupations where we said they had to be entirely automated and some narrow tasks, so I feel like the definitions are slightly different, but the telephone banking operator is probably like being able to carry on the conversation on the telephone, not necessarily anything else that a telephone banking operator does in life. But yeah, that was pretty soon as well. That was under 10 years.
Robert Wiblin: Yeah, have you seen the Google Duplex?
Katja Grace: I’ve heard rumors about it, haven’t watched the thing.
Robert Wiblin: Yeah, so I checked this out a couple weeks ago. Basically Google’s been working on, I guess, this voice system that can call up businesses and book appointments and ask them things like when their opening hours are, and can schedule a haircut, reserve a table, that kind of thing. I mean it’s obviously a very narrow domain in a sense, but within it, it can deal with background noise and weird accents and people giving non-standard responses to these questions; it can guess what people mean.
Katja Grace: Probably better than I am, then.
Robert Wiblin: And it also speaks in a surprisingly human way. The people on the other end of the phone don’t usually pick up that they’re not speaking to a real human, because it kind of pauses at the right points, based on how long a human would normally think before answering a question like that. It does “um” and “ah”, so if you ask, you know, how many people do you want to book this table for, it goes, “Ah, seven.” So they’ve done a whole lot of things to make it mimic humans. And it seems like, I guess, Google’s been investing quite a bit in this because they see some value in using this as a kind of assistant that they can sell on Android phones and things like that.
Katja Grace: Yeah.
Robert Wiblin: I think they’re also planning on calling up businesses all the time and asking what their opening hours are, so they can keep Google Maps up to date. So they’ve been throwing some money at this, but it seems like they’re perhaps actually not that far off being able to have a basic telephone operator.
Katja Grace: Yeah, so I guess we might also learn in two years that all of these things they thought would happen in 10 years actually happen in two years and then we can be like, “Oh, dear.”
Robert Wiblin: So that would be, I guess, exciting/nerve racking.
Katja Grace: Terrifying, yeah.
Robert Wiblin: Maybe before we put this up, I’ll take a look at what’s the state of the art in Angry Birds-playing machine learning and we can see how right they were about that, that soonest one.
Were there any other peculiar or amusing results that showed up when you went to analyze the data?
Katja Grace: I guess I had fun looking through the list of occupations that people thought would be very late to be automated. For instance, train driver was one of them, which I was confused by, although maybe they’re thinking, well train drivers look like they could be automated now, but we still have them, so apparently they’re doing some sort of mysterious magic.
Robert Wiblin: I was thinking more about the politics.
Katja Grace: Right. Apparently, they have evaded automation. I think other ones that were up there were like psychiatrist, author, and philosopher.
Robert Wiblin: Yeah, I guess that makes sense. I suppose humans barely know what’s good philosophically.
Katja Grace: Yeah, that’s true.
Robert Wiblin: So it’s very hard for machine learning to figure it out as well.
Katja Grace: Yeah.
Robert Wiblin: I also noticed that AI researcher showed up as pretty hard. Is that just self-flattery, do you think?
Katja Grace: I don’t think so. I think they often put other jobs as substantially later than AI researcher. Like when we asked them what they think will be late, if I recall, usually it wasn’t AI researcher. So I think they said it was later than the other three that we gave them, but, I don’t know, with truck driver and retail salesperson, it doesn’t seem that surprising to me that they think AI research is harder than those things.
Robert Wiblin: Yeah, fair enough.
Katja Grace: Yeah.
Robert Wiblin: So within the AI safety community, people tend to think that once you got to the point where an AI could itself be very good at programming AIs, then you’d get pretty rapid increases in abilities. You’ve got this positive feedback loop: the smarter it gets, the better it can program itself to make itself even smarter. But I noticed that it didn’t seem like the people responding to the survey had that perspective, because they, in one case, said that it would potentially take decades after AIs were the best in the world at doing AI research before all tasks could be automated cheaply.
Katja Grace: Yeah. We asked them directly about this too. We asked them what they thought the chance was that the intelligence explosion argument is broadly correct. Twelve percent thought it was more than 80 percent likely to be correct, 17 percent thought it was more than 60 but less than 80 percent, and 21 percent said it was about even. I guess, I think views were sort of leaning toward no, but spread across the board more than you might think.
Robert Wiblin: Okay. So more people thought that was unlikely than likely, but it was basically, there was like a pretty decent number of people who thought it was very likely and then some who thought it was ridiculous.
Katja Grace: Yeah. “Not likely” wasn’t close to zero; there were a bunch of people giving like a 20 percent chance of it being right. We also tried to ask about the intelligence explosion in several different ways as well, a similar line of reasoning to the when-will-AI-happen thing. So we also asked them what they thought the chance was that global technological progress would dramatically increase at this point, which we thought was sort of close.
So we asked them about two years after high level machine intelligence and 30 years after, and I think the median answers were a 20 percent chance and an 80 percent chance of global technological progress dramatically increasing. So I guess they’re saying there’s a 20 percent chance that within two years it’s undergone an abrupt increase, and an 80 percent chance that 30 years later it’s much faster, which might’ve been a slower change to that point.
Robert Wiblin: Okay. So what did they think about whether progress in machine learning overall would be positive or negative for the world?
Katja Grace: I think they had a broad mixture of views. We asked them to divide 100 percent between five different outcomes, from very good to terrible, where I think we gave them examples, like, for instance, human extinction, roughly. I think the median answer was like five percent that it would be terrible.
Robert Wiblin: So the median person thought there was a five percent chance that progress machine learning would result in human extinction or something similar?
Katja Grace: Yeah, something similarly bad.
Robert Wiblin: Right. I noticed it was a decent fraction who thought that it would be neutral, right? Something like 20 percent thought that it would, on balance, not really make very much difference.
Katja Grace: Yeah. I’m not sure what’s up with that. For all of these questions, we sort of randomly chose a few people to ask afterwards, like, I forget the exact things, but sort of like, what were you thinking there? And so I looked over that, but was not able to figure out what’s up. There were some things like, well, we think it’s going to be terrible for some people, but great for some other people. Maybe it’s going to be great for rich people and some people are going to suffer, and it’s going to be kind of like that.
Robert Wiblin: So it could make life better in some ways, but worse in others, like a lot of normal technology does?
Katja Grace: Yeah.
Robert Wiblin: Okay.
Katja Grace: Not like everything is just going to be the same.
Robert Wiblin: Yeah. So I guess most of the people that I know think that if we had human level or far above human level machine intelligence, then it would either be extremely good or extremely bad, and this middle ground kind of doesn’t exist. Has this convinced you at all to reconsider that view? Or do you still think it’s implausible, and that if they’d thought about it more, they’d be convinced otherwise?
Katja Grace: It causes me to think it’s slightly more likely. My guess is that they’re mistaken. I think that also part of what’s probably going on is, I don’t know, if you ask someone to divide something between five buckets in a row, I feel like it’s just intuitively weird to put it all in the end buckets and nothing in the middle bucket.
Robert Wiblin: Ah, yeah. I can see that because normally you put a lot in the middle and a little on the edges, but here, you’re being asked to do a U shape.
Katja Grace: Yeah. And I think it makes sense to put a bunch in the non-edge non-middle ones as well, not extinction but things going quite badly or quite well.
Robert Wiblin: Okay. Yeah. But the idea that it doesn’t really make much difference one way or the other seems pretty odd.
Katja Grace: Yeah. I guess, I don’t know, if you look at past technology, I feel like it’s gone pretty well overall. But I know that other people disagree with that. So maybe those people would say if we extrapolate, perhaps we should expect it to continue being ambiguous whether it’s going well or badly.
Robert Wiblin: Yeah. So what are some of the other ways that people gave strange answers?
Katja Grace: So one thing that was strange: we’ve had these past surveys, which were of sort of different groups and so on, but basically people who know about AI. Since then, there’s been this big boom in machine learning, or deep learning in particular. And so you might think that people now would think that we are much closer, but in fact they gave slightly further-out years, and it’s sort of unclear what happened there. Also, these are people working in machine learning, so you might think that the people working in the field that is going really well, just after it went really well, would think it was coming sooner than other people did in the past.
Robert Wiblin: Yeah. Interesting. I suppose maybe one answer would be that they’ve realized that it’s harder than they thought a few years ago, even though they’re making progress on narrow tasks.
Katja Grace: Yeah. I think another explanation I’ve heard is sort of sociological, like there’s a temptation to be like, oh my God, this is amazing. We’re going to take over the world soon. And then that’s sort of embarrassing. There’s like a story that you shouldn’t be too optimistic about AI, that like, it’s always been tempting to say, “Well, we’re going to have amazing AI soon,” and everyone will laugh at you.
So it’s important, as a researcher on the thing that is going really well, to be like, no, everything’s fine, everything’s just going to go slowly and we’ll make some progress. And the more things are going well, the more people feel the need to stick to this calm, not-over-optimistic narrative. I don’t know how likely that is, but multiple people I’ve mentioned this to have said something like that.
Robert Wiblin: Right. Okay. So like the faster things are going, the more people feel like they have to seem like sober people who are not getting over enthusiastic about what’s going on.
Katja Grace: Yeah. Something like that.
Robert Wiblin: Right. So the faster things go, the longer people will say it’s going to take, though presumably at some point they’ll crack. So yeah. On this general topic, there’s a couple of folk myths that people will always tell me about when I talk about these forecasts of AI development. One is that people always predict that some revolutionary technology is going to appear in 20 years’ time, which I guess is long enough that people will have forgotten the forecast by the time it happens, but not so far off that people totally lose interest in the thing that they’re working on. Did the survey kind of support that or reject that?
Katja Grace: The view that everyone gives a similar prediction seems clearly wrong; people give predictions that are across the board. As far as whether their predictions stay the same over time, since we just have one survey at this point, this survey probably doesn’t say a lot about that. We’ve previously looked at other surveys that exist, and also, I guess, at sort of public statements about AI predictions.
Like at some point some people collected every instance they could find of someone coming out in public and saying, “I think there’s going to be AI in 2046,” or whatever, and wrote them all down. Looking at those, I think the distribution of when people are saying AI might be, in terms of how many years out they were saying it might be, was sort of similar in the earlier half of the data as in the later half of the data.
Robert Wiblin: So you’re saying, like, over time people have become less confident about, or more dispersed in, their predictions about when you’ll get high-level machine intelligence?
Katja Grace: No. I guess I’m saying like … I guess this data set doesn’t have very many early predictions and the early predictions were earlier or shorter timelines.
Robert Wiblin: Okay.
Katja Grace: But for the more recent ones and the very recent ones, it looks like the median is kind of like 30 years, and it sort of remained 30 years in the earlier and the later set. So I think there’s still some support for people having roughly the same distribution over time, somewhat. It’s all kind of messy and there are lots of biases going on in the whole data set. It’s a real mess.
Robert Wiblin: Yeah, right. So I guess there’s two different ways that you could phrase this idea. One is that currently today, everyone thinks it’s going to take 20 years or 30 years or something like that?
Katja Grace: Yeah.
Robert Wiblin: I guess the other would be that consistently, in the past and today, the average has been about 20 or 30 years. And you’re saying that second one is true, or that there’s some evidence that the second one is true: that fairly consistently, the middle answer has been about that time.
Katja Grace: About 30 years, something like that, though the very early forecasts were at least somewhat earlier.
Robert Wiblin: I was just looking at this graph. So on the first one, the fraction of respondents who said that the probability of high-level machine intelligence would be 50 percent somewhere between, say, 15 years and 30 years out is only about 20 percent of them. So I guess it doesn’t support the idea that they’re overwhelmingly tending to give this kind of middle-ground answer, that it’s an intermediate amount of time.
But actually, the other thing you were talking about brings us to the next question I was going to ask, which is, have things changed dramatically over time? Because there’s this kind of folk story that people have always said that AI is 20 or 30 years out: people were saying this in the forties and fifties, and they were saying it in the seventies and eighties, and now they’re just saying the same thing today. So we should be a bit skeptical, because nothing ever changes.
Katja Grace: I guess I’ve heard two complaints about early AI forecasts, where one complaint is that people have always predicted the same thing, and the other is that early forecasts were just incredibly naive, like, “we’re going to do this over a summer” or something. So we collected these statements, and we only know of like six or so that are from before 1980 even, but still, I think those were somewhat early. I remember they were sort of like 15 or 20 years out, instead of the 30-year median for later times.
But there was also a big survey in like 1972, big at least relative to this smattering of other data points that we have. This was the Michie survey. It was a survey of computer scientists rather than AI people, but I think their answers then, about when they thought AI would come, look fairly similar to later ones. Like, the median answer was 50 years or something.
Robert Wiblin: Which isn’t so far off what we’re getting today.
Katja Grace: Right.
Robert Wiblin: Yeah.
Katja Grace: And it only had a few buckets. I think you had to say like 25 years or 50 years or something like that; you couldn’t give any answer you wanted. So it’s less informative.
Robert Wiblin: Could you give later? Presumably you could give later than 50 years, right?
Katja Grace: Yes, but it was 50 or over 50 was the next bucket.
Robert Wiblin: Oh, so that makes it quite hard to pin down … but I guess you could still say the median might be 50.
Katja Grace: Right. Yeah, I think the median is 50.
Robert Wiblin: So just to back up for a second, you said that you found like six predictions, what, in the media or in books, that people had made personally about when they thought you’d get human-level AI?
Katja Grace: It was that kind of thing. We actually didn’t collect the data. Some other people collected it for MIRI, the Machine Intelligence Research Institute, and then we took it over.
Robert Wiblin: Yeah. So I suppose with six predictions, there’s not that much you can say, but presumably you would expect those predictions to be weird because this is an oddly selected group of people who decided to make predictions about this off their own bat.
Katja Grace: Yeah. And comparing this whole data set of public statements that people have made about AI, where they just made their own prediction and put it up in public, they’re somewhat earlier than the survey data. This is after trying to control a bit for the different groups of people involved.
So some of the people in the surveys and in the statement data are AI researchers, and some of them are AI researchers focusing on AGI, artificial general intelligence, that is, making things that are sort of like humans, rather than more narrow things like translation AI. And those people seem to be reliably more optimistic about when AI will come. So we tried to take those differences in populations into account, to figure out what the biases are based on the different groups and so on. So all of this is messy.
Robert Wiblin: Yeah. But there was some evidence that the median forecast has been fairly constant over time, or at least based on the little data that we have. Is that unreasonable though? Could it just be rational to always think that something is roughly 30 years off?
Katja Grace: Uh, yeah. I think if you didn’t know anything about what was going on, I think there are like different priors that might be reasonable to have. I think at least one of the fairly reasonable ones to have would behave like that.
Robert Wiblin: Okay. So I guess until you start seeing it happening, you just always think it’s going to be roughly a constant period of time away?
Katja Grace: Right.
Robert Wiblin: Huh. Interesting. Okay. So does that suggest that people are kind of adopting this kind of prior? I assume this is some sort of uninformed prior where you just say, “Well, I don’t really know, so it could be anywhere between now and forever,” and the median just sort of ends up at some point that’s kind of always the same distance away.
Katja Grace: Something like that. I mean, I very much doubt that people have thought through this. I’m not sure to what extent their intuitions being something like that is sort of aligned with reality.
Robert Wiblin: Okay. Maybe they’re adopting it by accident. I think another bit of folk wisdom is that people always predict that you’ll get a transformative technology just around the time that they die because I guess then they’re off the hook for any of the predictions that they made. So that would mean that older people are a lot more optimistic.
Katja Grace: I think it’s also supposed to be because then they don’t have to imagine that they die, right?
Robert Wiblin: Oh, so then they’ll be able to live forever because there’ll be an AI that will save them.
Katja Grace: Something like that. Yeah, or like they’ll get to see the thing, but otherwise [inaudible 00:48:10]. But I think that this theory is just wrong. I think there’s been some effort to check people’s expected lifespans against their predictions, and I think they’re just not very related.
Robert Wiblin: Alright. So another bunch of questions you asked about were concerning kind of our risks from artificial intelligence and the attitude of these researchers towards safety oriented machine learning research. What did that turn up?
Katja Grace: Well, just under half of them were in favor of more effort going into safety than is currently happening, and roughly the other half were in favor of the current level, and very few people thought that there should be less effort. I think that suggests a lot of support for AI safety research, whereas a few years ago I think AI safety was considered a pretty out-there concern.
Robert Wiblin: So a lot of people will think that it’s too early to do anything? Like even though these issues are important, it’s premature to start working on them? Did they have a view about that?
Katja Grace: Yeah, I think 35 percent of people thought that the value of working on the problem now … sorry, this is for a narrower problem, the problem of aligning an AI with human values, rather than other kinds of safety concerns like war or something. So for that problem, 35 percent of people thought working on it now was at least as valuable as working on other problems in the field of AI. I think that’s quite a bit of support.
Robert Wiblin: Okay. So it’s like a decent minority.
Katja Grace: Yeah. We also asked them other things, like how important the problem is and how hard it is, that sort of thing.
Robert Wiblin: Yeah. What did they say about that?
Katja Grace: Forty percent of them thought that it was at least an important problem. And on the difficulty of the problem relative to other problems in AI, the most popular answer, at 42 percent, was that it’s as hard as other problems in AI, and then they were kind of spread out on both sides of that.
Robert Wiblin: Okay. So overall, a reasonable number think that it’s an important problem. It’s not much more important than the others, but neither is it less important, nor is it particularly harder or less hard, on balance. And people want either about the same amount or more to be going into this kind of AI alignment work.
Katja Grace: Something like that, yeah. For all of them there’s sort of like a decent minority thinking that it’s relatively important and valuable.
Robert Wiblin: Yeah, great. So there were a bunch of articles, I think a year or two ago, about whether machine learning researchers are worried about AI safety. I think it was in some technology review. We’ll stick up links to these articles, but I feel like the survey reasonably settles that question, because it’s a fairly representative group and they’ve been asked these specific questions.
Katja Grace: Yeah, I think that’s true.
Robert Wiblin: On the other hand, we have reasons to doubt whether they have particularly informed views, at least about the timeline. So perhaps they’re not giving really thoughtful answers to these questions either.
Katja Grace: I think on the question of timelines, I would expect them to do less well than on the question of whether these kinds of risks are realistic for the technologies that they’re building. The particular risk we described to them was in Stuart Russell’s terms: if you have a system and you give it a number of variables to optimize, and you just forget to tell it about some variables that you care about, it’s probably going to do something crazy on those variables. And so that was the problem we were asking them about, how hard and important it was and so on. And I think that their expertise should be relatively good for saying, is this a realistic problem in our field?
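(Note: a minimal sketch in Python of the failure mode Katja describes. The objective, variables, and numbers here are hypothetical, made up purely to illustrate the omitted-variable problem; nothing below is from the survey.)

```python
# Toy illustration of Stuart Russell's point: an optimizer told to maximise
# only the variables we remembered to specify will push any unmentioned
# variable wherever it happens to help the stated objective.
import itertools

# Each action yields (output, pollution). We care about both, but the
# objective we actually wrote down only mentions output.
def stated_objective(output, pollution):
    return output  # pollution was forgotten

# Feasible actions: higher output is only reachable at higher pollution.
actions = [(o, p) for o, p in itertools.product(range(11), repeat=2)
           if o <= p + 3]

best = max(actions, key=lambda a: stated_objective(*a))
print(best)  # (10, 7): output maximised, pollution dragged up with it
```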
Robert Wiblin: Yeah. That makes sense. Did you look at the relationships between people’s answers here? So between their perception of severity of the risks or the likelihood of the downsides, and whether they would like to see more resources going into safety?
Katja Grace: We haven’t.
Robert Wiblin: Okay.
Katja Grace: There were a huge number of questions and a huge number of interesting comparisons between questions, and we haven’t got to most of them.
Robert Wiblin: Yeah, that makes sense. Is the data public anywhere or is it still sensitive?
Katja Grace: No. We’d also like to make it public, but have not got up to it yet, actually, because we have this giant data set with names of all of the questions that are just like [inaudible 00:52:04] and we have to interpret them into something before putting it up.
Robert Wiblin: Okay.
Katja Grace: We are in favor.
Robert Wiblin: So as we mentioned earlier, this was a pretty popular paper or at least a widely discussed paper. How do people react to the publication and did you end up doing a lot of media?
Katja Grace: Yeah, I guess the media reacted by emailing me a lot about wanting to discuss it. It was in New Scientist pretty early on, and I think maybe lots of people saw it from there. I guess it’s still not actually published in any journal, though it’s been accepted at a journal and also at a conference, but it was just up on arXiv, and lots of journalists were interested. I think they were especially interested in the automation-of-jobs aspect, and also the near-term tasks. Various people put up timelines of the different tasks and when they’d be automated.
Robert Wiblin: Okay. So were they taking an angle of, who’s gonna lose their job from this and what cool things will AI be able to do in the next five years?
Katja Grace: Yeah, I think there was a fair bit of that. I think they’re often interested in some of the other things as well.
Robert Wiblin: Yeah.
Katja Grace: A bunch of variety.
Robert Wiblin: Do you know anyone who’s looked at this and been convinced that risks from artificial intelligence should be taken more seriously or there should be more funding for safety work?
Katja Grace: Yeah. I’m not sure about that. I think that people who are trying to encourage those views have found it useful to cite, but I don’t know where they’ve got to with that.
Robert Wiblin: Did you do many interviews or anything like that? Did you get on television, on radio?
Katja Grace: Yeah, I guess. I went to Chile. I was invited to their Futures Congress to talk to the public of Chile about all of this. So I did that, and that involved a lot of … I think maybe the whole thing is on television, I’m not quite sure. I think I was live on the radio in Chile talking about this, which I didn’t really expect until five seconds before it happened. It was fun times.
Robert Wiblin: They had a translator I guess?
Katja Grace: Yeah, I guess the woman doing the interview could translate. But yeah, I talked to a whole bunch of media people there, and I basically didn’t know what was going on the whole time, because they just expected me to know that whatever thing in Spanish was a magazine or something, whereas I had no clue.
Robert Wiblin: It sounds like a fun adventure anyway.
Katja Grace: Yeah.
Robert Wiblin: Actually, one thing before we move on from the paper: an interesting thing that you found was that machine learning researchers living in Asia expected machine learning to improve much more quickly than did those living in North America or Europe.
Katja Grace: Yeah.
Robert Wiblin: Do you have any explanation for what’s going on there?
Katja Grace: I don’t have any that I’m very confident in, but ones I’ve heard are related to this earlier story that people don’t want to be too optimistic about AI: the norm about how optimistic to be about things might just be different in Asia compared to America, and in particular, it might currently be more fashionable in Asia to be like, yay, things are going well, we’re all going to try harder, we’re going to have AI soon.
Robert Wiblin: Right. So it could be kind of a social thing, like the researchers in North America feel self-conscious when they start sounding very booster-ish, because they worry that they might seem naive, whereas in China it’s less so.
Katja Grace: Yeah, which might just have historical reasons.
Robert Wiblin: Yeah. China’s been growing so quickly that perhaps they’re just more optimistic about the future. Just across the board, they expect things to go faster.
Katja Grace: Yeah. And I guess maybe the times that people have laughed at other people for being too optimistic have mostly been in American culture or something.
Robert Wiblin: Right.
Katja Grace: It could also be that things are going well in China.
Robert Wiblin: Oh yeah.
Katja Grace: Maybe AI progress is-
Robert Wiblin: Maybe they’re noticing the amount of research spending has been ramping up so fast there that they expect it to improve more quickly. I guess when you notice this clustering of views, that people from one group have one view and people from a different group have a different view, and these aren’t independently distributed across the groups, that makes it all seem less informative, I suppose.
Katja Grace: Yeah. Though I think if you suspected that there was some kind of bias, and you can actually pin down exactly what the bias is, it means you can make more use of the information that you have. For instance, earlier we looked at AI researchers versus AGI researchers specifically, who are more optimistic, and you can see there’s maybe 10 years of difference between them in general. So if one of them is considered right and the other one wrong, or we take the average, we know the answer is somewhere within 10 years of that.
Robert Wiblin: Yeah. So let’s say that you found, that there was a big difference between researchers in China versus America, but you surveyed more Americans perhaps because you knew more Americans.
Katja Grace: Right.
Robert Wiblin: Well, it starts to seem a little bit arbitrary what weighting you put on these two different groupings. Is that right?
Katja Grace: Yeah. That seems right.
Robert Wiblin: Because why would you trust the opinion of one group just based on the number of members that it has, given that they all seem to be drawn towards the same answer? This is kind of like pseudo replication that you’re getting where you ask one American, you ask another one, and basically they’re all just-
Katja Grace: They’re all correlated.
Robert Wiblin: They’re all correlated. That’s because they’re all reading the same things.
Katja Grace: Yeah. I mean, there might be some reasons you would expect the larger groups to get it more correct or something, like if you’re modeling them as a whole bunch of independent views of reality, but probably that’s not that.
Robert Wiblin: Yeah. You do get the wisdom-of-crowds effects, but that might run out after a while, and then you want this diversity of perspective. So you want to survey people from as many different places, with different knowledge, as possible.
Katja Grace: Yeah, I agree that’s a problem. We don’t have a good way of dealing with it.
Robert Wiblin: Okay. So let’s push on from this specific paper, which maybe has some wisdom in it but shouldn’t completely guide our views, and just think about what you think, perhaps, all things considered. So do you have a particular view on when you expect artificial intelligence to achieve particular competencies, or are you just kind of very agnostic about all of this?
Katja Grace: I’m pretty agnostic about it, actually. At AI Impacts we do have an ambition to have better timelines, but a lot of the things we’ve been working on are very low-level questions, like what hardware timelines look like, or something. And I think it takes a bunch of effort to integrate those things into an actual good timeline, and we haven’t done that step yet. So I think to the extent that I personally have views on when AI will happen, they’re similarly uninformed as someone who didn’t research this all the time, I think.
Robert Wiblin: Don’t tell them that. I guess, do you see it as a useful research finding to have asked people who know a reasonable amount about this and just found out that they don’t really have any shared view?
Katja Grace: Yeah. As I was saying, in the past it seemed pretty plausible that what AI researchers think is a decent guide to what’s going to happen, and I think we’ve pretty much demonstrated that that’s not the case. I think there are a variety of different ways we might go about trying to work out AI timelines, and talking to experts is one of them. I think we should weight that one down a lot.
Robert Wiblin: Yeah. I suppose it does show that no one has offered a decisive argument that artificial intelligence couldn’t come fairly soon, and no one’s offered a decisive argument that it will right away. So I guess it should just cause all of us to be a bit more agnostic.
Katja Grace: Yeah. I think a lot of agnosticism is fairly reasonable at this point. And even though people give very inconsistent answers with different framings of the question, I might expect that if it was coming very soon, their answers would start to get in line and all be very soon. So I’d probably still interpret the current mess as some evidence against that.
Robert Wiblin: Okay. Yeah. That it’s coming in the next decade or something like that.
Katja Grace: Yeah.
Robert Wiblin: Are there other people who you’d be interested in surveying in the future, who you think would have a more informed view on these AI timelines than the group that you surveyed last time?
Katja Grace: I guess natural alternative experts to talk to would be experts in forecasting or tech history, instead of AI per se. I’m not sure if I’m more optimistic about that. It seems like it might be possible to do some sort of combined thing, where we have some AI people who also know something about the forecasting literature, and do some more in-depth process.
Robert Wiblin: Yeah. Could you bring them together in the same room to talk a whole bunch and share what they know?
Katja Grace: Something like that, yeah.
Robert Wiblin: Yeah.
Katja Grace: I think at this point I feel less optimistic about the sort of ‘asking experts to think about it’ approaches. If someone wanted to think about it a lot, there are a lot of empirical facts that it would be good for them to have, and it feels more promising to collect those empirical things at the moment. If I recall correctly, one of the things that is known about predicting things is that experts are not very good compared to linear extrapolations or simple models, things like that. I think you could also read that as: simple models and linear extrapolations are a great way to predict things, relative to talking to experts.
Robert Wiblin: So is that kind of the direction that the research is going now? More towards trying to figure out what linear extrapolation model we should be looking at?
Katja Grace: Yeah, pretty much. I mean, I think we’ve always been more focused on that sort of thing and I guess yeah, the survey was kind of a weird thing for AI Impacts to do.
Robert Wiblin: Right. Well I suppose it got a whole lot of attention.
Katja Grace: Yeah. If we wanted a whole lot more attention, it’s possible we should do another thing like that, but I feel like it’s not the best for rapidly informing our views.
Robert Wiblin: Yeah. Okay. So we’ll talk about these linear extrapolation models and what kind of data you think people should be collecting in just one second, but do you know whether the superforecasters of the kind that Tetlock has been working with to forecast international relations events have ever been asked about these questions?
Katja Grace: I think maybe someone is working on making that happen, though I don’t know all the details.
Robert Wiblin: Okay. Well, I’ll see if I can find out, and I’ll put up a link to that if it turns out that someone is. Alright, so let’s talk now about some of the other work that AI Impacts has been doing: what other questions you’ve been asking in order to try to collect data that would actually allow you, or anyone, to have an informed view about how quickly we should expect AI to progress. What is the research agenda there?
Katja Grace: So in the past we’ve tried to figure out exactly what the rate of hardware progress is, so how quickly hardware is getting cheaper, which is a nice thing to try and forecast because it’s sort of famously pretty straight. For a long time, Moore’s law was going and I guess there are a bunch of related Moore’s laws. In particular, the price for computer performance has been fairly predictable year after year for many decades.
And I think on a scale of several decades, it varies a bit, but it’s one of the more predictable things. So we’ve tried to pin that down over the last few years, which is surprisingly confusing and hard to do, given that it should be so straightforward; it’s surprisingly hard to find data on it. The other people keeping track of this seem to be just a few old professors who personally keep a list of different things in one place. It’s not like there’s an organized place to find this information.
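(Note: a minimal sketch of the kind of extrapolation being described, fitting a straight line to the log of price-performance and reading off future values. The data points below are made up for illustration, not AI Impacts’ figures.)

```python
import numpy as np

# Hypothetical price-performance data points (FLOPS per dollar by year).
years = np.array([2008, 2010, 2012, 2014, 2016, 2018])
flops_per_dollar = np.array([1e7, 2.2e7, 4.5e7, 9e7, 1.9e8, 3.8e8])

# Exponential trends are straight lines in log space, so fit a line there.
slope, intercept = np.polyfit(years, np.log2(flops_per_dollar), 1)
print(f"doubling time: roughly {1 / slope:.1f} years")
print(f"extrapolated 2025 FLOPS/$: {2 ** (slope * 2025 + intercept):.1e}")
```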
Robert Wiblin: So here, you’re trying to draw a graph of the cost per unit of processing capacity over time, and you’re saying there’s not really any prestigious body that’s doing this work. It’s just a bunch of guys on the internet.
Katja Grace: Yeah. Also, I found two different guys on the internet and they both just lost interest in 2014 or something like that. Yeah.
Robert Wiblin: That’s kind of astonishing.
Katja Grace: Yeah. There’s a bunch of other data; you can sort of collect the data yourself. There are benchmark websites where you can look up particular computers and find their performance, and then you can find their price somewhere else. But doing this, I found that the apparent performance of the same computer just varies by a factor of five over a couple of years. So why is that? I wrote to the site and asked them what’s up, and they were like, “Well, we changed what our benchmark means.”
Robert Wiblin: Huh. Maybe the chip’s getting older? Do they tend to get slower?
Katja Grace: No. I mean they just changed the measurement.
Robert Wiblin: They changed the measurement.
Katja Grace: By a factor of five.
Robert Wiblin: Wow.
Katja Grace: There’s a different benchmarking site that we found where it looks like something similar happened. Yeah, I guess there are a bunch of data sets that are all confusing in some way or another.
Robert Wiblin: Right.
Katja Grace: So we’ve sort of done a bit of this. It still seems quite confusing to me. So that’s one thing. Another related area is trying to work out how much computing hardware you need to do something like what the human brain is doing, trying to somehow equate the human brain to a pile of hardware-
Robert Wiblin: Say at what point would they be able to do the same number of calculations?
Katja Grace: Something like that, yeah. If you wanted to run something like doing what a human brain is doing on computers, how much compute would you need for that? So I guess the past estimates for that that we could find varied by like 10 orders of magnitude or something like that. It’s a tricky question.
Robert Wiblin: 10 orders? Okay, hold on. So that’s like by a factor of a billion?
Katja Grace: Something like that. So this is, I think, partly because we don’t really know what the relevant computation in the brain is. If we understood in detail what’s happening there, we would be further along on AI.
Robert Wiblin: So how many calculations do you think are going on in the brain depends on, I guess, where you think relevant calculations are happening. Is it happening in the neurons, is there something complex going on inside there, or is it just the transmission of messages between them?
Katja Grace: Yeah. Like, what of what’s happening in the brain even counts as a calculation?
Robert Wiblin: Right, because it doesn’t look like a transistor.
Katja Grace: Right. So we did this little tentative thing and decided to just not think about computations at all, and instead measure in terms of communication, so, sending messages around in the brain. For big computers, sending the messages around is a bottleneck for doing the computation: even if you have lots of bits of computer that can do really fast computation, you can’t send messages back and forth between them fast enough. So people made up a new benchmark for measuring that, and that’s easier to compare to the brain.
So that’s what we’ve done, because we can at least count the messages going around in the brain. Maybe there’s some uncertainty about how much information there is in a message or something like that, but you can get a better idea than 10 orders of magnitude.
Robert Wiblin: Right. Okay. That’s very cool. So you’re trying to compare the number of signals moving between neurons in the brain to the flow of information between different parts of a computer processor?
Katja Grace: Yeah.
Robert Wiblin: Okay. What has that turned up?
Katja Grace: Well, it’s not very precise, but we estimate that the human brain performs between 0.18 and 6.4 times ten to the fourteenth traversed edges per second, which is something like an existing supercomputer.
Robert Wiblin: Okay. Hold on.
Katja Grace: Like the biggest super computers or something.
Robert Wiblin: Okay. So, a human brain very approximately, you’re guessing basically is something like the biggest super computer that we have now.
Katja Grace: Something like that, yes.
Robert Wiblin: And what is a traversed edge?
Katja Grace: This is the benchmark for measuring computation. I think the way the benchmark works for computers is that the computer is given a giant graph of nodes that are connected by edges, so, dots with lines between them, and the question is how fast it can send something along all of the edges in the graph. Or something like that.
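(Note: a rough reconstruction of how an estimate like this can be assembled, counting spikes crossing synapses per second. The synapse-count and firing-rate ranges below are my assumptions, chosen because they reproduce the quoted range; AI Impacts’ own inputs may differ.)

```python
# Brain TEPS, estimated as synapse count times average firing rate.
synapses = (1.8e14, 3.2e14)   # assumed range for synapses in a human brain
spikes_per_sec = (0.1, 2.0)   # assumed average neuron firing rate, Hz

low = synapses[0] * spikes_per_sec[0]    # 0.18e14 traversed edges/second
high = synapses[1] * spikes_per_sec[1]   # 6.4e14 traversed edges/second
print(f"{low:.2e} to {high:.2e} TEPS")
```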
Robert Wiblin: Okay, let’s not dive into it too much. So the bottom-line conclusion was that we now, possibly, have the hardware that could run a human brain.
Katja Grace: Yeah, something like that.
Robert Wiblin: Yeah.
Katja Grace: With much uncertainty around it. And then you can put these things together: if computing costs are coming down at some rate, and we know what that rate is, and we know roughly how much computing power you need for a human brain, we can ask when a human brain’s worth of computing power will be a similar price to a human.
Robert Wiblin: Okay, yeah. What’s that?
Katja Grace: Somewhere between the recent past and the near future. We’re actually kind of near that point, I think.
Robert Wiblin: Oh, well how are you assessing the cost of a human?
Katja Grace: I think we’re treating the cost as something like a hundred dollars an hour, the price of paying an expensive human or something. So we’re saying, supposing that the software was just easy, when would running a human brain’s worth of hardware cost about the same as-
Robert Wiblin: Cost about the same as running a super computer.
Katja Grace: Right.
Robert Wiblin: And you think the supercomputer costs something like a hundred dollars an hour or at least it’s not orders of magnitude off.
Katja Grace: So, I think this might be one to two years out of date. Super computers seem to cost like somewhere between $2,000 to $40,000 to run and our estimate for the current cost of running a human brain’s worth of hardware is like about $5,000 to $200,000.
Robert Wiblin: An hour?
Katja Grace: An hour.
Robert Wiblin: Okay, right. So we’re some way off, but I guess you can imagine them being the same cost within a couple of decades.
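(Note: a minimal sketch of the parity arithmetic, using the cost range quoted above and an assumed hardware-price halving time of two years; the halving time is a made-up round number, not AI Impacts’ figure.)

```python
import math

human_cost = 100.0                     # $/hour for an expensive human
brain_hardware = (5_000.0, 200_000.0)  # current $/hour range quoted above
halving_years = 2.0                    # assumed hardware price halving time

for cost in brain_hardware:
    years = halving_years * math.log2(cost / human_cost)
    print(f"${cost:,.0f}/hr reaches ${human_cost:.0f}/hr in ~{years:.0f} years")
# roughly 11 to 22 years under these assumptions, i.e. 'a couple of decades'
```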
Katja Grace: Right. I mean, yeah, hardware prices change quickly. So we estimated that there’s like a 30% chance that we’re actually already past human-level hardware, given all of the uncertainty about human brains and so on.
Robert Wiblin: I see, ’cause it could be that the human brain is doing far fewer calculations than what you thought.
Katja Grace: Right. So I think this is an interesting update, ’cause probably having human-level AI depends on software as well as hardware. But I think we don’t have a good idea of how important the hardware and the software are relative to each other, and many people, I think, think that hardware is a much bigger deal, and that basically once we have enough hardware, everything else will kind of go smoothly.
Robert Wiblin: Okay, so there’s some people that think that the limiting factor is how many processes we can make and how fast we can make them run and some people who think no, that’s not the main issue the problem is the software that we have currently just like isn’t doing the things that allow you to have a general intelligence.
Katja Grace: Something like that, except that I think probably everyone agrees that both are somewhat important, and the question is how they trade off against one another, or something.
Robert Wiblin: And I guess this is an update in favor of thinking that it’s about software rather than hardware, because it seems like we already have quite a lot of hardware.
Katja Grace: Right. I think this would cause you to somewhat think we’re in the software-is-important world, and somewhat cause you to think that if we’re in the hardware-is-important world, things are gonna happen soon.
Robert Wiblin: Okay, so this raises the question of how quickly the processing capacity that is being applied to solving these machine learning problems has been increasing over time, and I saw that you were involved in a blog post that OpenAI put out that was dealing with this question. So yeah, how quickly is the processing capacity being thrown at machine learning increasing?
Katja Grace: Yeah, I don’t know about how much is being thrown at it overall, but this blog post looked at how much is being thrown at training particular things, like the sort of headline papers: how much compute is used for training one thing. And I guess the answer was it’s doubling every 3.5 months, which is pretty fast.
Robert Wiblin: Yeah, it was something like a three hundred thousand fold increase over the last seven years.
Katja Grace: Yep. Maybe it’s six years. Yes, since 2012.
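(Note: a quick consistency check, my arithmetic only, relating the two figures just quoted: a roughly 300,000-fold increase and a 3.5-month doubling time.)

```python
import math

increase = 300_000
doublings = math.log2(increase)   # about 18.2 doublings
years = doublings * 3.5 / 12      # at 3.5 months per doubling
print(f"{years:.1f} years")       # about 5.3 years, roughly consistent
                                  # with the window starting in 2012
```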
Robert Wiblin: Six years, okay. So, again, an astonishingly rapid increase. Is that because there are changes in the designs of the chips, or are they just spending a ton more money buying them? Did you have any sense of what’s driving it?
Katja Grace: I think it’s not the chips getting cheaper, so I think they must be spending a lot more on it. Whether that’s the underlying thing that’s driving it, or whether they’re somehow more able to make use of more compute, I’m not quite sure. But I think this does mean that the earlier model I was describing, where we were talking about when we’ll have human-level hardware at a human cost, is somewhat of an indicator of what you might expect, but not exactly, because people are paying very different amounts of money for things, and you might expect that people will try to do this when it’s still much more expensive than running a human.
I guess what I’m saying is, if you’re wondering when there will be some human-level AI, in the sense of AI that is able to do what a human does, but you don’t care at what price it happens, you might expect that someone is willing to do that while it’s still very expensive.
Robert Wiblin: I see, because it will be such an achievement.
Katja Grace: Right.
Robert Wiblin: So, you might be able to do it once. You might be able to get the equivalent of one brain if you’re willing to spend a billion dollars.
Katja Grace: For instance.
Robert Wiblin: And someone might be motivated to spend that much.
Katja Grace: Right, and maybe that’s part of what happens on the way to making it cheaper.
Robert Wiblin: Very interesting. And I guess then the rate of progress goes back to, you know, how much are we willing to just build more and more of these chips, and also how quickly is Moore’s law progressing, or how quickly are the chips getting better. I’ve heard that there are different kinds of chips, right, that you can run these machine learning algorithms on? So you’ve got a normal CPU, which basically people don’t use anymore, and you’ve got these graphics processing units, which are much more efficient at doing the particular calculations that are relevant. And you’ve got these tensor processing units that Google has developed. Do you want to clarify any of that for the audience? ’Cause I’m a little bit confused about it.
Katja Grace: All of that sounded correct.
Robert Wiblin: Yeah.
Katja Grace: I think in general, chips can be more or less well suited to particular applications. So I guess some groups are going to have much more efficient processors soon, and I think those would just be better suited to the particular applications. For instance, recently some GPUs can do a lot. In general, chips can do double-precision, or single-precision, or half-precision operations.
Robert Wiblin: Is this where you have a greater error tolerance?
Katja Grace: To do it like how many decimal places each sort of number that’s being moved around is I think. And I think you can do like deep learning with half precision often, which means that you can do deep learning much more efficiently than you might have thought if you can have chips that can do half precision.
Robert Wiblin: Okay, so each calculation being close enough is good enough. It comes out in the wash in the broader picture. So each calculation is slightly half-assed, but overall it works out fine.
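(Note: a small illustration of the precision classes mentioned here, storing the same number at half, single, and double precision; this is generic NumPy, nothing specific to any chip.)

```python
import numpy as np

x = 1 / 3
for dtype in (np.float16, np.float32, np.float64):
    bits = np.dtype(dtype).itemsize * 8
    print(f"{dtype.__name__} ({bits} bits): {dtype(x)}")
# float16 (16 bits): 0.3333
# float32 (32 bits): 0.33333334
# float64 (64 bits): 0.3333333333333333
```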
Katja Grace: That’s my understanding. But I guess this means that some chips can now be very efficient at doing deep learning, so if you were measuring the same thing over time, you might have seen fast progress from this being some new thing that was much cheaper.
Robert Wiblin: They switched to a different class of chip, and then maybe you get a bit of a jump, and then it improves a bit more quickly.
Katja Grace: Yeah.
Robert Wiblin: Interesting. Okay. So those were two different things that you’ve been looking at: what amount of hardware is equivalent to a human brain, and how quickly the hardware is improving. Are there any other questions that AI Impacts has looked at over the last few years, or plans to look at in the coming years?
Katja Grace: Yeah, many. I guess in the past there’s been a class of things to do with how likely discontinuity is in AI progress. So, how likely it is that there’ll be some sort of sudden jump in capability, where I think many people expect something like that: maybe one day someone will discover the algorithm for intelligence, or maybe there’ll be an intelligence explosion that will be very fast, but we won’t really be expecting it, and then someday there’ll be really good AI, and then maybe it’s game over, it’ll take over the world.
So, we’ve previously looked into just like the base rate of any technology having a discontinuity in it that’s pretty big.
Robert Wiblin: So you just saying out of lots of technologies that we’ve had in the past how often have they had some sudden take off?
Katja Grace: Right. Yeah.
Robert Wiblin: Or sudden jump.
Katja Grace: Yeah. My impression was that it’s relatively rare, and I think that’s basically what we’ve found, but we haven’t really finished that investigation; it’s going on gradually. So far, we’ve just been collecting cases that were big jumps.
Robert Wiblin: I see.
Katja Grace: And we know of like four of them.
Robert Wiblin: Okay. Yeah, what are they?
Katja Grace: Well, the biggest one ever was nuclear weapons, as a discontinuity in explosive power per gram of material. (Katja’s note: this is no longer the biggest.) So, I guess over thousands of years or something, the explosiveness of different explosives increased by not that many times, and then within a few years it got thousands of times better or something.
Robert Wiblin: Interesting. And I guess that’s because you started harnessing a totally different kind of energy or totally different source of explosive power. So, you’ve gone from zero to one and it’s just a totally different product.
Katja Grace: Yeah, so we’re also interested in what such things have in common, because often when people think that there will be a discontinuity in AI progress, they implicitly have some theory about it, because it’s sort of an algorithm and maybe it’s likely to be a very simple one or something. So we can ask, okay, are things that are algorithms more likely to undergo fast progress? We usually measure these jumps in terms of how long it would have taken to make that amount of progress at the usual rates. Nuclear weapons were six thousand years of progress at previous rates in one go, so that’s big.
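(Note: a sketch of the discontinuity metric being described. The growth rate and jump size below are hypothetical, chosen so the answer comes out near the nuclear-weapons figure of six thousand years.)

```python
import math

# Suppose a metric had been growing ~0.1% per year for millennia, then
# jumped 400-fold in a single step. How many years of progress at the
# old rate does that jump represent?
annual_growth = 1.001   # assumed historical growth factor per year
jump_factor = 400       # assumed size of the one-off jump

years_equivalent = math.log(jump_factor) / math.log(annual_growth)
print(f"{years_equivalent:,.0f} years of progress in one go")  # ~6,000
```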
The next biggest one we could find was high-temperature superconductors, where they underwent maybe a hundred years of progress at previous rates. I think this was people discovering different materials that could be superconductors. They hadn’t really realized that there was a whole different class of things that could be superconductors, and I think they might have had some theory that ruled it out. Then they came across this class, and suddenly things went very fast.
So, I think it’s interesting that both of these are sort of like discovering a new thing in nature.
Robert Wiblin: Right. A new material, basically.
Katja Grace: Yeah, pretty much. And then there are a couple of other ones that are more than ten years. One is jet-propelled vehicles: the land speed record was going up with sort of more normal cars, I think, and then at some point jet-propelled cars came along.
Robert Wiblin: Somebody stuck a rocket on the back of a car.
Katja Grace: Right. If I remember right, they may have done that a few times before it really beat the record. I think there were two curves, where one of them was going up slowly, and then the rocket one was going up quite fast and went past the other one quickly.
And the other one is airplanes. And we also have maybe 30 other things that people have suggested to us as discontinuous, and we haven’t finished looking into them.
Robert Wiblin: Okay.
Katja Grace: Yeah, we had a bounty out for suggestions, and I think maybe many of these suggestions don’t look great, but some of them are probably good. So I expect to, I don’t know, maybe find another ten or something.
Robert Wiblin: Okay. So it seems like discontinuities are rare but not exceedingly rare, and they do sometimes happen when you get a different material or a totally different approach to dealing with a question. Which, I suppose, if what matters is just improving hardware, then that seems like it’s going to be more incremental, whereas if there’s some total change in the algorithm that you’re running that suddenly flips you onto a different kind of intelligence, then it could be much more abrupt.
Katja Grace: Note that lots of things are new to that level of newness. In fact, there have been new algorithms for many kinds of things.
Robert Wiblin: But typically they’re only incremental improvements nonetheless.
Katja Grace: That’s my impression, yeah. But much uncertainty about this.
Robert Wiblin: Do you have a view on this issue of whether we should expect AI progress to be discontinuous or suddenly go very quickly or not? Or also, what are the best arguments one way or the other?
Katja Grace: Yeah. A more recent project was collecting up all the arguments for this: things don’t seem to be discontinuous that much in general, but are there good reasons to think that AI might be especially likely to be? My own impression is that none of the arguments that are around are that great, though I think some of them, worked out in more detail, could be good or something.
There are some we’ll probably investigate more, but my current impression is that there aren’t good arguments for it. But many people think this, so maybe it’s right and they have some good intuition, or maybe I’m misunderstanding the arguments. It’s very much an open question. The kinds of things we might investigate more are, for instance, this intelligence explosion idea, where the idea is that we’ll build AI that can basically work on building AI, and that will speed up AI progress; then it will be even better at building AI, and basically there’s a feedback loop. And I guess the argument, as I’ve often heard it, is sort of: well, there’ll be a feedback loop, so it’s gonna go crazy. And I think this argument is pretty lacking, just ’cause there are lots of feedback loops in the world, and usually the world doesn’t explode. That’s my impression.
Robert Wiblin: So you get a feedback loop, but the effect is sufficiently gradual: it takes time for the feedback to happen, and at each stage it’s only increasing somewhat, so it’s gradual.
Katja Grace: It doesn’t really say much about the rate.
Robert Wiblin: Okay.
Katja Grace: Like, you might say yeah, actually there’s already an intelligence explosion. Like the economy is already making tools that are helping us make better tools and thinking thoughts that are helping us to think better thoughts.
Robert Wiblin: Yeah, which is true I guess. It’s just very gradual.
Katja Grace: Right. And I mean if you look at the economy over the long term it does indeed sort of look like it’s gonna take off. You know?
Robert Wiblin: It’s just on a human time scale. It’s not so bad.
Katja Grace: Right. It seems to me that you could actually have a better idea of how fast this feedback loop is going to go, because the things happening in the feedback loop are not entirely alien things that we haven’t seen before. They’re like research progress: some amount of effort is being put into research and is getting some kind of results, and the results are leading to an increase in capabilities. And so the question is just what happens when you close this back around into a loop, instead of it being a one-way path from people making effort to increases in capability. So that’s the thing that I’m working on at the moment.
Robert Wiblin: Interesting. You could have a situation, I suppose, where now AI is mostly programming itself ’cause it’s better than us, but getting smarter just becomes harder and harder once you’re at this frontier of intelligence. It gets progressively more difficult to find any new improvements, and so it slows down nonetheless.
Katja Grace: Yeah. I think perhaps a key part of making this a more quantitative model of what an intelligence explosion could look like is how research effort turns into results. There’s research on this in general, and my impression, though I haven’t looked into it that much, is that it’s sort of confusing: it looks like we’ve been putting increasingly much effort into various research areas and only getting linear outputs.
Robert Wiblin: Yeah, so this is where the inputs grow exponentially, but the problem also gets harder at an exponential rate, so on balance you only get a linear improvement over time.
Katja Grace: My impression is that it looks like something like that is happening though I’m unclear exactly what’s going wrong.
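(Note: a toy model, mine and purely illustrative, of the pattern just described: research inputs growing exponentially while measured outputs grow only linearly, because each marginal advance gets exponentially harder.)

```python
import math

def cumulative_effort(years, growth=0.05):
    # research inputs growing 5% per year (assumed), summed over time
    return sum(math.exp(growth * t) for t in range(years))

# If capability goes as log(cumulative effort), exponentially growing
# effort buys only roughly linear capability growth.
for years in (20, 40, 60, 80):
    print(years, round(math.log(cumulative_effort(years)), 2))
# capability rises by a near-constant increment per extra 20 years
```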
Robert Wiblin: Okay, in which case, I guess you could have an explosion in capability but only a linear improvement overall. Well, it’s interesting. I’m kind of contradicting myself there.
Katja Grace: Yeah, I think what would happen is that the overall feedback would just be fairly slow.
Robert Wiblin: Okay.
Katja Grace: I think it should still go faster than research progress currently goes.
Robert Wiblin: Just not infinitely so.
Katja Grace: Yes.
Robert Wiblin: Okay. What’s the best argument against expecting that there’d be some abrupt discontinuity?
Katja Grace: I think the best argument is just that there aren’t usually abrupt discontinuities. So I guess I feel like the onus is on the person saying that there will be one to come up with a good argument. And then, I guess, we have this whole list of arguments where none of them seem great, but that’s pretty debatable, and I’m also working on debating that as well.
Robert Wiblin: Okay, so the outside view says it’s unlikely.
Katja Grace: Something like that, yeah.
Robert Wiblin: Yeah. So the researchers in the survey that we were talking about earlier thought that a sudden take-off in progress in artificial intelligence was a possibility, but seemed on balance to think it not that likely. Which I guess kind of matches up with your view.
Katja Grace: Yeah, that’s true.
Robert Wiblin: Okay. What other things has AI Impacts looked into?
Katja Grace: I guess an amusing thing, related to the earlier measuring of how much hardware is in the brain, is that we also tried to estimate how much hardware there is in the world and how fast that’s increasing. There’s a bunch of uncertainty around that. We found someone else’s estimates of how much hardware there is in the world, but as far as I could tell, those would mean that hardware is a huge fraction of all of the world’s wealth, somewhere between like 40% and 400%.
Robert Wiblin: Of all of the wealth in the world.
Katja Grace: Something like that.
Robert Wiblin: Okay, so you kinda rejected that one and went back to the drawing board.
Katja Grace: Well, we sort of half went back to the drawing board. Anyway, this area is sketchy. But you can use the estimates of how much hardware there is in the world, and how much hardware there is in a human brain, to say: okay, in these kinds of scenarios that people talk about, where amazing hacking ability causes some project to take over much of the world’s hardware so an AI can run on it or something, how much extra capability does that get you? Like, how many extra human brains’ worth of hardware are you stealing if you take all of the world’s hardware?
Robert Wiblin: Okay.
Katja Grace: Which is perhaps not a great proxy for what might happen, ’cause it might be that if you have a hundred brains’ worth of hardware in one giant brain, maybe it’s unimaginably better than a hundred brains.
Robert Wiblin: Certainly.
Katja Grace: Right. Yeah, but ignoring that for now, we calculated that you get about a hundred or a thousand extra brains if you took all of the world’s hardware at this point, which I think is not very many compared to, I don’t know, the usual thought.
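(Note: a rough reconstruction of the calculation described. The world-hardware total below is a made-up placeholder; the brain range is the one quoted earlier in this episode. AI Impacts’ own inputs will differ.)

```python
world_hardware_teps = 1e17       # hypothetical estimate of the world total
brain_teps = (0.18e14, 6.4e14)   # brain range quoted earlier

low = world_hardware_teps / brain_teps[1]
high = world_hardware_teps / brain_teps[0]
print(f"~{low:,.0f} to ~{high:,.0f} brains' worth of hardware")
# a few hundred to a few thousand under this assumption; the interview's
# bottom line was on the order of a hundred to a thousand
```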
Robert Wiblin: Right. Okay. So currently you’d get the equivalent of possibly around a thousand people working together if you had all the hardware. Which evidently is not enough to run some … like, that’s just a normal scale-
Katja Grace: Not enough to take over the world.
Robert Wiblin: Right. Absolutely.
Katja Grace: Like, if you wanted an extra hundred or a thousand brains to do your AI project, building AI is not the way to do that.
Robert Wiblin: Yeah. So I guess there are enormous error bars around that though, at all the different points. Are there any other topics that you wanna bring up in this kind of smorgasbord, sampling plate of projects at AI Impacts-
Katja Grace: Well, I guess those are mostly past projects. I could say something about projects that we are likely to do; some of those are ongoing. I think one thing I’m pretty interested in is figuring out whether we’re in this hardware-is-very-important world or this software-is-more-important world. And I think one way to make progress on this is to just look at past AI progress and ask how much of the increase in capabilities that we have seen came from hardware versus software, and that’s kind of easy to figure out, I think.
Robert Wiblin: Okay.
Katja Grace: In particular cases; then you need to look at a lot of cases, maybe. I’ve done a bit of this, and for the bit that I’ve done, it looked maybe unclear, but something like 50/50 was not ruled out.
Robert Wiblin: Okay, so just explain this to me. As I understand how machine learning works, you have to throw a lot of processing power at this training process where you kind of develop an algorithm that it’s gonna use to make decisions. But then, having done that, you can operationalize it: you can run that algorithm a lot more with a lot less processing capacity. Is that right?
Katja Grace: That’s right.
Robert Wiblin: Okay. So, when you switch from like training to actually implementing you can implement like many, many copies of it. Interesting.
Katja Grace: Yeah.
Robert Wiblin: So, kind of, when Google does, you know, image recognition or something for Google Maps, it has to do this big training process, and then once it’s spat out the product, it can do it across all sorts of streets a lot more easily.
Katja Grace: Something like that, yeah.
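(Note: a generic sketch of the training-versus-inference asymmetry being described, using scikit-learn on synthetic data; nothing here is Google’s pipeline, and the sizes are arbitrary.)

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: 200,000 examples, 50 features, linearly separable-ish.
rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 50))
y = (X @ rng.standard_normal(50) > 0).astype(int)

t0 = time.time()
model = LogisticRegression(max_iter=1000).fit(X, y)  # expensive, done once
print(f"training: {time.time() - t0:.2f}s")

t0 = time.time()
model.predict(X[:1000])                              # cheap, done endlessly
print(f"predicting 1,000 rows: {time.time() - t0:.4f}s")
```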
Robert Wiblin: Yeah. So, give me the big picture of what you’ve been seeing so far. It seems like this whole area of forecasting AI is quite undeveloped; there are only a few people working on it, and there are lots of questions that you’ve only had a couple of days, really, to look into, that seem really important.
Katja Grace: That seems about right. I mean, often it takes us more than a couple of days to do what seems like it could take a couple of days, but apart from that-
Robert Wiblin: I’d say that’s true almost everywhere.
Katja Grace: Everything is very underdeveloped. And it seems to me like there are lots of really attractive ways to develop it as well. Lots of things that haven’t even been touched and actually would be great to do.
Robert Wiblin: Yeah. I mean, the fact that we don’t have a good record of how Moore’s law has been progressing, really, and that you had to pull all this together because nobody’s even really recording the data right now, is astonishing.
Katja Grace: Yeah, I agree.
Robert Wiblin: What do you think explains this? Why don’t people care about predicting this? You’d think maybe even Google or some other organization would care from a business point of view; they’d want to know when they’ve gotta do different things.
Katja Grace: Yeah, so I guess it seems plausible that there are things like that inside companies that we don’t know about, or that there are some outside companies and we failed to find them. I think overall, forecasting AI seems to be widely considered very hard, basically pointless to try to do or something. I think this is largely based on past efforts at predicting AI being considered failures. The past efforts are mostly these instances where people have just made up numbers on a single occasion and been like, “Yeah, I think it’s gonna be 2028.” I guess Ray Kurzweil has put more thought into this, but not that many other people have put that much thought into it. So I think people are using entirely the wrong standard for judging whether this is feasible: people have tried very little to do it, it’s sort of unclear how well it’s gone, and they’ve been like, “Well, that was embarrassing. Let’s not do that again.”
Robert Wiblin: But in reality they just put almost no effort in.
Katja Grace: Yeah. Like, I guess I like to compare it to something like climate change. That seems like it’s probably a less severe problem than AI risk, but if you look at the amount of effort that’s going into predicting what will happen with it, I think that’s probably more appropriate for a problem of this scale.
Robert Wiblin: Right. It’s also exceedingly difficult to predict that.
Katja Grace: Right. Yeah.
Robert Wiblin: It’s similarly difficult to predict, but people will spend hundreds of millions of dollars effectively trying to do that.
Katja Grace: Right. I think on the margin AI should be much easier to like start to predict.
Robert Wiblin: Yeah, because almost nothing’s been done.
Katja Grace: Right. And there are a bunch of linear extrapolations to look at. I think also there’s kind of a notion that predicting things is quite hard in general, which it is in some sense, but there are a bunch of things that we can predict well, that we have predicted, and we sort of don’t give ourselves credit for them because they’re easy to predict. For instance, we can roughly predict what the climate in Oxford is going to be like in five years, but we know that that’s the sort of thing that’s predictable. So, yeah, usually if we’re having a prediction tournament or something, the things that are in it are things that are kind of hard to predict.
Robert Wiblin: I see.
Katja Grace: Whereas if the metric is just how we are doing against reality, like whether we can figure out the answers to the questions we need to know, it’s not clear how easy or hard those questions are going to be. And so I think for some of the past AI things, it could have been very easy to predict them. I’m not sure to what extent people did, but for instance, for most of the history of chess AI it was as good as some human level, and the question was when it was gonna beat all of the humans. I think you could have seen that coming from a long way off, because progress was just fairly incremental over decades.
Robert Wiblin: Okay, yeah. So, you’re saying people think that forecasting AI progress might be very hard, but in fact if progress is just kind of linear, which is kind of constant in a sense, just like the climate in Oxford is fairly constant over a five-year period, then it might be fairly straightforward. You just draw out the line, and I suppose there’s a question of at what point on the line it will have particular capabilities, but this could give you a pretty good idea. And no one’s even done that.
Katja Grace: Yeah, I guess I think there are probably many things that are hard to predict and that are sort of surprising and so on, but the bits of this that should be easy to predict, that at least give you some structure that the more complicated things might happen on top of, I think we haven’t really tried to do well.
Robert Wiblin: So, there are very hard parts and there are some easy parts, and the easy parts kind of get you part of the way there, but we haven’t even picked that low-hanging fruit.
Katja Grace: Yeah, roughly.
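The ‘draw out the line’ approach Rob sketches is easy to make concrete. Here’s a minimal example that fits a linear trend to a capability metric and asks when it crosses a threshold; the chess-engine Elo numbers and the threshold are hypothetical, chosen only to illustrate the method:

```python
# Minimal trend-extrapolation sketch: fit a line to a capability metric
# and ask when it crosses a threshold. Data points are hypothetical.

import numpy as np

years = np.array([2010, 2012, 2014, 2016, 2018])
elo = np.array([2800, 2950, 3100, 3250, 3400])  # hypothetical engine ratings

slope, intercept = np.polyfit(years, elo, 1)    # least-squares linear fit

threshold = 3600                                # hypothetical capability level
crossing = (threshold - intercept) / slope
print(f"Trend: {slope:.0f} Elo/year; crosses {threshold} around {crossing:.0f}")
```

Real forecasting also has to worry about trend breaks and discontinuities, which is exactly the kind of base-rate question AI Impacts studies, but the easy linear part gives you the structure to start from.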
Robert Wiblin: Do you consider yourself as having kind of colleagues who are working on this in other organizations? Or is it just a couple of people at AI Impacts that are doing this, at least doing it publicly?
Katja Grace: I think there are some other organizations who are doing some things that are relevant to it, but I’m not sure if anyone else is sort of full-time dedicated to forecasting AI. I guess the Future of Humanity Institute does some things related to this, some strategy-type things. I mean, Open Phil is looking for someone to do AI forecasting, and there are various AI organizations who might have something like this going on, or people who are doing a mixture of things to do with making AI go well, where some of them are AI-forecasting related.
Robert Wiblin: Okay, so what originally motivated you to work on you know, positively shaping the development of artificial intelligence in general?
Katja Grace: I think AI seems basically guaranteed to happen at some point. And it seems very likely to change the world a lot, like every corner of the world, a lot. It seems like basically the biggest deal around. And I think, more likely than not, if we don’t really do anything about aligning it with human values, like if we don’t cause any AI that is quite powerful to care about human values, in the long run I expect humans to basically lose control of everything. So, I don’t necessarily expect that there’ll be one day when AI takes over the world, or something like this. I think I’m much more uncertain about a lot of the specific scenarios than many of the people around.
I don’t even necessarily expect, you know, very fast progress or something, but I think even if things happen very slowly, I basically expect the same problem to happen in the long run: the same problem people are often concerned about, which is the problem of AI being very powerful and very good at making decisions and not having human values. So, in the long run, all of the decisions are made not in favor of what humans want, and everything is terrible forever. The sort of slow-moving scenario that basically looks the same would be something like, I don’t know.
Suppose you’re a company and maybe you’re mining coal or something, and you make some AI that cares about mining coal. Maybe it sort of knows about human values enough to not do anything terrible for the next ten years or something, but overall, let’s say it’s a bunch of sort of agents who are smarter than humans and better than humans in every way, but they just care a lot about mining coal. I expect in the long run for them to basically accrue resources and decision-making and control over things and so on, ’cause they’re basically better than us in every way. And in the long run that leads us toward a world that’s just trying to mine a lot of coal and not doing anything that humans would have cared about, which, you know, might be fine if they’re the right kind of creatures who really get a lot of pleasure from the coal mining or something.
Robert Wiblin: They like coal for the right reasons.
Katja Grace: Yeah. But you also might imagine that they’re not even conscious or anything; the consciousness thing doesn’t really matter for what will happen in the world. Like, they might still be very good at taking control of things. I guess it seems similar to what happened with, say, pre-human, you know, chimp-like species and so on. If they’d had a choice about letting humans come to exist, it seems like it was probably a bad idea for them, even if they could maybe kill a particular human or something. They quickly lost control of the situation ’cause we were just better at everything.
Robert Wiblin: Okay, so given that very few people have been working on this it would be valuable I guess to get more people to actually spend some time trying to do this forecasting work that is fairly untouched. How can people potentially work on this? What steps should they take if there’s a listener who thinks, “I could be doing this. I’d like to contribute.”
Katja Grace: We’ve published some lists online of concrete projects that we think people could do. I guess we’ve also marked the ones that we think people could maybe do on their own, basically, if they want to try this kind of thing. If people want to try things like that and then send us their efforts, we’re happy to talk to them about whether they’ve done well or not, or whether they should be getting some more particular expertise.
One thing that’s sort of unusual about this field is that there are really a bunch of different areas covered. Like, sometimes we’re trying to figure out what happened with nuclear weapons or some historical thing, or AI hardware, or the brains of monkeys. I think people with quite a range of different areas of expertise could probably find useful things to do here. And I think often there are fairly modular projects where you can put in a small amount of effort and hopefully get some output. I would recommend trying it quite early on and seeing how it goes before, for instance, going off to get a PhD or something, but this might be my idiosyncratic lack of desire for stability or something.
Robert Wiblin: Yeah, okay. So, you think a reasonable path, then, is just looking at interesting questions and then trying to actually tackle them? That you don’t necessarily need to have a ton of training, or not more training than many Westerners would already have. They don’t need to have any particular access either. You can just do these things from home, if you have the gumption to follow through.
Katja Grace: Yeah, that’s my impression. I mean, I think that’s sort of what I have done to a large extent, and yeah, for a lot of them I think you don’t really need much other than the internet. I think there are probably things you need that are hard for me to see, though. For instance, suppose you have a high-level question like how important is hardware relative to software. I think there’s a mental skill of being able to work out how to answer that, which seems maybe relatively rare. Or at least that’s a thing that I think people have problems with. If you’re happy to find someone else to work out what good low-level projects are to do, and then do those, maybe that’s fine.
Robert Wiblin: That’s not so bad, yeah. I guess, do you feel that for people trying to do this research independently, the main thing they’re gonna lack is people to talk to who have an interest in and knowledge about this, to clue them into the basic knowledge that they need and keep them motivated, perhaps?
Katja Grace: Yeah, that seems probably true. I think many people have trouble working on their own. So, I think if people are interested in working on this kind of thing, and working on it on their own for a bit is going well, you know, they should get in touch with us, ’cause we’re trying to hire people. And I guess there are other places who are also looking to hire people to do this kind of thing, though they might be … I guess we’re happy to have people come for a couple of months or something and see if it goes well. Other people might be less into that.
Robert Wiblin: Okay, yeah. What kinds of people would you be like most interested in hiring? What would be the ideal candidate to walk through the door or a couple of different archetypal candidates?
Katja Grace: So, I think being comfortable with a wide range of different research areas, like being able to jump in and out, to be thinking about hardware one moment and chimpanzees another moment, is useful. I’m not quite sure how rare that is, but I guess, yeah.
Robert Wiblin: Somewhat rare in academia, but yeah.
Katja Grace: Yeah. Overall, things that make people good researchers in general, I think. General research skills. And this thing I mentioned of being able to, I guess, be a bit like a detective or something: get a weird open-ended question and figure out which other things in the world would give you evidence about it.
Robert Wiblin: Yeah, to not be demoralized if it’s not obvious what the path forward is.
Katja Grace: Right, and if you just googled what the path forward was and you didn’t find it, to be able to think about it on your own and I guess do that well. Some people might not be demoralized, but they might also not do it well.
Robert Wiblin: Not be able to, yeah.
Katja Grace: I think I could be wrong about what the most important things are. I haven’t hired that many people in the past.
Robert Wiblin: Yeah, so you didn’t mention what kind of major or background training people should have. Is that maybe not as important as people expect?
Katja Grace: I do think that general ability to read something you’re unfamiliar with and think about it is probably more important. Being familiar with any one of these fields that’s relevant is good, but it seems like you’re likely to spend a bunch of time in areas that you don’t know that much about.
Robert Wiblin: So there’s no one who really has expertise in all the areas.
Katja Grace: Right.
Robert Wiblin: I mean, if you know something about one of them, then you’re as well placed as kind of anyone going in.
Katja Grace: Yeah, and I guess if a lot of expertise on one topic is important we hope to sort of interview people or something about that. I mean, that said, expertise on one of the things would be good, but it’s probably not a main thing that we’re looking for.
Robert Wiblin: What, in your experience, are kind of the biggest barriers for people who want to get involved in this, or the things that stop people from getting involved in it?
Katja Grace: My guess is that not many people get involved in it, in part because it’s not clear how to get into it. I think I got into it via the sort of unconventional method of thinking it was worth doing and then hoping that people would give me money for it, and then people magically appearing and giving me money for it, but I think many people would not do that. Yeah, it’s not an obvious path; there wasn’t sort of someone there being like, “Oh, would you like to take the make-AI-Impacts option here?”
Robert Wiblin: Right, okay, so you kind of had to start the thing and then people were like, “Oh, this is kind of cool, so we’re happy to fund this.”
Katja Grace: Right.
Robert Wiblin: But you took the initial risk.
Katja Grace: Yeah.
Robert Wiblin: Yeah, why is it that there are no conventional paths here? I mean, why couldn’t someone try to do this as a PhD topic, for example, and then kind of keep their options perhaps a bit more open?
Katja Grace: Yeah, I think they could do that. My impression is that you can improve your understanding of an area relatively quickly, so I guess I would be hesitant to spend like three years on a project when you could spend like two weeks and have a rough idea and then get onto the next two week long project or something.
Robert Wiblin: I see.
Katja Grace: But that’s partly because my goal is to try and improve my understanding of a broad range of things quickly. I think if you were interested in making sure that you have that PhD to fall back on, and, I guess, in getting a bunch of research experience, that could easily be a good thing to do.
Robert Wiblin: Your concern is that someone in a PhD would be required to look into things in an excessive amount of depth, like given just the practical outcomes that you’re trying to create?
Katja Grace: Something like that, yeah. I think there are many projects where you could look into them in lots of depth for three years and-
Robert Wiblin: Yeah, how smart is a pigeon?
Katja Grace: For instance. But I think there are probably, I don’t know, higher-level ones where you could do that, and I think that would be very valuable. So I think if you’re keen to do a PhD, there are lots of things you could do a PhD on that would be a good idea.
Robert Wiblin: Yeah. Do you have a list of those anywhere? I suppose there’s just the questions on the AI Impacts site that you think people should look into.
Katja Grace: Yeah, I guess we have a much longer list of questions that I haven’t gotten around to putting up, but we have a short list on the site at the moment. I guess if people are doing PhDs in particular areas and want advice on which questions we know about that might be in that area, we’re sort of happy to field emails on that.
Robert Wiblin: Yeah. Katja is not hard to find. Just Google her name, Katja Grace, and then you’ll get her email. What do you think listeners would be most likely to misunderstand about the work that you do or this entire field of inquiry?
Katja Grace: I think the misunderstanding that I most come across is just about how tractable this kind of thing is. I think people are like, “Oh, there has been like a whole person doing this for several years? Is the field basically running out of things to do or something? Is there still anything to make progress on?”, which just seems like a very radical misunderstanding of the situation.
Robert Wiblin: Yeah, why do you think that is? What do you think is biasing people towards feeling that something that sounds exceedingly neglected is not so? Because there’s been quite an explosion of people going into AI technical safety work, and people don’t think that field is full, not by any means, but here, where one person has looked into it for a few years, they seem to feel more that way.
Katja Grace: Yeah, I’m sort of confused about that. I think somehow people use different meanings of ‘hard’ or something. They’re like, “Well, we did this and it was kind of hard, so we shouldn’t do that anymore. Let us try to build an AI that will take over the world.” That also sounds hard, but maybe worth it. I don’t know.
Robert Wiblin: Yeah, so I suppose this is kind of a social science issue that feels a bit less technical, a bit less mathematical, and perhaps the people who are drawn to this area are just more rigorously technical computer science people. They see paths forward on the technical stuff, but not on this?
Katja Grace: Yeah, that may be right. I guess maybe in general society has made more progress on very technical things than social things. Like, we’ve managed to go to the moon, but we haven’t managed to stop war, so maybe in general social science-y things seem like more of a mess.
Robert Wiblin: I wonder if Philip Tetlock’s work showing that expert judgment or expert forecasts weren’t that reliable, which I think is very well known in this community, also perhaps discourages people a bit.
Katja Grace: Yeah, I think that’s right. I mean, I think there are sort of general, summarized ideas around, like ‘prediction is hard’, ‘AI prediction is ridiculously hard and also goes badly’. We’ve chatted about this in many conversations over the years probably-
Robert Wiblin: And yet we haven’t answered it.
Katja Grace: Right.
Robert Wiblin: Is it surprising that there isn’t kind of an academic discipline, or that some people in academia haven’t just realized that this is kind of the thing that they should be doing? That this is their calling? I suppose maybe there are just a lot of gaps in academia, so we shouldn’t be too surprised if this one happens to be one of them.
Katja Grace: Yeah, that doesn’t seem too surprising to me. I mean, I think forecasting technology in general seems like a fairly small area.
Robert Wiblin: Yeah. Do you see any paths for people who want to work on this question, but want to do it in a prestigious, safe way? Other than maybe doing a PhD.
Katja Grace: I mean, I think there are various organizations who are probably looking for at least one person to be working on things like this. I would imagine various AI organizations-
Robert Wiblin: Would be interested to have at least one person trying to predict, yeah, how quickly they’ll go, or maybe I guess if you know-
Katja Grace: Yeah, [crosstalk 01:46:10].
Robert Wiblin: Yeah. I guess if you know them you could meet them and try to convince them that it’s worth funding, but not that many people have direct contacts already. What kind of events are you and other people who are interested in this question likely to be found at if people want to come and meet you and, I don’t know, test out whether they want to join in?
Katja Grace: Occasionally we run workshops and that sort of thing discussing particular questions. I guess we had one before that was trying to come up with questions to do with what are called multipolar scenarios, like where there are lots of different AIs or lots of different parties in general and things didn’t get very unified in whatever kind of AI transition happened. We had an event where a bunch of people vaguely interested in this kind of thing came together and thought about research projects that would shed light on that. I guess if you want to be invited to things like that if they happen you can also email me. We occasionally go to conferences or something like that, but there are a lot of conferences in the world.
Robert Wiblin: Yeah. Do you go to the AI conferences? I guess sometimes you go to EA Global, right?
Katja Grace: That’s true, yeah. This year I’m going to miss it, but usually I go to EA Global. This year I’m going to go to an AI conference, though before now I haven’t been to any AI conferences.
Robert Wiblin: Okay, so people just shouldn’t expect you to be there, but they could email and-
Katja Grace: Probably not, unless this one goes really well.
Robert Wiblin: Potentially try to meet. Let’s talk a bit more about your personal background. What do you think in your background has prepared you to do this? Was the training relevant or do you think you would have just been better off starting it many years before you did?
Katja Grace: I think the sort of formal education I’ve done doesn’t seem to have been very relevant, except, I don’t know, high school math and that sort of thing, so I don’t think it really matters what I did there. But I think also if I had started such things much earlier, I imagine them going less well. I think probably a lot of the difference comes from talking to people, and I guess coming into contact with a lot of the other research work that’s happened in this area, which probably wasn’t through formal education channels. I’ve been visiting FHI every now and again for years, and chatting to people in Berkeley about their interest in this kind of thing.
Robert Wiblin: You started your PhD, but decided not to continue it?
Katja Grace: Yeah.
Robert Wiblin: Was that because it just didn’t seem relevant to this question that you thought was the most important thing for you to be working on?
Katja Grace: No. I guess maybe somewhat. I didn’t have a particular question that I thought was most important to be working on, but I did want to be working on something to do with saving the world broadly, like reducing existential risk and that sort of thing. So I was hoping to work somewhere like FHI, the Future of Humanity Institute, or something like that, and I had the impression from my prior efforts to get hired by such places that you needed a PhD for that. I think partly I had an opportunity to do that kind of work anyway, and so getting the PhD didn’t seem so useful and the work during the PhD didn’t seem so useful, and also I was really hating it.
Robert Wiblin: Why were you hating it?
Katja Grace: I just had a really bad anxiety disorder, so it was not that related to the PhD in particular, but I was living in Pittsburgh, and day to day I’d just have panic attacks all the time. When I was in California I wasn’t having panic attacks all of the time, so I thought, why not be in California.
Robert Wiblin: Yeah, that makes a lot of sense. I guess maybe because you had more friends here? It’s just like a more pleasant environment?
Katja Grace: Yeah, I think some things like that. I basically didn’t know anyone that well in Pittsburgh, so I think I really didn’t like having to be in classes. I think philosophy classes in particular seem particularly likely to make me have a panic attack and Berkeley just didn’t involve any philosophy classes at all, whereas my PhD was pretty insistent on going to philosophy classes. It was an idiosyncratic problem.
Robert Wiblin: I’m sorry it was so unpleasant, but I guess it led you in the right direction probably in the end.
Katja Grace: Yeah, I think if it hadn’t, I hopefully would have tried harder and gone back, but it pretty quickly turned out to seem much more promising.
Robert Wiblin: To leave.
Katja Grace: Yeah.
Robert Wiblin: Did you have a lot of resistance to leaving the PhD because you felt it would … It was kind of just closing a door on a particular path and people can often be pretty reluctant to do that kind of thing.
Katja Grace: I think by the time I left I was fairly keen to leave. I think it was probably easier because my program was very open to me coming back later. They sort of said, “We’ll probably throw away your paperwork in 10 years,” or something, at which point it will become more awkward for you to return, but it didn’t seem that much like I was forever cutting off the possibility.
Robert Wiblin: Yeah, and I guess you had a project to kind of go to in California, so-
Katja Grace: Yeah. I guess I usually have a bunch of projects that seem more exciting than any particular university course.
Robert Wiblin: Okay, so we talked about some things that you’ve done that you don’t think really helped you to succeed in the path that you’re on now, but what are some things that you’ve done over the years that you think really were worth the time and effort?
Katja Grace: As I was saying, I think meeting people and talking to a bunch of different people who are interested in some similar things has been quite useful. In particular the effective altruism community and the rationalist community, which are sort of related to the AI safety community; yeah, a lot of good discussions over the decade and a half or something. I think I’ve been pretty willing to move around, to move to the other side of the world at the drop of a hat because it seemed good or something. I don’t know if that goes well for people in general, but I feel it’s worked out well for me.
I said that the courses I’ve done and so on haven’t seemed that useful. I did do this honors year project on anthropics with David Chalmers and that seemed pretty good I think. Mostly it just involved me hanging out by myself for a year and thinking about stuff, but it seemed like good practice doing that and having someone good check whether it’s going well or not.
Robert Wiblin: Yeah, whether you’re thinking about it right or not, yeah.
Katja Grace: Yeah.
Robert Wiblin: One thing that I thought you might say is you’ve blogged online for a long time. I think since maybe 2008 or 2009.
Katja Grace: Oh yeah, that’s true.
Robert Wiblin: One thing that seemed unusual about you is that you’re actually just willing to investigate questions and try to answer them, and I wonder whether that mentality somewhat comes from the fact that you’ve just been writing up things that you think and things that you’ve learned about for a long time. You already kind of have this research process going on all the time and it’s just a matter of like turning it towards a particular focus and then writing those things up.
Katja Grace: Yeah, I think that seems true. My guess is that both of them stem from some like underlying difference in perspective and I’m not quite sure what causes that. I think-
Robert Wiblin: Insatiable curiosity or just incredible intelligence.
Katja Grace: I feel like it’s … I don’t think I’m insatiably curious. It might be … I don’t know. I feel like one thing that maybe changed my perspective on things somewhat was that when I was a kid I read various books about philosophy that were sort of like: you might be a kid and be curious about things and think the world is really strange and crazy and there are these things you should try and figure out, and then as you grow older you’ll sink down into society and just care about whether you have a job and stuff, whether you’re going to have a good funeral or something. Don’t do that, kids. Remain impassioned by how strange the world-
Robert Wiblin: The big picture questions.
Katja Grace: Yeah, the big picture questions, and I think I was a bit like, “Whoa, yeah, that sounds dangerous.” I feel like that’s just stuck in the back of my mind a bit.
Robert Wiblin: Yeah.
Katja Grace: Yeah, I don’t know how much that actually affects things.
Robert Wiblin: I know there are some machine learning researchers in the audience. If some of them were really interested in what you’ve been saying, would it be useful for them to potentially spend some or even all of their time doing this kind of work?
Katja Grace: I think if someone had a good understanding of machine learning and was interested in this kind of thing, I’m sure there are a lot of questions that they’d be in a particularly good position to work on.
Robert Wiblin: How does that compare with the value of doing technical AI safety research? Do you have a view on whether people can make a greater contribution doing one versus the other?
Katja Grace: I guess my tentative view is that I should be trying to work on this rather than AI safety-type stuff. I don’t have a very strong view. I guess my overall take is that this kind of stuff is just so neglected, whereas there are quite a few people working on technical AI safety. This also seems just very tractable. Technical AI safety seems like it probably needs to be done at some point, so maybe that’s some argument for doing it, though I think-
Robert Wiblin: Whereas maybe we could get away without doing what you’re doing?
Katja Grace: Yeah, but I think then we’re sort of hoping to get lucky somewhat. I think of the overall thing as: there’s a big problem that we’re probably coming up upon, and we have a very poor understanding of what it’s like. It’s sort of like walking down a dark tunnel where we know that there’s maybe something iffy down there, but we don’t know if it’s a dragon, or a giant pit, or what. It seems like just having a little bit more light is very useful there. You might say, “Whatever the danger is down the tunnel, if we just had more intelligence that would be good,” or something. Maybe there are some things that you definitely know you’re going to need, but I think it’s easy to be too enthusiastic about the AI safety work over things that help you strategize more.
Often, if you thought powerful AI was going to be quite a way off, doing the AI safety research now might be a bad idea relative to other things you could do to make the situation better, I think.
Robert Wiblin: Okay, before we move on, is there any kind of final advice you want to give to people?
Katja Grace: I think if you’re interested in doing this kind of research, a good place to start, at the sort of two-hour level of getting started, is to ask yourself what you actually think will happen with AI; if you can, why you think that, and what would maybe change your views about it. Try to come up with a consistent picture that you endorse and see which things are really open questions for you, in the hope that you can find things that you’re very curious about.
Robert Wiblin: I guess there’s a whole lot of information that people can read on the AI Impacts website as well.
Katja Grace: Right.
Robert Wiblin: You’ve kind of built this mini encyclopedia of the work that you’ve done on these questions, which would definitely help to get people off the ground.
Katja Grace: Yeah, that’s true.
Robert Wiblin: AIimpacts.org, right?
Katja Grace: That’s right, yeah.
Robert Wiblin: Okay, speaking of trying to figure out what your views actually are, let’s talk now a bit about your overall view of the AI safety landscape: how you think things are going, how you think things are going to go in the future, accepting at the outset that no one knows all that much. We want to find out kind of what your view is, so we can add that to the pool of everyone’s opinions.
Katja Grace: Sure.
Robert Wiblin: Yeah, what about AI today most excites you? Do you see any positive signs that the future is going to get better than maybe you thought five years ago?
Katja Grace: Five years ago, AI risk in general was a pretty obscure concern, I think, and it looked like maybe it was just going to be a very small number of people outside of AI who were worried about it. Was that five years ago? Probably. I think in recent years it’s become a much more mainstream issue, which I think is very promising. I guess in some ways it’s not promising: if it had seemed like an obscure problem, that might just have been because we were wrong and it’s a really stupid thing to be worried about. So if you find that many of the people who actually know about AI are like, “Oh yeah, that’s a problem,” that’s bad news, in the sense that the problem is probably real. But supposing that you’re right, then it’s good to have a lot of people actually working on AI who think that safety is an important thing to pay attention to.
Robert Wiblin: Do you see any positive signs in how machine learning algorithms or products are actually developing that things might not be as dangerous as we thought, or that we’re finding ways to make them safer?
Katja Grace: I don’t think we have that much evidence from the things we’ve seen about any kind of long-term problems. It seems like things are indeed pretty safe, and this is perhaps a reason to not be so worried about AI risk, but I think you could have made that argument before. The argument is basically: for all technologies that we’ve ever made, if you use them beyond where you’ve built them to do what you want, then something bad will happen, but we always manage not to use them in those cases. We make cars to go along the road with a driver in them. You could ask, “What if you just didn’t put a driver in them and let them drive in some direction?” They could go off the road, but we know that, so we don’t do it. You might think that if we can’t get AI to do exactly what we want, we just won’t use it in those cases, and I think what we’ve seen so far is basically in line with that, but so is the rest of our history with products and so on.
I guess the reason for concern in the long run would be something like: we’re going to err more, or it’s going to be harder to keep track of when you’re using a thing beyond the scope where it does what you want, or there will be incentives to do that. I guess externalities, where someone is using the thing but it’s causing much worse things for someone else.
Robert Wiblin: It seems like in the past, people who have been worried about artificial intelligence safety have mostly been worried about the scenario where AI suddenly becomes dramatically smarter than humans and has almost godlike abilities to just totally outwit us at every turn, and very quickly we will effectively have ceded control to an AI that could basically run roughshod over us. These days it seems like there are many more people who are less worried about this kind of godlike AI scenario and more worried about a gradual ceding of influence to perhaps many different AI systems that are each doing different partial things. It’s not so much that the AI is able to completely outwit us, but just that gradually most of the functions in society would be taken over by these AI systems. Where do you stand on that? Do you have any view on how a transition to a very AI-dominated economy might play out?
Katja Grace: Yeah, I think I’ve always found the sort of sudden godlike AI takeover scenario less likely than I think other people around, mostly because I think that doesn’t usually happen with things, and so you need a pretty good, positive case for thinking it will happen here. I guess it’s like closely related to these arguments about discontinuous progress. Basically it’s like in order for someone to take over the world you kind of need for that someone to have seen quite sudden progress. Otherwise people are going to be just behind them. Even if you expect to have like very good AI in five years, if in four and a half years you had almost very good AI then maybe no one takes over the world because there are a bunch of people with almost very good AI.
Robert Wiblin: Yeah.
Katja Grace: Yeah, I’m more inclined toward the non-godlike AI scenarios, though given the difficulty of thinking about these questions and the fact that lots of people disagree with me here, I wouldn’t put that low a chance on it. I guess, as you say, views on this seem to have changed somewhat in recent years. I’m not that optimistic that they’ve changed based on good arguments floating around the place, rather than sort of demographic differences in who tends to think which things.
I think, for instance, the field of worrying about AI has got bigger, and maybe more people with more mainstream views on things are in it, and they’ll tend to have more mainstream views on which scenarios are likely, which will be less crazy-sounding scenarios.
Robert Wiblin: It seems like a part of it, at least among people that I know, is that Paul Christiano, who is a very well respected person in this area, has written a number of posts explaining why he thinks things will go more gradually. At least that kind of … His ideas have filtered through to people I know and I think they’ve softened their views at least on this intelligence explosion stuff.
Katja Grace: Yeah, sweet. Yeah, that seems plausible as a large influence.
Robert Wiblin: Yeah, but perhaps not in the world as a whole.
Katja Grace: Yeah.
Robert Wiblin: What do you think of this argument: even if it’s fairly unlikely that the development of superhuman AI systems will come soon, or that it will happen very abruptly, at a very quick pace, people who are paying attention now should focus on those scenarios anyway, because those are the ones in which they might have an outsized influence, since they’re paying attention earlier and make up a larger fraction of all the people paying attention now than of the people who will be paying attention in 50 or 100 years’ time?
Katja Grace: I haven’t thought a huge amount about this, but on the face of it I’m not sure that in the very soon scenario you have that much more of an outsized impact, if at all, compared to the other one. It seems like in the case where it’s further out and there are a bunch of people who will ever pay attention to it, we’re still among the very early people paying attention. In either case there are a few people now, and so in the longer term one we will probably have an outsized impact on what happens with the longer term efforts, and so I’m not sure overall how much of a factor I expect between those two. I think less than people might usually expect.
Robert Wiblin: Do you think that we should expect to have more influence with this kind of discontinuous progress in AI or if it’s just business as usual, or would it just be the same basically either way or no particular reason to expect to be able to control one more than the other?
Katja Grace: I guess the discontinuous progress scenario involves someone having a huge amount of influence and many other people losing all influence, so I guess I expect that one to be higher variance at least and perhaps if we’re paying more attention we’re especially likely to be among the small number of people who have a large influence, but that’s a pretty abstract consideration and perhaps it depends a bit on what the scenarios look like.
Robert Wiblin: What’s the disagreement that you have with people in machine learning in general? Perhaps the kinds of people who filled out your survey.
Katja Grace: I think a key disagreement that maybe explains the differences in our behavior a fair bit is about whether you can do things now to affect something that’s maybe 20 years or more into the future, which I guess I’ve studied somewhat, though I probably don’t know that much more about it than them. My impression is that people very rarely try to do things that will affect something 20 years ahead that are not, for instance, also going to affect things much sooner, or that are not very similar to past cases. People save for retirement, and even though it’s more than 20 years away, it’s kind of analogous to past retirements that have happened. So I think there’s not much of a track record of people doing this, to say how successful or unsuccessful it is. It seems like they’re right that it’s not usually the done thing, but my impression is that it’s worth trying because it’s really important.
Robert Wiblin: Yeah, so you don’t see strong reasons to think that it’s impossible perhaps like they do?
Katja Grace: Right, yeah. I guess I’m not even sure if they really think … If they’ve thought about it and think that it’s impossible or if it’s just sort of like, wait, this isn’t what we do. It’s sort of weird to worry about overpopulation on Mars long before you’ve colonized Mars, for instance.
Robert Wiblin: Yeah.
Katja Grace: But it’s not obviously wrong to.
Robert Wiblin: Yeah, especially if you have projects that are literally trying to colonize Mars, which we basically do right now.
Katja Grace: Right, yeah. If it looks [crosstalk 02:05:32].
Robert Wiblin: I mean, we have both a literal one and metaphorical ones in this case.
Katja Grace: Right.
Robert Wiblin: Are there any examples at all of people trying to kind of shape the development of a future technology a substantial period ahead of time?
Katja Grace: Yeah, I think so. I actually looked into this for MIRI before, and I looked into two case studies that seemed like they might have been this. I had a list of ones that were plausible, but it’s sort of unclear ahead of time exactly what people were thinking and what they expected and so on.
One that does seem like it plausibly was this is Leo Szilard and his efforts to keep secret various findings to do with nuclear weapons, in the ’30s and early ’40s, between when he realized that nuclear chain reactions were possible and when the first atom bomb was set off. I guess that was less than 20 years, say, but I think at the beginning it probably looked like it should have been more like 20 years or more; America putting that much money into the Manhattan Project was surprising.
Robert Wiblin: [crosstalk 02:06:32].
Katja Grace: I think it’s quite hard to say how successful such things were because I guess there are a lot of counterfactuals and if an action works then maybe lots of different things happen in history and then it’s sort of unclear what would have happened or something.
Robert Wiblin: Right, you can’t see the change that it makes so clearly.
Katja Grace: Right, and also maybe things would have been helpful in expectation under a bunch of scenarios, and then it turns out you’re in a different scenario. Do you count that as good or not?
Robert Wiblin: They got unlucky, yeah.
Katja Grace: Right, so I think figuring out what happened with Leo Szilard is pretty unclear. I think nothing he did was very clearly helpful, but his efforts to keep papers secret and not let the Germans see them seem plausibly helpful. I guess he also wrote this letter with Albert Einstein that helped prompt the Manhattan Project, which seems plausibly helpful.
Robert Wiblin: Okay, yeah, so-
Katja Grace: The details-
Robert Wiblin: [crosstalk 02:07:28].
Katja Grace: Right.
Robert Wiblin: Yeah, so you wrote a report about this?
Katja Grace: Yeah.
Robert Wiblin: Okay. Yeah, we’ll get a link to that and readers can go and learn more about it. I don’t know a lot about it, but I’m hoping to do an episode with Toby [Ord 02:07:38] at some point-
Katja Grace: Cool.
Robert Wiblin: Because he’s looked into it quite a bit and is very excited about that example.
Katja Grace: Huh, I look forward to hearing it.
Robert Wiblin: What’s a disagreement that you think you might have with listeners at large? Is there anything that you think a lot of people listening are getting wrong?
Katja Grace: I’m not sure, off the top of my head, except for how promising this kind of research is, where I assume listeners at large don’t agree with me, or at least didn’t in the past, since they’re not here working on it. We asked the machine learning researchers on the survey what disagreements they thought they had with people who are worried about safety, and therefore perhaps with many of the listeners, who are also not in machine learning. This was an open-ended question, so I sort of categorized all their answers, but popular answers were along the lines of: you just don’t realize how narrow AI is, how general it isn’t. They hear people worrying about general AI and it’s sort of like, this is a really weird concern; we can’t make it even a tiny bit general; it just does this one thing that we wrote about in a paper or something.
Robert Wiblin: Right, so it can play a game in this very sort of specific, narrow case, but as soon as you change the parameters then it gets lost very fast.
Katja Grace: That sort of complaint, yeah, though I’m not sure of the exact range of generality and specificity that they were pointing at.
Robert Wiblin: Yeah. I mean, what do you think about that? Are they right?
Katja Grace: I haven’t checked, but I would guess that this is a thing that they have a lot of expertise on relative to the people speculating about this from the outside.
Robert Wiblin: But on the question of like whether this means that we shouldn’t worry about general AI now.
Katja Grace: Right, yeah. I think I disagree with them about that and I think I less expect them to be right about, if things are not very general now, what does that mean about how general they are in five years or something. I think the question of how quickly things can change probably-
Robert Wiblin: Isn’t as much their area of expertise.
Katja Grace: Right, yeah.
Robert Wiblin: Alright, what few things would you recommend that a listener who has been interested in this conversation should read next to kind of get up to speed?
Katja Grace: I think Eliezer Yudkowsky has an article called Intelligence Explosion Microeconomics that sort of introduced this whole area of research and offered some ideas. I think that’s often interesting to people. The book Superintelligence goes into various strategic considerations here, and I think it’s a good background for what people think about these topics and maybe what the open questions are that would be good to know more about. I ran an online reading group on LessWrong for that in the past, which goes chapter by chapter and has a bunch of discussion and further things related to each chapter, and which is probably also useful if you want to get into this kind of thing.
Then I guess just looking at the articles on AI Impacts probably gives you a better idea of how to do this kind of thing and what the questions are. We have featured articles that are better than our other articles, so I recommend those ones.
Robert Wiblin: Alright, so we’ve been going for quite a long time and we should both get on to some other work, but just a final question: is AI Impacts in need of more funding, and what kind of things would you be able to do if you had more money, and I guess as a result more people?
Katja Grace: Yes. We are in a position to use more money well I think. I think there are many people around who could be useful doing this kind of research and, as mentioned earlier, quite a lot of promising research projects, I think, so we’re interested in hiring more people and doing those projects. I think they could really make a difference to our understanding of what will happen with AI.
Robert Wiblin: I guess people can find out more about that and potentially donate at AIImpacts.org?
Katja Grace: Yes.
Robert Wiblin: Alright, my guest today has been Katja Grace. Thanks for coming on the show, Katja.
Katja Grace: Pleasure.
Robert Wiblin: The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining – talk to you next week.