#176 – Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models

Note: this interview was released in two parts, in December 2023 and January 2024 — both parts are included in this video. Check out the blog post for Part 2 here: Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps.

OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do this safely?

That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast. Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.

Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.

When he started on the GPT-4 red team, the model would do anything from diagnosing a skin condition to planning a terrorist attack without the slightest reservation or objection. When he was later shown a “Safety” version of GPT-4 that behaved almost identically to the unrestricted model, he approached a member of OpenAI’s board to share his concerns and urge them to try out GPT-4 for themselves and form their own view.

In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.

But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan could see from his limited vantage point on the red team. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.

On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.

At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.

By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:

  • Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
  • Which AI applications we should be urgently rolling out, with less worry about safety.
  • Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
  • Whether AI capabilities are advancing faster than safety efforts and controls.
  • The costs and benefits of releasing powerful models like GPT-4.
  • Nathan’s view on the game theory of AI arms races and China.
  • Whether it’s worth taking some risk with AI for huge potential upside.
  • The need for more “AI scouts” to understand and communicate AI progress.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

Highlights

Why it's hard to imagine a much better game board

Rob Wiblin: Do you want to say more about how you went from being quite alarmed about OpenAI in late 2022 to feeling the game board really is about as good as it reasonably could be? It’s quite a transformation, in a way.

Nathan Labenz: Yeah. I mean, I think that it was always better than it appeared to me during that red team situation. So in my narrative, it was kind of, “This is what I saw at the time; this is what caused me to go this route.” And I learned some things and had a couple of experiences that folks have heard that I thought were revealing.

So there was a lot more going on than I saw. What I saw was pretty narrow, and that was by their design, and it wasn’t super reassuring. But as their moves became public over time, it did seem that at least they were making a very reasonable effort… And “reasonable” is not necessarily adequate, but it is at least not negligent. At the time of the red team I was like, this seems like it could be a negligent level of effort, and I was really worried about that. But as all these different moves became public, it was pretty clear that this was certainly not negligent. It, in fact, was pretty good, and it was definitely serious. And whether that proves to be adequate to the grand challenge, we’ll see. I certainly don’t think that’s a given either.

But there’s not a tonne of low-hanging fruit, right? There’s not a tonne of things where I could be like, “You should be doing this and this and this, and you’re not.” Assuming that they’re not changing their main trajectory of development, I don’t have a tonne of great ideas for things OpenAI could do on the margin for safety purposes. Other people are certainly welcome to add their own ideas — I don’t think I’m the only source of good ideas by any means — but the fact that I don’t have a tonne to say that they could be doing much better is a sharp contrast to how I felt during the red team project with my limited information at the time.

So they won a lot of trust from me, certainly, by just doing one good thing after another. And more broadly, just across the landscape, I think it is pretty striking that leadership at most — not all, but most — of the big model developers at this point are publicly recognising that they’re playing with fire. Most of them have signed the Center for AI Safety’s one-sentence statement on extinction risk. Most of them clearly are very thoughtful about all the big-picture issues. We can see that in any number of different interviews and public statements that they’ve made.

And you can contrast that against, for example, Meta leadership — where you’ve got Yann LeCun who’s basically, “This is all going to be fine; we will have superhuman AI but we’ll definitely keep it under control, and nothing to worry about.” It’s easy for me to imagine that being the majority perspective among the leading developers, and I’m kind of surprised that it’s not. When you think about other technology waves, you’ve really never had something where — at least not that I’m aware of — the developers are like, “Hey, this could be super dangerous, and somebody probably should come in and put some oversight, if not regulation, on this industry.” Typically they don’t want that. They certainly don’t tend to invite it. Most of the time they fight it. Certainly people are not that quick to recognise that their product could cause significant harm to the public.

So that is just unusual. I think it’s done in good faith and for good reasons, but it’s easy to imagine that you could have a different crop of leaders that just would either be in denial about that, or refuse to acknowledge it out of self-interest, or any number of reasons that they might not be willing to do what the current actual crop of leaders has mostly done. So I think that’s really good. It’s hard to imagine too much better, right?

What OpenAI has been doing right

Nathan Labenz: Yeah. I mean, it’s a long list, really. It is quite impressive. One thing that I didn’t mention in the podcast or in the thread, and probably should have, is that I think they’ve done a pretty good job of advocating for reasonable regulation of frontier model development, in addition to committing to their own best practices and creating the Frontier Model Forum that they can use to communicate with other developers and hopefully share learnings about big risks that they may be seeing.

They have, I think, advocated for what seems to me a very reasonable policy of focusing on the high-end stuff. They have been very clear that they don’t want to shut down research, they don’t want to shut down small models, they don’t want to shut down applications doing their own thing — but they do think the government should pay attention to people doing stuff at the highest level of compute. And notably, that’s not only where the breakthrough capabilities are currently coming from; it’s also where some regulatory regime is probably minimally intrusive, because it takes a lot of physical infrastructure to scale a model to, say, 10^26 FLOP, which is the threshold that the recent White House executive order set for merely telling the government that you are doing something that big, which doesn’t seem super heavy-handed to me. And I say that as, broadly speaking, a lifelong libertarian.

So I think they’ve pushed for what seems to me a very sensible balance, something that I think techno-optimist people should find to be minimally intrusive, minimally constraining. Most application developers shouldn’t have to worry about this at all. I had one guest on the podcast not long ago who was kind of saying that might be annoying or whatever, and I was just doing some back-of-the-envelope math on how big the latest model they had trained was. And I was like, “I think you have at least 1000x compute to go before you would even hit the reporting threshold.” And he was like, “Well, yeah, probably we do.”
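To make that back-of-the-envelope arithmetic concrete, here’s a minimal sketch using the common ~6 × parameters × tokens approximation for training compute. The model size and token count below are purely illustrative assumptions, not figures from the episode or from the guest’s company:

```python
# Rough back-of-the-envelope estimate of training compute, in the spirit of the
# arithmetic described above. Uses the common ~6 * N * D approximation for
# total training FLOP (N = parameters, D = training tokens).
# All concrete numbers below are illustrative assumptions, not real figures.

REPORTING_THRESHOLD_FLOP = 1e26  # reporting threshold in the October 2023 White House executive order


def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as ~6 * N * D floating-point operations."""
    return 6 * n_params * n_tokens


# Hypothetical example: a 7B-parameter model trained on 1 trillion tokens.
estimate = training_flop(7e9, 1e12)  # ~4.2e22 FLOP
headroom = REPORTING_THRESHOLD_FLOP / estimate

print(f"Estimated training compute: {estimate:.1e} FLOP")
print(f"Factor below the 1e26 reporting threshold: ~{headroom:,.0f}x")
# For these assumed numbers, the run sits roughly 2,400x below the threshold --
# consistent with the 'at least 1000x compute to go' ballpark in the conversation.
```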

So it’s really going to be maybe 10 companies over the next year or two that would get into that level, maybe not even 10. So I think they’ve really done a pretty good job of saying this is the area that the government should focus on. Whether the government will pay attention to that or not, we’ll see.

Not to say there aren’t other areas that the government should focus on too. It definitely makes my blood boil when I read stories about people being arrested based on nothing other than some face-match software having triggered and identified them, and then you have police going out and arresting people who had literally nothing to do with whatever the incident was, without even doing any further investigation. That’s highly inappropriate in my view. And I think the government would also be right to say, hey, we’re going to have some standards here, certainly around what law enforcement can do with AI.

Arms racing and China

Rob Wiblin: Is there anything else that Sam or OpenAI have done that you’ve liked and have been kind of impressed by?

Nathan Labenz: Yeah, one thing is that he has specifically gone out of his way to question the narrative that China’s going to do it no matter what we do, so we have no choice but to try to keep pace. He has said he has no idea what China is going to do, that he sees a lot of people talking like they know, and that he thinks they’re overconfident in their assessments — and basically that we should make our own decisions independent of what China may or may not do.

And I think that’s really good. I’m no China expert at all, but it’s easy to have that kind of… First of all, I just hate how adversarial our relationship with China has become. As somebody who lives in the Midwest in the United States, I don’t really see why we need to be in long-term conflict with China. That, to me, would be a reflection of very bad leadership on at least one, if not both, sides, if that continues to be the case for a long time to come. I think we should be able to get along. We’re on opposite sides of the world. We don’t really have to compete over much, and we’re both in very secure positions, and neither one of us is really a threat to the other in the sense of taking over their country, or them coming and ruling us. It’s not going to happen.

Rob Wiblin: Yeah. The reason why this particular geopolitical setup shouldn’t necessarily lead to war in the way that ones in the past have is that the countries are so far away from one another, and none of their core, narrow, national interests that they care the most about overlap in a really negative way — or they need not, if people play their cards right. There is no fundamental pressure that is forcing the US and China towards conflict. That’s my general take, and I think you’re right that if our national leaders cannot lead us towards a path of peaceful coexistence, then we should be extremely disappointed in them, and kick them out and replace them with someone who can. Sorry, I interrupted. Carry on.

Nathan Labenz: Well, that’s basically my view as well. And some may call it naive, but Sam Altman, in my view, to his significant credit, has specifically argued against the idea that we just have to do whatever because China is going to do whatever. And so I do give a lot of credit for that, because it could easily be used as cover for him to do whatever he wants to do. And to specifically argue against it, to me, is quite laudable.

Rob Wiblin: Yeah, it’s super creditable. I guess I knew that I hadn’t heard that argument coming from Sam, but now that you mention it, it’s outstanding that he hasn’t, I think, fallen for that line, or appropriated it in order to get more slack for OpenAI to do what it wants. Because it would be so easy — so easy even to convince yourself that it’s a good argument and then make it. So yeah, super kudos to him.

OpenAI's single-minded focus on AGI

Nathan Labenz: I think there is a pretty clear divergence in how fast the capabilities are improving and how fast our control measures are improving. The capabilities over the last couple of years seem to have improved much more than the controls.

GPT-4 can code at a near-human level. And with a certain setup and access to certain tools, if you say to it, “Synthesise this chemical,” and give it control of a chemical laboratory via API, it can often do that: it can look things up, it can issue the right commands, and you can actually get a physical chemical out the other end of a laboratory just by prompting GPT-4 — again, with access to some information and the relevant APIs — to do it. That’s crazy, right?

These capabilities are going super fast. And meanwhile, the controls are not nearly as good. Oddly enough, it’s hardest to get it to violate dearly held social norms. So it’s pretty hard to get it to be racist. It will bend over backwards to be very neutral on certain social topics. But for things that are more subtle, like synthesising chemicals or whatever, it’s very easy most of the time to get it to do whatever you want it to do, good or bad.

And that divergence gives me a lot of pause, and I think it maybe should give them more pause too. Like, what is AGI? It is a vision, it’s not super well formed. People have, I think, a lot of different things in their imaginations when they try to conceive of what it might be like. But they’ve set out, and they’ve even updated their core values recently, which you can find on their careers page, to say the first core value is “AGI focus.” They basically say, “We are building AGI. That’s what we’re doing. Everything we do is in service of that. Anything that’s not in service of that is out of scope.”

And I would just say the number one thing I would really want them to do is reexamine that. Is it really wise, given the trajectory of developments of the control measures, to continue to pursue that goal right now with single-minded focus? I am not convinced of that. At all.

Sam Altman has said that the Superalignment team will have their first result published soon. So I’ll be very eager to read that. And let’s see, right? Possibly this trend will reverse, possibly the progress will start to slow — certainly if it’s just a matter of more and more scale. We’re getting into the realm now where GPT-4 is supposed to have cost $100 million. So on a log scale, to take the next step you may need a billion, maybe $10 billion. And that’s not going to be easy even with today’s infrastructure.

So maybe those capabilities will start to slow, and maybe they’re going to have great results from the Superalignment team, and we’ll feel like we’re on a much better kind of relative footing between capabilities and control. But until that happens, I think the AGI single-minded “this is what we’re doing and everything else is out of scope” feels misguided to the point of… I would call it ideological. It doesn’t seem at all obvious that we should make something that is more powerful than humans at everything when we don’t have a clear way to control it. So the whole premise does seem to be well worth a reexamination at this point. And without further evidence, I don’t feel comfortable with that.


Nathan Labenz: I find it very easy to empathise with the developers who are just like, “Man, this is so incredible and it’s so awesome, how could we not want to?”

Rob Wiblin: This is the coolest thing anyone’s ever done.

Nathan Labenz: Genuinely, right? So I’m very much with that. But it could change quickly in a world where it is genuinely better than us at everything — and that is their stated goal. And I have found Sam Altman’s public statements to generally be pretty accurate and a pretty good guide to what the future will hold. I specifically tested that during the window between the GPT-4 red team and the GPT-4 release, because there was crazy speculation at the time; he was making some mostly kind of cryptic public comments during that window. But I found them all to be pretty accurate to what I had seen with GPT-4.

So I think that, again, we should take them broadly at face value in terms of, certainly as we talked about before, their motivations on regulatory questions, but also in terms of what their goals are. And their stated goal very plainly is to make something that is more capable than humans at basically everything. And yeah, I just don’t feel like the control measures are anywhere close to being in place for that to be a prudent move.

So yeah, your original question: what would I like to see them do differently? I think the biggest-picture thing would be just: continue to question that, what I think could easily become an assumption — and basically has become an assumption, right? If it’s a core value at this point for the company, then it doesn’t seem like the kind of thing that’s going to be questioned all that much. But I hope they do continue to question the wisdom of pursuing this AGI vision.

Transparency about capabilities

Nathan Labenz: I think it would be really helpful to have a better sense of just what they can and can’t predict about what the next model can do. Just how successful were they in their predictions about GPT-4, for example?

We know that there are scaling laws that predict what the loss number is going to be pretty effectively, but even there: with what dataset exactly? And is there any curriculum-learning aspect to that? Because people are definitely developing all sorts of ways to change the composition of the dataset over time. There have been some results, even from OpenAI, suggesting that pretraining on code first seems to help with logic and reasoning abilities, and then you can go to a more general dataset later. At least as I understand their published results, they’ve certainly said something like that. So when you look at this loss curve, what assumptions exactly are baked into it?
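As a rough illustration of the kind of prediction a scaling law supports, here is a minimal sketch of the parametric form popularised by Hoffmann et al. (2022), where final loss is predicted from parameter count and token count once the constants are fit. The constants below are illustrative placeholders rather than anyone’s actual fitted values; the point is only that the headline loss number is predictable in this way, while the downstream capabilities, as Nathan notes, are not:

```python
# Minimal sketch of a Chinchilla-style scaling law: final training loss
# predicted from parameter count N and training tokens D.
# Functional form from Hoffmann et al. (2022); the constants here are
# illustrative placeholders, NOT fitted values for any real model family.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 410.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """L(N, D) = E + A / N**alpha + B / D**beta."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Hypothetical example: a 70B-parameter model trained on 1.4 trillion tokens.
print(round(predicted_loss(70e9, 1.4e12), 2))
# The 'what assumptions are baked in' question: E, A, B, alpha, and beta all
# depend on the dataset mix, tokenizer, and training setup used to fit them,
# so the curve only predicts loss for runs that resemble the ones it was fit on.
```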

But then, even more importantly, what does that mean? What can it do? And how much confidence did they have? How accurate were they in their ability to predict what GPT-4 was going to be able to do? And how accurate do they think they’re going to be on the next one? There’s been some conflicting messages about that.

Greg Brockman recently posted something saying that they could do that, but Sam has said, in the GPT-4 Technical Report, that they really can’t do that when it comes to a particular “Will it or won’t it be able to do this specific thing?” — they just don’t know. And this is a change for Greg, too, because at the launch of GPT-4, in his keynote he said, “At OpenAI, we all have our favourite little task that the last version couldn’t do, that we are looking to see if the new version can do.” And the reason they have to do that is because they just don’t know, right? I mean, they’re kind of crowdsourcing internally whose favourite task got solved this time around and whose remains unsolved?

So that is something I would love to see them be more open about: the fact that they don’t really have great ability to do that, as far as I understand. If there has been a breakthrough there, by all means we’d love to know that too. But it seems like, no, probably not. We’re really still guessing. And that’s exactly what Sam Altman just said about GPT-5. That’s the “fun little guessing game for us” quote from the Financial Times interview. He said, just straight up, “I can’t tell you what GPT-5 is going to be able to do that GPT-4 couldn’t.”

So that’s a big question. For me, that is what emergence is really about. There’s been a lot of debate around the definition, but for me the most relevant one is: things the model can suddenly do from one version to the next that you didn’t expect. That’s where I think a lot of the danger and uncertainty is. So that is definitely something I would like to see them do better.

I would also like to see them take a little bit more active role interpreting research generally. There’s so much research going on around what it can and can’t do, and some of it is pretty bad. And they don’t really police that, or — not that they should police it; that’s too strong of a word —

Rob Wiblin: Correct it, maybe.

Nathan Labenz: I would like to see them put out, or at least have, their own position that’s a little bit more robust and a little bit more updated over time. As it stands, they put out the technical report, and it had a bunch of benchmarks, and then they’ve pretty much left it at that. And with the new GPT-4 Turbo, they said you should find it to be better. But we didn’t get… And maybe it’ll still come. Maybe this also sheds a little light on the board dynamic, because they put a date on the calendar for DevDay, and they invited people, and they were going to have their DevDay. And what we ended up with was a preview model that is not yet the final version.

Why no statement from the OpenAI board

Nathan Labenz: I mean, it is a very baffling decision ultimately to not say anything. I don’t have an account. I think I can better try to interpret what they were probably thinking and some of their reasons than I can the reason for not explaining themselves. That, to me, is just very hard to wrap one’s head around.

It’s almost as if they were so caught up in the dynamics of their structure and who had what power locally within it — obviously the nonprofit controls the for-profit and all that sort of stuff — that they kind of failed to realise that the whole world was watching this now, and that these kinds of local power structures are still subject to some global check. They maybe interpreted themselves as the final authority, which on paper was true, but wasn’t really true once the whole world had started to pay attention to not just this phenomenon of AI but this particular company and this particular guy, who is particularly well known.

Now they’ve had plenty of time though to correct that, right? That kind of only goes for like 24 hours, right? I mean, you would think that even if they had made that mistake up front and were just so locally focused that they didn’t realise that the whole world was going to be up in arms and might ultimately kind of force their hand on a reversal, I don’t know why… I mean, that was made very clear, I would think within 24 hours. Unless they were still just so focused and kind of in the weeds on the negotiations — I’m sure the internal politics were intense, so no shortage of things for them to be thinking about at the object level locally — but I would have to imagine that the noise from outside also must have cracked through to some extent. You know, they must have checked Twitter at some point during this process and been like, “This is not going down well.”

Rob Wiblin: Or the front page of The New York Times.

Nathan Labenz: Right, yeah. It was not an obscure story, right? This even made the Bill Simmons sports podcast in the United States, and he touches almost nothing but sports. That’s one of the biggest sports podcasts in the United States, if not the biggest. And he even covered this story. So it went very far. And why still to this day — and we’re, what, 10 days or so later? — still nothing: that is very surprising, and I really don’t have a good explanation for it.

There are maybe two or three leading contender theories I’ve heard; I’ll give three. One, very briefly, is just lawyers. I saw Eliezer advance that: don’t ask lawyers what you can and can’t do. Instead ask, “What’s the worst thing that happens if I do this, and how do I mitigate it?” Because if you’re worried that you might get sued, or worried about whatever else, try to get your hands around the consequences and figure out how to deal with them, or whether you want to deal with them, versus just asking the lawyers “Can I or can’t I?”, because they’ll probably often say no. And that doesn’t mean that no is the right answer. So that’s one possible explanation.

Another one, which I would attribute to Zvi, who is a great analyst on this, was that basically the thinking is kind of holistic. And that what Emmett Shear had said was that this wasn’t a specific disagreement about safety. As I recall the quote, he didn’t say that it was not about safety writ large, but that it was not a specific disagreement about safety.

So a way you might interpret that would be that they… Maybe for reasons like what I outlined in my narrative storytelling of the red team, where people have heard this, but I finally get to the board member, and this board member has not tried GPT-4 after I’ve been testing it for two months, and I’m like, “Wait a second. What, were you not interested? Did they not tell you? What is going on here?” I think there is something, a set of different things like that perhaps, where they maybe felt like in some situations he sort of on the margin underplayed things, or let them think something a little bit different than what was really true — probably without really lying or having an obvious smoking gun.

But that would also be consistent with what the COO had said: that this was a breakdown in communication between Sam and the board. Not a single direct thing that you could point to and say was super wrong, but rather like, “We kind of lost some confidence here. All things equal, do we really think this is the guy that we want to trust for this super high-stakes thing?” And you know, I tried to take pains in my writing and commentary on this to say it’s not harsh judgement on any individual. And Sam Altman has kind of said this himself. His quote was, “We shouldn’t trust any individual person here” — and that was on the back of saying, “The board can fire me. I think that’s important. We shouldn’t trust any individual person here.”

I think that is true. I think that is apt, and I think the board may have been feeling like, “We’ve got a couple of reasons that we’ve lost some confidence, and we don’t really want to trust any one person. And you are this super charismatic leader” — I don’t know to what degree they realised what loyalty he had from the team at that time; probably they underestimated that if anything, but you know — “charismatic, insane dealmaker, super entrepreneur, uber entrepreneur: is that the kind of person that we want to trust with the super important decisions that we see on the horizon?” This is the kind of thing that you maybe just have a hard time communicating, but still I think they should try.

The upside of AI merits taking some risk

Rob Wiblin: I think I agree with you that it would be nice if we could maybe buy ourselves a few years of focusing research attention on super useful applications, or super useful narrow AIs that might really surpass human capabilities in some dimension, but not necessarily every single one of them at once.

It doesn’t feel like a long-term strategy, though. It feels like something that we can buy a bunch of time with and might be quite a smart move — but just given the diffusion of the technology, as you’ve been talking about, inasmuch as we have the compute and inasmuch as we have the data out there, these capabilities are always somewhat latent. They’re always a few steps away from being created.

It feels like we have to have a plan for what happens. We have to be thinking about what happens when we have AGI. Because even if half of the countries in the world agree that we shouldn’t be going for AGI, there are plenty of places in the world where you will probably still be able to pursue it. And some people will think it’s a good idea, for whatever reason: they don’t buy the safety concerns, or they might feel like they have to go there for competitive reasons.

I would also say there are some people out there who say we should shut down AI, and we should never go there actually — people who are saying not just for a little while, but we should just ban AI basically for the future of humanity, forever, because who wants to create this crazy world where humans are irrelevant and obsolete and don’t control things? I think Erik Hoel, among other people, has kind of made this case that humanity should just say no in perpetuity.

And that’s something that I can’t get on board with, even in principle. In my mind, the upside from creating full beings, full AGIs that can enjoy the world in the way that humans do, that can fully enjoy existence, and maybe achieve states of being that humans can’t imagine that are so much greater than what we’re capable of; enjoy levels of value and kinds of value that we haven’t even imagined — that’s such an enormous potential gain, such an enormous potential upside that I would feel it was selfish and parochial on the part of humanity to just close that door forever, even if it were possible. And I’m not sure whether it is possible, but if it were possible, I would say, no, that’s not what we ought to do. We ought to have a grander vision.

And I guess on this point, this is where I sympathise with the e/acc folks: that I guess they’re worried that people who want to turn AI off forever and just keep the world as it is now by force for as long as possible, they’re worried about those folks. And I agree that those people, at least on my moral framework, are making a mistake — because they’re not appropriately valuing the enormous potential gain from, in my mind, having AGIs that can make use of the universe; who can make use of all of the rest of space and all of the matter and energy and time that humans are not able to access, are not able to do anything useful with; and to make use of the knowledge and the thoughts and the ideas that can be thought in this universe, but which humans are just not able to because our brains are not up to it. We’re not big enough; evolution hasn’t granted us that capability.

So yeah, I guess I do want to sometimes speak up in favour of AGI, or in favour of taking some risk here. I don’t think that trying to reduce the risk to nothing by just stopping progress in AI would ever really be appropriate. To start with, the background risks from all kinds of different problems are substantial already. And inasmuch as AI might help to reduce those other risks — maybe the background risk that we face from pandemics, for example — then that would give us some reason to tolerate some risk in the progress of AI in the pursuit of risk reduction in other areas.

But also just the enormous potential moral, and dare I say spiritual, upside to bringing into this universe beings like the most glorious children that one could ever hope to create in some sense. Now, my view is that we could afford to take a couple of extra years to figure out what children we would like to create, and figure out what much more capable beings we would like to share the universe with forever. And that prudence would suggest that we maybe measure twice and cut once when it comes to creating what might turn out to be a successor species to humanity.

But nonetheless, I don’t think we should measure forever. There is some reason to move forward and to accept some risk, in the interests of not missing the opportunity — because, say, we go extinct for some other reason or some other disaster prevents us from accomplishing this amazing thing in the meantime.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
