Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

AI is a great thing to spend lots of R&D money on and have a really strong public research infrastructure around. A good amount of that research should be on safety and interpretability. And we should really want this to work, and it should happen.

And it’s actually not that expensive. I mean, it is expensive for most companies, which is why OpenAI has to be attached to Microsoft and DeepMind had to be part of Google and so on. But from the perspective of a country’s budget, it’s not impossible to have real traction on this.

Ezra Klein

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there’s some ‘near zero’ chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today’s guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years. Like many people, he has taken a big interest in AI this year, writing articles such as “This Changes Everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to.

So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he’s pessimistic that it’s possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there’s some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra’s view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today’s brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it’s desirable to slow down AI research
  • The value of engaging with current policy debates even if they don’t seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

Highlights

Punctuated equilibrium

Ezra Klein: You need ideas on the shelf, not in your drawer. Don’t put them in your drawer: they need to be on a shelf where other people can reach them, to shift the metaphor a little bit here. You need ideas that are out there.

So this is a governing model that in the political science literature is called “punctuated equilibrium”: nothing happens, and then all of a sudden, it does. Right? All of a sudden, there’s a puncture in the equilibrium and new things are possible. Or, as it’s put more commonly: you never let the crisis go to waste. And when there is a crisis, people have to pick up the ideas that are around. And a couple things are important then: One is that the ideas have to be around; two is that they have to be coming from a source people trust, or have reason to believe they should trust; and three, they have to have some relationship with that source.

So what you want to be doing is building relationships with the kinds of people who are going to be making these decisions. What you want to be doing is building up your own credibility as a source on these issues. And what you want to be doing is actually building up good ideas and battle-testing them and getting people to critique them and putting them out in detail. I think it is very unlikely that AI regulation is going to come out of a LessWrong post. But I have seen a lot of good ideas from LessWrong posts ending up in different white paper proposals that now get floated around. And you need a lot more of those.

It’s funny, because I’ve seen this happening in Congress again and again. You might wonder, like, why do these think tanks produce all these white papers or reports that truly nobody reads? And there’s a panel that nobody’s at? It’s a lot of work for nobody to read your thing and nobody to come to your speech. But it’s not really nobody. It may really be that only seven people read that report, but five of them were congressional staffers who had to work on this issue. And that’s what this whole economy is. It is amazing to me the books that you’ve never heard of that have ended up hugely influencing national legislation. Most people have not read Jump-Starting America by Jonathan Gruber and Simon Johnson. But as I understand it, it was actually a pretty important part of the CHIPS bill.

And so you have to build the ideas, you have to make the ideas legible and credible to people, and you have to know the people you’re trying to make these ideas legible and credible to. That is the process by which you become part of this when it happens.

The 'disaster' model of regulation

Ezra Klein: The way I think AI regulation is going to happen is: something is going to go wrong. There is going to be some event that focuses attention again on AI. There’s been a sort of reduction in attention over the past couple of months. We’ve not had a major new release in the way we did with GPT-4, say, and people are drifting on to other topics. Then at some point, there will be a new release. Maybe DeepMind’s Gemini system is unbelievable or something.

And then at some point, there’s going to be a system powerful enough or critical enough that goes bad. And I don’t think it’s going to go bad in a, you know, foom and then we’re all dead — or if it does, you know, this scenario is not relevant — but I think it’ll go bad in a more banal way: somebody’s going to die, a piece of critical infrastructure is going to go offline, there’s going to be a huge scam that exploits a vulnerability in operating systems all across the internet and tons of people lose their money or their passwords or whatever. And for Congress, which is nervous, that’ll be the moment that people begin to legislate.

And once you get into a process where people are trying to work towards an outcome, not just position within a debate, I suspect you’ll find people finding more points of common ground and working together a little bit more. I already feel like I see, from where we were six or so months ago, people coming a little bit more down to Earth and a little bit nearer to each other in the debate. Not every loud voice on Twitter, but just in the conversations I’m around and in. I think you’ll see something like that eventually. I just don’t think we’re there yet.

How to slow down advances in AI capabilities

Ezra Klein: My view is you try to slow this down, to the extent you do, through forcing it to be better. I don’t think “We’re going to slow you down” is a strong or winning political position. I do think “You need to achieve X before you can release a product” is how you slow things down in a way that makes sense.

So I’ve used the example — and I recognise this example actually may be so difficult that it’s not possible — but I think it would be possible to win a political fight that demands a level of interpretability of AI systems that basically renders the major systems null and void right now.

If you look at Chuck Schumer’s speech that he gave on SAFE Innovation — which is his pre-regulatory framework; his framework for discussion of a regulatory framework — one of his major things is explainability. And he has talked to people — I know; I’ve been around these conversations — and people told him this may not be possible. And he’s put that in there, but he still wants it there, right? Frankly, I want it too. So maybe explainability, interpretability is not possible. But it’s an example of something where if Congress did say, “You have to do this, particularly for AI that does X,” it would slow things down — because, frankly, they don’t know how to do it yet.

And there are a lot of things like that that I think are less difficult than interpretability. So I think the way you will end up slowing some of these systems down is not, you know, “We need to pause because we think you’re going to kill everybody” — I don’t think that’s going to be a winning position. But “You need to slow down, because we need to be confident that this is going to be a good piece of work when it comes out.” I mean, that’s something we do constantly. In this country, you kind of can’t build a nuclear power plant at all, but you definitely can’t build one as quickly as you can by cutting all the corners.

And then there are other things you could do that would slow people down. One of the things that I think should get more attention — I’ve written about this — or at least some attention is a question of where liability sits in these systems. So if you think about social media, we basically said there’s almost no liability on the social media companies. They’ve created a platform; the liability rests with the people who put things on the platform. I’m not sure that’s how it should work for AI. I think most of the question is how the general underlying model is created. If OpenAI sells their model to someone, and that model is used for something terrible, is that just the buyer’s fault, or is that OpenAI’s fault? I mean, how much power does a buyer even have over the model? But if you put a lot of liability on the core designers of the models, they would have to be pretty damn sure these things work before they release them, right?

Things like that could slow people down. Forcing people to build things to a higher standard of quality or reliability or interpretability, et cetera: that is a way of slowing down the development process. And slowing it down for a reason — which is, to be fair, what I think you should slow it down for.

The viability of licensing

Rob Wiblin: There’s another big cluster of proposals, and maybe the largest is a combination of requiring organisations to seek government licences if they’re going to be training really large or very general AI models. And in the process of getting a licence, they would have to demonstrate that they know how to do it responsibly — or at least as responsibly as anyone does at the time. Those rules could potentially be assisted by legislation saying that only projects with those government licences would be allowed to access the latest and most powerful AI-specialised supercomputers, which is sometimes called “compute governance.” How do you think that would come out of a messy legislative process?

Ezra Klein: On the one hand, if you take the metaphor that basically, what you’re developing now is a very powerful weapon, then of course, if you’re developing a very powerful, very secret weapon, you want that done in a highly regulated facility. Or you want that done by a facility that is highly trusted, and workers who are highly trusted in everything from their technical capacity to their cybersecurity practices. So that makes a tonne of sense.

On the other hand, if what you say is that you’re developing the most important consumer technology of this era, then in order to do that, you’re going to need to be a big enough company to get through this huge regulatory gauntlet. That’s going to be pretty easy for Google or Meta or Microsoft to do, because they have all the lawyers and they have the lobbyists and so on.

I could imagine, as that goes through Congress, people get real antsy about the idea that they’re basically creating an almost government-protected monopoly — entrenching the position of this fairly small number of companies, and making it harder to decentralise AI, if that’s something that is truly possible. And some people believe it is. I mean, there’s this internal Google document that leaked about how there’s no moat. Meta has tried to talk about open sourcing more of their work. Who knows where it really goes over time. But I think the politics of saying the government is going to centralise AI development in private actors is pretty tough.

There’s a different set of versions of this, and I’ve heard many of the top people in these AI companies say to me, “What I really wish is that as we get closer to AGI, that all this gets turned over to some kind of international public body.” You hear different versions and different metaphors: A UN for AI, a CERN for AI, an IAEA for AI — you pick the group. But I don’t think it’s going to happen, because it’s first and foremost a consumer technology, or is being treated as such. And the idea that you’re going to nationalise or internationalise a consumer technology that is creating all these companies and spinning all these companies off, there’s functionally no precedent for that anywhere.

And this goes maybe back a little bit to the AI ethics versus AI risk issue, where it looks really, really reasonable under one kind of dominant internal metaphor — “we’re creating the most dangerous weapon humanity’s ever held” — and it looks really, really unreasonable if your view is this is a very lucrative software development project that we want lots of people to be able to participate in. And so I imagine that that will have a harder time in a legislative process once it gets out of the community of people who are operating off of this sort of shared “this is the most dangerous thing humanity’s ever done” sort of internal logic. I’m not saying those people are wrong, by the way. That’s just my assessment of the difficulty here.

Manhattan Project for AI safety

Rob Wiblin: Another broad approach that’s out there is sometimes branded as a Manhattan Project for AI safety: basically, the US and UK and EU governments spending billions of dollars on research and development to solve the technical problems that exist around keeping AGI aligned with our goals, and having sufficiently strong guardrails that they can’t easily be retrained to commit all sorts of crimes, for example. The CEO of Microsoft, Satya Nadella, has talked in favour of this, and the economist Samuel Hammond wrote an article in Politico that we’ll link to. What do you think of that broad approach?

Ezra Klein: That I’m very much for. I don’t think I would choose a metaphor of a Manhattan Project for AI safety, just because I don’t think people believe we need that, and that’s not going to be much of a political winner. But AI is a great thing to spend lots of R&D money on and have a really strong public research infrastructure around. A good amount of that research should be on safety and interpretability. And we should really want this to work, and it should happen. I think that makes a tonne of sense, and I think that’s actually a possible thing you could achieve.

Look, I don’t trust any view I hold about takeoff rates. But what I do think is that if we are in a sort of vertical takeoff scenario, policy is just going to lag so far behind that we almost have nothing we can do but hope for the best. If we’re in more modest takeoff scenarios — which I think are more likely in general — then building institutions can really work, and we can be making progress alongside the increase in capability, capacity, and danger.

So that’s where I think it’s worth coming up with ideas that also just play into the fact that different countries want to dominate this, different countries want to get the most that they can out of this, and different countries want to make sure a lot of this is done for the public good. And it’s actually not that expensive. I mean, it is expensive for most companies, which is why OpenAI has to be attached to Microsoft and DeepMind had to be part of Google and so on. But from the perspective of a country’s budget, it’s not impossible to have real traction on this. Now, getting the expertise and getting the right engineers and so on, that’s tougher, but it’s doable.

And so, yeah, I think that’s somewhere where there’s a lot of promise. And the good thing about building institutions like that, even if they’re not focused on exactly what you want them to be, is that then, when they do need to refocus, if they do need to refocus, you have somewhere to do that. You know, if you have a Manhattan Project just for AI, well, then you could have a Manhattan Project for AI safety — because it was already happening, and now you just have to expand it.

So that’s where I think beginning to see yourself as in a foundation-building phase is useful. Again, it’s why I emphasise that at this point, it’s good to think about your policies, but also think about the frameworks under which policy will be made. You know, who are the members of Congress who understand this really well, who you’re hoping will be leaders on this, and who you want to have good relationships with? Then, you know, keeping their staff informed and so on. And what are the institutions where all this work is going to be done? Do they need to be built from scratch? And what kind of people go into them? And how do you get the best people into them? And all of that is not, like, the policy at the end of the rainbow — but you need all that for that policy to ever happen, and to ever work if it does happen.

Parenting

Ezra Klein: I think one is that — and this is a very long-running piece of advice — but kids see what you do; they don’t listen to what you say. And for a long time, they don’t have language. And so what you are modelling is always a thing that they are really absorbing. And that includes, by the way, their relationship to you and your relationship to them.

And something that really affected my parenting is a clip of Toni Morrison talking about how she realised at a certain point that when she saw her kids, she knew how much she loved them, but what they heard from her sometimes was the stuff she was trying to fix, right? “Your shoes are untied, your hair’s all messed up, you’re dirty, you need to…” whatever. And that she had this conscious moment of trying to make sure that the first thing they saw from her was how she felt about them. And I actually think that’s a really profound thing as a parent: this idea that I always want my kids to feel like I am happy to see them; that they feel seen, and that I want to see them. So that’s something that I think about a lot.

Then another thing is you actually have to take care of yourself as a parent. And you know, I worry I’m a little grumpier on this show today than I normally am, because my kid had croup all night, and I’m just tired. And the thing that I’ve learned as a parent is that just 75% of how I deal with the world — like, how good of a version of me the world gets — is how much sleep I got. You’ve gotta take care of yourself. And that’s not always the culture of parenting, particularly modern parenting. You need people around you. You need to let off your own steam. You need to still be a person.

But a huge part of parenting is not how you parent the kid, but how you parent yourself. And I’m just a pretty crappy parent when I do a worse job of that, and a pretty good parent when I do a good job of that. But a lot of how present I can be with my child is: Am I sleeping enough? Am I meditating enough? Am I eating well? Am I taking care of my stress level? So, you know, it’s not that 100% of parenting a child is parenting yourself, but I think about 50% of parenting a child is parenting yourself. And that’s an easy thing to forget.

Rob Wiblin: Yeah. It is astonishing how much more irritable I get when I’m underslept. That’s maybe my greatest fear.

Ezra Klein: Yeah. It’s bad. Again, like, even in this conversation, I’ve probably been edgier than I normally am, and I’ve just felt terrible all day. It’s a crazy thing when you become a parent and you realise other parents have been doing this all the time. You see them when it’s cold and flu season, and you understand that you didn’t understand what they were telling you before. And somehow, all these people are just running around doing the same jobs they always have to do, and carrying the same amount of responsibility at work and so on, just operating at 50% of their capacity all the time and not really complaining about it that much. A whole new world of admiring others opens up to you. Like, I have two kids, and now my admiration of people who have three or four is so high. So, you know, it’s a real thing.

But it does open you up to a lot of beautiful vistas of human experience. And as somebody who is interested in the world, it was really undersold to me how interesting kids are, and how interesting being a parent is. And it’s worth paying attention to, not just because you’re supposed to, but because you learn just a tremendous amount about what it means to be a human being.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.