#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

They cover:

  • What’s actually in SB 1047, and which AI models it would apply to.
  • The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
  • What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
  • Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
  • How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
  • Why California is taking state-level action rather than waiting for federal regulation.
  • How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

Why we can't count on AI companies to self-regulate

Nathan Calvin: It’s worth remembering that Mark Zuckerberg, in his testimony in front of Congress, said, “I want to be regulated.” You know, it’s a thing that you hear from lots of folks, where they say, “I want to be regulated” — but then what they really mean is basically, “I want you to mandate for the rest of the industry what I am doing now. And I want to just self-certify that what I am doing now is happening. And that’s it.” That is, I think, often what this really is.

So there’s some way in which it’s easy for them to support regulation in the abstract, but when they look at it… Again, I think there’s some aspect here where, even among the folks within these companies who care about safety, there’s a reaction that says, “I understand this much better than the government does. I trust my own judgement about how to manage these tradeoffs and what is safe better than some bureaucrat. And really, it’s ultimately good for me to just make that decision.”

There are parts of that view that I guess I can understand how someone comes to it, but I just think that it ends up in a really dysfunctional place.

You know, it’s worth saying that I am quite enthusiastic about AI, and think that it has genuinely a tonne of promise and is super cool. And part of the reason I work in this space is because I find it extremely cool and interesting and amazing, and I just think that some of these things are some of the most remarkable things that humans have created. And it is amazing.

I think there is just a thing here that this is a collective action problem: you have this goal of safety and investing more in making this technology act in our interests, versus trying to make as much money as possible and release things as quickly as possible — and left to their own devices, companies are going to choose the latter. I do think that you need government to actually come in and say that you have to take these things seriously, and that that’s necessary.

If we do wait until a really horrific catastrophe happens, I think you’re quite likely to get regulation that is actually a lot less nuanced and deferential than what this bill is. So I just think that there’s some level where they are being self-interested in a way that was not really a surprise to me.

But maybe the thing that I feel more strongly about is that I think they are not actually doing a good job of evaluating their long-term self-interest. I think they are really focused on, “How do I get this stuff off my back for the near future, and get to do whatever I want?” and are not really thinking about what this is going to mean for them in the longer term. And that has been a little bit disheartening to me.

One last thing I’ll say here is that I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, “This is the thing that everyone is screaming about?” I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going.

I think that is a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch, there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making.

SB 1047's impact on open source models

Luisa Rodriguez: So just to clarify exactly the thing that is happening that we might think is good that could be hindered by the bill, I think it’s something like: there are some developers who build on open source models, like Meta’s Llama. And we might think that they’re doing useful, good things by building on Llama. But if Meta becomes kind of responsible for what those downstream developers do with their model, they’re less likely to make those models open access.

Nathan Calvin: Yeah. To be clear, this is not applying to any model that exists today, including Meta’s release of Llama 405B, which I think is the most recent one. Those are not models that are covered under this, and people can build on top of those and modify those forever. For that model, the best estimates I’ve seen are that it cost well under $100 million to train. And with the cost of compute going down and algorithmic gains in efficiency, the most capable open source model that is not at all covered by this bill is still increasing in capability quite a lot each year. I think there have been folks in the open source community who really look at this and say there’s a tremendous amount that you can do and build on that is not even remotely touched by this bill.

I do think there is just a difficult question here around, let’s say that there’s Llama 7, and they test it and find out that it could be used to very easily do zero-day exploits on critical infrastructure, and they install some guardrail to have it reject those requests. Again, these guardrails are not that reliable and can be jailbroken in different ways, so this is a challenge that also exists for closed source developers.

But if they just release that model and someone, for a very small amount of money, removes that guardrail and causes a catastrophe, for Meta to then say, “We have no responsibility for that, even though our model was used for that and modified” — in what we believe is a very foreseeable way — I think that is an area where people just disagree. They think that it’s just so important for Meta to release the really capable model, even if it can be used to straightforwardly cause a catastrophe.

And I do think it’s important to say that, again, it is not strict liability; it is not saying that if your model is used to cause harm, you are liable. It is saying that you have to take reasonable care to prevent unreasonable risks. And exactly where that line is, what amount of testing and what guardrails are sufficient in light of that, are the types of things that people are going to interpret over time. And these are terms that are used in the law in general.

But I do stand pretty strongly behind the idea that for these largest models that could be extremely capable, saying that you’re going to put one out into the world and that whatever happens after that is in no way your responsibility is just something that doesn’t make sense to me.

Why it's not "too early" for AI policies

Nathan Calvin: One thing that I want to emphasise is that one thing I like about liability as an approach in this area is that if the risks do not manifest, then the companies aren’t liable. It’s not like you are taking some action of shutting down AI development or doing things that are really costly. You are saying, “If these risks aren’t real, and you test for them and they’re not showing up as real, and you’re releasing the model and harms aren’t occurring, you’re good.”

So I do think that there is some aspect of, again, these things are all incredibly uncertain. I think that there are different types of risks that are based on different models and potential futures of AI development. And I think anyone who claims with extremely high confidence to know when and whether those things will or won’t happen is not engaging with this seriously enough.

So having an approach around testing and liability and getting the incentives right is really a way that a policy engages with this uncertainty, and I think is something that you can support. Even if you think it’s unlikely that one of these risks is going to materialise in the next generations of models, I think the approach is really meaningfully robust to that.

I also just think that there’s this question of when you have areas that are changing at exponential rates — or in the case of AI, if you graph the amount of compute used in training runs, it’s actually super-exponential, really fast improvement — if you wait to try to set up the machinery of government until it is incredibly clear, you’re just going to be too late. You know, we’ve seen this bill go through its process in a year. There are going to be things that, even in the event the bill is passed, will take additional time. You know, maybe it won’t be passed and we need to introduce it in another session.

I just think if you wait until it is incredibly clear that there is a problem, that is not the time at which you want to make policy and in which you’re going to be really satisfied by the outcome. So I just think that policymaking in the light of uncertainty, that just is what AI policy is, and you’ve got to deal with that in one way or another. And I think that this bill does approach that in a pretty sensible way.

Luisa Rodriguez: Yeah, I’m certainly sympathetic to “policy seems to take a long time and AI progress seems to be faster and faster.” So I’m certainly not excited about the idea of starting to think about how to get new bills passed when we start seeing really worrying signs about AI.

On the other hand, it feels kind of dismissive to me to say this bill comes with no costs if the risks aren’t real. It seems clear that the bill does come with costs, both financial and potentially kind of incentive-y ones.

Nathan Calvin: Yeah. I mean, I think it’s a question of, as I was saying before, the costs of doing these types of safety testing and compliance things I think are quite small relative to the incredibly capital-intensive nature of training these models. And again, these are things also that we’ve seen companies do. When you look at things like the GPT-4 System Card and the effort that they put into that, and similar efforts at Anthropic and other companies, these are things that are doable.

There’s also something like, “Oh, I’m not going to buy insurance because there’s not a 100% chance that my house will burn down” or whatever. I think if you have a few percent risk of a really bad outcome, it is worth investing some amount of money to kind of prepare against that.

Why working on state-level policy could have an outsized impact

Nathan Calvin: If you’re someone who wants to get really involved with politics in some state that doesn’t have a big AI industry, I think there still are things that are pretty relevant in terms of conveying this expectation to companies: that if they want to do business in a jurisdiction, they have to be taking measures to protect the public and to be following through on what they’ve laid out in these voluntary commitments.

So I think that’s an underrated area that I would love to see more people get involved with. In some ways it’s almost been too much, but my experience working at the state level versus the federal level is that there just is a lot more opportunity to move policy. And I really think that it’s an exciting area that more people should think about seriously.

Luisa Rodriguez: Yeah. I’m curious about this underrated thing. Is there more you want to say about what makes state policy particularly underrated that our listeners might benefit from hearing when thinking about policy careers?

Nathan Calvin: One interesting thing, which also relates to another thing that I know some 80,000 Hours listeners care about, is that California passed this ballot initiative, Prop 12, which says that pigs sold in the state can’t be held in these really awful, inhumane cages. And it applied to pigs raised out of state, and lots of agricultural and pork lobbyists sued California. It went up to the Supreme Court, and the Supreme Court decided that California does have a legitimate interest in regulating the products that can be sold in its state, including activities out of state.

So I do think that it’s just a thing where there is often this reaction that the industry actually needs to be within that state itself in order for state regulation to have any effect. But there are limits: the regulation needs to be proportional, it can’t be protectionist, and it was a close decision.

But I think in general, states have more power than they themselves realise. And I think there’s some reaction of, “I’m a legislator in Montana or something; what am I going to do that’s going to be relevant to AI policy?” But companies want to offer a uniform product that’s the same across states. There is some question of scale: if it’s a smaller state and you push too far, maybe it’s easier for a company to say they’re not going to offer their product in Montana than it is in California. And so you need to [act] based on your size.

But I do think it’s an interesting thing that states like New York or Texas or Florida are very large markets… I think the reason California is able to do this is actually, in some ways, more about the size of its market than about the fact that the AI developers are physically located in the state; that is where the power of this is coming from. And I think that’s something that has not really been understood by a lot of the observers of this.



About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.