
If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?

That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

My concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may inadvertently back ourselves into a corner — ending up on some slippery slope where, all of a sudden, autocracies on the global stage are strengthened relative to democracies.

Tantum Collins

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

They cover:

  • How AI could strengthen government capacity, and how that’s a double-edged sword
  • How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren’t there
  • The extent to which policymakers take different threats from AI seriously
  • Whether the US and China are in an AI arms race or not
  • Whether it’s OK to transform the world without much of the world agreeing to it
  • The tyranny of small differences in AI policy
  • Disagreements between different schools of thought in AI policy, and proposals that could unite them
  • How the US AI Bill of Rights could be improved
  • Whether AI will transform the labour market, and whether it will become a partisan political issue
  • The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
  • What listeners might be able to do to help with this whole mess
  • Panpsychism
  • Plenty more

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Highlights

The risk of autocratic lock-in due to AI

Tantum Collins: A prompt that I think about a lot that sometimes helps frame this is: If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

Now, that’s obviously a thought experiment that’s removed from the real world. Here, things are messier. But my concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may inadvertently back ourselves into a corner — ending up on some slippery slope where, all of a sudden, autocracies on the global stage are strengthened relative to democracies.

Rob Wiblin: Yeah, I guess it’s very natural to worry that countries that are already autocratic would use these tools to engage in a level of monitoring of individuals that currently would be impractical. You could just constantly be checking all of the messages that people are sending and receiving, or using the fact that we have microphones in almost every room to have these automated systems detecting whether people are doing anything that is contrary to the wishes of the government. And that could just create a much greater degree of lock-in even than there is now.

Are you also worried about these kinds of technologies being abused in countries like the United States or the UK in the medium term?

Tantum Collins: I’m certainly not worried in the medium term about existing democracies like, let’s say, the US and the UK becoming something that we would describe as autocratic. Perhaps another way of reframing it would be: I worry that we’ve already left opportunities on the table, and that the number of opportunities we end up leaving on the table could grow. First, opportunities to make government more effective in a sort of ideology-agnostic sense — doing things on time and in ways that are affordable and so on — and second, opportunities to make these institutions more democratic: to bind them to reflective popular will.

And we can look at even contemporary democracies: the bitrate of preference communication has remained more or less the same for a long time, while government capacity has expanded significantly. In that sense, we’ve sort of already lost some level of democratic oversight. So no, I’m not worried about these countries becoming the PRC — but I do think there’s lots of stuff we could do to improve the degree to which we instantiate the principles of democracy.

What do people in government make of AI x-risk concerns?

Tantum Collins: Government is always very big and distributed, so I wouldn’t say this is an authoritative recounting, but I’m happy to give my take from the sliver of things that I’ve seen. Obviously the institution I’m most familiar with is the US government. I think that to some extent this can generalise, especially for other closely allied governments — so let’s say Five Eyes and G7 and so on.

It is certainly the case that the amount of attention dedicated to AI in government has increased significantly over the past couple of years. And that includes, even just proportionately, an increase in the consideration given to risks — things that people would categorise as x-risk, as well as all kinds of regular, active, ongoing harms: algorithmic discrimination, interpretability issues and so on. So all of this is getting more attention.

I think in particular, some of the things that people who are worried about x-risk tend to focus on actually fit into preexisting national security priorities pretty well. So for instance, if we think about AI-related cyber x-risk or AI-related biorisk: these are categories of harm that plenty of institutions and people in the US government and in other governments have been worried about for a long time, even if not previously through the lens of AI. And I think it’s safe to say that in almost all of these domains, AI risk is now top of mind.

That doesn’t mean that if you go into the White House, you will see people doing Open Phil-style very rigorous rankings based on importance and neglectedness and tractability and so on. And it doesn’t mean that the people whose work ends up reducing these risks would even articulate it in the way that someone in, let’s say, the EA community would. But I do think that increasingly a lot of this work in government sort of dovetails with those priorities, if that makes sense.

Rob Wiblin: Yeah, that does make sense. I think it’s also quite sensible that this is maybe the extinction risk or the catastrophic risk that governments are turning their attention to first, because the potential misuse of large language models or other AI models to help with bioterrorism or the development of bioweapons, or to engage in cyberattacks, seems like something that could be serious in the next few years. I guess we don’t know how serious it will be; we don’t know how many people will actually try it, but it does seem like an imminent threat. And so people should be thinking about what we can do now.

Tantum Collins: Yeah, exactly. So I’m happy to say that there are absolutely smart, capable people working in government who are worried about those things. If that helps you and the listeners sleep easily at night.

Misconceptions about AI in China

Tantum Collins: I think that a lot of news coverage of China tends to fluctuate between extremes. And this is true in AI as well as more broadly: there will be a hype cycle that says, “The rise of China is inevitable,” and then there will be a hype cycle that says, “A centralised system was never going to work. Why did we ever take this threat seriously?” And of course, almost always the reality is somewhere in the middle.

I think that when China began making investments in AI — in particular with the 2017 release of the State Council’s AI-focused plan — that was the first time a lot of people began paying attention to what China-related AI efforts might look like. And in some critical ways, China is, of course, different from the US and other Western countries. I think that initially there was a moment of not fully appreciating how different the research culture, and in particular the relationship between companies and the government, is in China relative to Western countries.

But now I worry that things have gone too far, and that people have these caricatured versions in their mind of what things look like in China. There are many of these, but to list a few: there is a meme that I think is inaccurate and unhelpful — that China can copy but not innovate. Historically, of course, China has been responsible for a huge amount of scientific innovation. And recently, the number of AI papers coming out of China has increased significantly — more than from any other country. There are loads of examples of very impressive, innovative scientific achievements coming out of China.

In particular, to cite Jeff again, he has written an interesting paper that distinguishes between the innovation and the diffusion of technologies in terms of the effect each has on national power. There are some countries that historically played a big role in early industrial technology, but didn’t manage to diffuse it across the country effectively enough to, for instance, make the most of it militarily or in terms of economic growth. And there are others that did the inverse. So in the 1800s, there was a lot of commentary that the US was terrible at creating things, but was pretty good at copying and diffusing them.

Jeff has a take — and he has some interesting ways of measuring this — that today China is actually, in a proportional sense, better at innovation and worse at diffusion, and that one big strength the United States has from a competitive angle is that it’s actually much better at diffusion. I think that will strike a lot of people as the inverse of what they believe, based on having read some stories about high-speed rail and solar cell production and so on. They have this meme that China can copy things, diffuse them across the country, and reduce the cost of production, but isn’t able to create things.

I can see where that stereotype came from, based on the specific economic investments that China made in the 1990s and 2000s, but I think it is inaccurate — and people risk misunderstanding how things look if they buy into it.

The most promising regulatory approaches

Tantum Collins: I think there are a few level-zero, foundational things that make sense to do. One is that, at the moment, especially in the West, there are major disconnects between the way that people in government use language and think about things and the way that people in the tech world do. So one important step zero is just increasing communication between labs and governments, and I think recently there’s been a lot of positive movement in that direction.

A second and related thing — and this somewhat ties back to these democratisation questions — is that even under the most competent technocracy, I would be worried about a process that doesn’t involve a significant amount of public consultation, given how general purpose these systems are and how pervasive their effects on our lives could be. And so I think that government has a lot of work to do — both in terms of reaching out to and engaging the AI community, and also in terms of engaging the general public.

There’s been a lot of cool work in this direction recently. I’d highlight what The Collective Intelligence Project has undertaken. They’ve led a series of what they call “alignment assemblies”: essentially exercises designed to engage large, ideally representative subsets of the population with questions about what kinds of AI developments worry them most.

Also, recently there’s been some interest in this stuff from labs. OpenAI has this Democratic Inputs to AI grant that people have just applied for. And there are several labs working on projects in this vein — in particular, asking how LLMs can be used to facilitate larger-scale deliberative processes than before. One of the projects I worked on when I was at DeepMind — and one I’m actually still collaborating on with some of my former colleagues there — is something in this direction.

So those are some very basic things — before even landing on specific policies — that I think are important. Beyond that, I think there are some areas that are relatively uncontroversially good. To the extent that we think AI will, at some level, be a public good, and that private market incentives will not sufficiently incentivise the kind of safety and ethics research we want to happen, I think allocating some public funding for that stuff is a good idea. And that covers the full gamut: x-risk and alignment work, more present-day prosaic ethics and impact considerations, interpretability research — the full list.

And a final thing that I think is as close to a no-brainer as you can get is that some kind of clearer benchmarking and standards regime is important — because right now it’s sort of the Wild West, and these things are just out there. Not only is it difficult to measure what these systems can and cannot do, but there is almost nothing in the way of widely known, trusted intermediary certifications that a nonexpert user can engage with to get a feel for how and when they should use a given system.

So there are a whole bunch of different proposals — some involve the government itself setting up regulatory standards; some involve some kind of third-party verification — but the point is to have something. That could be model cards, or it could be the equivalent of nutritional labels — there’s a whole range of options. But at the moment I think a lot of people are flying blind.

Who's ultimately responsible for the consequences of AI?

Rob Wiblin: It’s very unclear to me how responsibility for the consequences of AI is split across various different parts of the US government. It feels a bit like there’s no identifiable actor who really has to think about this holistically. Is that right?

Tantum Collins: Yes, this is true. And in part this gets back to the fact that AI is, A, new; B, so general that it challenges the taxonomy of government responsibilities; and C, something that government has not, until recently, engaged with meaningfully in its current form. Various government research projects over time have used some kind of AI, but government was not in any meaningful way driving the past decade of machine learning progress. All of this means that there are a tonne of open questions about how government thinks about AI-related responsibilities and where those sit.

Rob Wiblin: Who are the different players though, who at least are responsible for some aspect of this?

Tantum Collins: So within the White House, the main groups would be the Office of Science and Technology Policy, where I worked before. Within it there are a number of different teams, several of which are quite interested in AI. There is one small group that is explicitly dedicated to AI; there is a national security team — that was where I sat — that handles a lot of AI-related things; and then there is the science and society team, which produced the Blueprint for an AI Bill of Rights. These groups work together a fair bit, and each one has a slightly different outlook and set of priorities related to AI.

Then you have the National Security Council, which has a tech team within it that also handles a fair amount of AI stuff. At the highest level, OSTP historically has been a bit more long-run, conceptual, and research-y, putting together big plans for what the government’s approach should be to, say, funding cures for a given disease. The NSC has traditionally been closer to the decision making of senior leaders. That has the benefit of being higher impact in some immediate sense, but it’s also more reactive and less focused on long-run thinking. Again, these are huge generalisations, but those are two of the groups within the White House that are especially concerned about AI.

These days, of course, because AI is on everyone’s mind, every single imaginable bit of the government has released some statement that references AI. But in terms of the groups with major responsibility for it, there is of course the whole world of departments and agencies, all of which have different AI-related equities.

So there’s NIST, which works on standards. There’s the National Science Foundation, which of course funds a fair amount of AI-related research. There’s the Department of Energy, which runs the national labs — and the name is slightly misleading, because they don’t just do energy stuff.

The Department of Energy is actually this incredibly powerful and really, really big organisation. [Before I came into government] I thought they did wind farms and things. But it turns out that, because they’re in charge of a lot of nuclear development and security, they actually have quite a lot of authority and a very large budget, especially in the national security space.

Of course, in addition to all the stuff in the executive branch, there’s Congress — which has at various times thrown various AI-related provisions into these absolutely massive bills. I believe both the House and the Senate now have AI-focused committees or groups of some kind. I’m not super clear on what they’re doing, but obviously there is also the potential for AI-related legislation.

Anyway, the list goes on, as you can imagine. Obviously the Department of Defense and the intelligence community also run various AI-related projects. But at the moment there isn’t a clear coordinating entity. There have been a number of proposals. One that’s been in the news is Sam Altman’s suggestion, during his testimony, that a new agency should be created to focus specifically on AI. It remains to be seen whether that happens and what it looks like.

How technical experts could communicate better with policymakers

Tantum Collins: This is a great question. It’s actually one area where I think LLMs could be very valuable, to go back to this parallel between translation across actual languages and translation across academic or professional vernaculars. I think we could save a lot of time by fine-tuning systems to do better mappings of “explain this technical AI concept to someone who… is a trained lawyer.” And often you then find that there are these weird overlaps. Not necessarily full isomorphisms, but a lot of the conceptual tooling that people have in really different domains accomplishes similar things, and can be repurposed to explain something in an area they’re not too familiar with. So this is an area where I think a lot of cool AI-driven work can be done.

In terms of practical advice to people trying to explain things, this is tricky, because there are many ways in which you want to frame things differently. I’m trying to think of a set of principles that capture these, because a lot of it is just very specific word choice.

Maybe a few off the top of my head would be: One, just read political news and read some policy documents to get a feel for how things are typically described — that should be a decent start. Two, in the policy space you obviously want to reduce the use of technical language, but also the sort of philosophical abstraction that can be helpful in a lot of other domains. The more that things can be grounded in concrete concepts, and in incentives that will be familiar to people, the better. In the policy space, a lot of that has to do with thinking about which domestic and foreign policy considerations are relevant.

I mean, obviously it depends on the group — is it a group of senators, or people at OSTP, or something else — but broadly speaking, if you read global news, you’ll get a sense of what people care about. A lot of people are really worried about competition with China, for better or worse. So to ground this, one example would be: to the extent that the China competition framing is inevitable, one can harness it to make the case that, for instance, leading in AI safety could be excellent for a country’s scientific prestige, right? It could improve its brand as a place where things are done safely and reliably, and where you can trust services and so on. You can take something that a policymaker might otherwise dismiss as heavy techno-utopianism, and if you are willing to cheapen yourself a little bit in terms of how you sell it, you can get more attention.

Obviously this is a sliding scale, and you don’t want to take it too far. But I think a lot can be accomplished by thinking about what the local political incentives are that people have.

Tension between those focused on x-risk and those focused on AI ethics

Tantum Collins: I have a few low-confidence thoughts. One is that there are some areas where there is, I think, the perception of some finite resource — maybe that’s money, or maybe it’s attention. And I think there is an understandable concern on the AI ethics side that there is sometimes a totalising quality to the way that some people worry about existential risks. At its best, x-risk concern is expressed in ways that are appropriately caveated and so on; at its worst, it can imply that nothing else matters because of running some set of hypothetical numbers. Personally, I’m a bit of a pluralist, and I don’t think that everything comes down to utils. I can see why the outlook of “if you reduce existential risk by X percent, then that dwarfs every other concern” rubs people the wrong way.

A second thing that I think sometimes brings these views or these communities into conflict is that there are some types of behaviour — whether from labs or from proposed policies — that could help on one front while costing us on another. I’m thinking in particular of things that would have security benefits that people concerned about x-risk value very highly, but that might come at the cost of other things we value in a pluralistic society — for instance, openness and competition.

So far we’ve been focusing on the no-brainers that almost everyone should get behind, but a lot of the policies we haven’t talked about yet are very tricky: you can see a case for them and a case against them, and often they pit these values against one another. If you’re really, really, really worried about existential risk, then it’s better to have fewer entities coordinating things, and to have those be fairly consolidated and work very closely with the government.

If you don’t take existential risk that seriously — and if instead, you are comparatively more worried about having a flourishing and open scientific ecosystem, making sure that small players can have access to cutting-edge models and capabilities and so on; and a lot of these things historically have correlated with the health of open and distributed societies — then those policies look really different.

I think that the question of how we grapple with these competing interests is a really difficult one. And I worry that, at its worst, the x-risk community — which broadly, I should say, I think does lots of excellent work, and has put its finger on very real concerns — can have this sort of totalising attitude that refuses to grapple with a different set of frameworks for assessing these issues. I think that’s sometimes exacerbated by the fact that it is, on average, not a super-representative community — geographically, ethnically, and what have you — which makes it easy to be blind to some of the things that other people, for good reason, are worried about.

That would be my very high-level framing of it. But the bottom line is that I very much agree with your sentiment that most of the conflict between these groups is counterproductive. And if we’re talking about the difference between pie splitting and pie expansion, there’s a huge amount of pie expansion and a whole bunch of policies that should be in the collective interest. And especially since I think the listenership here is probably a little bit more EA-skewed, I’d very much encourage people to engage with — this sounds so trite — but really to listen to some of the claims from the non-x-risk AI ethics community, because there is a lot of very valuable stuff there, and it’s just a different perspective on some of these issues.



About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
