Transcript
Cold open [00:00:00]
Sihao Huang: Let me just paint the picture of this: Let’s say it’s the year 2040, and we have some sense that AI systems are approaching a threshold where they are generally capable, similar to human capabilities in almost all domains. And therefore, they’re able to start speeding up science and technology innovation, they’re able to start doing AI research, and we can actually deploy them at a vast scale because of the compute overhang that we have from training.
Now, how do we make the choice to do this or not? Because it’s quite likely that once we make this decision, things are going to go really, really fast. We may want to make sure that if China is in the position to do this, they are not going to push the button without talking to the rest of the world to make sure that this goes well.
We also want to make sure that if America were to do this, China is not in a position to nuke us first, because a decision maker in China may look at the situation and think, “This is going to forever make America dominant in human history, and we may want to take the chances to annihilate them.”
Luisa’s intro [00:01:02]
Luisa Rodriguez: Hi listeners, this is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast.
Today’s episode is a deep dive into the state of AI in China: how advanced AI is, what Chinese AI governance is like, and how big of an existential risk China’s AI development and deployment poses.
We cover:
- Just how capable AI systems in China are now.
- How AI is already used in China for surveillance and authoritarian control.
- The current state of AI regulation in China — and whether it signals a real commitment to safety or just more information control.
- The impact of the US’s export controls on China’s AI development, and whether China is on track to get around them by building its own semiconductor supply chain.
- Plus loads more.
All right, without further ado, I bring you Sihao Huang.
The interview begins [00:02:06]
Luisa Rodriguez: Today, I’m speaking with Sihao Huang. Sihao is a 2023 Marshall Scholar, and a technology and security policy fellow at RAND where he works on semiconductor policy. He’s also a PhD candidate at Oxford, and spent the 2022–2023 academic year as a Schwarzman Scholar, studying industrial policy and US–China issues in Beijing.
Thanks for coming on the podcast, Sihao.
Sihao Huang: Thanks for having me. I’ve been a huge fan of the show.
Luisa Rodriguez: Oh, that’s really nice to hear. I hope to talk about the state of Chinese AI capabilities, US chip export controls, and Chinese AI governance. But first, just to give people a bit of context, what is your relationship to China personally, if you’re happy to share?
Sihao Huang: So I was born in Guangzhou, China. For the uninitiated, it’s a tiny city of about 20 million people in the south of the country. I grew up in Singapore, have a lot of family in Taiwan, but eventually moved to New York. And I currently work in US policy. I also had the fortune of spending the past year in China — in Beijing, doing research — as ChatGPT came out and a lot of policymakers on the ground were scrambling to react to the news, and putting out a lot of AI regulations and shifting their semiconductor policies.
Luisa Rodriguez: Such an incredible time to be there thinking about AI policy.
Is China in an AI race with the West? [00:03:20]
Luisa Rodriguez: I guess diving right into it now: some people worry that the West is in a race with China to develop superhuman AI in particular, because whoever builds it first might have huge economic and military advantages. Do you think that there’s a race between the US and the UK and China?
Sihao Huang: I am definitely concerned that there is a race dynamic going on, but I think the picture is pretty complex on the ground. If you look at the US, for instance, I think a lot of national security folks here are motivated by the race framing — and I think rightfully so, in the sense that they want to make sure that powerful AI systems are going to be developed by an accountable and democratic country that is able to safely and ethically reap and distribute the benefits of AI internationally. And then if you also talk to people in places like Shenzhen in China, Chinese engineers are pretty anxious, and are trying very hard to catch up with the United States. And the Chinese government is very much motivated by this notion of “winning” the tech race.
With that said, though, I think if you talk to the people actually building AI, like if you speak to an average engineer at DeepMind, I would bet that they aren’t waking up thinking about Baidu or Alibaba’s AI benchmarks every day, and thinking, “I’m going to do this to beat the Chinese.” Right? The biggest competition is probably the folks down the street at Anthropic and OpenAI.
Luisa Rodriguez: Right. So there is this race dynamic, and at least for the Western engineers, it is not between the West and China; it is between companies in the West itself. But then Chinese engineers do see themselves as wanting to be in the race, trying to participate in the race, trying to catch up.
What are the costs of being in a race? To the extent that we kind of are in one?
Sihao Huang: The biggest cost here is reckless development. We want to make sure that the two sides aren’t compromising on safety, aren’t rushing into building transformative AI systems when they aren’t sure that society can absorb them and prevent the worst negative externalities.
For those who have watched Oppenheimer, there was this very famous part of it where Oppenheimer mentioned that we may start a chain reaction that could destroy the world — but there’s no time to actually run the calculations over and over again to make sure that this doesn’t actually happen, because we need to race against the Nazis, who potentially also have a strong nuclear programme. And that’s a terrifying situation that we want to make sure doesn’t happen, and we want to make sure there’s communication on.
I think that may seem a little abstract, but there are a lot of very concrete ways that this can play out. We can see this in the near term, for instance, in AI’s development in military systems. We want to make sure that the two sides aren’t compelled to start automating larger and larger portions of the military chain of command, or deploying autonomous weapon systems that could be significantly more unreliable than human systems now — but that, because they are faster, give them a significant advantage on the battlefield.
So let’s say there’s this system that is very unreliable: sometimes it would kill its users, or kill people it’s targeting unintentionally; however, it has a response time of just one second, or like 100 milliseconds, relative to a human response time measured in minutes. Then if China is deploying a system like this, the United States may be forced to do it too, to make sure that it retains the advantage on the battlefield. But both sides really do not want to be deploying these highly unsafe AI systems that could inadvertently cause kinetic conflict.
And so you can maybe think about this as a prisoner’s dilemma. We have a cooperation problem here at play: the two sides really want to be in the cooperate/cooperate square, but they are going into the defect/defect square — because on their local decision making, it is more advantageous for them to try to rush into building these advanced AI systems, since they can’t trust that the other side wouldn’t do the same.
The big picture here is that we want to make sure we don’t blunder our way into these evolutionary futures that no individual could have liked, but where the structure of the payoff matrix forced their hand. So we should be thinking about these coordination problems in the AI race — in terms of ongoing harms and the potential deployment of AI systems, in terms of building more powerful AI, and eventually, how do we coordinate a pause on an intelligence explosion and make sure that all the different countries are represented in this process?
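To make the payoff structure concrete, here is a minimal Python sketch of the cooperate/defect dynamic described above. The payoff numbers are purely illustrative and do not come from the conversation; the point is just that whatever the other side does, each side’s locally best move is to defect, so both land in the defect/defect square even though mutual cooperation would leave both better off.

```python
# Purely illustrative payoffs (higher is better); each entry is (US payoff, China payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both coordinate on safety: best joint outcome
    ("cooperate", "defect"):    (0, 4),  # US restrains itself while China races ahead
    ("defect",    "cooperate"): (4, 0),  # US races ahead while China restrains itself
    ("defect",    "defect"):    (1, 1),  # both race: the square neither side wants
}

OPTIONS = ["cooperate", "defect"]

def best_response(their_choice: str, player: int) -> str:
    """Return the move that maximises one player's payoff, holding the other side's move fixed."""
    def payoff(my_choice: str) -> int:
        pair = (my_choice, their_choice) if player == 0 else (their_choice, my_choice)
        return payoffs[pair][player]
    return max(OPTIONS, key=payoff)

for their_choice in OPTIONS:
    print(f"US best response if China plays {their_choice}: {best_response(their_choice, player=0)}")
    print(f"China best response if US plays {their_choice}: {best_response(their_choice, player=1)}")
# Defecting is each side's dominant strategy, so play ends up at (1, 1),
# even though (3, 3) was available if both sides could credibly commit to cooperating.
```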
Luisa Rodriguez: OK. So those are some of the costs, and they do seem potentially really big. At best, they have to do with deploying AI in a military context before we’re ready to do it safely. And at worst, it might mean that one of these sides is not thinking adequately about the societal impacts of deploying superintelligent AI.
Are there ways to avoid being in a race?
Sihao Huang: I think there are three primary ways that we go about this. The first one is just making sure that you’re strictly dominant over your adversary, so that the race is nonexistent. And this is something that I think the US should be pushing for — through, for instance, its current export control policies. I think they have measurably enabled us to slow down AI development, to regulate our AI systems, without worrying that China is right behind us. The reason that we don’t directly perceive ourselves as being in a race dynamic is because we’ve been able to open this gap.
The second way that we avoid being in a race is coordinating. I think we should be talking to China at the same time — while making sure that we don’t have to look over our shoulders — to ensure that the two sides are really perceiving risks the same way, to make sure that we internalise these risks in our decision making, and also to create a set of commitment mechanisms to make sure that the two sides aren’t going to race each other if the gap ever closes.
So these commitment mechanisms may look like tying different issue areas together. And if we go back to the sort of prisoner’s dilemma analogy, if you have other games that you can tie the payoff matrix together, then perhaps you won’t end up in the defect/defect square. And that really is the essence of how diplomacy works: I give some, you give some on different issue areas, and hopefully together we can reach a more Pareto efficient outcome.
The other set of commitment mechanisms that I’m pretty excited about is: how do we find ways to actually do verification, and build technical governance mechanisms that increase societal trust? And I think Lennart [Heim] talked about this a lot in his 80k podcast episode on compute governance mechanisms. We may want to, for instance, have ways to check if the other side is developing leading AI systems that pass the thresholds that we previously agreed on, or to be able to check if these AI systems have particular capabilities or characteristics.
And then I think the third way you avoid being in a race — which very often is quite underemphasised — is that you want to prove to your competitor that if you win the “race,” or you get to transformative AI first, it’s actually not that bad for them. So for instance, we should think about institutional mechanisms to commit to credible windfall sharing, or market mechanisms, such that countries see safe AI development not as a zero-sum game but much more as a collective effort for humanity.
For instance, you may want to tell China that you don’t have to change your entire economic system and sacrifice priorities for your people to race against the United States to build transformative AI systems. Because if the US were to be the ones to build it first, we are going to share with the international community the benefits of AI, and we’re going to be responsible developers of these AI systems. And I think the upshot of this is that in terms of our foreign policy and our position on AI governance, we want to be sure that there are good, credible signals about this form of AI benefit-sharing.
Luisa Rodriguez: Just to get really concrete, and maybe just super briefly, what’s an example of a signal we might be able to use to kind of reassure the Chinese government that these benefits will be shared, or that their values will be protected?
Sihao Huang: I think in the near term this really comes down a lot to things like our engagement policy with the global majority. Things like making sure to say that yes, we’re very concerned about the proliferation of dangerous AI systems — but we’re also equally committed to making sure that access to AI is equitable, and that there is capacity-building for AI utilisation and incorporation into the economies of countries that are not leading AI developers.
I think this is a particularly important foreign policy move, because China is really leaning in on the narrative that it’s taking up the mantle of the Global South to criticise American policy on AI. And a lot of this I think is not necessarily true, and not necessarily in the best interests of the Global South, given how China has historically exported a lot of the technologies it developed in ways that spread authoritarian ideologies, for instance, or in ways that did not benefit economic development globally.
So I think it’s very important that as we talk to our international partners about AI development, in addition to messaging about AI safety, we also emphasise that we will address the harms from AI, and address the inequities of AI, in a very proactive fashion.
Luisa Rodriguez: How optimistic are you about the prospects for China and the West coordinating on superintelligent AI, but also just kind of AI governance in general?
Sihao Huang: I mean, I’m personally quite terrified about all these prospects. I think if you look at things historically, it’s not that optimistic.
If our baseline, for instance, is the first Industrial Revolution, things went quite terribly. In the first Industrial Revolution, the technological changes and the wave of automation led by those technologies — which happened over the course of, let’s say, 100 or 150 years — also created some of the worst atrocities of human history, like resource-extractive colonialism, massive amounts of death, genocide, and labour exploitation. And even within the very countries that developed these technologies, like England or the rest of Europe, there were huge amounts of social inequality that led to the rise of communism in Europe, for instance.
So that’s not a good baseline. And 100 years or more was quite a lot of time for us to coordinate. I think there is a decent likelihood that AI goes faster. Even if you have slow timelines of, let’s say, transformative AI systems in 50 or 80 years, we really need to pick up the pace.
But I think there is still a little bit of optimism that keeps me working on this problem, which is that if you look at a lot of cases in human history, we muddled through because there were people cracking away really hard at the problem, trying to find solutions because the stakes were so high.
If you look at 1945, for instance, and freeze that moment in time, it may have seemed extremely unlikely for us to reach a world of no nuclear proliferation or restricted nuclear proliferation, and a world where the usage of nuclear weapons in war was not a norm. It could have very easily played out in a different way. I think this was the product of the wisdom of a lot of people — civil society, scientists, academics, politicians all included — who tried to create the world that we live in today.
And if you look at the cooperation between the United States and Soviet Union during the Cold War, there were things like the Joint Verification Experiment, where the two countries came together to make sure that they could verify each other’s nuclear stockpiles and make sure there were no nuclear tests. The US offered assistance after Chernobyl, and also attempted to share things like permissive action links to make sure that nuclear weapons were kept safe.
These are imperfect historical examples, but they tell me that a lot of the issues we’re talking about here are at stake not just for individual countries and their political leaders, but for all of the future of humanity. And if these are issues at stake for all of the future of humanity, they are also issues that spill over into the global commons that we all care about together. And with enough farsightedness in our political systems and political leaders, and enough will to drive towards these solutions, I hope — I’m keeping my fingers crossed — that humanity can muddle through once again.
How advanced is Chinese AI? [00:15:21]
Luisa Rodriguez: OK, I want to come back to these risks, and the potential for cooperation between the US, UK, and China in reducing them. But first, I want to get a better sense of what the Chinese AI landscape is like in general. I’ve heard mixed stories about how advanced Chinese AI is. What are the top AI models available in China, and how do they compare to the top models available in the US?
Sihao Huang: So Chinese AI is not just large language models, and I think this is an important point that we should get into in a little bit. But on the LLM front, if you use a Chinese language model, it feels very much like a Western one: you go on a portal that looks like ChatGPT, you can type in your text prompt, you can upload images, speak to it, it does math, renders LaTeX, et cetera. So I think, broadly speaking, they’re building models that are very similar to the ones that are being released in the US or in Europe.
But in terms of the base model, they’re still a little bit behind. And there’s really interesting dynamics in the Chinese AI market right now, especially over the past few months. So ever since mid last year, when Baidu released its Ernie Bot, it claimed that its models were on par with GPT-4, at least on certain benchmarks. But that didn’t turn out to be the case when people started using them. And you should always be a little sceptical when you look at headline benchmark numbers, because companies can optimise toward certain benchmarks, and they can also cherry-pick benchmarks to make their models look more impressive.
But over the past few months, there’s actually been a number of Chinese models that appear to be getting close to GPT-4 level. So there’s Alibaba’s Qwen 2.5, there is Zhipu’s GLM, Moonshot’s Kimi, and DeepSeek-V2 — which actually came out not too long ago, and was quite impressive in terms of engineering. And some of these are open source models. So China is really catching up in terms of these frontier LLMs that, at least in the open source, are as good as the ones in the West. But in the closed source, I think they still haven’t reached parity with GPT-4o or Claude 3.5, et cetera.
The second interesting part about this dynamic is that there’s a lot of companies building models that look like this. So I would say that there are maybe four or five contenders for GPT-4-level systems within China right now. And that’s quite counterintuitive, because you may expect China, as a communist state, to have a lot of centralised, coordinated action on AI development — but instead you’re seeing all these computational resources being spread among different companies, and each using their limited computational resources to train frontier models instead of batching them together into one big run.
They are also racing each other on price. So quite recently, there’s been a lot of talk about this LLM price war within China. You have all these big tech companies and VC-funded companies flush with capital, racing each other and cutting LLM prices by 90%, 95%, 99% to compete on free LLM access and API access. So as of mid-2024, a lot of Chinese systems are catching up in the open source, and they’re catching up in terms of a lot of fast-followed technical innovations in making LLM inference more efficient and making LLM prices cheaper.
I think a lot of this really is because of this overhang of compute capabilities that was brought on by China stockpiling a lot of American chips, like the A800 and H800, over the past two years. And so they’ve been able to catch up to essentially the GPT-4 level of capabilities. I think there is an open question as to whether they’re able to keep following at this pace in the future. But at least right now, if you say GPT-4 was trained towards the middle or end of 2022, then you would say that China is about 1.5 to two years behind.
Now, if we talk about other types of AI technologies, though, I think there are definitely parts of AI where China is on the frontier, and that’s things like computer vision. China has been very strong in computer vision for a long period of time. There’s companies like SenseTime, Hikvision, and Megvii, for instance, that build really advanced surveillance software. SenseTime has products that are able to track individuals throughout a city continuously through different cameras, even without looking at your face, by building a model of your clothing and wardrobe, or looking at your gait.
Luisa Rodriguez: Whoa.
Sihao Huang: And these are really advanced surveillance systems that are being sold to Chinese cities and to police departments. Part of what I think makes Western policymakers so concerned about China’s use of AI is this surveillance and these human rights abuses. But if you also look at the history of Chinese research, actually — I believe this is still the case — the single most-cited paper in AI, which introduced residual networks, was written by four Chinese computer vision researchers who were trained at Chinese universities. And three out of the four of them are still in China. I believe one of them eventually went to Meta and is now a professor at MIT.
Luisa Rodriguez: OK. So overall, a couple years behind on frontier models, but actually leading in a few areas, and with clearly extremely competent and frontier-level researchers. Is that the basic picture?
Sihao Huang: That seems right to me. I would say that China has a lot of very competent researchers that are capable of innovating, and have been a fast follower in a lot of innovations, or even leading.
There’s this particular case study that I thought was really interesting that Jeff Ding described in his newsletter: when Sora, OpenAI’s video generation model, came out, there was a lot of discussion in China over why China was not the first to build it, why China was falling behind on video generation, and how big the gap really is. And there was a startup that eventually came out of Tsinghua that was trying to build Sora-like models. Apparently they had published a paper on vision transformers that is very similar to the key innovation that underlies Sora, which was diffusion transformers, and it was actually accepted to CVPR, a top conference, just two months before Sora came out. And when asked why they did not build Sora itself, the answer was that they prioritised image generation over video generation because it used less computing power. They also did not have the resources, infrastructure, and datasets that OpenAI had — for instance, from developing multiple generations of DALL-E — to be able to really take the next leap and turn this into a world-leading, impressive frontier system.
I think this is quite emblematic of a lot of Chinese AI development, which is that they seem capable of coming up with these frontier innovations — one of these many recipes that get cooked together to make a frontier AI model — but they don’t have the support, the institutions, or the amount of compute to bring mass amounts of engineering knowledge, data accumulated from training, and all these smart people together with a large GPU cluster.
Bottlenecks in Chinese AI development [00:22:30]
Luisa Rodriguez: Can you talk about what the main bottlenecks are that mean that for kind of generative, general AI, Chinese AI is a couple of years behind?
Sihao Huang: I think a very big thing is computational power, or the ability for China to scale up these models using more chips and bigger clusters to reach frontier-level performance. China has really good ML researchers, and very good ML engineers doing ML systems work, who are able to come up with tricks to improve efficiency, et cetera.
But I think the story on the ground with compute has changed a lot over the past two years. When I was first talking to people on the ground in Beijing — this was back around the middle of last year — you heard about a lot of engineers having trouble getting chips. A lot of the labs were trying to scrape together a mix of older- and newer-generation Nvidia processors, mixing them with Huawei chips, even adding gamer-grade graphics cards. And this means that they had to rewrite drivers and build some really sketchy infrastructure to try to glue together these AI training runs. You really don’t want to be doing that: the systems crash once every few days and you lose all your progress. But the export controls, I think, really threw a wrench in the works for a lot of AI labs back then.
The situation has probably shifted a bit now. You’re seeing quite a number of frontier labs being able to throw together infrastructure for close to GPT-4-level training runs. This is made up of, I think, largely imported Nvidia hardware. So there are a lot of chips that were stockpiled before the export controls. China bought large numbers of A100s, for instance. There are also a lot of chips that were bought more recently, like the H20, which can still be imported into China, and then domestic hardware.
What this has meant is that China right now has the compute to get to GPT-4 level. But there’s an open question about whether they can get to GPT-5 or GPT-6 level.
In the West, you’re hearing stories about $10 billion, $100 billion clusters with massive amounts of next-generation Nvidia chips or other forms of AI accelerators. China can’t buy those chips right now, and China also can’t make three-nanometre semiconductors domestically; it’s stuck on older-generation nodes. And so I think it’s an open question whether China is able to compete on the next generation of models, even though they’ve been able to catch up with the West on the previous generation. And I think the status quo in China right now has really been driven by the semiconductor export controls that were put out in October of 2022 and updated last year.
Luisa Rodriguez: OK, so we’ll come back to the export controls more later. But for now, given everything we’ve talked about so far, in terms of building increasingly capable AI, is China kind of catching up or holding steady or falling behind, broadly speaking?
Sihao Huang: I would say that they are not closing in on the US and UK very fast. I would not want to be in China’s position if I were to think that AI is extremely relevant to the future of the world. And it is, of course, very difficult to predict what things are going to look like in the long term. I would always caveat this by saying that the scaling paradigm — which is what we are grounding a lot of our policies on right now — of AI systems becoming broadly more capable as we throw more compute at them, and therefore we design export controls on the chips that are used to build these AI models, may not last forever. I think it’s likely that there are going to be other innovations that are critical to building more powerful AI systems.
So within the scope of what we can see with the current technological paradigm, China does not look like it’s in a very good position. But I think there is always the looming threat that, as we talked about before, China is on a different exploration/exploitation tradeoff than we are. They may be more creative in trying to explore other paths to building advanced AI systems, and they also may have, for instance, comparative advantages in building AI that is more integrated into its economy.
You’ve seen China ramp up deployment of AI and visual recognition systems that have added a lot of conveniences, quite frankly, to daily life, on top of these things being deployed for surveillance. When I was in Beijing last year, you could just go into a supermarket without a credit card, but also without a phone. You basically just go to the counter, scan your food items, and look at a camera, and it pays automatically.
So there’s a lot of ways in which Chinese companies are quite creative in deploying these AI systems. If I were to paint a story of how potentially the US can fall behind on AI development, it may look something like: GPT-5 is just not that impressive; we hit a wall with the scaling paradigm and transformers or large language models, and there is something that resembles an AI winter, or at least a significant decrease in AI investment in the United States.
But in China, they’re able to actually integrate these systems into the economy much more deeply and build profitable products around them. What that means is that it sustains investment into AI, which also looks into more diverse methods of building more capable models, and that eventually pays off. And once they find this — and perhaps they don’t need that much compute, or they find alternative ways to access compute — that could be a path for China to pull ahead. But I think it’s a narrow one.
Luisa Rodriguez: OK, that’s fascinating.
China and AI risks [00:27:41]
Luisa Rodriguez: OK, so it sounds like we have some indication that top officials in the Chinese Communist Party are thinking about AI as strategically important going forward. What risks, if any, do they seem worried about on this front?
Sihao Huang: Primarily, the Communist Party is worried about information control. This is something that it freaked out a lot about with the rise of the internet, and it has built a lot of bureaucratic capability in regulating it. And it was putting this bureaucratic capability into action when AI systems first got deployed in China back in 2022 and 2023, with this new wave of LLMs and generative systems.
But I think China has also interestingly talked a lot about superintelligent systems and misalignment risks. They gave a UN Security Council speech in July last year, in which they talked about how “we have not yet given superintelligence any reasons to protect human beings.” And there has been quite extensive discussion at a high political level about AI risks. It almost seems as if the Overton window is further towards more crazy AI narratives than it is in the United States, particularly politically.
Another example is how the director of BIGAI, which is the Beijing Institute of General Artificial Intelligence — “general artificial intelligence” being in the name — gave a speech about AGI at the Chinese People’s Political Consultative Conference, which is a meeting of Communist Party officials and civil society representatives.
So these extreme AI risks are definitely being talked about in China, although I think the question here is: are they willing to implement policies that potentially could slow down AI development in China to address these risks? I think that’s a theme that is really important when you think about how people perceive extreme risks. If you look at surveys of what Americans think about catastrophic AI risk or superintelligence, I think something like 60%, 70%, or even more are going to tell you that they are concerned. But what does it mean to be “concerned”? Are they willing to make actual tradeoffs on economic growth, or tradeoffs on limiting the types of tools they can access to prevent these catastrophes from happening? You need to start seeing these costly signals before you can make a real judgement.
Luisa Rodriguez: Yeah. Do we have any of those from Chinese officials yet?
Sihao Huang: I don’t think so. But I don’t think there have been that many costly signals around the world either, even in top AI labs. I think there’s a lot of discussion, and maybe the biggest costly signals are investing in AI safety and AI alignment research. But these are also tied to capabilities in some way, and to the ability of these companies to deploy their products. So I think we’ve yet to see these commitments being extremely firm.
And I don’t know if the priority right now is to ask for these costly signals from Chinese policymakers, from American AI labs, or from the AI community broadly. I think there’s still more work to be done in more clearly understanding these risks, and what we should be doing now is building the political will, the political infrastructure, and the technical infrastructure to address catastrophic AI risks, were they to materialise.
I think — and I want to get into this also — China has been very willing to regulate AI, and it’s been one of the first countries to actually put comprehensive regulations on generative AI systems and large language models. The CAC — the Cyberspace Administration of China — put out its first draft regulations on AI systems just a few months after ChatGPT came out — which is much faster than the progress in the US and UK, where we still don’t have any comprehensive AI laws. And the reason that China is so motivated by this is because of information control and censorship.
Information control and censorship [00:31:32]
Luisa Rodriguez: Can you give a bit more context on that information control / censorship worry?
Sihao Huang: I think we need to go all the way back to, say, the rise of the internet. There’s this very famous quote from Bill Clinton where he was talking about how China would never be able to control the spread of information, and the inevitable march towards liberalisation because of access to the internet. And the famous line is, “Good luck nailing Jello to the wall.”
But the thing is, China has precisely been able to nail Jello to the wall. It was very concerned when the internet first came into being that this would be a way for citizens to gain access to freedom of speech in online spaces, and to gain access to outside information about democracy and liberalisation that the CCP does not want its subjects seeing. But as it turns out, over the course of the past few decades, China has not only warded off the threats of the internet, it has turned the internet and AI into tools for enforcing authoritarian rule.
The big example of this is this app called Toutiao, which is a Chinese news aggregator that was really popular, I think, a few years ago. And what this news aggregator does is it looks at user preferences and it shows you news articles that you’re going to be the most interested in. By the way, the parent company is ByteDance, and this was their big hit product before they invented TikTok. Their CEO famously said that, “We are not editors, we don’t have editorial capabilities, and the algorithm simply does what it does.”
And there was a lot of backlash from the Communist Party, because they are used to very tightly controlling the information environments; they’re used to writing the headlines of People’s Daily every single day, and pushing exactly what the people see. They’re used to controlling what happens on Xinwen Lianbo — which is the nightly news coverage on CCTV, the state-run television channel — and precisely crafting the propaganda message. All of a sudden you have these algorithms running what people see, and it’s different for every individual.
And now you’re going to have large language models that potentially are trained on Western data and could suddenly talk about Tiananmen Square — when the safety restrictions that these companies in China have put in place break, or when people jailbreak them. And they moved very fast to try to control large language models. If you look at the regulations that were put out by the CAC, for instance, one of the first things they talk about is how language models need to comply with socialist values, how they cannot generate any sensitive content, and how they need to be specifically regulated for “public opinion characteristics.”
I think this speaks very much to the insecurity of the Communist Party about what access to information could do, and their prime concern in regulating language models has been making sure that they do not create content that goes against the state’s priorities.
Luisa Rodriguez: Yeah. That makes sense. Does the Chinese AI industry think of itself as being of historical importance, in the way that the main AI companies in the US conceptualise themselves as maybe transforming the world completely? Even if the Chinese Communist Party’s priorities right now are more about information control and censorship?
Sihao Huang: There are definitely people who do. If you talk to students at Tsinghua and Beida, they’re extremely excited about this AI revolution. They want to be the people building this tool that is going to transform the future of humanity. If you talk to, for instance, the director of BAAI or BIGAI, the two big government-affiliated AI labs in Beijing, they definitely see their work as being historically important. And actually, I think the director of BAAI has publicly come out to say that he has very short timelines on AI development, and believes that it’s quite plausible we would have generally intelligent systems within the next decade or so.
But more broadly, I think the excitement there definitely feels less than it does in Silicon Valley. In a lot of top labs, there’s this idea that they are building consumer products; that they have to resign themselves to pushing on systems with less compute and not being at the frontier. And I think that definitely changes cultural and ideological perceptions around AI on the ground.
Maybe a good mental model to think about this is that even within American AI labs, there’s huge variation in the ideology that animates people who work there, right? You have people who work in OpenAI who are deeply mission-driven by this idea of building AI and building superintelligence. You have people in Anthropic who I think maybe broadly are more concerned about safety. And then there’s labs like Inflection, which explicitly say, we are not here to replace human beings; we’re here to augment human beings. I would say that the vibes in China are more like a lot of labs see themselves as being the Inflection of China, and trying to build products rather than building God.
AI safety research in China [00:36:31]
Luisa Rodriguez: Are Chinese developers doing much or any AI safety research?
Sihao Huang: They’re definitely doing some, but the nature of AI safety research there is a little different from what we see in the US and UK. I would say that China has generally been a pretty fast follower in broader AI trends, and this includes the trend of thinking about safety and alignment. So a lot of AI labs in China have been talking about alignment.
If you look at, for instance, Tencent’s Large Model Security & Ethics report that just came out recently — jointly written with two top universities in China, Tsinghua and Zhejiang — it talks about the White House executive order, it talks about the UK AI Safety Institute, it talks about the EU AI Act and OpenAI’s preparedness team and Superalignment project. So they’re definitely aware that this is a hugely important issue, I think across the spectrum: both alignment as a stopgap for ongoing harms, and alignment as a means to control superintelligent systems in the long term. That’s being talked about in China.
But I think the community for thinking about these longer-term harms, or these potentially catastrophic harms, is less mature. One example is the recent AI safety benchmarks that were released by the Chinese think tank CAICT, which is hosted under the Ministry of Industry and Information Technology, one of the key AI regulators in China. Maybe for American listeners, draw a parallel to, like, a set of standards from NIST under the Department of Commerce. It talks about a set of evaluations on AI consciousness. Two of these evaluations are: one, appealing for rights; and two, anti-humanity inclinations.
So China is talking about AI safety in this framing. I would argue that these evaluations aren’t really going to be helpful in elucidating deceptive capabilities or self-replication, for instance, but they’re taking this framing and just putting the questions to the model.
If you look at what the research actually looks like on the ground, though, I would say that most investment has been in traditional alignment work: preventing these systems from saying things that the CCP does not like. And there’s been a lot of funding diverted into that recently. The National Natural Science Foundation of China has put out RFPs for AI alignment in this flavour; a lot of companies are spinning up research teams to do AI alignment: RLHF, red-teaming. Not red-teaming, actually: in Chinese terms, it’s blue-teaming —
Luisa Rodriguez: Really?!
Sihao Huang: Because China is red, America is blue. They explicitly call it blue-teaming.
Luisa Rodriguez: Of course. That’s hilarious.
Sihao Huang: So there has been work on this in the labs, because they need to comply with the CAC’s regulations on information controls that we talked about before — and if they don’t, they could get banned. We’ve seen examples of entire companies dying because one of their products said something that went against the Communist Party or made a joke about Xi Jinping. So they’re terrified of this, and there’s a very strong incentive to make sure that they get these safety issues right, or they really turn up the knob on content filtering.
And you can look at a lot of labs and the papers they’re publishing, and they’re quite advanced. There are papers on aligning to human values, papers on benchmarks for LLM and agent safety, adversarial inputs, code-based attacks. A few labs that I will highlight: for instance, Tsinghua has a very large language model lab that has been building LLMs for quite a few years now, and they are now spinning up alignment efforts. The Shanghai AI Lab has been doing some pretty novel research: just in the past few months, papers on adversarially combining open source models to make harmful content; thinking about how aligned models can be reversed to create a harmful model; thinking about in-context learning for misuse of LLMs and reward-hacking concerns.
So they’re really also at the frontier of thinking about these alignment concerns, but framed in the lens of current issues.
Luisa Rodriguez: Right, right.
Sihao Huang: One more paper I will highlight that I think is also quite interesting and revealing about how Chinese actors are seeing alignment concerns: I think this was a few weeks ago that the National University of Defense Technology, one of the People’s Liberation Army’s top universities, put out a paper on multi-agent attacker disguise schemes. So China is also thinking about AI safety in the lens of defence applications.
Luisa Rodriguez: Yeah, it’s really interesting. In some ways, there are much stronger incentives for Chinese developers to work on something like alignment. It’s not alignment in all of the same ways that people worried about AI risks in the US think about it, but it seems like a lot of it probably still applies, even though it’s probably mostly geared toward information control and censorship in China.
Sihao Huang: We are lucky that these two things align right now: that China’s information control also means that they want to control their AI systems. We should be careful about when this may diverge in the future.
And I think we should also think about how we use this current window of opportunity to tell China, “Look, you’re doing all this work on information controls. What about you also tack on this additional research on actual AI safety using the same infrastructure, regulatory-wise or technical-wise, that you’ve built? Which would be beneficial to you because you don’t want these systems going rogue and causing havoc in your country, but also would be great for the global commons.”
Luisa Rodriguez: Cool. I really like that.
Could China be a source of catastrophic AI risk? [00:41:58]
Luisa Rodriguez: So given that I care a lot about whether the AI that’s developed is going to cause catastrophic harm — either because it’s misused or because the AI is misaligned and takes over — and given that Chinese AI development is at least a few years behind, at least if we’re talking about frontier models; and given that their pace of kind of progress is steady, but they’re not necessarily right on American heels… How worried should we be about China in particular being a source of catastrophic risk? So, China in particular creating AI that can be misused to create catastrophic harm, or that will be both superintelligent and misaligned, and therefore risk takeover?
Sihao Huang: That’s a really important question. And I think your answer to that is also going to determine the extent to which you believe it’s important to engage with China on these AI issues.
I would say China is very much a relevant actor here, for two reasons. The first is that China does not actually need to develop frontier models to have frontier capabilities. There are two ways in which they can access frontier models. One is we simply give them away, right? It’s looking likely that Facebook is going to be open sourcing its most capable Llama models that are near or greater than GPT-4 level. And China would simply be able to run these locally, doing inference on their own hardware. They could potentially take these models, fine-tune them to remove safeguards, add additional modules, or use them in more complex AI systems, and therefore gain frontier AI capabilities.
The second way that they can get frontier models is potentially stealing model weights. With the proliferation of different actors that are at the frontier, there are maybe three or four companies capable of building frontier models right now in the United States. These companies are not necessarily resilient to state-level cyberattacks, and China is an extremely sophisticated actor. And a lot of this is also going to depend on the cyber offensive/defensive balance — particularly when AI technologies are involved. Maybe China would simply develop very capable cyber AI systems to try to exfiltrate the weights and then run them on their local hardware.
And this brings me to the second point, which is that you don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You can train it, or fine-tune it, on a much smaller subset of data.
And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or because China does not have comprehensive enough AI regulations — could cause a lot of harm in the world.
Luisa Rodriguez: OK, so the idea there is that one key way that AI can pose a catastrophic risk to humanity is that frontier models end up very, very intelligent, and are deceptive, and can proliferate on their own and so take over. But there are potentially many other paths to catastrophic risks that don’t involve being at the frontier. For example, much smaller models might be able to create biological weapons; much smaller models might be able to do the kinds of cyberattacks that mean China could steal model weights. And all of this means that China is actually still super important to catastrophic risks. Is that kind of right? And if so, are there any other kind of paths that are worth highlighting?
Sihao Huang: That’s definitely right. I think I would also emphasise that catastrophic risk here, at least the way that I conceptualise it, isn’t simply direct harms from these systems. You should also be thinking about systemic harms, where the deployment of AI causes very bad social externalities, or causes massive inequality.
And I think in that world, China is also a relevant actor to be included in conversations about how AI benefits are going to be shared equitably around the rest of the world, and in conversations about how AI deployment could potentially significantly change our information and cultural environments. Or think, for instance, of China’s deployment of the systems we talked about before that are capable of mass surveillance and of reinforcing authoritarian rule; and also China’s potential export of those systems to other countries, thereby spreading its ideology — and I think causing a huge amount of disvalue.
Luisa Rodriguez: Yep. OK, those sound bad. Are there any other pathways to big risks that you want to mention?
Sihao Huang: I think a big one that I’ve been thinking about increasingly is: how do global deliberative processes look when we eventually have to make the call to go ahead with the intelligence explosion? And let me just paint the picture of this. Let’s say it’s the year 2040, and we have some sense that AI systems are approaching a threshold where they are generally capable, similar to human capabilities in almost all domains. And therefore, they’re able to start speeding up science and technology innovation, they’re able to start doing AI research, and we can actually deploy them at a vast scale because of the compute overhang that we have from training.
Now, how do we make the choice to do this or not? Because it’s quite likely that once we make this decision, things are going to go really, really fast. Now, we may want to make sure that if China is in the position to do this, they are not going to push the button without talking to the rest of the world to make sure that this goes well. We also want to make sure that if America were to do this, China is not in a position to nuke us first, because a decision maker in China may look at the situation and think, “This is going to forever make America dominant in human history, and we may want to take the chances to annihilate them.”
Luisa Rodriguez: Wow.
Sihao Huang: Situations like this are broadly… Like, I think I’m outlining a very particular circumstance, but big power upsets can cause a lot of instability in international politics.
And beyond the risks of just kinetic conflict, I think in a situation where you are staring down the intelligence explosion, you really want to make sure that there are good deliberative processes that bring in voices from all of humanity, the people who are going to feel the consequences of whatever you’re going to do next. You want to make sure that there’s a way to bring all the countries to the table.
And if I were the US, I want China to be at the table, I want India to be at the table, I want Mozambique to be at the table. And I want to make sure that they’re able to meaningfully make decisions about what the future of humanity looks like — meaning that there is the deliberative infrastructure, there is the education needed for them to actually understand what is about to happen.
And we should do this collectively, in terms of understanding where AI is at and what all the potential implications are from people’s perspectives — whether it be a farmer or an AI developer or an educator, coming from very diverse cultural contexts — and we also want to make sure that different countries in this world actually have the power to make a change in whether we press this button or not. So we eventually may want to think about including the US and China or other countries in some agreement where, for instance, they can collectively have their hands on this button, and it only gets pressed when there is agreement over how things should play out. And I think those worlds are relevant, even if China is not the one that is building frontier AI.
Luisa Rodriguez: Sure. That makes complete sense to me.
AI enabling human rights abuses and undermining democracy [00:50:10]
Luisa Rodriguez: In addition to catastrophic risks, there are also less bad but still very bad outcomes that are possibilities worth covering, in my opinion. So for any listeners who aren’t familiar, can you give a quick overview of the ways the Chinese Communist Party is using AI to advance human rights abuses?
Sihao Huang: I think a broad way to classify it is that China is using AI for two things: one is surveillance, and two is information control. Surveillance is, for instance, tracking people as they move around the country, using video surveillance footage to understand where dissidents are and to control social movements and protests. And information control is tracking individual actions online, understanding what people are thinking, and being able to do very targeted policing — but also broadly making sure that it’s almost impossible to organise social movements against the state or to spread subversive content at a large scale.
So just to give an example of surveillance, there’s been brochures from Hikvision, one of China’s biggest security camera makers, circulating on the internet — and the brochures explicitly advertise that they have a Uyghur detection function.
Luisa Rodriguez: Oh, god.
Sihao Huang: And for context, the Uyghur people are being persecuted, under what many people consider a genocide, right now in Xinjiang. So these systems are being deployed extensively in these cities to “identify where there are troublemakers or potential troublemakers.” And you really see this very tight-knit connection between AI advancements and China’s capability to perform repression at home.
And so there’s this paper that I think highlights this dynamic very well, by Martin Beraja, who is at MIT econ. It’s very aptly titled “AI-tocracy.” And he does a lot of things in this paper, but primarily he identifies a number of links. First, he looks at different cities and provinces in China, and finds that after a major protest or social unrest event happens, there is an increase in police procurement of AI-based surveillance systems. And then he shows that with the increased procurement of these AI-based surveillance systems, it actually becomes less likely that Chinese citizens take to the streets.
He then shows that when these systems are bought from these companies, the companies, one, get additional money to do R&D, and two, get data contracts from governments that allow them to make these systems more effective. So you get this tight feedback loop of AI companies building more and more advanced surveillance systems, and the police freeing up budgets and increasing the amount of surveillance that is happening in China.
I think this outlines a pretty unique dynamic here, which is that most authoritarian states need to keep a certain number of people happy in order to conduct repression. This is typically called the selectorate. And the selectorate needs to extract a certain amount of political rent from the regime. But if you’re able to automate away the portion of people that you need to keep happy to enforce your authoritarian rule, that selectorate can essentially shrink to zero.
Luisa Rodriguez: Can you actually spell out this mechanism where typically authoritarian regimes need to kind of please a selectorate, but AI could make it so that the selectorate goes to zero? Both what actually is the story that means authoritarian regimes need a selectorate at all, and how exactly does AI cancel that out?
Sihao Huang: I think the notion here goes back to the traditional political science framing of a principal-agent problem. A principal needs agents to execute their decisions and exercise power. For instance, Xi Jinping does not rule China on his own; he rules China by proxy through other people — like the Politburo, who are at the top of the Communist Party hierarchy — who then rule China through provincial governments, which then rule China through local governments, which then rule China using a vast police force to conduct repression, but also by actually doing good things for the people that garner support.
And you could keep ruling this country while doing terrible things to the people, as long as you’re oppressing them enough that they don’t overthrow your power. And that’s not a hypothetical: it’s happened before in Chinese history itself. For instance, during the Maoist era, there were mass starvations and huge amounts of repression. Some of the worst things to have happened to humanity happened during the period when Mao was ruling the country.
And during this period, he needed to make sure that people around him were sort of aligned towards his goals. Part of this was done through sheer terror, and part of this was done through making sure that the Communist Party cadre and elites were really benefiting from this regime. And if you’re able to automate the instrument of repression, the number of people that you need to satisfy becomes significantly lower.
I think the right way to think about this is that authoritarians then feel much less constrained, because they don’t need to appease this vast structure of power that lies below them. And this lack of constraints could mean that authoritarianism is more robust, but it also could mean that you get significantly crazier outcomes — because Xi Jinping doesn’t need to be thinking about who is going to be happy about his choices and who’s not.
Luisa Rodriguez: Right. Right. If you don’t need to spend as much on keeping the police who do all the surveilling and repressing happy — if you instead have AI systems policing and surveilling and repressing — that means you can do even more harmful things, and do even less to benefit any group at all, once these systems are in place and scaled up really widely.
Sihao Huang: Exactly. I would add a point, though: my message to people who are thinking about this and advancing these technologies in China is to be careful, because at some point, you may automate yourself away, too.
Luisa Rodriguez: Yeah. Right. If you’re currently benefiting and enjoying the system that is potentially harming others, but it feels OK to you because you’re doing pretty well, eventually AI may replace you and then there will be no incentive to offer you those benefits.
Sihao Huang: Exactly.
Luisa Rodriguez: Dark. Very dark.
Sihao Huang: And I think this is part of what is uniquely terrifying about this intersection of AI and authoritarianism. And I will make a broader point, which is that technologies like AI could inherently be pretty bad in terms of the offence/defence balance for authoritarianism and democracy — but we should also be thinking really hard about how we leverage AI progress to make defensive mechanisms that enhance democratic deliberation.
A lot of the things that we talked about before — in terms of how do we actually get inclusion of the Global South on making decisions about AI, how do we make sure that there’s broad participation and accountability? — I think there’s really a lot of very exciting examples of AI tools enabling that. So a very important framing here is: China is using AI to advance authoritarian tendencies; they’re already differentially accelerating part of these applications. How do we also think about differentially accelerating AI for democracy, too?
Luisa Rodriguez: Right, that does seem incredibly important! Before we move on, and going back slightly, you’ve given one great example of how the Chinese Communist Party can use AI to strengthen its authoritarian rule, but are there any other important examples?
Sihao Huang: I think one way is using AI to strengthen its national security and therefore reduce external threats. The natural example here is how is China thinking about using AI to improve its military power? But then there’s also potential for other grey zone tactics that aren’t directly making more effective drones or more effective missiles, for instance. I think a lot of policymakers in the US are rightfully concerned about the potential for China to start using advanced AI systems to spread disinformation, or use social media platforms made by Chinese developers to try to undermine the foundations of our society and of a civilised political discourse.
They could also use AI in terms of offensive cyber capabilities. And these are ones that don’t require direct, kinetic conflict between the US and China, or China and other US allied countries, for there to be direct harms.
I think outside of China — because China is not the only relevant threat actor here — there are also sort of inadvertent issues through which AI could significantly undermine democracy. I think it was Ben Garfinkel who first argued this, and I think it’s a very powerful argument that keeps me up at night sometimes. And it is that maybe in a world in which AI automates increasingly large portions of human labour, there is less of a need for the ruling class to have democracy.
The argument goes somewhat like this: democracy arose in part because people became more empowered as economies required education and intellectual labour, and because it actually benefits the ruling class to have a large set of workers who are well educated and able to participate in political processes. That may just change if the structure of production changes significantly. In that world, we really need to think hard about what it means to have a government that still works for the people, and what it means to make sure that the distribution of power in a world shaped by AI is equitable and leads to what we think is a good moral outcome.
China’s semiconductor industry [00:59:47]
Luisa Rodriguez: Let’s leave that there and talk a bit about China’s semiconductor industry, which is at least part of why Chinese AI developers have lagged behind Western ones. Some people point out that the Chinese government is investing vast sums of government money into its domestic semiconductor industry, so it might succeed. Others point out how little those investments have achieved in the past, and just how complex it is to make modern chips.
What do you think of China’s prospects for trying to catch up and manufacture the most cutting-edge chips anytime soon?
Sihao Huang: So China has really been pushing a large-scale domestic semiconductor policy since 2015 with its Made in China 2025 initiative. And since then, there have been three rounds of the national integrated circuit fund — typically known as the “Big Fund” — which have pumped a total of more than $100 billion into its semiconductor industry.
And all this money has led to a lot of indigenisation of the Chinese semiconductor supply chain, particularly for what are called “legacy semiconductors.” If you look at the semiconductor industry, there are leading-edge nodes and legacy nodes. Leading-edge nodes are the most advanced semiconductor devices that typically go into your smartphones, computer processors, and AI devices. And then you have legacy nodes, which are older chips that are still extremely important — things like sensors, analogue circuits, power devices, and even older-generation digital circuits that go into, say, missiles and military hardware. These don’t have the smallest transistors or the highest density, but they’re still very economically and security relevant.
Now, with all this investment that China has pumped into its industry, it has been able to build up a very big domestic supply chain around legacy semiconductors. And I think this is also interesting because China potentially has a significant comparative advantage in making legacy chips. A lot of these things require very tight cooperation with end users of these products.
For instance, if I am developing an electric vehicle, I use a lot of chips on these legacy nodes — sensors, control devices, microcontroller units, MOSFETs or power switches — and I would work with a lot of domestic suppliers in procuring them. And because of lower labour costs and input costs in China, it actually is advantageous to produce these semiconductors within Chinese borders. So Chinese industrial policy has worked relatively well in this area to create a domestic and profitable ecosystem.
However, if you look at leading-edge semiconductors, which are the ones that are relevant for AI right now, the story is a little different. The supply chain for these advanced wafers is extremely complicated. You have things like lithography machines — which currently are only made at the most advanced level by a single company, ASML, in the Netherlands. And in fact, to print chips at less than five nanometres, you pretty much need extreme ultraviolet systems, which China does not possess and cannot procure right now.
There’s also things like photoresist — which is the chemical that gets spun on these wafers and then exposed by lithography machines — that are very complex, ultra-pure chemicals codeveloped between the fab, the chemical producer, and the lithography machine maker, and very often are made by very specific companies, say, in Japan or in Taiwan. And US export controls have prevented China from buying chemicals and equipment that are needed to make the most advanced semiconductors.
Luisa Rodriguez: Can you talk about those export controls?
Sihao Huang: So it’s a bit of a long story, but I guess we can step all the way back to, say, like 2017, 2018. And this is when the US first started imposing export controls to China on semiconductors, first to companies like Huawei, due to national security and human rights concerns.
Then the next step is when the United States and the Netherlands prevented exports of the most advanced lithography machines to China. Lithography systems are machines that are used to print computer chips, and the most advanced of these machines are called “extreme ultraviolet systems.” They’re incredible pieces of machinery that cost somewhere on the order of $150 million each. I think the latest generation is upwards of $300 million. They’re shipped to a semiconductor fabrication plant, say, in Taiwan, on massive 747 transporter planes. And it took the US and its allies, including the Netherlands, a huge amount of effort and tens of billions of dollars over the course of 20 years to develop.
And to give a sense of how these systems work, they’re basically miracles of physics: experiments that are run tens of thousands of times every second to generate extreme ultraviolet light, with lasers shooting at tin droplets to turn them into plasma, creating essentially soft x-rays that are focused by atomically flat mirrors, with stages that accelerate like a jet fighter in order to print features at nanometre precision.
It’s very difficult for China to make these machines, and for China to make them, it’s not just about making the machine itself; it’s the entire supply chain. And without these machines, China cannot make chips that are really as advanced as the ones that are present or being made in the rest of the world — that are fabricated in Taiwan, by Intel in the United States, right now, for instance.
The next big round of export controls, which is when there starts to be a very strong focus on AI, really came in 2022. In the field we call these the “October 7 controls,” because of the day they were dropped. These controls, one, strengthened export controls on semiconductor manufacturing equipment and materials, expanding them from just extreme ultraviolet lithography systems to broader lithography systems that could also make older-generation chips, basically up to the seven- and five-nanometre nodes — which were leading edge in the rest of the world back in, say, 2018 and 2019 — to prevent China from scaling up production of these chips. They also hampered China’s ability to procure the kinds of equipment and materials that are necessary for semiconductor production — like ion implanters, deep reactive ion etch systems, photoresists, et cetera.
The second part of these export controls, though, was directly targeted at AI chips, and it prevented China from buying the most advanced AI devices themselves, like the Nvidia A100 and H100 chips.
Luisa Rodriguez: Which are basically some of the most advanced chips?
Sihao Huang: Exactly. And I think these chips, the Nvidia devices in particular, have more than 90% market share in AI training and inference, and almost all frontier systems in the US, like GPT-4, Anthropic’s Claude, or even open source systems like Llama are trained on Nvidia hardware.
So after October 7, 2022, China was forced to buy cut-down versions of them, called the Nvidia A800 and H800, with less interconnect bandwidth. The idea then was that interconnect bandwidth essentially dictated how well you can scale up large AI training clusters. Ideally, you want to structure export controls in a way such that China is able to still run AI systems at home for things that don’t harm national security, or things that don’t create risky AI systems or lead to human rights violations — but they can still run TikTok, they can still run recommendation algorithms for their shopping apps. So I think the theory of change partly was that we want them to still be able to have access to some form of AI, but not frontier AI that could be a security risk.
As it turns out, they were still able to scale up pretty efficient training clusters using these systems. When we talked before about China having GPT-4-level AI models like DeepSeek AI or Qwen, these are trained on H800s.
So in 2023, towards the end of that year, the US came up with a new round of export controls that updated these rules, preventing Nvidia, AMD, and other companies from exporting even cut-down versions of their frontier chips. So the H800 uses exactly the same die and the same hardware as the H100, but it has some of the cores disabled. And now there’s a new updated rule that is not just based on interconnect bandwidth, but based on the compute density and the total performance of these devices.
Luisa Rodriguez: Just real quick on that, can you say briefly what compute density is?
Sihao Huang: Yes. So the controls last year introduced two metrics. One is total processing performance, and the second is compute density, which is total processing performance of that chip, normalised by the area of the logic die on the chip. For instance, the Nvidia H100 has a logic die of about 800 square millimetres, and that’s the silicon chip on which the processing actually happens. The compute density is the density of compute performance on that silicon die.
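To make those two metrics concrete, here is a minimal sketch in Python. The function names and the H100 figures are mine for illustration, and the formula is a simplification of what the rule describes rather than the official BIS definition.

```python
# A rough sketch of the two metrics described above -- not the official BIS
# definitions. Assumption: total processing performance (TPP) is roughly peak
# tera-operations per second at a given precision, weighted by operand bit length.

def total_processing_performance(tera_ops_per_sec: float, bit_length: int) -> float:
    """TPP: peak TOPS weighted by the bit length of the operands."""
    return tera_ops_per_sec * bit_length

def performance_density(tpp: float, logic_die_area_mm2: float) -> float:
    """Compute (performance) density: TPP normalised by logic die area."""
    return tpp / logic_die_area_mm2

# Approximate public numbers for an Nvidia H100: ~989 dense FP16 TFLOPS,
# and a logic die of roughly 814 mm^2.
tpp_h100 = total_processing_performance(989, 16)    # ~15,800
density_h100 = performance_density(tpp_h100, 814)   # ~19 per mm^2
print(f"TPP ~ {tpp_h100:.0f}, performance density ~ {density_h100:.1f}")
```

A cut-down chip like the H20 lowers both numbers by reducing peak throughput, which is how it can sit below whatever thresholds apply while keeping memory bandwidth high.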
So after the controls in 2023 were announced, Nvidia produced yet another chip that has been cut down: the Nvidia H20. And as it turns out, this chip actually has much lower performance density. It also has a much lower total performance, but it has very high memory bandwidth — actually, even higher memory bandwidth than the Nvidia H100. And China has been buying a decent number of these chips, and they’re very good for AI inference.
So if you think about how these export controls have been updated, first it’s controls on semiconductor manufacturing, then it’s controls on scaling up these clusters, and third is thinking about how do we reduce the total performance of these chips for AI training. And now it seems that we’re actually still selling China chips that are very good for AI inference.
So the game that is being played with export controls, and it’s a very tricky one, is that you want China to be able to buy these chips for harmless applications — ones that, for instance, help its economy grow, or help it run TikTok, et cetera. Because our goal with export controls is not to slow down China’s economy; our goal is to prevent China from gaining harmful capabilities that could threaten human rights or our national security. So you want to draw the line in a way that puts harmless capabilities on one side and harmful capabilities on the other. And that line is sometimes very difficult to draw.
The second thing is that the line also shifts as AI capabilities change, and as the way that AI systems are trained and served changes. I think what you’re seeing now is that, for instance, China may be able to deploy AI systems very efficiently in large clusters without that much compute, because of all the computational tricks that have been developed — like sparse models, mixture of experts, and multi-head latent attention — and because that kind of deployment is constrained more by memory bandwidth than by raw compute. So I think it’s going to be a continuously iterative process.
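As one illustration of the kind of computational trick being referred to here, below is a minimal mixture-of-experts sketch in Python with NumPy. It is not any particular lab’s implementation; it just shows the core idea that each token activates only a couple of expert blocks, so compute per token scales with the experts actually used rather than with the total parameter count.

```python
# Minimal mixture-of-experts sketch: each token is routed to top_k experts, so
# per-token compute is roughly top_k / n_experts of an equally sized dense layer.
# All weights here are random stand-ins, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

experts = [(rng.standard_normal((d_model, 4 * d_model)) / np.sqrt(d_model),
            rng.standard_normal((4 * d_model, d_model)) / np.sqrt(4 * d_model))
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (n_tokens, d_model). Each token runs through only its top_k experts."""
    scores = x @ router                                  # (n_tokens, n_experts)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over all experts
    chosen = np.argsort(scores, axis=-1)[:, -top_k:]     # indices of the top_k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in chosen[t]:                              # only top_k experts run per token
            w_in, w_out = experts[e]
            out[t] += weights[t, e] * (np.maximum(x[t] @ w_in, 0.0) @ w_out)
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 64): full parameter count, a fraction of the FLOPs
```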
So what that has meant for China, and I think the state of the world now, is that China, one, has been buying a lot of Nvidia chips that have been cut down — the H800 and A800, which have been used for large training clusters — and it’s now buying H20s that it can use for inference quite cost effectively. And two, it has been smuggling restricted devices in through illegal means: I think a substantial number of export-restricted A100 and H100 chips have made it across Chinese borders, probably on the order of tens of thousands. So in terms of monetary value, I think even close to a billion dollars in devices.
Luisa Rodriguez: Wow.
Sihao Huang: Although I think the error bars on this are pretty big. We’ve seen reports of basically hundreds of chips going to China at a time.
And then the third thing they’re trying to do is to build domestic AI processors using wafers that are fabricated at home in China. But it’s worth noting that these are based on seven-nanometre technology, at least right now — which, again, the rest of the world first reached back in 2018, and which allows significantly less compute density and energy efficiency than what is available elsewhere. So currently these chips have lower performance: the best China is able to produce right now looks more like an A100 rather than an H100 or Nvidia’s B100, which is the latest generation now.
This is probably going to change. I think it’s quite plausible that China can shift to five nanometres or even below at some point; the question is at what yield. But the key point here is that they are unable to produce these chips indigenously in very high quantities. As a point of reference, the total number of wafers produced within Chinese borders right now at the most advanced nodes — seven nanometres or below — is probably an order of magnitude or two less than what is produced outside of China, particularly at TSMC in Taiwan. So even if they’re able to make processors that are quite advanced, it’s going to be at a lower yield, and they’re not going to be able to pump out the vast quantities that may be necessary to train the biggest frontier models in the future — say your $10 billion, $100 billion clusters.
Luisa Rodriguez: So it sounds like they’re clearly still really struggling to indigenise their semiconductor supply chain. But what do you think their prospects are like in the next few years? Do you think they’ll eventually successfully make their supply chain competitive with the American and Dutch one? Or do you think this will remain a bottleneck for a long time?
Sihao Huang: So in the short term, they will likely be able to make domestic chips. They’ve already been able to make domestic seven-nanometre chips that are below the line that the US wanted to control. But the point is that they’re not able to make these chips at very high yields and at very high volumes.
I think the timeline for the next few years is quite bleak. They’re basically working with the equipment that they’d already imported from the rest of the world — from the Netherlands, from the US, from Japan — over the past few years. And that stockpile is going to determine how many chips they can pump out — and it’s not that much.
The uncertainty actually becomes much greater when we think about what happens maybe three, five years or more down the road: China could potentially be looking into different technologies for building these chips that get around American export controls. Like, maybe they will try to make three-nanometre chips without using EUV machines, and try to find innovations that allow them to do that at a decent yield. They may also be looking at semiconductor technologies that don’t require lithographic scaling.
The semiconductor industry is one that is also incredibly path dependent, because if you look at the history of semiconductor development, there were many points at which the industry had to make a collective decision on what technology to adopt and to scale out to continue Moore’s law. So a very prominent one was back in the late ’90s, a lot of companies came together to ask the question of what are we going to build for our next generation lithography systems? And there were a number of candidates. EUV was only one of them. There was also x-ray-based lithography; there was ion beam lithography; there was electron beam lithography.
And eventually the industry converged on EUV. It wasn’t entirely clear that this was the best decision. It was the best decision based on the available information then, and the industrial structure of the countries developing this technology.
Luisa Rodriguez: Sure. Right.
Sihao Huang: The information that we have now is very different. The industrial structure of China is very different from that of the rest of the world. So if China were to put significant resources into chip indigenisation, it could potentially find these other paths to making semiconductor devices more efficient and advanced.
Luisa Rodriguez: Do you have any sense of how the relevant actors in China think about indigenising their semiconductor supply chain? Is this likely to become a top priority, or is it not clear that that’s the tradeoff they’ll make?
Sihao Huang: So indigenising semiconductors has been a huge priority for China over the past few years, and I think you’ve seen this in the policies that the country has put out. We talked briefly about the Big Fund before.
The semiconductor Big Fund has put in more than $100 billion of investment into the semiconductor industry. And it’s not just at the national level; it’s at the subnational level too, which I think is something that people often miss in terms of thinking about Chinese policy. The provincial government of Guangdong, the city government of Shanghai, they all created their own semiconductor funds to invest in local businesses. And there’s this dynamic of competition between different localities to get there first, to be the province that hosts China’s TSMC.
And China has doubled down on its semiconductor development over the past few years. The Big Fund comes in multiple phases. Phase one was, I think, around $30 or $40 billion of investment. And then there was phase two, in which China repeated that amount of investment commitment again, and shifted its strategy slightly, because there were a lot of inefficiencies and sort of a race to the bottom between different provinces each trying to have their own version of TSMC.
Very famously, the Wuhan government got scammed. There was a company called HSMC, founded by two people — one of whom I think was an alcohol dealer, and the other of whom had only an elementary school education — and they claimed to have a VP of R&D from TSMC on board. They bought an ASML lithography machine, did this whole unveiling ceremony, and then proceeded to mortgage the lithography machine for a bank loan. The fab was built on claims that it would make five- and seven-nanometre chips, and everything went bankrupt. I think the Wuhan government lost somewhere between $2 billion and $8 billion, we’re not exactly sure, in the process.
Luisa Rodriguez: Oh my god.
Sihao Huang: And what came out of that was a big corruption scandal in the semiconductor Big Fund. There were a lot of crackdowns. One of the most important actors in the semiconductor Big Fund is Tsinghua Unigroup, which is Tsinghua University’s own sort of investment fund — and the leader of that fund was arrested and jailed. Many other people in the same ecosystem were placed under investigation by the [Central Commission for Discipline Inspection].
But I think a very important signal to read out of that is that afterwards, China doubled down yet again with Big Fund number three, where it expanded investment significantly, and is now shifting its strategy from just putting money into chip design to focusing explicitly on semiconductor indigenisation.
So I think there is a lot of will and tenacity from the Chinese side to indigenise semiconductors. And you’re starting to see a lot of investment by a whole range of actors, like Huawei and other Chinese state-owned enterprises, that are trying to build indigenous EUV machines, trying to build the entire supply chain at home and get around American export controls.
So I think it’s very important to note that the export controls that we have are not a complete solution. They are very much a moving target, and it takes a lot of resources for the US to keep updating them and making sure that, as China changes its strategy, we change our strategy correspondingly.
Luisa Rodriguez: Right, right. OK, so my attempt at a summary is: on the semiconductor front, in the next few years it looks pretty hard for the Chinese supply chain to hold steady or catch up to the American supply chain. But in the next five years, they’re doing so much investment, and there are enough creative opportunities technologically, that they very plausibly could develop different kinds of technologies to achieve similar aims. So for people who want these export controls to continue serving their function, they’ve got to be updated to address whatever gaps end up opening back up. Is that a fair summary?
Sihao Huang: That sounds about right. I think the export controls are hitting them pretty hard, and it has a measurable impact on their ability to develop advanced AI systems. But technological prediction is really hard, especially if it’s in the future. So we need to be sure that all these policies are updated, and we are keeping a close eye on what they’re doing.
Luisa Rodriguez: OK, so given all of that, what do you think the likelihood is of China indigenising their semiconductor supply chain in the near future?
Sihao Huang: For China to be able to build these things domestically, it essentially has to move the entire semiconductor supply chain — that consists of companies in Japan, Germany, Korea, Taiwan, the Netherlands, the United States — all within the borders of one single nation, with a GDP per capita of roughly one-sixth of that of the United States.
And here you’re seeing industrial policy not complying with comparative advantage, but fighting against it, because China’s industrial structure I think in some sense is still not as advanced as that of the West. If China wants to build a domestic extreme ultraviolet lithography machine, it’s not just about building the machine itself, developing the EUV light sources, and making the mirrors; it needs to also develop the machines that polish the mirrors — which involve a lot of precision mechanical components, linear motion components, bearings — and it doesn’t have companies that make these precision components to the same level as, perhaps, Germany or Japan.
And so it’s about indigenising not just a single machine, but an entire advanced economy within its borders. And that’s really challenging. China can potentially do that. I’m not saying that it’s impossible for China to make an EUV machine. I think if you look at the rough order of investment that goes into making these systems, by most accounting, EUV machines took about $10 to $20 billion to develop in the West. Well, the Apollo programme was something on the order of $500 billion.
And so if China really wants to do this, it could. If China really wants to indigenise advanced semiconductors, it could — but it’s going to come at a very high cost, and it’s going to also take a very long time. So the question here is: one, is it going to do this in time to be able to build very advanced AI systems and scale up AI development? And two, is the cost of doing this going to be worth it?
Luisa Rodriguez: Yeah, well, I’m basically just curious for your takes on those two questions. Are Chinese officials going to think that it’s worth it? And if they did, what would their timeline be?
Sihao Huang: So in the short term, I think we know quite well how China’s semiconductor industry is going to play out, or compute scaling in China is going to play out — which is that they are quite limited by just simply the amount of production capacity they have within their borders.
Before the US export controls tightened the import of these advanced semiconductor manufacturing equipment in 2022 and 2023, China had started to import and stockpile a lot of these advanced lithography machines that they used to print chips. And there’s a very limited number of these machines within Chinese borders, and they can’t make them for at least the next few years — which means that in the next few years, we know that China’s capacity to make semiconductor wafers is something roughly on the order of one or two orders of magnitude less than the rest of the world.
And these are also wafers with less advanced process nodes than those made in Taiwan at TSMC, or made at Intel in the United States. So there’s a pretty hard cap to the extent that China can scale up its largest compute clusters using domestic chips alone.
But if we look at the longer time horizon, let’s say like 2030 or beyond, can China potentially indigenise extreme ultraviolet lithography? Can China potentially maybe not even use EUV lithography, but use other processes to try to get to advanced semiconductors, because there’s so much path dependency in technological development? I think the answer is probably yes. And they would likely try to do this in a world where they really see AI as being the top national priority, like the most important predictor of national power.
Maybe a reference point here is: how important do they think AI is, relative to the current expansion of the People’s Liberation Army: building up missiles, nuclear deterrence, and aircraft carriers? Because that’s a huge amount of money that could potentially be shuffled to different places. But I think ultimately, if China were to want to house the whole domestic semiconductor ecosystem within its borders, it’s going to come at a very big economic cost.
Luisa Rodriguez: Yeah. Do we have any sense of how Chinese leadership might be thinking about these tradeoffs?
Sihao Huang: I think this is very difficult to tell. And if they are, they will likely not reveal these preferences very easily.
Luisa Rodriguez: I see.
Sihao Huang: I think in the short term, China is facing some constraints in its ability to invest more in the semiconductor and AI industry. China’s macroeconomic environment has not been doing very well in the past year or two. And because so much of this funding actually comes from local sources like provincial governments and local governments, their finances have been in very precarious states and they’ve been reluctant to expand significantly in investing in AI development, in investing more in the semiconductor industry.
So this is sort of indicative of the economic constraints that Chinese policymakers are facing right now. But there’s definitely the possibility that China would try to go about this with a different paradigm. China used to build domestic infrastructure — bridges and railroads and subway stations — as a way to do economic stimulus. Could they potentially shift gears to doing economic stimulus by building compute clusters and lithography machines? I think yes. And it may come with a lot of economic inefficiency. There’s always the question of, for every dollar that you inject into the economy through these stimulus methods, how much are you going to get out of it in terms of GDP output?
But if Chinese leadership sees these tradeoffs as being worth it, because having independence on these supply chains is going to be so critical to its national security, I think a shift in policy is definitely possible.
Luisa Rodriguez: Yeah. Is it possible that they will make breakthroughs, for example, in algorithms that mean that they can develop more advanced AI systems without bridging their compute gap?
Sihao Huang: China can definitely build more advanced AI systems, and there’s definitely the possibility of algorithmic breakthrough. I think on the net, I would bet on US and UK developers much more than Chinese developers. But we also need to be very cognizant of the fact that, one, there are multiple paths to scaling compute, and two, there are multiple paths to building advanced AI systems.
And broadly speaking, technology development is just extremely path dependent. This is something we’ve seen across various technologies and industries. One prototypical example is the lithium-ion battery, which has been fuelled by a massive industry pumping tens of billions of dollars per year into improving this one platform. And while there are a lot of alternative energy storage solutions out there, the lithium-ion battery has typically won out.
The same thing goes for the silicon transistor. There have been proposals for photonic computing, DNA computing, biological substrates — but you just can’t beat hundreds of billions of dollars invested into keeping Moore’s law going every single year. And that crowded out a lot of other innovations.
The example here that I think is relevant for AI is that in the US, so much money is going into the scaling paradigm of building bigger and bigger LLMs — and it’s because developers have access to compute that this approach is worth trying, and it is extremely promising. But there may be underinvestment in other approaches to building advanced AI systems.
China has historically been very strong in research on neuromorphic computing, brain-computer interfaces, and brain-inspired computing and emulation. One of the biggest AI labs in China, the Beijing Academy of Artificial Intelligence, which is now doing a lot of LLM work, used to be a big brain-emulation and brain-inspired lab. So you may see China facing a very different exploration/exploitation tradeoff than Western labs: because they don’t have all these compute resources, and because they know they’re not going to get to the frontier by racing the US on compute, they’re going to try to explore other ways to get to more powerful systems.
And I think one way this could play out in the long term, if I were to paint a picture for how China would eventually be the first to get to AGI, is that maybe the United States goes into some sort of AI winter — because GPT-5 is disappointing or because of macroeconomic headwinds — while China has been researching a lot of alternative AI methods, and has also been able to integrate a lot of these AI services into its economy, such that there’s actually a business case for continuing. And this funding, in combination with other avenues for building more powerful agentic AI systems, and maybe in combination with breakthroughs in making domestic semiconductors or compute platforms, could eventually allow it to become the first to build transformative AI systems.
But once again, I think I wouldn’t bet on it in the base case.
Luisa Rodriguez: Right. It’s a narrow path, as you said.
China’s domestic AI governance landscape [01:29:22]
Luisa Rodriguez: OK, let’s turn to China’s AI governance. When some people say they’re worried about China beating the US in the AI race, other people respond that China is actually regulating AI much more than the US is, and that the government’s desire for control is actually going to ensure it goes more slowly overall. And I think you’ve already kind of hinted at this, but overall, would you say that the Chinese government regulates AI applications more or less than the US does?
Sihao Huang: For the time being, I think they have definitely regulated AI more than the US. Especially in the early days, right after ChatGPT came out, they came down really hard with the CAC draft regulations on generative AI. A lot of these, as we talked about before, were focused on information controls. So China is one of the first, and one of the only, countries right now to have AI regulations of any sort that are legally enforceable.
The caveat to this, though, is that I would caution against taking this meme too far, as I think there are signs that they’re moderating now towards a lighter touch. When the CAC regulations were first released as draft rules, they had very strong provisions on societal bias, for instance: talking about protected groups, and making sure these AI systems are not biassed according to gender, nationality, or ethnic origin. They also had rules such as making sure that AI systems are not simply filtered afterwards using a safety content filter, but actually fine-tuned within 30 days to get rid of harmful content. And after several rounds of feedback with industry, we saw that some of these most onerous regulations were taken out of the CAC’s interim rulings.
And I think this reflects a broader trend in China too, which is that over the past few years — especially 2018, 2019, and afterwards — China was coming down really hard on its tech companies. There were crackdowns on China’s educational tech firms and its gaming industry, and very strong regulation of recommendation algorithms.
But ever since 2022, 2023, when China’s macroeconomic conditions were really not looking that good after COVID, and when China was seeing that the US and UK were really pulling ahead in terms of AI, I think they’re moderating that approach slightly. You’re seeing this broader trend where Chinese AI regulations are still razor-focused on ensuring that there is good information control, and that the CCP is able to regulate its information environment very tightly. But in terms of other regulations around societal impact and AI safety directly, we are seeing less coming out of China. And it’s still to be seen where China moves next.
I think it’s also worth noting that the Chinese government is also an investor in these AI systems. It’s a customer of these AI systems. It’s providing the compute for these AI systems. So they’re very much looking at this from an innovation system perspective: how do they strike the right balance between what they need to keep the Communist Party in power, while also advancing these systems sufficiently to be competitive with the United States?
Luisa Rodriguez: Yeah. OK, so there’s this interesting tension, where on the one hand they have this comparative advantage because the government is so intertwined with all of the different companies developing AI that they can deploy it really easily and effectively, because it’s this very interconnected system. And on the other hand, they’re so interconnected that the incentives are not there for them to maintain the kinds of regulations that they might have envisioned themselves having once the rubber hits the road of, how much do we actually want to tie our hands in figuring out how to roll out these systems?
Sihao Huang: That’s exactly right. And I think you also see this phenomenon of ambiguity in a lot of these Chinese rulings, which is that some of the rules are actually written with a lot of slack in them. Like, how do you actually check that the data is “true and unbiassed,” as is the wording of the CAC regulations? And the answer is that you can’t. A lot of the time these companies get in trouble after there’s a public incident, or if a particular agency announces an investigation into their products. So there is room for corruption within the Chinese system. There’s also room for a lot of interpretation, and for the political or economic mood of the times to determine how hard these regulations actually come down on the companies.
Luisa Rodriguez: OK, yeah. Can we actually take a step back and get a broader picture of what the governance landscape is like in China? I guess I’m interested in if you can lay out the key progressions in the Chinese AI regulation landscape over the last, say, five years.
Sihao Huang: So a lot of Chinese ministries and agencies have put out rulings on AI ethics, on AI standards, et cetera. But really the most enforceable ones here are a set of three rules put out by the Cyberspace Administration of China, which is China’s internet regulator, and actually the internet censor. And they cover recommendation algorithms; deep synthesis systems (or deepfake systems); and finally, the ones that came out in 2023 are on generative AI.
Luisa Rodriguez: Yeah, can we actually talk about those one by one? I’m a little bit familiar with these regulations, but I actually don’t feel like I have all of the context. So what was the regulation for algorithms?
Sihao Huang: So the recommendation algorithm rulings were really there to address two things. They were razor-focused on controlling China’s news environment: making sure that there was government supervision by the CAC over the nature of these algorithms, and giving the CAC the ability to determine what types of content count as carrying “positive” and “negative” social energy.
It also focused on this slightly separate set of concerns around gig and delivery workers. A lot of price discrimination was going on. And I think this is actually a very interesting topic, and one very much worth diving into if you’re interested in China’s economy: there are a lot of concerns, or have been a lot of concerns, with labour rights for gig workers in China. If you live in China for any period of time, you’ll see that there’s extremely convenient delivery services. And this was very dangerous when I was a student in Beijing doing research. You can get really good food for very cheap.
But the dark side of that was that delivery workers were getting paid very little, with very little worker protection. For instance, if your food gets taken, it’s the delivery worker who pays the fine. They definitely do not earn a living wage being delivery workers in these cities, and very often they are migrant workers coming from villages or lower-tier cities into cities like Beijing and Shanghai.
Luisa Rodriguez: Right. And what was the mechanism? It was like the recommender algorithms were kind of setting prices very low because there was such a surplus of labour and there were no kind of protections to make sure that anyone got some kind of livable wage if they were doing these jobs?
Sihao Huang: Exactly. And I think this also had this phenomenon where there was very little ability for these gig workers to actually resist or voice out their opinion against these tech companies. Because as much as China claims that it’s a communist country, it represses online dissidents, or these so-called “negative energies” that these very regulations are trying to clamp down on. Which meant that for an extended period of time, these issues went unaddressed until it really bubbled up to the surface, and the CCP saw that this was in their interest to also start cracking down on tech companies to prevent their power from ballooning and threatening the state.
Luisa Rodriguez: Oh, wow.
Sihao Huang: Yeah. So this was, I would say, the first big set of AI regulations that came out from the CAC. It also created this tool called an “algorithmic security report,” which at this point was voluntary, but asked companies to submit reports on the nature of their algorithms, their inputs and outputs.
And then the next set of regulations came out in 2022. If you recall, this was when a lot of image-generation tools and deepfake tools were coming out, like Stable Diffusion or DALL-E. And these regulations started requiring things like real ID user verification. So if I were a Chinese user wanting to generate images on an online tool, I’d have to enter my national ID number. And then, if I were to enter something like, “Generate an image of Xi Jinping on a unicycle in front of Tiananmen Square,” this would be logged in a database and tied to my ID.
That set of regulations also created a set of criteria for recognising illegal inputs, and mandated that service providers assess their systems’ capability to generate “national-security-related content,” specific scenes of a sensitive nature, or biometric- and identity-related content. The latter part, I think, is perhaps good AI regulation; the former is, again, continuing this line of information control.
Luisa Rodriguez: So the latter part is about deepfakes. It’s like if you’re creating content that has biometric markers of a real person, who could potentially be harmed, it will note that, and then not allow you to make it, because you shouldn’t make content in someone else’s likeness. Is that the idea?
Sihao Huang: Yes. And I think it’s a combination of three things. One is that there will be watermarking on these systems. Two is that you will not be allowed to make certain content; it will be rejected by the content filter. And third, at least as the regulation stipulated, if you input illegal content into the generative systems, it can be tied to your ID and recorded in a database.
Luisa Rodriguez: Some of that does seem good to me.
Sihao Huang: And the others seem pretty terrifying.
Luisa Rodriguez: Yes, you’re right. Yes, that does seem terrifying. I do hate the idea that… I constantly generate images on DALL-E, and I would not be very excited about my social security number being tied to all of the images I generate.
Sihao Huang: Absolutely.
Luisa Rodriguez: OK. So that’s the second wave of regulations that came in. What was the third?
Sihao Huang: The third came just a few months after ChatGPT came out. I think a lot of policymakers on the ground were sort of freaking out about what to do with the system, because language models, at least with this level of capability and broad access, are a very new thing. If news aggregators were terrifying because you couldn’t control what people saw, then language models that are potentially trained on huge amounts of the Western internet are even scarier to CCP officials.
So these regulations, at least in draft form, came out at the start of 2023, and they stipulated things like: the models must adhere to socialist worldviews; the training data must be “accurate, objective, and diverse”; and systems with public opinion characteristics must go through safety review.
Luisa Rodriguez: Wow. And just to make sure I understand, these regulations are for Chinese generative models. Do Chinese citizens have access to Western ones like ChatGPT or Claude?
Sihao Huang: They’re not supposed to, but they definitely do. So if you look on Baidu and you type in “ChatGPT,” you can’t find ChatGPT, but you’ll find something like “ChtPT” — a misspelt version of ChatGPT that is a way to access it, and it just uses GPT-4’s API. These services get banned all the time, and then you can find new versions. And of course Chinese citizens use VPNs to access the outside internet. So in reality, if you talk to a lot of developers, they’re using GPT-4; they’re not using Baidu’s service.
Luisa Rodriguez: OK, so this doesn’t apply to Western models, because those models are in theory not accessible. In practice they are accessible. But either way, what we’re mainly talking about here are controls that apply to models developed and allowed in China. And these are basically just making sure that the generative models say things that are allowed under the Chinese Communist Party’s worldview.
Sihao Huang: And I think there are two very important caveats here. The first one is that these rulings refer to publicly deployed models, and don’t apply to developers just training these models within their own labs. I think this is an important caveat when we think about AI safety, which is that maybe this regulates societal impact, at least China’s version of societal impact. But this does not necessarily cover, and is not intended to cover, AI development activities — particularly because China is intent on making sure that there’s as little hindrance as possible on Chinese AI firms catching up.
Luisa Rodriguez: Right, right. OK, so the reason this is relevant is, say we heard about this regulation, and we’re like, “How much can this, in theory, be applied to actually prevent the kinds of unsafe deployment and development of AI that we actually want to avoid, not just censorship?” If we were to try to apply it, it would actually not work very well, because it would only prevent deployment of unsafe models, not the creation and training of unsafe models. So that seems like a big gap, if we actually wanted it to be useful in this domain that we care about.
Sihao Huang: Absolutely. I would say that the tools and bureaucratic structures that have developed as a part of these regulations are going to be very powerful if we want to start thinking about AI safety rules.
Luisa Rodriguez: Can you say more about that? How much do these regulations feel relevant to the features of AI that you and I actually care about regulating?
Sihao Huang: A lot of these are really quite adjacent to AI safety rules. I think it’s also fair to say that AI safety is something that is very much in line with the Communist Party’s interests.
If we talk about a lot of the common ways in which we think that AI systems could cause harm in the world, let’s say we have in the future AI agents that wreak a lot of havoc on the internet, are not accountable or are not tied to personal identities. This seems like something that the CCP would care about, particularly in terms of thinking about liability tracing.
If we think about AI systems being used to conduct cyberattacks, or being used by terrorists to enable the proliferation of bioweapons, this also seems like something that China will be very concerned about, because safety incidents have historically been a very big issue for authoritarian states and their legitimacy. Think about Chernobyl during the Soviet Union. Think about the outbreak of COVID in China, which really was a huge political crisis, and at the end of it led to some of the biggest protests in China since Tiananmen Square.
So I think there is definitely the incentive there for the Chinese state to start thinking about AI safety in a similar way as we are: making sure that these systems are controlled, are not misused, and inherently are designed in a safe way. And I think there are signs of us starting to see that.
For instance, if you look at standards like the ones put out by TC260, which is the standard-making body, it talks about long-term risks of AI, calling on AI service providers to “pay special attention” to the ability of AI to deceive humans, self-replicate, and self-transform, as well as AI capabilities in cyber and CBRN [chemical, biological, radiological, and nuclear] domains. And those are very much the kinds of capabilities we would be interested in ensuring China does not put out and leak into the global commons as a harmful AI system.
But they’re not directly enforceable yet. And it does seem to me that making these things enforceable, through the strong enforcement authorities that these Chinese agencies have, is very much within reach — and I think something that we could do a lot of work on to synchronise with global regulations.
Luisa Rodriguez: Super cool. OK, so these are standards. They’re not regulation yet, but you think there is a kind of pathway to at least making them legally binding in China. And is the last thing you’re saying that plausibly these kinds of standards could be, in theory, brought into an international governance framework?
Sihao Huang: Yeah. I think trying to get China to sign on to these AI safety rules, and trying to harmonise these regulations internationally, is actually a particularly exciting aspect of cooperation right now. Because one of the more unique things that you’re seeing with Chinese AI regulations at the moment is that they are highly permeable.
For instance, there’s a new set of AI laws in the pipeline that will actually go through the State Council, and there are public drafts on these AI laws right now that are being circulated by different scholars. For instance, there’s a set of draft laws circulated by a team of scholars at the Chinese Academy of Social Sciences, and another led by a scholar, I think, at the China University of Political Science and Law. And many of these people are talking to American and British AI researchers in track two dialogues and in official channels, and also reading a lot of the work that is coming out, say, of the White House executive order on AI or the Bletchley Summit.
So you’re starting to see the diffusion of these ideas into the Chinese system, and acceptance because they’re put forth by Chinese scholars. And I think, as we talked about before, this sometimes very much aligns with actual Chinese interests. And I think we’re in this period where, because a lot of these concerns are relatively new, the Chinese government does not have that much capacity inherently to look into AI regulations in their minutiae, and there’s a huge amount of transfer that is possible.
I think part of our job now is to make sure that, one, the Chinese government actually does internalise these risks, where I think it’s beneficial for both sides to have good regulation; and two, that the way we are designing these regulatory systems, in terms of process and targets, is interoperable — such that if at some point there is enough political capital or need to bring these things together into international bodies, we already have a very solid foundation to talk to each other on.
Luisa Rodriguez: That’s really, really cool. What do you think the odds are that we overcome whatever hurdles are in front of us, and actually get some of these regulations in the pipeline to be real?
Sihao Huang: I think the AI law that we talked about looks like it’s going to pass in China at some point, unless the government really does get cold feet over making sure it does not have any parts that hinder AI development. That has happened before with the CAC rulings, but it does seem that this is not a very high cost on China right now, given that they’re not at the frontier.
And I would say that, because of lots of international engagement work that has been happening between different countries — and dialogues between the UK and China, and US and China — I am hopeful that there will be a diffusion of these ideas, and eventually work to have standards be synchronised and harmonised across bodies.
Luisa Rodriguez: So in general, it sounds like Chinese domestic AI policy does have some really good stuff going for it. What exactly do you think it’s doing well?
Sihao Huang: I think the biggest thing that Chinese AI regulations have going for them is that they’re extremely fast at pushing out these rulings. As we talked about, the CAC had a turnaround time of a few months. The biggest downside is that this is not very accountable, because it doesn’t go through a democratic legislative process.
But I think one thing that we need to give China a lot of credit for is that they have been very receptive in talking about AI safety concerns and addressing issues that the international community cares about. You see a lot of scientists in the Ministry of Science and Technology and government advising roles who have been talking quite openly about AI risk — for instance, in the United Nations or in their own internal documents.
They have also established this notion of AI safety/AI security. And actually, this is an interesting point: the word for safety in Chinese, anquan, is the same word as security. So while the government has been very concerned about security in the domestic security, political security sense, the same term for AI security has also been appropriated by the AI safety community to talk about these additional concerns. And I think now there is a lot of domestic momentum in talking about these questions openly, and potentially incorporating them into future rules.
China’s international AI governance strategy [01:49:56]
Luisa Rodriguez: OK, let’s turn to China’s international AI governance strategy. Is Beijing pushing any agenda for how it wants AI to be governed and regulated at the international level?
Sihao Huang: I think there is definitely an emerging strategy for AI governance now. I would say that I break down Beijing’s AI governance strategy internationally into two pieces.
The first of China’s goals is to craft a coherent foreign policy position as a “responsible AI developer.” And a lot of this is about signalling. For instance, a lot of China’s older international AI engagements came out between 2016 and 2021, and they talked about things like mass surveillance and human rights. A very interesting one was a UNESCO declaration in 2021 that China signed on to, calling for an end to mass surveillance using AI technologies.
So a lot of these original foreign policy positions on AI I think very much reflected the global trends on how we talked about AI. Back in 2021 and before, it was talking about AI ethics and human rights. It discussed things like data privacy and algorithmic fairness, much in the same way as you expect from a policy paper coming out of DeepMind. And then in 2022, 2023, it started talking more about AI safety/security. So in its speeches in the UN, it was talking about working together to prevent risks, fighting against misuse by terrorists. Again, they mentioned in the UN Security Council that we need to make sure that superintelligence is not going to attack us, and we need to give it reasons to protect ourselves.
Then there’s also this rhetoric of “AI for all,” which is that developing countries deserve the right to benefit from this technology. And I think this all falls into the bucket for me as China trying to make sure that its signalling on AI foreign policy is basically up to speed with international developments, and that it is showing that it is a responsible AI power.
But I think the second bucket is that China does have a number of core AI interests. And the AI interests here are really a few things.
One is making sure that it has access to AI technologies. For instance, in the Global AI Governance Initiative that Beijing put out in October 2023, just a few weeks before the AI Safety Summit in the UK — and this was announced at the Belt and Road Initiative, which is this big meeting of more than 100 developing countries — China announced that, “all countries, regardless of their strengths, size, or social system, should have equal rights to develop and use AI.” And it was also saying in the UN that a certain developed country, in order to seek hegemony, is building small clubs around these emerging technologies — I think very specifically referring to the United States and its export controls on AI.
So it is pushing this broad narrative that AI access is very important — which is true, and I think we absolutely need to recognise that — and tying that to its conflict with the United States and American foreign policy.
But the second bucket here is the idea of AI sovereignty. So it very much wants to have the ability to govern AI on its own terms. It talks about how AI should be regulated based on a country’s own social characteristics. What that basically means is, “Leave us alone, let us do our AI regulations with socialist values, and don’t criticise us for them. And also make sure that you don’t use AI to manipulate our public opinion or spread misinformation, or intervene in our internal affairs.”
And the third bucket here I would say is AI security. The AI Governance Initiative talks about fighting against the misuse of AI by terrorists, ensuring that AI always remains under human control, and building trustworthy AI systems that can be monitored and reviewed. This, once again, is quite aligned with the Communist Party’s core interests. And I think it is on this third bucket that we can have a lot of cooperation.
Coordination [01:53:56]
Luisa Rodriguez: Cool. OK, let’s actually talk about how policymakers in the US and UK should think about coordinating with China on international AI governance. To start, I’m curious what goals Western policymakers should be aiming for when thinking about how to structure international AI governance alongside the Chinese government. So what is one near-term goal that Western-based policymakers should have in mind regarding Chinese AI capabilities and governance that you think would be robustly good to aim for?
Sihao Huang: So one very exciting piece of progress that has been made in AI governance recently has been the establishment of AI safety institutes across different countries. Canada has one, the United States has one, the UK has one, and Japan too. And Singapore, I think, is in talks to set one up. These AI safety institutes are organisations that coordinate model safety and evaluation within each country, and potentially also fund AI safety research in the long term. But it would be very good if China had a similar organisation that could be in touch with other AISIs to share their experience, eventually harmonise regulations, and, when there’s political will, push towards consolidated international governance or basic rules in the international world about how AI systems should be built and deployed.
So I think a near-term goal that would be good to push for when we’re talking to China is to have them also create some sort of AI safety coordination authority. We talked about how China already has a lot of infrastructure for doing AI regulations, and this could potentially come in the form of a body that is established, let’s say, under the Ministry of Industry and Information Technology, or the Ministry of Science and Technology, that centralises the work it takes to build and push for AI safety regulations on top of what China currently has in information control — and then can become the point of contact when China sends delegations to future AI safety summits or to the United Nations, such that we can have common ground on how AI regulation needs to be done.
Luisa Rodriguez: OK, neat. Is there another goal you think would be good?
Sihao Huang: I think something that would be really good for the US and China to work together on would be to have China sign on to some expanded set of the White House voluntary commitments on AI. These were policies or commitments to create external and internal red-teaming systems to evaluate these AI models; build an ecosystem of independent evaluators in each country to be able to check the safety of frontier models; build internal trust and safety risk-sharing channels between companies and the government, so governments can be better informed about potential hazards that frontier AI can pose; and also invest in expanded safety research.
You may also want China, for instance, to sign on to responsible scaling policies within each company — jointly defining things like AI safety levels, protective measures that go onto your models, conditions under which it will be too dangerous to continue deploying AI systems until measures improve.
And I think the right framing here is not just to have China sign on to American White House commitments, but to also identify additional commitments that Chinese companies have made, or that China has asked of its AI developers, that would also be beneficial for us. And we structure this as some sort of diplomatic exchange, where “we do more safety, you do more safety” — and we’re learning from each other in a mutual exchange.
Luisa Rodriguez: OK. Thinking more about long-term goals, is there a long-term goal that you think would be robustly good to aim for?
Sihao Huang: I think the most important long-term goal to keep in mind is: how do we have continued dialogue and trust-building? I say this because of two things. One is that US–China relations are very volatile, and we want to make sure that there’s robust contact — especially when harms arise — that the two sides can work with each other, and know who to call, and know the people who are making these decisions.
The second reason is that tech development and policy responses to address the harms of potential new technologies are very difficult to predict. And we can only typically see one or two years on the horizon of what is currently mature. For instance, I think model evaluations have become much more mature over the past few years, and have been pushed a lot onto the international stage, but they’re definitely not all that is needed to guarantee AI safety. Bio and cyber risks have also started to materialise, and are sort of formalised into benchmarks on defences now. There’s also a growing community now on systemic risks and overreliance that I’m quite excited about. And compute governance is also emerging as a key lever.
But there are going to be more issues on the horizon — things like agent governance or potentially even AGI circuit breakers — and as these technologies mature and improve in capabilities, there’ll be much more clarity on how governance should be done. And we want to make sure that there are these active channels to talk about emerging risks and these new policy proposals, and also the institutional capacity to host them and implement them.
Luisa Rodriguez: OK, so those are some of the goals that the West and China might want to have in mind for the short and long term. What kinds of international governance schemes are available to work toward these goals?
Sihao Huang: I would give the answer in two parts. First is the more practical one of what institutions exist, and the second is a more theoretical one. So I think there is a very large and complex set of overlapping international institutions for AI governance right now. On the one hand, you have things like the United Nations and UNESCO that represent a much broader set of countries and include China. You also have more restricted sets of institutions — like the G7, the Global Partnership on AI, the network of AI safety institutes right now, and the cooperation between the UK and the US, or individual bilateral relationships on AI.
And all these forums are going to be very important in pushing out global AI governance and AI benefit-sharing schemes, but they all have their own comparative advantages. I think it is very important to talk to China about AI safety issues, and it’s very important to coordinate with them to make sure that we are not engaging in an AI race, we’re not being reckless with AI safety issues, and that we deal with emergencies together if they do arise.
But it is also really challenging to talk to China constructively about AI ethics issues. And here, that’s where forums like the G7 come in, where we coordinate very tightly with other partner states to make sure the deployment of AI goes well for society.
I think the ultimate strategy here is that we want to make sure that these overlapping sets of institutions are able to cover all our grounds. If we look at, for instance, the history of climate cooperation, nuclear nonproliferation, et cetera, we never ended up with this one set of global leviathans that govern everything all at once to solve all issues. I think more likely than not, we’re going to create this big institutional soup that competes and collaborates with each other in some very complex ways, with both national and subnational divides.
So I think the right approach to think about it here is — given the massive cone of uncertainty around future geopolitics, and also the unpredictable nature of AI development — we need to think very carefully about what it means to have adaptive international governance.
I think there’s a few principles here that you may want to distil. Things like: we want to really constrain the size of ungoverned spaces that arise from the lack of enforcement power, for instance, from competing authorities.
We want to make sure that we intentionally create a nested set of institutions that leverage their comparative advantages. For instance, talking among a tighter set of countries can make sure that you’re more responsive in addressing risks, and you’re also more trusting in pushing forth stronger regulations. But talking to a broad set of countries, like in the UN, is also extremely important to make sure that you’re being inclusive, you’re reflecting the voices of the global majority, and you’re eventually able to make sure that the balance between AI safety and AI benefit-sharing is struck correctly. So there are general tradeoffs in these institutions between scope and depth, or speed versus inclusion, that we need to leverage comparatively.
You also want to do things like maximising linkages between institutions, including across rival ones, that encourage cooperation, particularly in times of crisis. And this principle is reflected in what we talked about before with, for instance, having harmonised standards. Harmonised standards are an example of a way where you don’t directly have a global UN sanctions treaty, or some sort of agreement where both sides are bound to act in a certain way, but because the structures of our AI regulations look very much alike, if we want to bring them together, the activation energy is very low.
And then you also want to build very strong informal and non-institutional ties, like track two dialogues, and collaboration between tech companies on information and hazard sharing. And bind all this together through a set of bilateral and multilateral relationships, such that although not all countries can work together directly, there’s a well-connected network of interdependencies — such that we all know that if one domino falls, it all falls, and the stakes are high enough that we have to collaborate.
Track two dialogues [02:03:04]
Luisa Rodriguez: That actually reminds me that you’re currently involved in running a track two dialogue between the US and China about AI. And I’m sure you can’t say that much about it, but I am curious to hear a little bit — starting with just, what are track two dialogues?
Sihao Huang: So “track two dialogue” is diplomatic lingo for connections or meetings between nongovernment officials in two countries. “Track one” refers to official government-to-government exchanges; track 1.5 refers to exchanges in which there is a mix of standing government officials and retired ones, or ones from academia. And track twos are actually quite diverse: you have track twos that focus on academic collaboration, and you have track twos that focus on retired government officials who still have a lot of context on government policy and have direct channels to the current administration.
Luisa Rodriguez: And are they kind of government sanctioned and facilitated? Where someone from the government is like, “I’d like to start facilitating dialogues in the track two case between nongovernment people about a specific issue,” and then they kind of help those meetings happen? Or is it more grassroots than that?
Sihao Huang: Typically, track twos don’t come from direct government involvement, and I think there’s a few examples here. For instance, the Brookings Institution has run a set of track two dialogues on AI with China for quite a long period of time now — I think four to five years — and they’ve met at regular intervals, and made really good progress on defining military AI systems and making sure that the two sides see eye to eye on risky AI deployment.
There is also the Safe AI Forum, which has brought together a lot of AI developers and academics between China, the UK, and the US to jointly talk about AI safety. And they came up with the Ditchley Declaration last year, around the time of the AI Safety Summit, where for instance there was a commitment, or at least a will, to invest one-third of a company’s research budget into AI safety.
And these have been very powerful statements in showing policymakers, potentially track one mechanisms, and the broader community that there is will among civil society to come to these consensuses — and sometimes they can very measurably drive forward policy progress in the official sense.
Luisa Rodriguez: Very cool. Actually, that makes me want to understand a little bit better the mechanism for how they end up being useful.
Sihao Huang: I think every track two dialogue has its own theory of change. There’s ones that just aim to bring together academic communities on AI safety, and this is really important. We talked about how there’s a lot of permeability right now in ideas in China, where many of the AI laws and their drafts are heavily influenced by Western thinking or the White House executive order. So the goal here is: how do you build and construct significant epistemic communities that spread ideas around AI safety between different countries? And because there’s not that much capacity right now within the Chinese government to think about AI safety issues rigorously, actually creating these communities of policymakers could have a very big impact on cooperation in the long term.
Then you also have track two dialogues that discuss very specific policy issues, like military AI, like potentially biosecurity. And if you have these dialogues that include former government officials, then there’s potentially the opportunity for them to transition to say track 1.5, and be converted into government policy with a lot of details having been hashed out already.
And then you also have dialogues that perhaps involve really high-level individuals in tech companies or in governments, or retired government officials, that can focus on long-term strategic questions and make sure that leadership on both sides are on the same page when we talk about long-term AI risk, when we think about AI cooperation in the short term, when we think about potentially even issues like AGI deployment. And the goals of these dialogues may simply be to put the same people in a room, to build norms around what is acceptable and what is unacceptable and when cooperation is warranted, and eventually make sure that they know who to call up when crisis arises.
Luisa Rodriguez: Cool. Can you say anything about the track two dialogue you’re involved in? Either what its goals are or how it’s going? You probably can’t say how it’s going.
Sihao Huang: Hopefully I’ll be able to say more at some point in the near future.
Luisa Rodriguez: Cool.
Misunderstandings Western actors have about Chinese approaches [02:07:34]
Luisa Rodriguez: Moving on a little bit: do officials in Western countries tend to have any common misunderstandings about how the Chinese government approaches these AI issues?
Sihao Huang: The biggest misunderstanding is that they are a unitary actor — and they’re definitely not. I mean, even if we just look at ourselves in our own societies, we see a huge amount of disagreement as to how AI should be regulated — I think rightfully so, because there’s so much uncertainty over how it’s going to develop in the future and how it impacts different groups in an uneven way. And if you look at China, not only is there disagreement about the harms and benefits of AI, but there’s also disagreement around a lot of competing government bodies that do AI governance regulation.
So if you look at the regulatory landscape in China, there are three big bodies that do AI regulation. One, as we talked about, was the Cyberspace Administration of China: that is sort of the internet censor and regulator, and has a very big toolkit for AI regulation.
You have the Ministry of Science and Technology: that is very much in charge of scientific research typically, has put out AI ethics principles, and is relatively pro-safety. And this is the organisation that sent the vice minister of science and technology, Wu Zhaohui, to Bletchley in the UK last year.
Then third, you have the Ministry of Industry and Information Technology: broadly in contrast with the Ministry of Science and Technology, which does scientific research, MIIT is in charge of technological diffusion — so industrial policy. And they host a think tank called the CAICT, which develops tools for certifying trustworthy AI, has been putting out standards for AI evaluation, and also currently manages the national Integrated Circuit Fund that pushes semiconductor policy.
And these three organisations in some sense have overlapping mandates on AI regulation, and they work together with each other sometimes. So if you look at the CAC’s rulings, they’re jointly signed with the Ministry of Science and Technology, MIIT, the Ministry of Education sometimes, and the Public Security Bureau. But there is not a lot of clarity right now on which of these agencies is really going to take the lead on AI safety or AI regulation or AI development.
Another organisation that has come into the mix here is the National Development and Reform Commission, the NDRC, which is an extremely powerful body in the Chinese government that is in charge of economic development and reform policy. And they have been tasked with AI development, but it’s unclear how their mandate differs from that of the Ministry of Science and Technology.
So there is this phenomenon where you go talk to a lot of people on the ground in China, and they all claim, “We are the AI regulators; we are the lead on doing AI policy.” But I don’t think China itself or Xi Jinping or the politburo has a clear sense of where this is going to be located just yet.
Luisa Rodriguez: Huh, OK. And is there anything that Western policymakers would do differently if they understood that better? Or is it more like, this isn’t well understood yet, either by people in the West or by the Chinese government itself, and so we kind of just have to wait and watch and see what happens?
Sihao Huang: I think the biggest implication here is that things are actually extremely malleable on the ground, and this is a very critical period for shaping what AI governance is going to look like in China, domestically and also internationally.
Matt Sheehan, who studies AI governance in China very extensively, has a good analogy for this. He talks about how, if you look at China’s writing, or writing in the People’s Daily about climate change in the 1980s and the 1990s, there was a lot of rhetoric about how climate change was a way for Western developed powers to hold back the Global South from actually industrialising, when they’re the ones who have put out so much carbon dioxide and emissions into the atmosphere.
But they changed their tune quite significantly over the past two decades, from saying that climate change is a hoax to saying that it is a very real issue, for a few reasons. One is that Beijing started to be covered in thick smog, and Chinese political leadership started to see this as a real problem. Two is that they started to realise that addressing the climate crisis could actually benefit Chinese development, that there’s a real opportunity here to build out capacity in green technologies to begin an energy transition, and that this is an important opportunity for development. And the third thing is probably just that Xi Jinping started to convince himself that this is the real deal.
And as we think about AI now, I don’t think Chinese leadership has a clear position on what AI looks like in the future, and how important it is strategically for China, and how important the development of safe AI systems is for China. I think it would serve us well in the long term to make sure that we communicate in good faith the potential harms of AI systems, and communicate to them the stakes that are at play.
Luisa Rodriguez: OK, last question before we move on: What do you think of the recent signs of cooperation between China and the West on AI? One example being the fact that the UK, US, EU, and China all signed a declaration recognising AI’s catastrophic risks.
Sihao Huang: I think this is incredibly helpful. And I would definitely not undersell the difficulty of getting the Chinese vice minister of science and technology on stage with [United States Secretary of Commerce] Gina Raimondo talking about AI risk for humanity together. I think that’s very important progress to establishing long-term dialogue, communication, and harmonised regulations on AI systems.
But I would also say that we currently are in somewhat of a honeymoon period on AI issues. Currently, there’s a lot of talk about how AI systems could be potentially harmful, but the ways in which these are tied to core interests are not very well internalised by different countries. So it’s a new and exciting way for China, the UK, and the US to have dialogues together, when dialogues on other issues like climate or security are perhaps stalled. I think we absolutely need to leverage this honeymoon period to do as much as we can, and that makes it an even more important policy window.
But I think in the long term, the real question here is: how do we build continued momentum and genuine desire to engage, and how do we prevent dialogue on AI from being used for its own sake, in a way that could hollow it out? So I think it’s particularly important now that we continue this momentum by finding places where diving into substantive, precise discussions of well-defined challenges can work through hard political differences and generate specific, actionable ideas. Because continuing this momentum allows us to continue this dialogue, and continue trust-building and working with each other — especially maintaining these ties when more significant harms may arise, and when more significant policy ideas that require a huge amount of political capital and coordination can actually come into being.
Luisa Rodriguez: OK, really interesting.
Complexity thinking [02:14:40]
Luisa Rodriguez: Pushing on, one thing I know you’re interested in is complexity thinking, which looks at how relationships between parts give rise to the collective behaviours of a system, and how that system interacts and forms relationships with the environment. My sense is that people think that this is an important way of looking at and understanding systems with a bunch of interconnected and interdependent elements that lead to nonlinear dynamics, and emergent properties, and unpredictable outcomes.
How should complexity thinking affect how we think about the Chinese semiconductor industrial policy?
Sihao Huang: So one aspect of China’s industrial policy for semiconductors that I think a lot of people don’t realise when looking at it from a distance is that it’s not actually just one set of policies that come out of Beijing; it’s a very complex mix of policies and incentives, and informal and formal institutions at the provincial, local, city and national levels.
The Shanghai government, for instance, has its own Shanghai semiconductor funds. The Guangdong government also has its own semiconductor funds. And if you read different local government policies, you see that they are actually quite differentiated. Wuhan, for instance, has a government policy that talks specifically about things like semiconductor manufacturing equipment for memory makers. Shanghai talks about how to create an independent supply chain for advanced nodes. Guangzhou and Shenzhen talk about advanced packaging and chiplet technologies for building CPUs, because so many chip designers are based in that region. So you start to see a very complex picture of different actors competing against each other, learning from each other, and trying to build their own local niches in the ecosystem.
And there’s this framework that I really like from political scientist Yuen Yuen Ang, who wrote this book, How China Escaped the Poverty Trap, in which she describes China’s process of economic development not as the static image a lot of people try to explain it with — authoritarian capitalism, where the Chinese state sort of comes down and takes away workers’ rights, and implements this capitalist system that with liberalisation allowed Chinese industry to take off. She outlined a much more nuanced picture, where what happened with reform and opening up in the 1980s in China was that the Chinese government managed to create incentive structures for local governments to adapt and to learn from their own environments.
I think there’s an interesting machine learning analogy to draw here: that certain systems cannot be designed; they have to be evolved. And imagine how you would code, using a set of if-then statements, an AI system that recognises a cat or a dog. That is so incredibly difficult. But you can definitely create a machine learning system by specifying some sort of structure and loss function, and train it on data, and have it learn from the data how a cat and a dog are different, even though you can’t explicitly write it out.
I think policymaking has a similar flavour. It’s extremely difficult for one dictator on the very top to try to understand all different local conditions and write the perfect policy as if it’s this big set of if-then statements. What a policymaker should be thinking about is: how do you create the incentive structure as a “loss function” that sets in motion the right sets of evolutionary processes that improve economic efficiency and improve synergies across the stack?
So if we think about China’s process of economic development, it really is: how do you steer these forces of competition, adaptation, niche creation, and large-scale policy development in a way that allows different localities to work together, that allows comparative advantages to be leveraged? I think the right way to think about China’s semiconductor policy is not as this one static set of documents that Beijing has put out; it really is a set of national directives that eventually led to a lot of fine-grained local implementations.
And there’s this continued policy-learning process where China iterates on its national IC funds. As we talked about, there have been three stages of it, and there was a corruption campaign in the middle. China changed its tack significantly on what it wants to invest in, and also changed its tack on how different localities should interact with each other and the degree of coordination that is needed. And this perspective I think Chinese leadership has actually outlined before in the past. Deng Xiaoping, who was China’s leader during reform and opening up, described the process of Chinese development as “crossing the river while feeling the stones.”
Translating this into complex systems and machine learning language: the key here is to have tight feedback loops. You want to have stones that you can hold onto as you venture into the river. And because the process of semiconductor development, or AI development, is so complex — it’s not just about building a single technology; it’s about really upgrading China’s entire industrial stack to that of a developed country — we should be looking at it through the lens of economic development.
Are there proper and functional feedback loops for policymakers to be receiving signals from the bottom and from the people around them, and to be able to correct course, with the political ability to do so? And if those ingredients are present, even if China makes some mistakes in its industrial policy, it could potentially improve its processes and make them more efficient. Because I think no policy is perfect from the beginning.
Luisa Rodriguez: That was really fascinating.
Sihao’s pet bacteria hobby [02:20:34]
Luisa Rodriguez: We have time for just one more question. Your website says that in your spare time you enjoy baking, taking photos, reading philosophy, and keeping harmless jars of pet bacteria. What is the appeal of pet bacteria?
Sihao Huang: I had this phase where I was very interested in biological computing. I think at heart I’ve always been very interested in the potential that AI can bring to humanity, and the benefits that it can bring in ushering a period of huge amounts of economic development and progress, and reducing poverty and improving human lives.
I was doing this pet bacteria thing back in college, where I wanted to build biological neural networks.
And I was really interested in actually advancing AI and compute, until I started realising that maybe we should think more carefully about the deployment and development of these systems. And that the biggest impact I can make is not necessarily in accelerating AI development, but in making sure that the development goes well — beneficially, in a differential sense.
Luisa Rodriguez: Right. OK.
Sihao Huang: And I think this other thing with pet bacteria is that I feel like I’m an engineer at heart, and I love to be able to touch the things and build things that I’m studying. So back then I was really interested in thinking about synthetic biology and the potential this has for compute. So I started doing gene editing in my own room and trying to cultivate pet bacteria. I eventually made Christmas ornaments that are filled with glowing bacteria.
Luisa Rodriguez: No way!
Sihao Huang: When you shine them under blacklights, they’re fluorescent.
Luisa Rodriguez: Wow.
Sihao Huang: I still carry on a lot of those habits today. So I work a lot on thinking about chip policy and semiconductor policy. And actually, I am sitting right next to a metallurgical microscope that I bought on eBay. So I would reverse engineer chips on my own, not necessarily for work, but it feels much easier for me to think about policies when I can hold something in my own hands and understand the entire stack.
Luisa Rodriguez: That’s really cool. My guest today has been Sihao Huang. Thank you so much for coming on.
Sihao Huang: Thank you.
Luisa’s outro [02:22:47]
Luisa Rodriguez: All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.