#155 – Lennart Heim on the compute governance era and what has to come after
By Robert Wiblin and Keiran Harris · Published June 22nd, 2023
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 The interview begins [00:04:35]
- 3.3 What is compute exactly? [00:09:46]
- 3.4 Structural risks [00:13:25]
- 3.5 Why focus on compute? [00:21:43]
- 3.6 Weaknesses of targeting compute [00:30:41]
- 3.7 Chip specialisation [00:37:11]
- 3.8 Export restrictions [00:40:13]
- 3.9 Compute governance is happening [00:59:00]
- 3.10 Reactions to AI regulation [01:05:03]
- 3.11 Creating legal authority to intervene quickly [01:10:09]
- 3.12 Building mechanisms into chips themselves [01:18:57]
- 3.13 Rob not buying that any of this will work [01:39:28]
- 3.14 Are we doomed to become irrelevant? [01:59:10]
- 3.15 Rob's computer security bad dreams [02:10:22]
- 3.16 Concrete advice [02:26:58]
- 3.17 Information security in high-impact areas (article) [02:49:36]
- 3.18 Rob's outro [03:10:38]
- 4 Learn more
- 5 Related episodes
As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.
With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.
But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?
In today’s interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.
As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.
If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources — usually called ‘compute’ — might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can’t be used by multiple people at once and come from a small number of sources.
According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training ‘frontier’ AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.
We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?
But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren’t convinced of the seriousness of the problem.
Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.
By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.
With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this ‘compute governance era’, but not for very long.
If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?
Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what’s happening — let alone participate — humans would have to be cut out of any defensive decision-making.
Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.
Lennart and Rob discuss the above as well as:
- How can we best categorise all the ways AI could go wrong?
- Why did the US restrict the export of some chips to China and what impact has that had?
- Is the US in an ‘arms race’ with China, or is that more of an illusion?
- What is the deal with chips specialised for AI applications?
- How is the ‘compute’ industry organised?
- Downsides of using compute as a target for regulations
- Could safety mechanisms be built into computer chips themselves?
- Who would have the legal authority to govern compute if some disaster made it seem necessary?
- The reasons Rob doubts that any of this stuff will work
- Could AI be trained to operate as a far more severe computer worm than any we’ve seen before?
- What does the world look like when sluggish human reaction times leave us completely outclassed?
- And plenty more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore
Highlights
Is it possible to enforce compute regulations?
Rob Wiblin: Do you think it would be practical to be able to restrain people from aggregating this amount of compute before they do it?
Lennart Heim: Um… Open question. Put it this way: I think what makes me positive is we’re talking about ordering more than 1,000 chips for a couple of months. That’s like less than 100 actors, probably less than 30 actors, in the world who are doing this, right? We have training runs where we talk about their cost within the single-digit millions here.
And is it then possible to enforce this? I think eventually we’d just maybe start it voluntarily — and I think a bunch of AGI labs would eventually sign up to this, because they have currently shown some interest in this, like, “Hey, we want to be responsible; here’s a good way of doing it.” And one way of enforcing it is via the cloud providers — then I don’t need to talk to all the AGI labs; I only need to talk to all the compute providers. I want, to some degree, a registry of everybody who has more than 5,000 chips sitting somewhere, and then to know who’s using a lot of these chips, and maybe for what. You could imagine this in the beginning maybe being voluntary, maybe later enforced by these cloud providers.
But of course, there are many open questions regarding how you eventually enforce this, in particular getting these insights. Cloud providers are built around the whole notion that they don’t know what you’re doing on their compute. That’s the reason why, for example, Netflix doesn’t have their own data centres; they use Amazon Web Services — even though Amazon, with Amazon Prime, is a direct competitor. But they’re just like, “Yeah, we can do this because you don’t have any insights into this workload anyways, because we believe in encryption.” And Amazon’s like, “Yeah, seems good. We have this economy of scale. Please use our compute.” Same with Apple: they just use a bunch of data centres from Amazon and others, even though they are in direct competition with them.
So there’s little insight there. The only insight you eventually have is how many chips for how many hours, because that’s how you build them. And I think this already gets you a long way.
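To make that “how many chips for how many hours” signal concrete, here is a minimal sketch in Python of how a cloud provider might aggregate chip-hours per customer and flag anything large enough to plausibly be a frontier training run. The threshold and all numbers are illustrative assumptions, not figures from the interview or any real policy.

```python
from dataclasses import dataclass

# Illustrative reporting threshold: roughly "1,000 chips for a couple of months".
# This is an assumption for the sketch, not a real regulatory number.
REPORTING_THRESHOLD_CHIP_HOURS = 1_000 * 24 * 60  # 1,000 chips x ~2 months

@dataclass
class Rental:
    customer: str
    num_chips: int
    hours: float

def chip_hours(rental: Rental) -> float:
    """The signal a provider reliably has: how many chips, for how many hours."""
    return rental.num_chips * rental.hours

def customers_to_review(rentals: list[Rental]) -> list[str]:
    """Aggregate usage per customer and flag anyone over the assumed threshold."""
    totals: dict[str, float] = {}
    for r in rentals:
        totals[r.customer] = totals.get(r.customer, 0.0) + chip_hours(r)
    return [c for c, total in totals.items() if total >= REPORTING_THRESHOLD_CHIP_HOURS]

# Example: one sustained large rental, one routine workload.
rentals = [
    Rental("lab-a", num_chips=4_096, hours=24 * 60),  # two months on a big cluster
    Rental("startup-b", num_chips=8, hours=24 * 7),   # a week of small-scale work
]
print(customers_to_review(rentals))  # ['lab-a']
```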
Rob Wiblin: How big an issue would it be that, if the US puts in place rules like this, you just go overseas and train your model somewhere else?
Lennart Heim: Yeah, but maybe the export controls could just make sure that these chips never go overseas, or don’t go in countries where we don’t trust that people will enforce this. This is another way we can think about it: maybe the chips only go to allies of the US, where they eventually also can enforce these controls. And then the US can enforce this, particularly with all of the allies within a semiconductor supply chain, to just make sure, like, “We have these new rules; how are we going to use AI chips responsibly?” — and you only get these chips if you follow these rules. That’s one way of going about it. Otherwise the chips are not going to go there.
Safe havens of AI compute
Lennart Heim: I think it’s a key thing that I’m sometimes doing when I’ve talked to policymakers: they just always love hardware. It’s like, “Great, it’s just going to work.” I was like, “No, actually, stuff is not secure. Have you seen the state of cybersecurity? It’s terrible.” And that an iPhone is as secure as it is right now, this is years of work. And I think there’s probably right now an exploit out there for listening on iPhone conversations, but it just costs you $100 million — so you only do it for literally really big targets, and not the random hacker on the street does it.
I think it’s really important that whenever you need to reinvent the wheel regarding security, just don’t do it. There’s this thing — “never roll your own crypto” — just don’t do it. Use existing libraries. Never roll your own crypto; use existing libraries to do this kind of stuff. I think the same here: I mostly want to use existing mechanisms. I think there’s still some research to be done, and this is part of the reason why we want to roll it out as fast as possible.
Another way to think about this is that a lot of these mechanisms rely on, well, you can hack them because you have physical access to compute. We have not talked about it a lot yet, but compute does not need to sit under your desk to use it: you can just use cloud computing. I can right now access any compute in the world if people wanted me to. This may be useful if I implement something which is in hardware, and you need to physically tamper with the hardware: you can’t; you’re only accessing it virtually, right? And even if you would tamper with it via software, guess what? After your rental contract runs out, we’re just going to reset the whole hardware. We reflash the firmware, we have some integrity checks, and here we go, here we are again.
So maybe to build on top of this, we previously talked about the semiconductor supply chain. People should think about the compute supply chain, which goes one step further. At some point your chips go somewhere, and the chips most of the time sit in large data centres owned by big cloud providers. So we definitely see that most AI labs right now are either a cloud provider or they partner with a cloud provider. So if we then think about choke points, guess what? Cloud is another choke point. This is a really nice way to restrict access, because right now I can give you access, you can use it — and if you start doing dangerous shit or I’m getting worried about it, I can just cut it off any single time.
This is not the same with hardware. Once the chip has left the country and it’s going somewhere, I’m having a way harder time. So maybe the thing you want to build is like some safe havens of AI compute — where you enable these mechanisms that we just talked about; you can be way more sure they actually work; and even if somebody misuses it, at a minimum, you can then cut off the access for these kinds of things. So the general move towards cloud computing — which I think is happening anyways because of the economy of scale — is probably favourable from a governance point of view, where you can just intervene and make sure this is used in a more responsible manner.
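As one illustration of the “reflash the firmware, run some integrity checks” step Lennart mentions, here is a minimal sketch of a between-rentals check. The file paths are hypothetical; a real provider would read the firmware over a management interface rather than the filesystem.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a firmware image so it can be compared against a known-good copy."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_or_reflash(device_firmware: Path, known_good_image: Path) -> bool:
    """
    Between rentals: compare the firmware currently on the device against the
    provider's known-good image, and reflash it if anything differs.
    Returns True if the integrity check passed, False if a reflash was needed.
    """
    if sha256_of(device_firmware) == sha256_of(known_good_image):
        return True
    device_firmware.write_bytes(known_good_image.read_bytes())  # reset to known-good state
    return False
```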
Rob Wiblin: Yeah, OK. So this is kind of an exciting and interesting point, that people or many organisations currently have physical custody of the chips that they’re using for computing purposes. If we came to think that any aggregation of significant amounts of compute was just inherently very dangerous for humanity, then you could have a central repository, where only an extremely trusted group had custody. I guess it probably would be some combination of a company that’s running this and the government also overseeing it — as you might get with, I suppose, private contractors who are producing nuclear missiles or something like that — and then you could basically provide compute to everyone who wants it.
And I suppose for civil liberties reasons, you would maybe want to have some restrictions on the amount of oversight that you’re getting. You’d have some balancing act here between wanting to not intervene on what people can do on computers, but also needing to monitor to detect dangerous things. That could be quite a challenging balancing act. But basically it is, in principle, possible in the long term to prevent large numbers of groups from having physical custody of enormous numbers of chips — and indeed it might be more economical for most people, for most good actors, to not have physical custody anyway; that they would rather do it through a cloud computing provider. Which then creates a very clear node where probably these hardware-enabled mechanisms really could flourish, because it would be so much harder to tamper with them.
Lennart Heim: Yeah, maybe you don’t even need them there because you just have somebody who’s running it. And we definitely see a strong pivot towards the cloud. And no AI lab has the AI servers they eventually need sitting in a basement to run these systems. They’re sitting elsewhere. They’re sitting somewhere close to a bunch of power, a bunch of water, to run these systems. And if you could just make these facilities more secure, run them responsibly, I think this might be just a pretty exciting point to go to.
You could even think about the most extreme example as a compute bank, right? We had a similar idea with nuclear fuel: just build a nuclear fuel bank. And here we just have a compute bank: there’s a bunch of data centres in the world, we manage them internationally via a new agency, and we manage access to this. And maybe again here we mostly wanted to talk about the frontier AI systems — like the big, big systems — you eventually then just want to make sure they are developed in a responsible and safe manner there.
Are we doomed to become irrelevant?
Rob Wiblin: OK, so: what are we going to do? You were starting to raise this issue of offence/defence balance, where you’re saying that maybe this compute stuff is not going to cut it forever; now we need to start thinking about a different approach. And that approach might be that, sure, the amateur on their home computer, or the small business, might be able to train quite powerful models, but we should still expect that enormous internet giants like Google or authorities like the US government should have substantially better models. Even if it’s impressive what I can access on my home computer, there’s no way that I’m going to have access to the best, by any stretch of the imagination.
So how might we make things safer on a more sustainable basis? Perhaps what we need is to use that advantage that the large players — hopefully the more legitimate and hopefully the well-intentioned players — have, in order to monitor what everyone else is doing or find some way to protect against the harmful effects that you might get from mass proliferation to everyone.
Maybe this does sound crazy to people, or maybe it doesn’t, but I feel like what we’re really talking about here is having models that are constantly vigilant. I guess I’ve been using the term “sentinel AIs” that are monitoring everything that’s happening on the internet and can spring into action whenever they notice that someone — whether it be an idiot or a joker or a terrorist or another state or a hostile state or something — is beginning to do something really bad with their AIs, and prevent it. Hopefully relying on the fact that the cutting-edge model that the US government has is far above what it’s going to be competing with.
But this is a world, Lennart, in which humans are these kind of irrelevant, fleshy things that can’t possibly comprehend the speed at which these AI combatants are acting. They just have this autonomous standoff/war with one another across the Earth… while we watch on and hope that the good guys win.
Sorry, that was an extremely long comment for me, but am I understanding this right?
Lennart Heim: I mean, we are speculating here about the future. So are we right? I don’t know. I think we’re pointing to a scenario which eventually we can imagine, right? And I’m having a hard time telling you the exact answers, particularly if what’s asked of AI governance or my research is like “stop access forever” or something. I think that’s a really high bar to eventually meet.
What I’m pointing out is that we have this access effect, but we need to think about the defence capabilities here, in particular if you think about regulating the frontier. And I think this is part of what makes me a bit more optimistic. You’ve just described one scenario where we have these AI defender systems and they’re fighting, they’re just doing everything. Maybe this works well, we can just enjoy each other, right? Like, having a good time and it seems great. But maybe it’s also just more manual; I think it’s not really clear to me.
But I think the other aspect to keep in mind here is we’re talking about — let’s just say this is a GPT-6 system, everybody can train it, whatever — this future system, maybe the system is, again, not dangerous. Maybe there’s going to be a change in the game — again, where we go from this 89% to this 90% or something along these lines — which makes a big difference in capabilities, right? This dynamic gives the defender a big advantage there. Maybe people don’t even have an interest in using all of these systems, because the other systems are just way better.
We now think about malicious actors who are trying to do this. I would expect the majority of people not wanting to do this. Those are problems you already have right now, where people can just buy guns. And this goes wrong a lot of times, but it’s not like every second person in the world wants to buy guns and do terrible things with them. Maybe that’s the same with these kinds of futures. Maybe then these defender systems are just sufficient to eventually fight these kinds of things off, in particular if you have good compute monitoring, just general data centre monitoring regimes in place there.
What’s important here to think about is that compute has been doubling every six months. This might not continue forever, or this might continue for a long time. And all the other aspects which reduce the compute threshold have not been growing that fast. So again, all I’m saying is it buys us a couple more years, right? Like more than this 10, 20, 30. Maybe that’s what I’m pointing to.
But overall, what we try to do with AI governance is like: yeah, AI is coming, this might be a really really big deal. It will probably be a really big deal. And we need to go there in a sane, sensible, well-managed way with these institutions. And there are many open questions, as you just outlined, where we don’t have the answers yet — we don’t even know if this is going to be the case, but we can imagine this being the case. And we just need the systems in place to deal with this.
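For a sense of scale on the doubling claim above: if the compute used for training kept doubling every six months, a decade would be 20 doublings, which is roughly a million-fold increase. A quick back-of-the-envelope check:

```python
years = 10
doubling_time_years = 0.5
doublings = years / doubling_time_years   # 20 doublings in a decade
growth_factor = 2 ** doublings
print(f"{growth_factor:,.0f}x")           # 1,048,576x
```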
Rob's computer security bad dreams
Rob Wiblin: OK, so we’ve been talking a little bit about my nightmares and my bad dreams and where Rob’s imagination goes when he imagines how this is all going to play out. Maybe let’s talk about another one of these that I’ve been mulling over recently as I’ve been reading a lot about AI and seeing what capabilities are coming online, this time a bit more related to computer security specifically.
So the question is: if GPT-6 (or some future model that’s more agentic than GPT-4 is) were instructed to hack into as many servers as possible, and then use the new compute available to it from having done that to run more copies of itself — which then also need to hack into other computer systems, and maybe train themselves in one direction or another, or use that compute to find new security vulnerabilities they can then use to break into other sources of compute, and so on, on and on — how much success do you think it might have?
Lennart Heim: I think it’s definitely a worry which a bunch of people talk about. As I just hinted at before, I think computer security is definitely not great. I do think computer security at data centres is probably better than at other places. And I feel optimistic about detecting it: it might be a cat-and-mouse game here, but eventually you can detect it.
Why is this the case? Well, every server only has finite throughput. That’s just the case, as we just talked about, like there’s only that many FLOPS which can be run. So there’s a limited number of copies that can run there, and data centres are trying to utilise their compute as efficiently as possible. Right now you can expect most data centres run at least at 80% utilisation or something, because otherwise they’re just like throwing money out of the window — nobody wants to do this.
So if the GPT-6 system, if this bad worm, comes along and just hacks into the system, there’s only that much compute available which you eventually can use. Then it gets a bit tricky for the worm — it’s kind of like a scheduling problem. Well, if it would kick the other workloads out, somebody would notice: “I was running this science experiment and it never really finished. What’s going on there?” And data centres are already doing this, monitoring for this.
I think the best example we’ve already seen in the real world is these whole malwares where people’s personal computers were used for crypto mining. This malware is just running on your computer, and then they tried to use your processor to mine crypto for this hacker’s personal wallet to get more money. And people started noticing this, mostly like, “My computer is a bit slower than normal.” So the attackers tried to modify the algorithm so it was only using 20% of the processor’s performance, so you’d not detect it. But if you actually go full throttle, literally, your laptop fan would turn on, and like, what’s going on there? If people just see their laptop’s utilisation going up to 100% without them doing anything, be suspicious. Probably you should reset this. Reset this thing.
And I think it’s the same for data centres. Where it’s like, “Oh, there is a computer worm here. They’re doing something. Let’s try to kick it out.” And then you can imagine a cat-and-mouse game which might be a bit more complicated. And maybe this is part of the reason why maybe I’m advocating for the thing which no data centre provider wants, which is like a big red off switch. Maybe I actually want this.
Because normally you’re trying to optimise uptime, because that’s what you want to go for as a data centre provider. They definitely have different tiers there. It’s like, you’re the highest uptime, you’re the coolest data centre out there. And here we just want like, “Gosh. We literally lost control here. Let me just turn off all of the things.” Maybe turning things off on a virtual or software level, like turning off virtual machines, is not sufficient, because it’s a really sophisticated computer worm that’s already trying to escape. You would literally just turn off the compute, do forensics on what’s been going on there, and try to defend against it.
What such a worm eventually exploits is existing security bugs and holes, and usually we fix them once we figure out what they are. This takes a little bit of time, but at least, compared to AI systems, we have some clue: we at least develop these systems in a way we understand, so we can try to fix them.
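The detection idea Lennart describes is simple enough to sketch: compare the utilisation the scheduler can account for against what the hardware is actually doing, and investigate the gap. The tolerance value here is an arbitrary assumption:

```python
def unexplained_utilisation(observed: float, scheduled: float, tolerance: float = 0.05) -> bool:
    """
    observed:  fraction of the cluster actually busy (0.0 to 1.0)
    scheduled: fraction the scheduler thinks known jobs should be using
    Returns True if more compute is being burned than known workloads explain,
    which is the signature of crypto-mining malware or a self-replicating workload.
    """
    return observed - scheduled > tolerance

# Example: known jobs account for 80% utilisation, but the cluster runs at 97%.
print(unexplained_utilisation(observed=0.97, scheduled=0.80))  # True -> investigate
```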
Rob Wiblin: I probably misspoke when I said if GPT-6 were instructed to do this — because it would be much more sensible to have a model that’s extremely specialised at hacking into all kinds of computer systems, which is a much narrower task than being able to deal with any input and output of language whatsoever. So it probably would be quite specialised.
Why do you need such small transistors?
Rob Wiblin: One thing I don’t quite understand is: A chip on a phone, you really need it to be very small — to have a lot of transistors in a tiny amount of space, and to use very little power. But if you’re running a supercomputer, you don’t really care about the physical footprint that much: you can stick it out of town, you can stick it in a basement, you can spread it out. So if you’re trying to create a lot of compute in order to train these models, why do you need the transistors to be so small? Why can’t you make them bigger, but just make a hell of a lot of them?
Lennart Heim: Smaller transistors are just more energy efficient. So over time, basically, the energy you need per FLOP goes down. And energy costs are a big part of the cost; this enables you to eventually go cheaper. And because you produce less heat, you can also just make these chips go faster. Cooling is a big thing. When we talk about chips, the reason why your smartphone is not running that fast is that it’s only passively cooled, right? And performance eventually takes a big hit from that.
Another thing to think about is that when we then have all of these chips and we want to hook them up, it just matters how long the cables are. We’re not talking about hooking something up to do your home internet — one gigabit or something — we’re talking about how we want high interconnect bandwidth, we want high-bandwidth zones. We literally want these things as close as possible to each other, and there are just limits to how long you can run these cables.
This is part of the reason why people are really interested in optical fibre, because you don’t have that much loss over longer cables. But then you have all of these other notions, like you need to turn the optical signal into an electronic signal. It’s an ongoing research domain, but people are definitely interested in this — just building a bigger footprint — because then you also have less heat per area.
This whole notion about data centres is really important to think about also, from a governance angle. I think that’s a big topic in the future. People should think carefully about this and see what we can do there and also how we can detect it. If we talk about advanced AI systems, we’re not talking about your GPU at home — we’re talking about supercomputers, we’re talking about facilities like AI production labs, whatever you want to call them. And there’s lots to learn there.
The field of compute governance needs people with technical expertise
Lennart Heim: For compute governance, we definitely need more technical expertise. I think that’s just a big thing. I think that’s also the biggest part where I’ve been able to contribute as somebody who’s studied computer engineering a bit and just has some idea how the stack eventually works. Within compute governance, you have really technical questions, where it’s pretty similar to doing a PhD where you actually work on important stuff. Then we also have the whole strategy and policy aspect, which is maybe more across the stack.
On the technical questions, I think we’ve pointed out a bunch of them during this conversation. There’s a bunch of things. What about proof of learning, proof of non-learning? How can we have certain assurances? Which mechanisms can we apply? How can we make data centres more safe? How can we defend against all these cyber things we’ve just discussed? There’s like a whole [lot] of things you can do there.
And also there are some questions we need computer engineers on. There are some questions which are more of a software engineering type, and a bunch of them overlap with information security. How can you make these systems safe and secure if you implement these mechanisms? And I think a bunch of stuff is also just cryptography, like people thinking about these proofs of learning and all the aspects there. So software engineers, hardware engineers, everybody across the stack: feel encouraged to work on this kind of thing.
The general notion which I’m trying to get across is that up to a year ago, I think people were not really aware of AI governance. Like a lot of technical folks were like, “Surely, I’m just going to try to align these systems.” I’m like, sure, this seems great — I’m counting on you guys; I need you. But there’s also this whole AI governance angle, and we’re just lacking technical talent. This is the case in think tanks; this is the case in governments; this is the case within the labs, within their governance teams. There’s just a deep need for these kinds of people, and they can contribute a lot, in particular if you have expertise in these kinds of things.
You just always need to figure out, What can I contribute? Maybe you become a bit agnostic about your field or something. Like if you’ve been previously a compiler engineer: sorry, you’re not going to engineer compilers — that’s not going to be the thing — but you learn some things. You know how you go from software to hardware. You might be able to contribute. Compiler engineers and others across the stack could just help right now, for example, with chip export controls, like figuring out better ideas and better strategies there.
So a variety of things. But I’m just all for technical people considering governance — and this is, to a large extent, also a personal fit consideration, right? If you’re more like a people person, sure, go into policy; you’re going to talk to a lot of folks. If you’re more like the researchy person, where you want to be alone, sure, you can now just do like deep-down research there. You’re not going to solve the alignment problem itself, but you’re going to invent mechanisms which enable us to coordinate and buy us more time to make this whole AI thing go in a safe and sane way.
Articles, books, and other media discussed in the show
Lennart’s work:
- Transformative AI and compute: Summary
- Transformative AI and compute: Reading list
- Introduction to compute governance: Video and transcript of presentation
- Draft report: Implications of increased compute efficiency — Performance and access effect for compute
- Compute trends across three eras of machine learning — with Jaime Sevilla, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos
- Information security considerations for AI and the long term future
- Lennart Heim on the AI triad: Compute, data, and algorithms — on the Future of Life Institute Podcast
- You can also check out Lennart’s website for more resources, and all of Lennart’s work on Google Scholar
International context and export restrictions:
- Chip war: The fight for the world’s most critical technology by Chris Miller
- Choking off China’s access to the future of AI by Gregory C. Allen at the Center for Strategic & International Studies (also see the official press release from the US Department of Commerce)
- Chip exports to Russia plunged by 90% after curbs: U.S. official — Reuters report (2022)
- Video: ASML’s Secret: An exclusive view from inside the global semiconductor giant by VPRO Documentary
AI efficiency, compute, and governance:
- Epoch’s Trends webpage and research reports:
- Scaling laws literature review (2023) by Pablo Villalobos
- Revisiting algorithmic progress (2022) by Ege Erdil and Tamay Besiroglu
- OpenAI’s research reports:
- AI and efficiency (2020) by Danny Hernandez and Tom Brown
- AI and compute (2018) by Dario Amodei and Danny Hernandez
- Video: OpenAI CEO Sam Altman testifies at Senate artificial intelligence hearing (2023)
- What does it take to catch a chinchilla? Verifying rules on large-scale neural network training via compute monitoring by Yonadav Shavit
- Societal and governance implications of data efficiency by Aaron D. Tucker, Markus Anderljung, and Allan Dafoe
Exploring this career path:
- Opportunities at GovAI — including summer and winter fellowships to upskill on AI governance
- TechCongress Fellowship — which gives talented technologists the opportunity to gain first-hand experience in federal policymaking and shape the future of tech policy working with members of the US Congress
- AI governance needs technical work on the Effective Altruism Forum
- What does it mean to become an expert in AI hardware? on the Effective Altruism Forum
80,000 Hours resources and podcast episodes:
- Check out our new curated podcast feed: The 80,000 Hours Podcast on Artificial Intelligence — a compilation of 11 interviews from the show on the topic of artificial intelligence, including how it works, ways it could be really useful, ways it could go wrong, and ways to help
- Podcast: Tom Davidson on how quickly AI could transform the world
- Podcast: Danny Hernandez on forecasting and the drivers of AI progress
- Problem profile: Preventing an AI-related catastrophe
- Career reviews:
Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and swallowing an ML model then swallowing a more powerful ML model to catch it. I’m Rob Wiblin, Head of Research at 80,000 Hours.
If you’re at all interested in the issue of AI, I think you should listen to some of this interview.
Fortunately Lennart Heim is full of energy and opinions so I think you’ll find this one to be more pleasure than work.
Access to computer chips is a key way that AI regulations may be enforced in future, and understanding how hardware works as an input to research and applications is essential to having a picture of what’s going on with AI.
I’ve been learning a lot about AI this year, and towards the end of this interview you’ll hear how some of my opinions are coming together, regarding which risks are most serious and what approaches for reducing them are actually plausible.
We cover:
- The pros and cons of using compute governance as a mechanism, as opposed to access to algorithms or data or staff.
- What you might be able to accomplish by limiting access to the fastest computer chips.
- What institutions might need to exist for that to work.
- The impact of the export restrictions imposed on China.
- Whether the existence of open source models renders compute governance ineffective.
- When people might be able to train a model like GPT-4 on their home computer, and what protections are important to have in place by the time that’s the case.
- Whether you can build governance mechanisms into the physical chips themselves.
- Whether we need to centralise compute for it to be possible to limit access to bad actors.
- Implications of machine learning for computer hacking.
- The use of graphics processing units (GPUs) and other chips specialised to just be used with AI.
- Careers you could pursue if you think all this is fascinating and urgent to straighten out.
- And plenty more.
One reminder before that though — we’ve put together a compilation of 11 interviews from the show on the topic of artificial intelligence, including how it works, ways it could be really useful, ways it could go wrong, and ways you and I can help make the former more likely than the latter.
I know lots of listeners are looking for a way to get on top of exactly those issues right now, and we chose these 11 aiming to pick ones that were fun to listen to, highly informative, pretty up to date, and also to cover a wide range of themes by not having them overlap too much.
Of course you could find those episodes by scrolling back into the archives of this show, but the compilation is useful because finding those episodes in the archive is a bit of a hassle, and it puts the interviews that we’d suggest you listen to first, in the order we think is most sensible.
The full name of the feed is The 80,000 Hours Podcast on Artificial Intelligence.
But it should show up in any podcasting app if you search for “80,000 Hours artificial,” so if you’d like to see the 11 we chose, just search for “80,000 Hours artificial” in the app you’re probably using at this very moment.
One thing I’ll add is that Lennart is the rare guest who is so passionate and animated, he talks even faster than I do. I can’t remember when that was last the case!
I know that even when the guest talks slower than me, quite a few listeners find me talking fast makes it more difficult to follow the show, and I apologise for that.
Sometimes people ask me what speed I listen to this show at, and usually I only listen to it at 1x because I also find it hard to follow when it’s sped up. So I do think the people who say it can be very fast have a good point.
This episode might be a particularly valuable moment to point out that most podcast apps allow you to slow down podcast episodes to make them easier to follow. So if you’re ever finding it a bit hard to keep up because we’re talking about something complicated and speaking really fast because we’re getting particularly excited, you could consider slowing down the audio a little.
This feature didn’t used to be universal, but these days Spotify, Podcast Addict, Google Podcasts, Pocket Casts and Castbox all let you set a show to play at 80% or 90% of its full speed, and I expect all the other ones that I didn’t check can as well. Apple Podcasts and Overcast, two of the most popular podcast apps on iPhone, let you set the speed to 75% but not 90%. If that’s actually a bit too slow you could consider listening to this show on a different app, like Spotify.
All right, without further ado, I bring you Lennart Heim.
The interview begins [00:04:35]
Rob Wiblin: Today I’m speaking with Lennart Heim. Lennart is a research fellow at the Centre for the Governance of AI, where he focuses on compute governance. In particular, he tries to answer questions like: “In what situations is compute a particularly promising nexus for AI governance?” and “What is a desirable compute governance system, and which hardware-enabled mechanisms might support it?”
His background is in computer engineering, and he’s also a forecaster for the US government’s INFER programme and an advisor to the AI forecasting outfit called Epoch. He’s also a consultant for the OECD.AI Policy Observatory.
Thanks so much for coming on the podcast, Lennart.
Lennart Heim: Hi, thanks for having me.
Rob Wiblin: I hope to talk about current compute governance proposals and their potential weaknesses. But first, what are you working on at the moment, and why do you think it’s important?
Lennart Heim: So I’m working for the Centre for the Governance of AI, in short GovAI, and I’m part of their policy team. And I’m particularly focusing on what you said: compute.
And my time is sometimes spent on high-level strategic questions, like what do we need in 10, 15 years from now if advanced AI systems really are capable? But also thinking about what are the things which we should do right now in policy to eventually go there in the near future? And I think that’s particularly promising. As you have covered on 80K podcasts before, there are just a bunch of things going on with AI, and thinking about governance there seems pretty important and promising.
Rob Wiblin: Yeah, we’re going to skip a lot of the introductory things that we might normally cover with the problem area today, because we’ve done quite a few interviews about AI recently. And I expect even listeners who haven’t listened to those episodes will probably be learning plenty in the news already, so the general topic will be quite fresh in many people’s minds, I think.
Lennart Heim: Absolutely. I think in particular AI governance. I think before, a bunch of people were on the technical stuff, but I think AI governance has been recently in the news. All the things which are not technical, all the politics and people stuff around it, have been covered in the last half a year, a lot.
Rob Wiblin: Hey, listeners. Rob here. Lennart is going to use a bunch of technical terms in this interview, and I thought it might be helpful to define a whole bunch of them at once here so that you don’t feel confused when they show up. In general, I would say don’t worry too much if Lennart says some specific things you can’t quite follow because they seem a little bit too technical or there’s a bit of jargon in there. I didn’t necessarily follow every single sentence, but I think the broader points tend to come through regardless. So if there’s any particular term you don’t recognise, I think just let it flow over you and don’t get too hung up about it.
OK, the first things I’ll define are the three big companies that are involved in producing computer chips that are useful for AI. There’s ASML, which is this Dutch company, which produces the machines that are used in factories that produce those computer chips. Then there’s a company you might well have heard of called Nvidia, which designs the chips. Basically, they figure out what ought to be on these computer chips, and then they send their schematics, the designs that they come up with, to semiconductor manufacturers — the most famous of which is the Taiwan Semiconductor Manufacturing Company, usually called TSMC, which, as its name suggests, is located in Taiwan and produces a very large fraction of all of the fastest, best computer chips that people use for AI training and applications. So there you’ve got ASML, Nvidia, and TSMC.
OK, then we’ve got terms like compute, semiconductors, graphics processing units (or GPUs) and AI accelerators. These are all different terms for computer chips that can be used for training or applying machine learning algorithms. There are differences between them, but often they don’t matter that much for the purposes here. So when we talk about any of those, we’re talking about computer chips.
Lennart also talks about compute clusters a bunch of times. Compute clusters here is just when you stick together a whole bunch of computers, a whole bunch of different computer chips, so that they can operate kind of as one big system. The term for that, they just happen to call it a compute cluster or a computer cluster, or I guess, actually a cloud computing cluster. The cloud, as many of you might know from using Dropbox or Google Drive, for example, is when you basically rent space or computational ability through the internet from a large company that owns a whole lot of hardware somewhere else — that’s called cloud computing.
Lennart also mentions TensorFlow and tensor processing units. TensorFlow was a piece of software related to machine learning that was created by Google and was then used by plenty of other people because it was made open source. And tensor processing units are chips that were designed to be particularly good for use with this TensorFlow software. But the specifics of that, again, don’t matter, in particular for this interview.
Another thing that comes up regularly through the interview are transformers. It’s not that important to know what transformers are, which is just as well, because I’m sure that I would completely butcher the explanation. But suffice it to say that transformers were an improvement in the algorithms used for machine learning that enabled the kinds of large language models that people are familiar with because of ChatGPT. They were an improvement to the algorithm, an improvement to the machine learning software and the methods that we use.
All right, with that out of the way, let’s get back to the interview.
What is compute exactly? [00:09:46]
Rob Wiblin: So it’s going to be compute, compute, compute today. How could people think about compute? What is it exactly?
Lennart Heim: It’s actually pretty hard. I think we have not settled on a real definition yet. When I say “compute,” I usually mean something like computational power, computational infrastructure — and I’m referring to the hardware, the physical products we need for developing and deploying AI systems.
I think what’s most common to people is the notion of GPUs — graphical processing units — which we at some point started leveraging for AI systems. I prefer to use more general terms: AI accelerators, AI chips. Because Google, for example, has TPUs (tensor processing units), which all do the same. So I like generally talking about the computational infrastructure which supports AI development and AI deployment. And I’m particularly thinking about how I can use this as a node for AI governance to more beneficial AI outcomes.
Rob Wiblin: Yeah. So the unit of compute is the ability to, say, multiply two numbers together or something like that. And that’s a FLOP, right?
Lennart Heim: Yeah, that’s one way to think about it. I mean, there are multiple units of compute, and I’ve been fighting recently with a bunch of newspapers, where they use “FLOP” sometimes differently than I do. So a lot of people talk about FLOP, which is a floating point operation. Basically one operation: as you said, you just add two numbers, you divide them — something along these lines. Sometimes even logical operations: “A is bigger than B” would also count as one operation there.
A lot of people also talk about FLOPS, FLOP per second, which is then the performance of a chip. Each chip — your phone, a GPU — has a certain FLOPS: how fast it can crunch these numbers per second. An important notion is that there are differences in the numbers themselves. Sometimes the numbers are bigger, like 64-bit, and sometimes they’re like 8-bit. This is important because in AI we’ve recently seen a pivot towards smaller numbers — and guess what? Smaller numbers, you can crunch them faster.
Rob Wiblin: So on a basic level, if your computer is faster, it can do more calculations, then you have more compute on your laptop or on your desktop. And you add more chips to this server farm, you’ve got more compute. That’s the basic idea.
Lennart Heim: I think that seems sort of roughly right.
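To pin the distinction down: FLOP counts operations, FLOPS is a chip’s rate, and the total compute of a training run is roughly rate × utilisation × chips × time. A worked example with purely illustrative numbers (not any particular chip or model):

```python
# Illustrative numbers only.
peak_flops_per_chip = 300e12      # 300 TFLOPS at some low precision
utilisation = 0.4                 # real training runs rarely sustain peak throughput
num_chips = 1_000
seconds = 90 * 24 * 3600          # roughly three months

total_training_flop = peak_flops_per_chip * utilisation * num_chips * seconds
print(f"{total_training_flop:.1e} FLOP")  # ~9.3e+23 FLOP
```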
Rob Wiblin: I saw compute in the news this morning, because I think overnight Nvidia’s share price went crazy and jumped by 20% or 30%. I think it grew by more than the entire total value of Intel.
Lennart Heim: Yeah, exactly. Just saw that. Seems like compute is at least a good tool to make a lot of money, at least right now for Nvidia.
Rob Wiblin: Yeah, definitely. I actually own a couple of Nvidia shares.
Lennart Heim: Early to the game.
Rob Wiblin: Yeah. I bought in 2018, actually. Quite early. I was like, I think this AI thing is going to be a big deal. I should put my money where my mouth is. It’s definitely done well since then.
Ben Hilton, one of my colleagues, has been pointing out for a little while that he expects that the price of chips is going to go up a lot. Normally the price of chips goes down, because there’s technological improvement, bigger economies of scale. But inasmuch as you have this takeoff, where AI suddenly becomes much more useful and the chips are more economically valuable, you could imagine that there might be a real runup in the price. A little bit like how last year the price of gas and oil went through the roof because we just couldn’t increase supply that quickly.
Is this maybe the kind of thing that’s driving up Nvidia’s share price? That people expect them to be able to sell the same chips for a lot more next year than this year?
Lennart Heim: I think so. I think this is probably the summary of it. I think it’s probably wrong to think about just the price per chip. Eventually, what we care about is the price per FLOPS, right? This has been going down. That’s the whole notion where you buy a new smartphone, buy a new computer — maybe for the same price, or maybe a little bit more — you get definitely way more computational performance, most famously driven by Moore’s law, over time.
Yeah, with Nvidia, you see a rapid increase. Just people are interested in AI chips, building a lot of them, because it’s been one of the major drivers in AI systems, which I’ve definitely taken a look at in the past, like figuring out exactly how much has been driving this.
Structural risks [00:13:25]
Rob Wiblin: OK, let’s push on and get to the core of the conversation here, which is compute governance.
When we’re thinking about governance, you could think about compute governance from the perspective of trying to get as much of the benefit from AI as possible. We normally think about it, or at least at the moment, people are currently freaking out about the downsides — or at least that’s what I’m spending a lot of my time thinking about. So from a regulatory and governance point of view, it’s natural to think about kind of risk reduction and risk management, and so on. Personally, I’ve been thinking about this a lot the last few months, and trying to figure out how to conceptualise it.
I guess currently I divide AI threats into these three categories.
The first one is misalignment — where an AI model basically ends up with goals that are super at odds with its creators or with humanity as a whole, so it poses a threat to everyone for that reason.
The second broad category is misuse. And that would be where an AI model actually is doing something that it’s being asked to do — it’s doing something that its creators, or at least its operators, are intending — but from your or my point of view, or perhaps the view of society as a whole, the impacts that that has are undesirable.
And then there’s this third category — which I think has gotten less attention, at least until recently, and I’m also not sure what name we should give it. But basically there’s this effect that if AIs become very smart and very capable across a wide range of tasks, and we have enough compute out there for them to be doing an awful lot of thinking, basically an awful lot of mental work, then effectively we’ve had this big population increase on the Earth, and there might be a whole lot more science and technology going on.
And because AI is able to act so quickly, you might expect that a whole lot of stuff that previously took months might now happen in weeks or days. You kind of get this thing where history as a whole speeds up: in terms of calendar time, more stuff gets crammed into each year than there was before. And because history is pretty volatile, if you start cramming a century’s worth of wars and technological advances and breakthroughs and all of this sort of thing into a single year, things could go either very well or very badly very quickly. So there’s this kind of speeding-up-history element.
I’m not sure whether you have a name for this, or whether it’s something you think about?
Lennart Heim: Yeah. I used to, at some point, give a talk where I was just saying there was a line going up — and when a line goes exponentially up, this is in general a sign of like, gosh, you should really think about this, right? Even if it’s something good going up, it generally requires careful management.
Maybe one way to think about it is just using the term “structural risks.” How does AI shape our incentives, our institutions — but also the other way around, right? And if you just talk about like, we just do like 10 years of labour within one year, this changes like everything, right? This changes our institutions and our labour, and clearly our systems are not adapted and there’s some kind of structural risk. Doesn’t necessarily need to be bad. But in general, when things happen fast and uncontrollably, it definitely requires careful management.
Rob Wiblin: I guess it creates particular risks because you would expect some things to speed up more than others. So it might be that our legislative processes don’t speed up 10x, but a whole lot of science and technology can speed up 10x. So things get a bit out of whack.
Lennart Heim: Or like military, or national competition — all these kinds of things can just definitely change a lot. And this requires management.
Rob Wiblin: As I understand it, your work is primarily focused on misuse. Is that a correct impression?
Lennart Heim: I think I would not say so. I think I usually don’t take this angle. I’m just like, “I’m trying to work on AI governance. I’m trying to make sure we can make better, more sane decisions about AI development and AI deployment.” There is no particular thing where I was like, it’s only about misuse and misalignment.
If you now just take this really blunt tool — I’m just trying to take chips away from someone — well, sure, I make sure they don’t misuse it. But I maybe also have reasons to believe these players are more likely to have misaligned systems because they don’t take safety as seriously. And if I’m just taking it away from someone, it also helps with this whole speeding-up-history problem to some degree: I’m just slowing it down, or I’m giving it to them in a more responsible or agreed manner. It’s like, “Yeah, you can have these chips, but can we please take care of careful management there?”
So I tend to be agnostic about which part of the problem I’m addressing. Maybe certain policy proposals focus more on one or the other. There’s been nothing recently where I was like, “It mostly helps with that” — I think it’s mostly always across the dimensions, with maybe a particular portfolio which is more skewed.
Rob Wiblin: Yeah, OK. So it’s relevant to all three, I suppose. I guess every day now, you might be hearing new ideas that people are batting around on social media, in the newspaper, in the policy scene for different AI governance proposals, and compute governance proposals specifically. What is kind of the first test that you apply to them in your mind? Do you have something like, “Would this proposal in the next few years accomplish X or Y?” Is there anything like that?
Lennart Heim: I’m thinking a lot about advanced AI systems: What are the impacts we see in the future? What about future systems which are even more capable? So I think a bunch of times I think about policies like: Do they actually help with this, or do they just address maybe current problems you’re already having? Which seems great, but is there some way I can build on top of this? Can I imagine this as a tiered system in the future that I can scale up? I have this knob which I can slowly turn up if these AI systems become more capable. So I just love the whole idea of foresight. Also just buying some flexibility into the future, so you can maybe adjust this policy — which is like a high ask for a bunch of policies to have.
And other questions which run through my mind are something like: How feasible is this? Do I think people will buy into this? Which actors does it affect? How stringent is this idea? And maybe also how it can be combined with other proposals? I think most of the time I have a bunch of ideas in my head, and it’s like: Oh, cool. How does this feed into my idea? Are there nice synergies? Do they actually fight with each other?
Those are probably the first reactions there. And of course, I would then sit down, I would think more carefully about it and have a framework there. Not saying I necessarily have a framework for all of these policies to eventually do it. I think most of our work is mostly focused on policy research; I’m not the civil servant who’s eventually implementing it. But of course this is mostly a back-and-forth in a conversation you then have, where you would figure out the details.
Rob Wiblin: Yeah. So there’ll be people who might be thinking about compute governance from the perspective of, “We’re going to get GPT-5 next year, and people might use that for a bunch of bad stuff. How do we address that?” It sounds like you’re interested in that, but you’re primarily concerned with what we’re going to do in five or 10 years, when these systems are much more capable than what GPT-5 will be like next year?
Lennart Heim: Yeah. I do think GPT-5 will probably be pretty capable, just like this hypothetical system which is coming out in two years. But I think this already requires careful management. Ideally, we'd already do something there — and if we now apply something to GPT-5, it would probably also apply to GPT-8 in the future. So it seems like a good thing to do there.
I think we've just got to be careful — like, what is eventually warranted and what do we actually get done? I feel like the world is definitely buying into these kinds of risks more, but not to the degree where, for example, if we talk about compute, it's somewhat of a blunt tool and it's not usually the way we regulate stuff. So I think we've got to be careful there, how we first go about it.
The meme which I'm currently trying to push mostly is this whole notion of: compute is an interesting governance node for AI. It enables certain governance capabilities. Please be aware of this. We can chuck it into certain proposals. It's not self-sufficient; you always need to add it to other stuff.
Rob Wiblin: So on that topic, I suppose if you were trying to do regulation or governance in order to improve the impact that AI has, there are very different kinds of actors who you might focus on. So you could think about AI model operators, or perhaps the people who were training them, or of course hardware manufacturers as well. And probably there are others we can think about.
And in terms of productive inputs to the advancement and training of cutting-edge AI models, I think it’s typically broken down into this classic triad of data and compute and algorithms. Shouldn’t there also be a fourth one, which is talent? So being able to hire people to make use of these things. Or do people think talent kind of goes into the ability to create these other three things?
Lennart Heim: So if you add talent, it’s not a triad anymore, so we don’t have this nice term. My argument is mostly that talent is a secondary input to all of these: talent helps you for algorithms; talent helps you for data; talent eventually helps you for compute. And again, algorithms, data, and compute also go with each other: more data usually means you need more compute. But I think it’s a nice way to think about this.
Another way I would describe it a lot of times is as the AI production function. The AI production functions as inputs as you just described, and out we get a model — like a trained ML algorithm which has certain capabilities. Then it seems like if we can govern inputs, we can govern the production of these AI models, and this might be a desirable thing.
Why focus on compute? [00:21:43]
Rob Wiblin: OK, so there’s a whole lot of different angles you could take on this. What are the unique strengths of compute in terms of governance? Why would you think about governing compute in particular, rather than something else?
Lennart Heim: Many arguments for that. Maybe let’s start with describing algorithms and data. With algorithms, we describe the techniques, the underlying math, for all of these systems. On a really high level, this is machine learning; these are neural networks. On a lower level, we think about specific techniques: how you train these systems, maybe new activation functions. Maybe also just ideas like the transformers, which change this field a lot, which is driving current progress. Those are algorithms which are underlying all of these. Eventually this is just computer code which is on your computer.
Then we have data. Because we mostly talk about ML systems, we have the data that we train these systems on. The system sees the data and says, like, "Yep, this is good, this is bad" — and based on this, you get feedback, and then you train the system over time. And over time we just accumulate a lot of data: we talk about terabytes of data to train these systems. Depending on the model you train, it's text, it's images, it's videos — anything along these lines.
And what both of these have in common, particularly algorithms, is just they are sitting on your computer. I can literally press Command+C, Command+V and I duplicated it, and I can just steal it to some degree. With data, it’s a bit more complicated, because it’s just a lot.
But if you now compare this to compute, we’re actually talking about a physical product there, and this just makes everything significantly more governable and more interesting. So we describe this as a feature of eventually “excludability”: there’s some way you can exclude people from using compute. Excluding people from using algorithms and data is significantly harder. And compute has this nice property of you need it physically.
Also, if you run a certain algorithm on your computer, and your computer is using all of its FLOPS, then nobody else can use them: there's a limited amount of floating point operations per second we can execute right now in the world. This gives me some idea of what's going on there, right? I can make an informed estimate. Like if somebody tells me this system was trained with that many FLOP, I can say, "No way — you could not have trained this in that time with the chips you have." In particular if you talk about exponential progress and exponential growth there. So this excludability is a really nice feature of compute.
Rob Wiblin: Are there any other key strengths or benefits from targeting compute?
Lennart Heim: Yeah, I think the quantifiability. If you try to quantify algorithms, this is really hard, right? You have a hard time saying, "This algorithm is way better than that one." There are some ways to measure it, but it's really hard to do good research there and to say what's been happening over time. In particular, if you just imagine the invention of the transformer: that was huge algorithmic progress to some degree. You could say it was a discontinuity. It's really hard to eventually measure this and then also just to see it coming, because [ultimately] algorithms are just ideas.
Same with data. With data we at least can say, “It’s 700 gigabytes of data,” but there’s a huge difference between a Wikipedia text, and a book, and like training on Reddit, just regarding the quality. So with data, we have this huge problem regarding quantifiability. It’s so multidimensional; there’s like so many metrics across which you can measure it. Even with high-quality data, there’s no agreed-upon metric. What exactly is “high quality”? You can sometimes see in papers that they trained for two rounds on Wikipedia data — because Wikipedia data is pretty good, high-quality data — and you would probably not do the same with Reddit. Or they even try to exclude 4chan, because this is like the last thing you eventually want your system to be trained on.
And compute then has this quantifiable feature, literally saying, “This is how many chips you’ve got.” If I know how many chips you have, I have some idea what your theoretical maximum amount of aggregated FLOPS is, and how fast you can eventually crunch these numbers. This then gives me just an idea of your capabilities, and which actors eventually matter. So I can just literally count the chips. I can count how many FLOPS you have, and to some degree it’s just easily quantifiable. It’s a little bit more one dimensional. Also the FLOPS is simplifying it a bit, but I think it’s definitely true to claim that, compared to data and algorithms, compute is more quantifiable.
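As a rough illustration of that quantifiability, here is a minimal back-of-the-envelope sketch: count the chips, assume a peak throughput and a utilisation rate, and you get a ceiling on how much training compute an actor could have used. The chip count, per-chip throughput, and utilisation figures below are illustrative assumptions, not numbers from the episode.

```python
# Back-of-the-envelope ceiling on training compute from a chip count.
# The constants below are illustrative assumptions, not figures from the episode.

CHIPS = 5_000                  # hypothetical number of AI accelerators
PEAK_FLOP_PER_SEC = 3e14       # assumed peak throughput per chip (FLOP/s)
UTILISATION = 0.4              # assumed fraction of peak actually achieved
SECONDS = 180 * 24 * 3600      # roughly six months of wall-clock training time

max_training_flop = CHIPS * PEAK_FLOP_PER_SEC * UTILISATION * SECONDS
print(f"Upper bound on training compute: {max_training_flop:.2e} FLOP")
# With these assumptions, roughly 9e24 FLOP -- the kind of ceiling that lets you
# say "no way you trained a system that big on that hardware in that time".
```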
Rob Wiblin: Yeah, makes sense. I guess an extra issue with data is even if someone doesn’t steal the data that was used to train GPT-4 from OpenAI, other people can scrape data off the internet as well. Other people can download all of Wikipedia. It’s extremely hard to exclude people from that.
Lennart Heim: Indeed. Yeah.
Rob Wiblin: I thought that you might raise the benefit that — relative to algorithms, and certainly data — the chip industry is incredibly concentrated, as I understand it. I recently read this book, Chip War; it gave me a bit more of an insight into how this industry operates and a bit of its history. There's almost like only one company that makes most of the machines that we use to make these most advanced chips, called ASML, which happens to be based in the Netherlands. Basically everyone else gave up on trying to compete with them, which is amazing. And there's Nvidia, there's the Taiwan Semiconductor Manufacturing Company, TSMC, and a couple of other actors — AMD, Intel…
Lennart Heim: Right. Definitely many more actors. Some matter more, some matter less.
Rob Wiblin: But we’re talking about not a large number of companies, and they’re very geographically concentrated. In particular in Europe and the US and Taiwan, maybe a handful of others. I guess Japan, possibly.
Lennart Heim: I think usually people say the US, Japan, the Netherlands, and Taiwan. Then you at least have most of the cutting-edge chips.
Rob Wiblin: But if you want to govern something and it’s a global issue, you’d rather just have to deal with four countries and a handful of companies than 100 countries and 100 different companies. So that’s one benefit of going down the compute route.
Lennart Heim: Indeed. Yeah. What I covered before (the excludability and quantifiability) is what I sometimes describe as "fundamental features" — whereas the semiconductor supply chain is something like a "state of affairs." It's nice that the world turned out to be this way, right? This could theoretically change. I think it's not that likely that it will change anytime soon, because we're talking, in my opinion, about the most complex product humankind has ever created. These chips — it's just insane what we've created there, and how all our societies run on this.
To bring a bit more structure to what you just said, there's a three-step process to think about this. There's the design process, where you design a chip on a high level, architecture-wise: How do I actually add these numbers? This is what Nvidia is famous for, and why their stock price has gone up a lot. Nvidia has been doing this historically, but they don't produce these chips.
This is a trend we've seen emerge within the last decades: a lot of design companies, also like Apple, just don't produce the chips anymore. They go to so-called "fabs." One example there is TSMC, which is the leading manufacturer of cutting-edge chips — I think they have 90% of the market share for cutting-edge chips. Other important players there are Samsung and Intel as well. Intel is a famous example which worked across all sectors: they did design; they did the fabrication; and they did the last step, which is the packaging. And for packaging there are also third-party actors — sometimes TSMC is doing it; it depends exactly what you do.
But we go from the design, then to the fab which produces the chip, then to the packaging — taking these wafers with a bunch of chips on them and turning them into individual packaged chips — and then basically Nvidia gets the chip back and can eventually sell it. And yeah, it's full of choke points, if you want to describe it this way. Particularly if you just care about cutting-edge chips — the chips with the smallest transistors, which have the highest performance — ASML are the only ones producing the machines, which go to TSMC and Samsung, which then produce the cutting-edge chips. So basically, whoever ASML ships to has the ability to produce these chips. Others simply do not.
Rob Wiblin: Yeah, I definitely recommend reading the book Chip War, if people are interested. It is mental what is inside these chips. And the manufacturing process is insane — the number of technological hurdles that have had to be overcome in order to make things this small and to manufacture them at scale. I guess the current phase is extreme ultraviolet lithography, EUV — ultraviolet light with an extremely short wavelength — and it took many years of R&D, I think in particular from ASML, but by other groups as well.
Lennart Heim: From ASML betting on this — and from others making strategic investments in ASML to back that bet. The Japanese firms tried, and eventually failed, to do this. They just wasted billions trying to develop it. And ASML then succeeded, and now has this market. They're having a pretty good time.
Rob Wiblin: No kidding. This is the kind of new wave of technology that is coming online now. I guess there might be future evolutions that are also pushed forward. I’m actually not sure what the next wave is going to be in three or five or seven years’ time.
Lennart Heim: Yeah, there's lots of small tweaking and other stuff going on there. The general trend everybody talks about is that transistors are getting smaller; this is what we're trying to enable there. But chips are becoming way more hybrid right now, and there are roadmaps with ideas for what's going to happen within the next two years. These EUV machines are operating right now; they probably produced a chip which is sitting in the laptop or smartphone people are using to listen to this. And of course they're thinking about what's going to be next; they're making the next developments.
There's a great paper about how ideas are getting harder to find, and it uses the semiconductor manufacturing industry as a case example: you just see that we need exponentially more money to keep this exponential trend working. And this is what we have just seen. So it will stop at some point. I think the literal definition of Moore's law will definitely stop, for sure.
But eventually what I care about is the computational performance. And there are other tricks so you can still continue improving this — like 3D stacking, in-memory computing, and a bunch of other things on the horizon — which are not completely new technologies, but hybrid technologies which still rely on this really complicated process of building semiconductors.
Weaknesses of targeting compute [00:30:41]
Rob Wiblin: Yeah. What are the weaknesses of trying to target compute as opposed to other inputs?
Lennart Heim: I think we now just mostly talked about compute; we didn’t talk about AI, right? I’m doing AI governance. Maybe we should call it “AI compute governance.” Or maybe this is the wrong term. Sometimes we talk about “data centre AI compute governance” to make it even more targeted.
We just made a claim that compute is governable due to the features we just discussed. And the claim which sits underneath this one is that, by governing compute, I can govern AI systems. Why is this the case? We've seen that the compute used to train AI systems has grown exponentially over time, doubling every six months. This is ridiculous — it's faster than Moore's law, and Moore's law is already pretty fast. When we talk about speeding up history, this is part of what's happening there.
So it looks like, all things being equal, systems trained with more compute are more capable systems, have more capabilities — famously described in the empirical observation called “scaling laws,” where we just see these kinds of observations. This means more compute means more capable systems. I’m like, “Cool. If I can govern compute, I can govern the most capable systems to some degree. I can give it to people, I can take it away from people, I can distribute it accordingly.” So that’s the case where I think we can govern AI.
What are the cases against this? Well, I just said compute is important for AI. If this stops being true — if tomorrow somebody has a new magic sauce, a new algorithmic efficiency innovation, the transformer 2.0, and everybody can train GPT-4 on a smartphone — well, I’m having a hard time eventually governing these systems, because it’s just widespread.
Rob Wiblin: Hey, listeners. Rob here. Just thought I’d dive in and offer another definition. Another technical term that comes up that actually is important is scaling laws. So this one I think we’ve explained on a couple of previous episodes, and they are really important, so I’ll just explain what they are again. Basically, scaling laws are this observation that people have made over many years, over the last ten years, I guess, of improvements in AI: that if you increase the inputs to an AI model, to the training of an AI model by some amount, then you tend to get a consistent improvement in the performance of that model.
So people have noted, for example, that if you double the amount of data that is used to train a model, then you might get a consistent 10% reduction in the error rate, say. Similarly, if you double the amount of compute that is involved in training a chess playing algorithm, then you might get a consistent 5% improvement in its quality of play.
So the name for these empirical observations, that you get kind of consistent improvement in performance for a consistent percentage increase in the inputs that go into training a model, those are called scaling laws. And conveniently, if those scaling laws continue to hold, then that allows you to predict the performance of AI models some years in the future, because we have some idea about what additional amount of compute might be available, what additional amount of data there might be. And so by projecting forward these past improvements that have been quite consistent over time, we can guess where things might be in 2025 or 2026.
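To make Rob's description a bit more concrete, here is a minimal sketch of what such a power-law relationship looks like, assuming a simple loss-versus-compute law. The constant and exponent are made-up illustrative values, not fitted to any real model family.

```python
# Illustrative power-law scaling: loss falls by a constant factor every time
# training compute is multiplied by a constant factor. The constants are
# made-up for illustration, not fitted to any real model family.

def loss_from_compute(compute_flop: float, a: float = 1e3, alpha: float = 0.05) -> float:
    """Toy scaling law: loss = a * C^(-alpha)."""
    return a * compute_flop ** (-alpha)

for c in [1e22, 1e23, 1e24, 1e25]:
    print(f"{c:.0e} FLOP -> predicted loss {loss_from_compute(c):.2f}")
# Each 10x increase in compute cuts the predicted loss by the same fixed share
# (here 1 - 10**-0.05, i.e. about 11%), which is what makes extrapolating a
# couple of years ahead possible, if the law keeps holding.
```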
Lennart Heim: Right now it's the case that there are probably fewer than 10 data centres in the world that are able to train these kinds of systems — that have that many chips to eventually go about this.
So that seems pretty important. You will always need [compute], no doubt. The question is how much of it you need, and in how concentrated a manner you need it.
Rob Wiblin: Right. So a possible future weakness would be if you no longer need very much compute in order to do these things that we’re potentially worried about, then it just becomes a lot harder to get much traction limiting people’s access this way.
I suppose another weakness, but maybe not relative to other things, is we need access to chips for other functions, for things that don’t have anything to do with AI. So we need them on our laptops, need them on our phones. People have data centres doing other kinds of work that we don’t regard as dangerous. So because it has these multiple different functions, in order to govern AI via compute, you might need to govern all compute. And that is going to accidentally have this bycatch of a whole lot of other industries that might be quite irritated that they’re being interfered with.
Lennart Heim: That’s a really common pushback. And I think that’s absolutely right. If I’m just claiming, “We’re governing all the compute,” like, no way — it’s everywhere. Like it’s sitting in your lightbulb if it has internet-of-things features or something. And as I said before, compute is somewhat of a blunt tool, because it’s untargeted to some degree.
Lucky us, we live in a world where it's a tiny bit better. At least we have specialised processors for AI. And these weren't necessarily AI-only processors from the start — we started with graphics processing units, GPUs, which were used for gaming. Turns out they're also really, really good for AI systems. But over the last years we've pivoted a bit more, where we added certain features to GPUs. I think it's not even fair to call them GPUs anymore; we still all do it, but it's an AI accelerator, which has certain features — in particular, it's really good at processing tensors. This is different from the CPU you have in your smartphone, though your smartphone also has a small GPU.
But compared to the chips which I talk about — which are mostly chips sitting in data centres which have the biggest capabilities, running on 500 watts — your smartphone could not support this. You need active cooling for these kinds of things. So actually, the regulatory target of compute is way more niche than most people think. I’m talking about probably less than 1% of all chips, probably even smaller. If we talk about this, nobody cares about your AI chips on your smartphone, right?
Rob Wiblin: You’re saying 1% of chips, like counting the chips? Or 1% of all compute that exists in the world?
Lennart Heim: Yeah, it’s probably more of the compute eventually, because they’re just like significantly faster. But not all compute is equal.
Rob Wiblin: Measured by kilogrammes of chips or something.
Lennart Heim: Maybe that’s it. And just measured by highly parallelisable compute, of course. And just faster. People should look up CPUs versus GPUs for the main differences there. But the [GPU] can just do the same thing really fast in parallel a lot of times. Your CPU is way more flexible, right? That’s the general notion.
You have this whole spectrum from the most specialised chips — which are also the most efficient; they can literally do only one thing. An example is the media encoding processor in your TV. Or bitcoin mining: there are specialised processors just for bitcoin mining. And then you move across the spectrum to GPUs: they're a bit more flexible, but not as flexible as the CPU. The CPU can do literally everything, but that flexibility comes at some cost. We observe this all around the world.
And we talk about this more specialised compute here — which uses a lot of energy, which we will talk about. They are orders of magnitude faster than the chips in your smartphone. Nobody’s training AI systems on your smartphone. We talk about the chips which eventually train these systems.
Chip specialisation [00:37:11]
Rob Wiblin: Yeah, let’s just pause and explain this issue of chip specialisation for particular functions, because I think it’s important here. And it’s probably going to be important in future interviews that we do as well, so it’s worth me being clear on what’s going on, and listeners as well.
Maybe I’ll just explain my kind of rough understanding of it, and you can tell me where I’m wrong. So I think of CPUs — the kind of thing that you have in your laptop or your phone — as being the maximally general compute-processing thing. It’s not particularly good at any particular kinds of computational operations, but it does all of them roughly equally well. And you can change the chip design in order to make it particularly strong at one particular kind of calculation, but then it will, on the other hand, be worse at other kinds of calculations.
And people started doing this long ago for gaming purposes with these graphics processing units, or GPUs — where they figured out that, actually, if all you want to do is just render a video game, there's a particular kind of calculation that you're doing constantly, and other ones that never come up, more or less. So we're going to design a chip that is just really good at figuring out the shading on an object based on different levels of light. It's going to be particularly good at doing that exact operation and not other things. And then I guess you get a big boost of something like maybe 10 to 100 times efficiency for the things that it's most specialised on.
Now, at some point people figured out that actually the calculations that we’re doing in training ML models are remarkably similar to the thing that we’re doing when we’re trying to render a video game, so maybe we should just use GPUs. And so that gave them a boost. And then they said, we can do even better than that: we can specialise these chips even more to be particularly good at exactly the kinds of mathematical operations that are constantly happening when you’re training ML models. And I think this led to, again, a 10 to 100 times improvement over these maximally general CPUs that people might have been using before.
One extra thing is that you run out of juice here: you get this 10–100x boost in efficiency, but then you can’t do that again, because the chip has already been specialised onto the thing that you want. And so to some extent, we’ve already done this — or like, now, ML models are trained on specialised chips and so we won’t expect to get this kind of leap forward again.
But the fact that these chips that are specialised on training ML models are so much more efficient in terms of energy efficiency, say — or weight efficiency, even — at doing this one thing versus everything else, means that you can kind of get most of the way there, just by governing these chips that are specialised on training ML models. Because all of the rest of the chips in the world that are aimed at doing more general tasks or just different kinds of calculations couldn’t really be used that well, couldn’t really be used that efficiently to train ML models — certainly not the cutting-edge ones, anyway.
Is this right?
Lennart Heim: Yeah, there are maybe some nuances, which I of course always add as a computer engineer. But I think you're pointing to the right thing: we're talking about an order-of-magnitude difference or more between these chips in what they're good at. And this just helps a lot. Basically, to overcome this difference, you need 100 times more chips — and that's really hard to eventually accomplish. And a lot of times the difference is even bigger as chips become more specialised.
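A quick toy calculation of what that order-of-magnitude gap means in practice, under assumed (not measured) throughput numbers:

```python
# Toy comparison: how many general-purpose chips would it take to match a
# cluster of specialised AI accelerators? Throughput numbers are assumptions
# chosen only to illustrate the ~100x gap discussed above.

ACCELERATOR_TFLOPS = 300       # assumed effective throughput of one AI accelerator
GENERAL_CHIP_TFLOPS = 3        # assumed effective throughput of one general-purpose chip
ACCELERATORS_IN_CLUSTER = 5_000

general_chips_needed = ACCELERATORS_IN_CLUSTER * ACCELERATOR_TFLOPS / GENERAL_CHIP_TFLOPS
print(f"General-purpose chips needed to match the cluster: {general_chips_needed:,.0f}")
# 500,000 chips -- before even counting the extra power, cooling, and
# interconnect needed to make them behave like one supercomputer.
```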
Export restrictions [00:40:13]
Rob Wiblin: Let's talk about the most visible piece of existing compute governance, or at least almost the one thing that I'm aware of. Many people will have heard that last year the US imposed export controls on the most advanced chips, by some measure, restricting their export to Russia (because of Ukraine) and also to China. In the latter case, I think it was explicitly designed basically to hobble their AI and tech industries. I suppose they would have said it was because they were worried about military applications.
But trying to cut off another major country from this entire area of technology is a pretty big step. It’s not something that I can recall the US doing that recently, given that China and the US aren’t in overt conflict in any way. And as I understand it, the Netherlands is going along with it, given that they’re the home of ASML. And Taiwan is going along with it; they would have to go along with it, given that they’re the home of TSMC. So it’s being enforced by a whole lot of different parties reasonably intensely.
Is there much more to say about these particular export restrictions than what I’ve just said there?
Lennart Heim: Lots of things to say about it. Where do we even start? I think what's important to note is that those are the famous October 7 controls, right? Let's put them in two camps: one camp is making sure China is not able to build its own sovereign semiconductor supply chain; the other part is what you were just talking about: the AI chips. There's no big success in cutting off access to AI chips if they can just build their own chips. So you eventually want to do both at the same time.
Let's put it this way: ASML was previously not allowed to send their machines to China, because they had to ask for an export licence from the Dutch government, and for some reason it never quite got around to approving it. And now we have these official rules with the Netherlands — I'm not sure if "joining" is the right term. Same with Japan. They're roughly the same rules, but they're saying, "This doesn't have a lot to do with the US controls" — though of course it's somewhat correlated.
This is also partly due to US leverage: the US might not supply the ASML machines, and the US might not have TSMC, but the US is supplying the software and the lasers for the ASML machines. So they have some way to say, "Hey guys, please follow these rules, otherwise we're going to tell our US companies not to export this stuff to you anymore." It's a somewhat indirect way the US can leverage its power through export controls. So you'd rather join; you follow these rules by not exporting these types of equipment to China.
So they use all these kinds of rules to eventually enforce this, because also what you want to have is that TSMC is not allowed to produce chips for China, which are under certain thresholds there. So that’s the whole thing about the semiconductor manufacturing equipment: you’re just really trying to make sure China is not able to build a sovereign semiconductor supply chain for cutting-edge chips. We’re talking about chips which use EUV machines, like the most advanced chips that we just talked about.
Rob Wiblin: As I understand it, these rules have effectively prevented China from being able to manufacture these most cutting-edge chips anytime soon. Because it’s so difficult to do — it’s the most difficult manufacturing thing that humanity has ever done, more or less. And inasmuch as they’re cut off from all of the intermediate inputs, even if they can get access to the internal documents that might describe on paper how in principle you would do this, the technical challenges to actually doing it are very high.
Lennart Heim: Yeah, absolutely. I mean, there's a whole history of them trying to get access to this. I watched a great documentary about ASML, and they said they double their security budget every year, because they're so worried about IP theft. And there are famous cases where China did steal IP — and yeah, taking them to court didn't really work, right?
So it looks like even the IP is not sufficient there. You eventually need all this tacit knowledge. When you buy a machine from ASML, having the machine is one thing — but a whole team of engineers basically goes to live in your fabrication plant and makes sure this machine runs; that's a big part of it. So even if you stole a machine, you would still need to be able to operate it — and there's only a handful of people who can do this. And this tacit knowledge is where the input called "talent" is really important for ASML and for everybody who tries to build these chips, particularly for TSMC.
Rob Wiblin: I was going to say earlier that “fab” is short for “fabricator,” which is basically a chip factory.
Lennart Heim: Indeed. Yeah. Or like "fabrication plant" or something — that's where the term comes from.
Rob Wiblin: Yeah, if you want to be cool and in with the compute people, you gotta call them fabs.
Lennart Heim: Then you go with the fabs.
Rob Wiblin: Is there anything else you want to say about the underlying motivation for these export controls, given that they’re reasonably unprecedented?
Lennart Heim: I mean, people can read the document from the US government. They explicitly say China is trying to be a world leader in AI by 2030. They've been using AI systems for human rights violations, most famously with computer vision algorithms used against the Uighurs in China. And this is part of the motivation for explicitly targeting these AI chips, right? The document talks explicitly about China's goal of being a leading actor in AI, and they're explicitly worried about human rights abuses and military usage of AI systems. So the US is clear that this is a motivation there.
Rob Wiblin: Do you think these export controls were a good move, all things considered?
Lennart Heim: I feel more certain about cutting off their access to producing sovereign chips. Keeping these kinds of dependencies — where we all have to come together and talk about what we actually produce — seems good.
Cutting off access to the chips themselves I think is more debatable. All things considered, I feel positive that they eventually did it. I think there need to be some tweaks to make them better at achieving the desired goal, and actually enforcing them is another big question there.
But if we want some way of dampening race dynamics, I think this is a really good way of achieving it. It's unfortunate that this is the way of going about it — I would rather have some kind of treaties where we all come together and decide on these kinds of things.
But yeah, given that, I think this just reduces the AI governance problem to fewer actors, and makes it more certain that, for example, China is not leading the AI race. You see it right now: a lot of times when people talk about slowing down AI and AI regulations, people bring up the comment, "But China." It's become such a typical response: "But China."
Rob Wiblin: I have started having the same kind of almost-mocking response: “But China! But China!”
Lennart Heim: Yeah, indeed. And I think these kinds of rules help with that: it's now less certain that [China] eventually gets there. And I think these rules are a really big deal and could potentially have a tremendous impact on the whole production of AI within China, if they are enforced correctly. And again, there are some tweaks to be done, and some caveats about whether they cover everything right now.
Rob Wiblin: So let’s say that GPT-4 is the most impressive ML model that exists currently. Potentially, China does have plenty of compute. Maybe it doesn’t have these very latest cutting-edge chips, but there’s still plenty of computers and plenty of supercomputers in China. Why can’t they just train GPT-4 on those older chips by leaving it running for a bit longer?
Lennart Heim: Yeah, they still have access to the older chips. I don't think the controls cut off access to systems right now. The export controls get exponentially more effective over time for two reasons: because the amount of compute used for training grows exponentially, and because chips get exponentially better.
So how could you overcome it? Well, next year, if chips are twice as good, you just get two times as many of the older chips as before. But it gets hard: at some point you need like 16 times as many of the same chips, and it's really hard just building these types of clusters. And it's not only that chips get better — we also scale systems, right? The reason we have bigger and bigger systems is that we build bigger and bigger clusters. So while everybody is doubling the size of their clusters anyway because systems are getting bigger, [those relying on older chips] need to double theirs again because the chips they can get aren't improving. So [the controls] mostly have an impact in the future. I don't think they bite much right now.
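A rough sketch of why a fixed cutoff bites harder each year, assuming chip performance doubles every two years and frontier cluster sizes double every year (illustrative cadences in the spirit of what Lennart describes, not precise figures):

```python
# Sketch of why a fixed export cutoff bites harder over time. Assumes chip
# performance doubles every 2 years and frontier cluster sizes double every
# year -- illustrative cadences, not precise figures from the episode.

CHIP_PERF_DOUBLING_YEARS = 2.0
CLUSTER_SIZE_DOUBLING_YEARS = 1.0
BASE_CLUSTER = 5_000   # frontier cluster size today, in current-generation chips

for years in range(0, 9, 2):
    perf_gap = 2 ** (years / CHIP_PERF_DOUBLING_YEARS)              # new chips vs. the frozen generation
    frontier_cluster = BASE_CLUSTER * 2 ** (years / CLUSTER_SIZE_DOUBLING_YEARS)
    restricted_chips_needed = frontier_cluster * perf_gap           # older chips needed to match the frontier
    print(f"year {years}: ~{restricted_chips_needed:,.0f} restricted-generation chips to keep up")
# After a few years the count runs into the millions -- the "16 times the same
# chips" problem mentioned above, compounding every year.
```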
In particular, when we talk about these controls, what I think is really interesting is that a bunch of people get wrong which chips we're talking about. First, people think it's about chips which go into tanks or rockets. No — these are not the chips which go into tanks or rockets. You don't do that; your battery would run out of power almost immediately. The chips in tanks and rockets are closer to the one in your washing machine; what you do there is not that sophisticated. It's different if you're trying to calculate the trajectory of a missile or something: then you do it on supercomputers, and maybe the chips are closer.
We're talking about the chips which are used in data centres for AI training. And how did the US try to define these chips? Because they need to write some kind of rule which defines the chips. They defined it by chips which have a certain computational performance — they can do that many floating point operations per second — and which have a certain "interconnect bandwidth": how fast chips can talk to each other. And this is a distinct feature of chips within data centres: because we need a lot of chips, they're really fast at talking to each other.
This interconnect rule is currently sitting at 600 gigabytes per second. So this basically cuts off access to the A100, which is Nvidia's chip from two years ago, and to Nvidia's most recent chip, the H100. What did Nvidia do? Well, Nvidia reacted like, "OK, cool. We're just going to reduce our interconnect bandwidth. We keep our performance the same, but reduce the interconnect bandwidth to like 400 gigabytes per second — below the threshold — so we can continue selling it."
Well, they don’t have access to the same chip as in the US, but they have the same computation performance, with a little bit worse interconnect bandwidth. So the question is, like, how big eventually is the penalty now for training these kinds of systems? My current belief is this is not sufficient. We talk about a penalty of 10% — maybe 30%, if you’re really lucky — for building cutting-edge AI systems. And if you see current AI systems, and if you believe AI systems are really important, well, you’re willing to eventually pay this, to pay this penalty of like 10–30%.
I think this is part of where the US basically needs to adjust their threshold to cover more chips, and reduce the interconnect threshold maybe even further. So you eventually have a penalty which just hurts more — and maybe this should be 10x or even more.
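A simplified sketch of the threshold logic being described: a chip is only caught if it clears both a performance bar and an interconnect bar. The 600 GB/s figure is the one mentioned above; the performance threshold and example chip numbers are placeholders, not the legal text.

```python
# Simplified sketch of the rule's structure: a chip is only export-controlled
# if it clears BOTH a performance bar and a chip-to-chip interconnect bar.
# The 600 GB/s interconnect figure is the one discussed above; the performance
# threshold here is a placeholder, not the actual legal text.

INTERCONNECT_THRESHOLD_GB_S = 600
PERFORMANCE_THRESHOLD_TFLOPS = 300    # placeholder value for illustration

def is_export_controlled(perf_tflops: float, interconnect_gb_s: float) -> bool:
    return (perf_tflops >= PERFORMANCE_THRESHOLD_TFLOPS
            and interconnect_gb_s >= INTERCONNECT_THRESHOLD_GB_S)

# An A100-class chip with 600 GB/s interconnect is caught; a variant with the
# same compute but interconnect cut to 400 GB/s slips under the rule -- which
# is exactly the workaround described above.
print(is_export_controlled(perf_tflops=312, interconnect_gb_s=600))   # True
print(is_export_controlled(perf_tflops=312, interconnect_gb_s=400))   # False
```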
Rob Wiblin: Yeah. I’ve heard from others that these export controls were a significant blow to ML research in China, and that China is meaningfully behind. And also that it seems to be falling further behind — because as you’re explaining, these restrictions become more of a problem over time, more or less, the longer they’re in operation. I guess possibly they could bring down the threshold for what restrictions there are in order to make it more serious. And especially as the cutting-edge ones that other countries have access to just keep getting better and better, then having had these export controls in place for longer, basically the chips that are left in China just get left in the dust, more or less.
Is that broadly right? That the amount of compute that the largest ML model in China has been trained on will probably fall progressively behind the largest amount of compute that’s been used for a model in the US?
Lennart Heim: Yeah, I don’t think this is the case right now with the current rules, for the simple reason you can still continue pushing computation performance of chips into infinity. The only thing you need to make sure of as an exporter right now is you don’t have more than 600 gigabytes per second interconnect. There are many reasons to believe that computation performance is just way more important than interconnect speed.
Rob Wiblin: So why are the rules just about the interconnect speed?
Lennart Heim: The thing they're trying to solve there is that if you just make it about overall computational performance, you would hit a lot of chips: you would hit gaming GPUs (people are probably not excited about this); and in the future — because the whole idea of this rule is that you keep the thresholds where they are right now — five years from now you're going to hit a MacBook, right? And then you have a bunch of US companies not being excited about this anymore, and I'm also targeting what I'm actually not supposed to be targeting, right?
So what I’ve been trying to think about there is: We’re not worried about chips; what we’re eventually worried about is a bunch of chips together. We’re worried about supercomputers, because that’s what you need for AI systems. The problem is, with single chips, you can just hook them all up, right? So I need to make it about the chips so you don’t get a supercomputer. There are actually also rules where they forbid the export of certain supercomputers, and I think it’s actually defined by FLOPS per square metre. Just one way to think about this. But again, there are various different ways I can evade it. You just make it a bit further apart or something.
But what we eventually just need is some way we can target only supercomputers without targeting consumer devices. I think it's an open question how we eventually try to achieve this. So you want some kind of feature which [prevents] chips from being hooked up at scale. Like, a chip should not be part of another 10,000 chips — that's worrisome. If a chip is just part of another 100 chips, I don't care — those are not the workloads we're eventually worried about. Those are not the things which are then going to be trained as AI systems.
But this whole notion of "it's about supercomputers; it's not about chips": we just have a really hard time designing these kinds of rules, right? And this is where people like me and others are trying to think about this. You need technical experts, but also people who know what the problem is. You need to think about the chip specifications at one end, and at the other end the question is: Is China going to be an AI world leader? You've got to go through all the steps in between: What does this mean? What is the penalty? How bad is it? How is this going to develop in the future?
And when you read news about whether China is able to catch up or not: I think a lot of the time China's progress is overblown anyway, and that's independent of the current export controls. I think there are lots of signs right now that China is probably going to regulate these kinds of systems really, really hard. And if their current proposals get implemented, that's going to be a big hit to the industry and how they go about it — just because they need to censor so much. You cannot simply deploy a model, because it might say bad things about the CCP, and they're not too excited about that.
Rob Wiblin: I guess, being more generous, you might say the government there is more cautious in general about the kind of chaos that technological advances might create. Maybe they’re taking a more prudent approach, perhaps for bad reasons, but in some sense they’re following a more precautionary approach. And I suppose that’s great from our point of view, because it means that we don’t feel this need to race in order to remain competitive, inasmuch as China is willing to slow things down just because they think that’s the more prudent point of view. I guess either being generous from the perspective of the Chinese people, or being less generous from the perspective of the Chinese Communist Party, then we can afford to take our time as well.
Lennart Heim: Indeed, yeah. It just seems important. A lot of it depends on what people believe. A lot of the time there seems to be a consensus — particularly right now in the US — around this angle that [China is] trying to become a superpower.
Then I think the other thing to look out for is that they're regulating industry. What they do internally within the army or within the AGI labs might be different — that might not follow these rules, right? So maybe their deployed systems are heavily regulated, but something happening in the background is not. And this might be another [point in] favour of these types of export controls.
Rob Wiblin: I see. Yeah, that makes sense. One thing I don’t quite understand is: A chip on a phone, you really need it to be very small — to have a lot of transistors in a tiny amount of space, and to use very little power. But if you’re running a supercomputer, you don’t really care about the physical footprint that much: you can stick it out of town, you can stick it in a basement, you can spread it out. So if you’re trying to create a lot of compute in order to train these models, why do you need the transistors to be so small? Why can’t you make them bigger, but just make a hell of a lot of them?
Lennart Heim: Smaller transistors are just more energy efficient. So if you go back, basically the energy per FLOP goes down over time. And energy costs are a big part of the cost, so this enables you to eventually go cheaper. And you can also make these chips run faster, because you produce less heat — cooling is a big thing. When we talk about chips, the reason why your smartphone is not running that fast is that it's only passively cooled, right? And this takes a big hit on the performance.
Another thing to think about is that when we then have all of these chips and we want to hook them up, it just matters how long the cables are. We’re not talking about hooking something up to do your home internet — one gigabit or something — we’re talking about how we want high interconnect bandwidth, we want high-bandwidth zones. We literally want these things as close as possible to each other, and there are just limits to how long you can run these cables.
This is part of the reason why people are really interested in optical fibre, because you don’t have that much loss over longer cables. But then you have all of these other notions, like you need to turn the optical signal into an electronic signal. It’s an ongoing research domain, but people are definitely interested in this — just building a bigger footprint — because then you also have less heat per area.
This whole notion about data centres is really important to think about also, from a governance angle. I think that’s a big topic in the future. People should think carefully about this and see what we can do there and also how we can detect it. If we talk about advanced AI systems, we’re not talking about your GPU at home — we’re talking about supercomputers, we’re talking about facilities like AI production labs, whatever you want to call them. And there’s lots to learn there.
Rob Wiblin: Before we push on from this, all things considered, is the US in a geopolitical strategic race with China to develop AI right now in practice? Or at least is it at all competitive?
Lennart Heim: I guess it’s like a personal opinion or something. I’m definitely not an expert who speaks on this. I’ve been basically directed to this whole China thing because it was about chips. I was not thinking about China and the US that much before, and I’m definitely not a China expert on any of this. But I can say the perceived notion is — it comes up in testimonies and other stuff, and even from the labs themselves — just saying “But China.”
Whether it actually is one is a different question. Like, what is the perceived notion, and what does it enable for my policy? If people believe they're in a race, this changes the policies I do, right? And we should really be careful about whether we believe we're in a race or not.
Rob Wiblin: Yeah. I’m planning to interview some other people about this question in particular. My impression is that people are more concerned about there being a race with China the less they know. Which makes me a little bit suspicious that in reality, this may not be a competitive race at the moment in practice, and in fact, there’s more breathing room than amateurs might suspect.
Lennart Heim: Yeah, I think it would be wrong to say if we solve the race with China, the AGI problem or the AI governance problem is solved. This is definitely wrong. I think one of the reasons to do it is just that, actually, if people believe we’re not in the race with them, then eventually all the others can slow down, we can come together as the world. And I think ideally, this would be an interest where we just all come together and say, “Hey, this seems like a really big deal. We should all get coordinated about this.”
Rob Wiblin: OK. For the rest of this interview, I’m going to assume that we think that AI models are going to be trained in the next 10 years, maybe the next three years, which could be used to do a lot of harm to humanity, or at least to individuals, if they were operated by the wrong people and asked to do things that we would wish these models weren’t doing — so basically, it’s this powerful strategic technology with harmful applications.
If that’s right, then we don’t want anyone to be able to operate these most powerful ML models as soon as they’re trained — and maybe we don’t want those models to be trained at all, either, given that they might inevitably proliferate. And we might, in particular, really want to prevent some groups like terrorists, or North Korea, and probably some other people we can stick on the list, from accessing this technology for a long time, maybe ideally ever. Are you happy with those assumptions going in? Is that a useful setup?
Lennart Heim: Well, it’s a pretty big ask, if we just talk about these kinds of setups. But yeah, I think that’s a useful way to think about this. I think there’s sometimes a way to just test your policies: would they eventually help with this? If this is eventually going to be the case in the future, that’s a different question. But thinking about it, and if you have some governance regime which can address this, seems right — but it must be a pretty stringent one, right?
Compute governance is happening [00:59:00]
Rob Wiblin: Yeah. OK, so let’s do a section now on basically current proposals that people are making for possible regulations and governance systems for compute. What’s an example of a proposal or mechanism that has been put forward to achieve those kinds of goals?
Lennart Heim: Well, I could say these chip export controls are a current example right now — I think a pretty big deal. It's out there, so compute governance is happening. I used to say, "It's happening without a lot of knowledge. This seems bad. We should really think about this." And clearly governments and others are lacking experts on these kinds of topics who could make this better.
Other proposals which have been put forward, I think, are using compute as this type of monitoring or knowledge. What a lot of people talk about is a threshold. So as you just said, maybe you don’t even want to train certain systems, because just the existence in itself is dangerous because it might proliferate, somebody might steal it. You might even think, if you believe in AI takeover scenarios, that during a training round, AI is starting to bribe some people and trying to take over the world or something along these lines, so you might not even ever want to train a system.
And with this compute threshold, you could basically agree, like, "Here's the threshold; you're not going to train a system which is bigger than X." This might be useful because if I know this compute threshold, I can say that you roughly need 5,000 chips for six months to train a system like this — and I'm just going to try to monitor whoever wants 5,000 chips for six months. We're definitely going to check what you're going to be doing there — and maybe not even allow it, right?
So this requires some oversight from these types of compute providers. Most of the time, cloud providers are providing this compute. They definitely have this insight after somebody has trained a system, because that's how you get billed, right? They charge you for how many chips you used for how much time, so they know this for sure right now. They just don't know exactly what you did. They're like, "Oh yeah, you used a bunch of GPUs for whatever. You could have trained 10 small systems, one big system, or even just deployed a system." I think that's part of the problem we're having right now: you don't have insight into what exactly is happening.
But as an upper bound, I can say that’s how much compute you got, so this might be the biggest systems you’ve developed. And if you never have enough compute across this threshold, I’m like, “Yeah, seems fine. Seems like you have not done something really dangerous there.”
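Here is a minimal sketch of the kind of upper-bound check a compute provider could run from billing data alone (chips rented times hours), with no visibility into the workload itself. The threshold, throughput, and utilisation values are illustrative assumptions; a figure on the order of 10^25 FLOP comes up later in the conversation.

```python
# Sketch of the upper-bound check a compute provider could run from billing
# data alone (chips rented x hours), without visibility into the workload.
# Threshold, throughput, and utilisation values are illustrative assumptions.

THRESHOLD_FLOP = 1e25           # example reporting threshold
PEAK_FLOP_PER_SEC = 3e14        # assumed peak throughput per chip
UTILISATION = 0.5               # assume the customer achieved at most this fraction of peak

def could_exceed_threshold(chips: int, hours: float) -> bool:
    """Upper bound on training compute implied by a customer's chip-hours."""
    upper_bound_flop = chips * hours * 3600 * PEAK_FLOP_PER_SEC * UTILISATION
    return upper_bound_flop >= THRESHOLD_FLOP

print(could_exceed_threshold(chips=500, hours=24 * 30))     # small job: clearly under the threshold
print(could_exceed_threshold(chips=5_000, hours=24 * 180))  # 5,000 chips for six months: flag for review
```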
Rob Wiblin: I see. So the simple structure here would be that there’s some threshold of compute, above which we’re concerned about what the consequences would be of a training run that large. We want to identify anyone who has that many chips concentrated in one place, or I suppose anyone who’s renting out chips up to that scale, and then maybe they would need a licence or approval in order to do any training runs that are larger than that, basically.
Lennart Heim: Indeed. The whole idea of licences I think is really important here.
There are two types of licences we could imagine there. If I’m telling somebody, “You’re not allowed to train a system above X,” I cannot make it about the model; the model never exists. So I need to have a licence about the developer, about the company, right? It’s like,
“Do you have your AI driver’s licence with you? Are you actually a responsible AI developer? Oh, you are — you’re allowed to train such a system.” This would be the first thing you could imagine.
Another thing you could imagine is like a licence about the model. So maybe you’ve trained a system and it’s above this threshold, so we classified it as “potentially dangerous.” So we want to make sure this model has a licence before you deploy it. This is then where the whole notion of evaluations (or evals) comes in, where people try to test these models for dangerous properties we’re eventually worried about. And if we’re not worried about it, the model gets its licence, its stamp of approval, and then you can deploy it.
And again, compute could also come into play there, because usually people deploy the systems at cloud providers: it’s just the cheapest way, it’s just the economy of scale, how we deploy them. And maybe you want to tell them, “You’re only allowed to deploy systems which have a licence, or else you’re not allowed to deploy them. This would be reckless if you would do so.” It’s a way that compute, first of all, helps you to monitor, to decide this threshold — like “compute indexing,” I think is one way to describe it — and then later compute is also just an enforcement tool: like, “Your model doesn’t have a licence, so sorry, we’re not going to run this one.”
Rob Wiblin: Yeah. How would this be enforced? Do you think it would be practical to be able to restrain people from aggregating this amount of compute before they do it?
Lennart Heim: Um… Open question. Put it this way: what makes me optimistic is that we're talking about ordering more than 1,000 chips for a couple of months. That's fewer than 100 actors — probably fewer than 30 actors — in the world who are doing this, right? We're talking about training runs whose costs are within the single-digit millions here.
And is it then possible to enforce this? I think eventually we'd maybe start voluntarily — and I think a bunch of AGI labs would sign up to this, because they've currently shown some interest, like, "Hey, we want to be responsible; here's a good way of doing it." And one way of enforcing it is via the cloud providers — then I don't need to talk to all the AGI labs; I only need to talk to all the compute providers. To some degree I want a registry of everybody who has more than 5,000 chips sitting somewhere, and then to know who's using a lot of these chips, and maybe for what. You could imagine this being voluntary in the beginning, and maybe later enforced by these cloud providers.
But of course, there are many open questions regarding how you eventually enforce this, in particular getting these insights. Cloud providers are built around the notion that they don't know what you're doing on their compute. That's the reason why, for example, Netflix doesn't run their own data centres; they use Amazon Web Services — even though Amazon, with Amazon Prime, is a direct competitor. But they're like, "Yeah, we can do this, because you don't have any insight into this workload anyway, because we believe in encryption." And Amazon's like, "Yeah, seems good. We have this economy of scale. Please use our compute." Same with Apple: they use a bunch of data centres from Amazon and others, even though they're in direct competition with them.
So there's little insight there. The only insight you eventually have is how many chips for how many hours, because that's how you bill them. And I think this already gets you a long way.
Rob Wiblin: How big an issue would it be that, if the US puts in place rules like this, you just go overseas and train your model somewhere else?
Lennart Heim: Yeah, but maybe the export controls could just make sure that these chips never go overseas, or don’t go in countries where we don’t trust that people will enforce this. This is another way we can think about it: maybe the chips only go to allies of the US, where they eventually also can enforce these controls. And then the US can enforce this, particularly with all of the allies within a semiconductor supply chain, to just make sure, like, “We have these new rules; how are we going to use AI chips responsibly?” — and you only get these chips if you follow these rules. That’s one way of going about it. Otherwise the chips are not going to go there.
Reactions to AI regulation [01:05:03]
Rob Wiblin: Yeah. OK, so this broad proposal is out there. People are talking about it. It’s being contemplated by AI companies, and by people in politics, and think tank folks like you. What’s the reaction been?
Lennart Heim: In his recent testimony, Sam Altman explicitly said a compute threshold might be something you might want to implement: we're going to run certain evals, and you need licences for deploying models above a certain compute threshold — and maybe even a capability threshold, though we're having a hard time saying directly which capabilities are dangerous or not. So people are talking about it. Whether it eventually gets implemented is a different question.
Otherwise the reaction to compute, a lot of times, is that it's already happening, but it's happening to China: it's a way to enforce these kinds of rules, the really blunt tool, and a lot of times it's how you would actually enforce something domestically. It can seem like a weird way of regulating, in particular if you think about having to ask before you even train a system: you must really believe that just the existence of the system itself is dangerous. Whereas I think most folks are just like, just don't deploy it, right? Just don't run it.
And it's way easier to make statements about the dangers once you have a system. I'm talking about the dangers of 10^25 FLOP [roughly GPT-4-size training compute], which sounds dangerous to me by now, because I have this mental model of "this is this model and it has these capabilities" or something. But I think it's quite a stretch to say that everybody's going to buy this. It's way more evident if you actually think about the systems themselves.
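As a rough illustration of the scale a 10^25 FLOP threshold implies (this sketch is not from the interview; the per-chip throughput and utilisation figures are assumptions):

```python
# Rough sense of scale for a 10^25 FLOP threshold, using assumed numbers:
# ~1e15 FLOP/s peak per accelerator and ~40% average utilisation.
threshold_flop = 1e25
chip_peak_flop_per_s = 1e15
utilisation = 0.4

chip_seconds = threshold_flop / (chip_peak_flop_per_s * utilisation)
chip_hours = chip_seconds / 3600
print(chip_hours)                  # ~6.9e6 chip-hours
print(chip_hours / 10_000 / 24)    # ~29 days on a 10,000-chip cluster
```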
Rob Wiblin: So I know Silicon Valley is a big place where people have lots of different opinions, but I think some people have been having a negative reaction to a lot of the discussion of AI regulation from a freedom of use point of view: a sense that basically there’s going to be regulatory capture where particular AI companies will be able to lock in their advantage by regulating the industry so much that it’s very hard for any small firm to get started. I suppose some people also just have a kind of general antiregulation attitude within business, and some folks like that might look at this and just have a negative instinctive reaction. Have you been tracking that as well?
Lennart Heim: Yeah, I mostly think about the compute things. A bunch of people have reactions where they're just worried about their GPUs — there have already been some memes floating around with GPUs on flags saying "come and take it" or something. And my response is: I'm not worried about your single GPU, or even your 10, even your hundreds, maybe even your thousands. I'm only starting to talk about supercomputers, where we're talking about more than 5,000 GPUs. Maybe that's roughly where the threshold is.
So definitely the direction is like, “Leave my computer alone.” Everybody has computers. But no, actually I’m only talking about centralised AI data centre compute — specialised compute which is used for AI. And I think this already helps a lot. This definitely comes at a cost, right?
I think what most people sign up to, and what I think is definitely a good idea, is just regulating frontier AI systems, regulating these companies. And that's only a handful: those are the leading systems you need to worry about there. In particular, there are just not that many companies who train systems this big, because you have these immense capital costs just to train them.
And it looks like at least OpenAI and others are actually going out there, like, "Please regulate us. Let's think about something domestically, ideally also internationally." And it just seems like compute is one of the better tools out there. I would never claim it's a silver bullet, but it's maybe one of the better ones, in particular when we talk about this technology. It comes at some costs: the effort to implement it, the thinking required, and the new institutions we'd need there, but yeah.
Rob Wiblin: I wonder if there could be value in rebranding all of this as not “compute regulation,” but “supercomputer regulation.” Because, like you’re saying, even small firms or small AI companies might worry about this, but in reality they’re probably not going to be covered. And even an AI startup that’s trying to do medical advice or some specialised AI function, it’s very unlikely that they would in the near future be training a model with more compute than GPT-4, so they probably wouldn’t be covered.
Lennart Heim: Indeed. Yeah, cool. Then I’m doing “supercomputer governance” now. It’s over. No “compute governance.” Maybe that’s the future.
I think that's also why I keep saying it's not about chips; it's about the aggregation of chips, right? If you use compute as a node, you have to be more specific there.
It's the same as when people say "I do data governance" or something like that: we're not talking about all the data out there, but about how you can use it with safer, more responsible measures. How can I use this tool? It's not only about taking it away from people. A lot of the time it's mostly about monitoring: seeing what's going on and having verifiable commitments. I think this is just what technology eventually allows you to do.
Rob Wiblin: Yeah. Later on, I'm going to talk about this underlying problem that if today we need supercomputer regulation, then because of improvements in algorithms and the technological progress in chips, over time that's going to shrink from needing supercomputer regulation to just needing mid-sized computer regulation — and then one day maybe to laptop regulation, which might strike people as either unacceptable or unviable. So we'll come back to that question, but at least in the immediate term, we can think about it in terms of supercomputer regulation, which does seem a lot more practical.
Lennart Heim: I think so, yeah. I think this is the thing we’re talking about within the next years.
Creating legal authority to intervene quickly [01:10:09]
Rob Wiblin: Yeah. OK, so this is one broad category of regulation. Is there another strategy or another general type of compute regulation that people are talking about?
Lennart Heim: Yeah, another notion which I’ve been trying to cover is, we talked a lot about regulating training runs, but I’m just mostly trying to get ideas of where compute is a node for AI governance.
And I think we already see it, to some degree, where sometimes compute is getting used just for law enforcement. Imagine I start a website online where I'm trying to sell drugs, and I run it via the Tor network or in some other way. At some point, somebody wants to just turn it off. They can send me angry letters, they can get lawyers involved, but that doesn't really stop me. So they just turn it off — the data centre literally pulls the plug as the last resort. I think that's an important component to understand: sometimes this is one way of just making sure something is actually off, if it's producing harm. You can just think about how this drug market produces harm over time, so the earlier you turn it off, the better.
I described it like we maybe want some kind of oversight over deployment of AI systems. I think the notion which a lot of people get wrong is they mostly think about compute as only about training. Which is roughly right: you need the majority of compute, or a lot of compute, for training a system in a small timeframe. But if you look at all the AI compute usage in the world, the majority is used for inference — because every single time you do Google search, every single time people are chatting with ChatGPT, it’s running on these AI systems. So the majority of compute is currently used for AI inference.
Rob Wiblin: Hey listeners, Rob here with another quick definition to help you out. One term that I picked up from this interview that I wasn’t in the habit of using before is inference. So when we talk about a machine learning system, you know, a piece of AI software, let’s imagine say a machine learning algorithm that plays chess. It has to go through this training process where the neural network is created and the weights that make up that network are chosen, or they’re evolved gradually over time in response to how it’s performing as those weights are gradually changed.
Now, that’s the training process. But then once you’ve got that neural network, once it’s been trained, then when you apply it to a specific case, let’s say you’re playing against this chess software and it’s using that neural network to figure out what move to play against you, that is called inference. So that’s an application of the neural network that you’ve trained. All right, back to the interview.
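To make Rob's distinction concrete, here's a minimal Python sketch (not from the interview) in which a toy model is first trained by gradient descent and then used for inference; all numbers are arbitrary:

```python
# Toy illustration: "training" fits a tiny model's weights to data;
# "inference" just applies the frozen weights to a new input.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # toy targets

# Training: repeatedly adjust the weights to reduce prediction error.
w = np.zeros(3)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= 0.1 * grad                           # gradient descent step

# Inference: the weights are now fixed; we just apply them to a new example.
x_new = np.array([1.0, 0.0, -1.0])
print(x_new @ w)
```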
Lennart Heim: And this can also be governed; we can use this in a more responsible manner. And maybe the thing you want to make sure of there is that the cloud providers and compute providers who have the majority of all our inference compute are using this in a responsible manner.
For example, we just said you need a licence before you deploy a system. But maybe another way I’d think about it is if one AI system does harm — for example, it runs a big media disinformation campaign — I want law enforcement to have a direct line to the cloud providers, like, “Hey guys, please turn this off.” This is just an immediate way you can turn it off.
And cloud providers, for example, can check who was running a system: who was actually my customer there? So there's some know-your-customer regime, which they maybe should implement to enable this. With know-your-customer regimes, you can support these training-run oversight ideas, but you can also support these deployment oversight ideas — where you just make sure that if you deploy AI systems, you do this in a more responsible manner. And it's this node where you can go if nothing else works, right? If your angry lawyer letters don't work, you can just pull the plug. Maybe that's a good last resort, and maybe a good way to think about this.
And just in general, for cloud providers, it's about getting to this "you're part of this game; you're responsible" idea, which we're trying to get out there. We just talked about how more compute means more capabilities for AI systems. What we're saying is that more capabilities also means more responsibility — so whoever has more compute bears more responsibility for making sure these systems are deployed in a safe and secure manner.
Rob Wiblin: I see. So the earlier broad category was preventing people from doing training runs that involve very large amounts of compute, and I guess denying access to large amounts of compute to bad actors — or at least actors you don’t have confidence in. This is the other end of the spectrum, where something has already been trained and deployed that it turns out is having harmful effects. And basically, we want to respond very quickly. We don’t want to be stuck in a many-months-long legal wrangling — or even maybe a many-hours-long detective run — because these things could spiral out of control extremely fast.
So this isn’t going to help with the misaligned superintelligence scenario: it’s not going to be outsmarted by someone pulling the plug, because it was going to see that coming and not have any visible misbehaviour until that will no longer work. However, if you just have a model that’s being misused — so if you have a model that’s not superintelligent or maybe it’s having harmful effects almost unintentionally, because it’s been given wrong instructions and it’s like spazzing out, more or less — then you want to detect that very quickly, basically cut the electricity to the place where it is before things can get out of control.
So the thing here, I guess, is creating legal authority to intervene and turn things off very aggressively — and making sure there is a process that will actually cause that to happen very fast, so you don't end up with a very extended set of phone calls before anything can be done at all.
Lennart Heim: Indeed. And just being aware of it. They’re just sometimes like, “This AI is coming from the cloud” — literally the cloud, just like it’s coming from above. No one will talk about physical locations where this is happening. I think this is particularly interesting because capabilities of AI systems get better over time when people learn how to use it. I think that’s roughly true with ChatGPT: ChatGPT is now more powerful than it used to be, not because OpenAI changed something, but because people learned how to use it. People hooked it up with a bunch of other open source tools, and now it has these closed loops and you can do a bunch of other things. Or like recently these browsing plugins, where the system is now able to look up stuff on the internet.
So it could be the case that over time people develop more capabilities, more functions we didn't see before, and then it's like, damn, we didn't see this coming. Now it's time to eventually pull it back. Right now OpenAI has the authority to do this, and one way they're enabled to do it is that they're not releasing the model — they only release an API — so they can turn it off at any time, or they can still patch it. But if we talk about other systems which eventually get deployed — on cloud providers in particular, when people want to do it in a cost-efficient way — this would be one way: just have this direct line so you can make sure the system stops running, and you stop harm from being created.
Rob Wiblin: Yeah. Has there been a reaction to this idea?
Lennart Heim: Not really. I have not pushed a lot.
Rob Wiblin: I see. But are you the only one talking about this? Surely there’s other people who have raised this as a broad mechanism.
Lennart Heim: I think more people are talking about it. It's really hard to see what's been happening in the last six months, but I just have this whole sense that compute for AI governance got way more attention. I think initially you got a lot of attention around the Chinese export controls, but now it's broader, with the CEOs themselves or other prominent figures saying this is actually a really promising enforcement node and monitoring node which we can then use.
I just expect there to be way more progress and way more people diving into this and thinking about this more clearly. What I think is still lacking is technical knowledge there and how to think about this. I think a lot of times people have just wrong conceptions about what you can actually know and what you can’t know, and where you need more technical features and where you don’t need technical features.
Rob Wiblin: Yeah. Do you want to say a little bit more about the strengths and weaknesses of this broad approach?
Lennart Heim: I think where I've changed my mind over time is how I thought about compute governance. People always said a compute governance regime is this self-sufficient thing which is going to solve the whole AI governance problem. And I think over time it's like, nah, actually compute has mostly three features.
I think we currently describe them as: it can enable you to get knowledge — which actors have chips, what are their capabilities, where are they, what are they doing with them? This is roughly what compute can do for you.
Compute can also enable distribution. Maybe you want to boost certain actors who are doing pretty good things, right? I think right now we see in the US and UK a lot of initiatives around giving academics more compute, because I think this is actually good: we need not necessarily competitors, but some counterpart to the big corporations there. So you can distribute compute: you can give it to people, but you can also take it away from less cautious actors, if they don't have the AI driver's licence or something.
And the last part is enforcement, which is literally taking compute away, or not giving people access at all, not even exporting chips there.
So there’s like three rough ways that you can think about compute. And I think this is not self-sufficient. None of this. A lot of times it’s just like, “Well, I can set a compute indexing threshold and then we do X.” But what is X? This is then where all the other people are coming in. And I think this is definitely where I changed my model over time. It’s like it plugs in; it’s a tool for AI governance which people can use either for enforcement, for monitoring, for distribution, something along these lines.
So I now work way more together with other people, and a lot of the time it's like, "Lennart, here's a proposal. Can you take a look? I have this gap and this gap. Are there any technical things you can think of here?" And the technical things are often just about making it easier: they can help build trust, for example; or you just have a nicely defined, quantifiable metric, compared to a bunch of other things which are way harder to define.
Building mechanisms into chips themselves [01:18:57]
Rob Wiblin: OK, let’s push on and talk about this other broad category of governance approaches, which is building particular mechanisms into the chips themselves, I guess. I’m not sure exactly what this is called — maybe you can explain in a second — but the basic idea here is that we could try to govern compute by setting up the compute chips with rules about what they could and couldn’t be used for under different conditions. You could imagine, for example, that TSMC chips could be used for normal computing processes without restriction, but that they would have something on them, some firmware, that would detect if they’re being used to train ML models. And potentially they’d shut themselves down and prevent that from continuing, unless they had some unlock code approving the training run, from TSMC or some government agency or whatever.
And I think another possibility in this vein that I’ve heard is that chips might be able to maintain a tamper-proof record of what they’ve been used for in the past, and that then potentially inspectors could go around looking at concentrations of compute and seeing if any unauthorised ML training runs had been occurring there.
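As a rough feel for how a tamper-evident usage record could work in principle, here's a minimal hash-chain sketch. It's purely illustrative: this is not how any current chip works, and a real mechanism would also need the chain anchored in secure hardware with signing keys.

```python
# Toy sketch of a tamper-evident usage log (a hash chain), purely to illustrate
# the idea of an auditable record of what a chip was used for. A real hardware
# mechanism would need secure elements and signing keys; this is only a toy.
import hashlib
import json

def append_entry(log, entry):
    """Append a usage record whose hash commits to everything before it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    log.append({"entry": entry, "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """An inspector recomputes the chain; editing any past entry breaks every later hash."""
    prev_hash = "0" * 64
    for record in log:
        payload = json.dumps({"prev": prev_hash, "entry": record["entry"]}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"hours": 3, "workload": "graphics"})
append_entry(log, {"hours": 72, "workload": "large training job"})
print(verify(log))              # True
log[0]["entry"]["hours"] = 1    # tamper with history...
print(verify(log))              # ...and verification fails: False
```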
First up: Are these things possible? Are they really possible or are they impossible? They sound a little bit magical to me.
Lennart Heim: Well, yeah, computers are magic, right? Technically, we can do a bunch of stuff. I think most of the questions are about, is it secure?
I think we converge towards calling this term “hardware-enabled mechanisms.” Why is it interesting to talk about hardware-enabled mechanisms? Well, we can leverage the concentration of the supply chain. If we just tell Nvidia or TSMC to implement this, we cover the majority of our compute. That’s a huge win.
Another reason why hardware is interesting is that sometimes there are some reasons to believe it’s more secure than software, if implemented correctly. It’s a bit harder to tamper with this. And maybe let me start with just saying we already see this in the real world. There are already examples out there.
The most prominent example: probably every second person is using an iPhone. An iPhone has some hardware-enabled mechanism which makes sure it's only running iOS — you cannot run anything else on the system. We have a history of people trying to get around this, so-called "jailbreaks," but it's probably become exponentially harder to run other systems. So right now on your iPhone, you can only run software which Apple approves: first you run iOS, and then there is an App Store. You cannot simply download any random app and run it on your smartphone, right? Apple goes through each app, puts it on the App Store, and then you can run it. Of course they say this is all a security feature. It is. It enables better security. But it's also just another business feature: they take a 30% cut of every app sale, and this is one way of achieving that. So I think there it's already happening.
Another example with iPhones where it's pretty hardware-enabled: iPhones have Face ID and Touch ID to unlock your phone, right? And what sometimes happens is the screen breaks and you need to replace the Face ID or Touch ID module, and people have been having a problem: if you replace this component, the replacement part just does not work with the phone. The simple reason is that these Face ID and Touch ID modules are uniquely bound to each phone. They also do this because of security, so you cannot snoop on the connection. So you can only use the original, and only Apple can approve a replacement; they have some magic key lying around to enable another Touch ID or Face ID module.
And this might also be an interesting feature to think about, where you cannot simply take a chip and put it elsewhere; it's bound to this device, and the two things only work together. So these hardware-enabled mechanisms are already happening. Maybe the most annoying example is that HP recently made it so that if you use ink which is not from HP, you cannot use the printer anymore.
Rob Wiblin: It just turns itself off forever?
Lennart Heim: Yeah, I think literally the printer is disabled forever. It's not even just that you cannot use other ink: the thing is just completely disabled, I recently read. It's like, wow. But I think it points to the fact that, well, technically this is possible. I feel pretty confident that if some smart hackers got together, they could enable this ink thing again. It's just not that interesting, right? I don't know. People just probably buy a new printer. I stopped buying HP printers because it's just so annoying.
Rob Wiblin: Yeah. This is completely irrelevant, but it is amazing to me that the printer industry is so stuck on the model, and has been for decades, where the printer is very cheap — maybe they’re even selling it almost below cost price — and then they charge you a hell of a lot for the ink. Why don’t they just do what almost everyone else does, which is sell the printer for the cost plus 20%, and then sell you the ink for cost plus 20%? I’m sure we could Google this and find out. But anyway, it’s just fascinating that this is the lengths that they have to go to.
Lennart Heim: Commitment problem, I don’t know. Call to action: everybody commit to buying these types of printers so we can eventually solve this.
Rob Wiblin: OK, yeah. The other category of hardware-enabled mechanisms that I’d heard of was, I think during the crypto boom a few years ago, Nvidia’s chips were being used for crypto mining — which, tragically, made it very difficult for gamers to get access to these graphics cards. And so I think they put some mechanism on the chips to make it impossible to use them — they would prevent themselves from being used for crypto mining — in order to please their gamer customers, or, I don’t know, maybe even people running supercomputers. And I think this actually worked for a while, but then eventually people figured out how to jailbreak it for some period of time and then they could use it for crypto mining again.
Lennart Heim: That’s a really interesting example. So what Nvidia did there is just like, I think they called it “light hash rate” — LHR. We can wonder why Nvidia did it, because eventually just the crypto miners were just paying more for the GPUs and the gamers couldn’t get it. From a purely financial point of view, this was definitely not the best move that Nvidia did. I think I would rather understand it — and maybe they proved themselves right — as Nvidia were like, “We don’t want to disappoint our gamers; we want to keep these customers, and we don’t want to bet on the crypto community. This is actually not our core community, only some part.” They want to steer them towards certain products to use.
And in this case, this was particularly about Ethereum. Most other cryptocurrencies had already moved on: if we go back to our spectrum of AI chips, they'd already moved to ASICs, because those are more specialised — you know exactly what numbers you're going to crunch. Whereas Ethereum made this interesting move: they did not want ASICs to happen, so they developed an algorithm which was really memory intensive. So basically, Ethereum mining was always looking for the gaming GPU with the best memory bandwidth, and that's Nvidia. It's not about the FLOPS here — they don't care — it's only about the memory bandwidth at which you can read the data.
And Nvidia was like, as you just said, for some reason not excited about this, and tried to make sure this was not happening. So they implemented some mechanisms where they could detect Ethereum mining. And this mechanism worked really well because you know exactly what they’re doing — you don’t know the exact numbers, but you know the exact algorithm they’re crunching — and they just knew, “Oh, this memory access pattern is unique to Ethereum, so we’re just going to throttle this one and implement it in the firmware.”
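Purely as a hypothetical illustration of the general idea of signature-based throttling (this is not Nvidia's actual LHR implementation, and the signature and threshold numbers are made up):

```python
# Hypothetical illustration only: NOT Nvidia's actual LHR logic, just a sketch of
# flagging a workload whose memory-traffic profile matches a known signature,
# and throttling it.
def bytes_per_flop(mem_bytes_per_s, flops_per_s):
    return mem_bytes_per_s / flops_per_s

# Made-up signature: a memory-bound mining-like workload moves far more bytes
# per arithmetic operation than typical graphics or ML kernels.
SUSPICIOUS_BYTES_PER_FLOP = 0.5

def clock_multiplier(samples):
    """samples: list of (mem_bytes_per_s, flops_per_s) measured over a window."""
    suspicious = sum(
        1 for mem, fl in samples if bytes_per_flop(mem, fl) > SUSPICIOUS_BYTES_PER_FLOP
    )
    # If most of the window matches the known signature, halve the clocks.
    return 0.5 if suspicious / len(samples) > 0.8 else 1.0

print(clock_multiplier([(8e11, 1e12), (9e11, 1e12)]))  # 0.5: looks memory-bound
print(clock_multiplier([(1e11, 5e12), (2e11, 6e12)]))  # 1.0: compute-heavy, left alone
```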
As you said, eventually this got jailbroken. It's not really clear how hard it was, because Nvidia at some point accidentally released the firmware which disabled this, and then tried to roll it back, but as we know —
Rob Wiblin: Was it hacked or did they just accidentally publish it?
Lennart Heim: They accidentally published it, yeah. They accidentally published this.
Rob Wiblin: Oh boy, I feel sad for whoever did that.
Lennart Heim: Yeah. Then eventually somebody got the firmware, and then they could try to figure out the details there, like where the difference is.
The other thing that happened is Nvidia also had a big hack in 2022, where people stole a bunch of source code for these kinds of things, including the firmware. And once you have the source code, you have some way to get around it, or you can just compile a new firmware where you actually disable this. And then some people eventually figured this out. For example, how did I know it's about this memory access pattern? Well, of course Nvidia didn't tell us; people eventually figured this out. So it turns out this mechanism was eventually circumvented.
What are the lessons here? I think there’s one lesson: mechanisms are theoretically possible, but they’re way easier if you know exactly what you’re looking for. This is way harder for AI. We don’t even know what AI systems in the future are going to be looking like and what their unique memory access pattern is. And this then changes again if your model is this big, that big, which activation function you have — then you have these tiny changes, right?
Rob Wiblin: And people could deliberately change it in order to obfuscate what’s going on.
Lennart Heim: Indeed, right. That's just something you could do. That's not really possible for Ethereum, where the algorithm is fixed. Though when you don't know what they're looking for, you also don't know how to change it. But the spectrum of what you can do in AI is just way bigger than Ethereum mining.
The other lesson is: security is hard. I think that's the biggest concern about any hardware-enabled mechanism. Whatever you're going to implement there, you really want to make sure it works and people cannot disable it. And what is the worst case if somebody does disable it? In this case, they tried to throttle the graphics cards; if you disabled the mechanism, you had the full capability available. So if people bought these graphics cards — and Nvidia was banking on their customers never being able to use them this way — well, that changes if the customers circumvent the security mechanism.
Now just imagine the US tried to do the same thing with China: if you have these mechanisms and they get circumvented, there we go. It's just worthless.
Rob Wiblin: The cat’s out of the bag.
Lennart Heim: It really matters how hard it is to circumvent this, and how you implement it. If you literally build it into the chip, it's way harder to get around than if it's firmware or an even higher-level software abstraction. Can you circumvent it at scale, or do you need to go to each chip and etch something away with a laser? That's way harder than just an exploit where you update the firmware on all the GPUs.
So security is the key thing you want to look out for there. And what I eventually think we need there are some ways to verify that certain mechanisms are still in place, maybe remotely, maybe with other ideas like physical unclonable functions — like this technical thing which you can do — or maybe even with onsite inspections, where you’re just like, “Hey, we have this mechanism, we’re like a bit worried you’ve tampered with this. Can we please come on site and check if everything is still where it’s supposed to be by running some numbers, [checking] if this mechanism is still working?”
I think it would probably be wrong to just say we implemented this and then it works. Our history of getting security right is just like, “everything eventually gets broken” is roughly my take. It’s always a matter of cost. And if we talk about the AI thing where we have national interest in this, people are going to put a lot of power and a lot of money into trying to break this.
Rob Wiblin: I see. OK, so it’s possible in principle to do some things in this direction. I suppose the thing where it samples from the calculations that it’s being required to do and then stores those so that someone could inspect it later, that does seem viable. Detecting whenever you’re being used for an ML training run, a bit more touch and go whether that’s really viable?
Lennart Heim: Seems way harder. You can imagine earlier things. I’m excited about this whole research idea of verifying properties of training runs. So at some point we might have some ideas of what are actually concerning properties of training runs: Are transformers with this component way more worrisome than this one? A big model versus a small model? Like we already talked about, probably big models are more concerning, but maybe you could also say a big model with reinforcement learning components is way more worrisome than the one without.
So you have some automated code inspection. Ideally you want to automate it, because nobody wants to give away the intellectual property to do this. And this is also just like how these mechanisms can eventually play in — maybe that’s also just a software abstraction. So maybe for hardware-enabled mechanisms, instead of thinking about, are we going to throttle the GPUs and make sure they aren’t going to be used for that, I think the way more exciting way to think about it is to provide assurances: “Can you prove you did X?” Like, I can now claim, “Tomorrow, I’m going to do a training run which is X big” — but it would be way better if I had a technical mechanism which can just be like, “Yep, he’s right. He actually did this.”
I think people should think about the nuclear example. If there's some way one country can credibly state how many nuclear missiles it has, and you trust the math, this seems way better: nobody could lie about it. You can solve these whole bargaining and commitment problems with these kinds of things, just via this type of tech where somebody says, "Look: I only trained systems this big, and here's the cryptographic proof that I only did this."
Rob Wiblin: Is that possible?
Lennart Heim: Is that possible in general? Yeah, there are some ideas about it. If people want to read more about this, there was a great paper by Yonadav Shavit called "What does it take to catch a chinchilla?," where he describes a whole regime which builds on many open questions we still have. But it points towards some research, like, this is roughly the thing we need there.
So you could imagine there's something like a proof of learning: a way you can prove that I've trained a certain system. If I know how much compute you have, I can roughly say, "You can train 10 systems this size, or one system this size" — and eventually what I want to see is, "Can you please show me the proof of learning for this one big system or the 10 small systems? Basically, prove to me how you used all of this compute." So for training, this is definitely possible. There are a bunch of ways to make it more feasible and less costly, which you need to investigate there.
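As a toy illustration of the compute-accounting idea Lennart sketches here (this is not Shavit's actual scheme; the throughput, utilisation, and tolerance numbers are assumptions):

```python
# Toy compute-accounting check: given a cluster's measured chip-hours, do the
# declared training runs plausibly account for that compute budget?
# All numbers are illustrative assumptions.
CHIP_PEAK_FLOP_PER_S = 1e15   # assumed peak throughput of one accelerator
UTILISATION = 0.4             # assumed average utilisation during training

def available_flop(num_chips, hours):
    return num_chips * hours * 3600 * CHIP_PEAK_FLOP_PER_S * UTILISATION

def accounted_for(declared_runs_flop, num_chips, hours, tolerance=0.2):
    """True if declared runs fit within the cluster's budget and leave no
    suspiciously large unexplained remainder."""
    budget = available_flop(num_chips, hours)
    used = sum(declared_runs_flop)
    return used <= budget and (budget - used) / budget <= tolerance

# 10,000 chips for ~90 days:
print(available_flop(10_000, 90 * 24))            # ~3.1e25 FLOP budget
print(accounted_for([2.8e25], 10_000, 90 * 24))   # True: roughly accounted for
print(accounted_for([1e24], 10_000, 90 * 24))     # False: most of the compute unexplained
```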
The problem is also that you not only do training. So you have a bunch of compute — maybe a bit used for training, a bit for inference, maybe you also run a climate simulation. So basically what I need to have is proof of non-training, right? Like, “Can you prove to me that you did not train the system, or can you prove to me that you did all of these other things — that you just ran inference and you did not eventually train the system?”
And I would describe them as open research questions. People should definitely think more about this. We have not seen a lot of research there. And this can just really help with lab coordination, international coordination, just anyone really. If we all agree on “We’re not going to train a system bigger than X,” this would be a mechanism how we eventually can just have this.
And it’s not going to be a silver bullet, but it definitely helps. I describe it usually as “it reduces the social cost.” And this seems good, right? If tech can reduce the social cost, if people have more belief in cryptography than in each other, we should leverage that.
Rob Wiblin: Yeah. A cross-cutting consideration with all of these hardware-enabled mechanisms is that we should expect, based on past experience, that there’ll be a cat-and-mouse game — where people will put in place these mechanisms with the goal of restricting what you’re doing or you being able to prove what you did reliably, and we should expect those to get cracked sometimes, for people to figure out how to work around them.
But as that cat-and-mouse game progresses over a period of years or decades, the low-hanging fruit in terms of breaking these mechanisms might get found, like with iPhones, and then eventually it would be really quite challenging, and not many people would be able to break them. But it’s unlikely that we’d be able to get there very quickly. That would actually require the experience of people trying really hard to break them and then all of those things being patched.
And the point you were making is that inasmuch as it is strategic, the geopolitical strategy here requires that these mechanisms work. It fails catastrophically when one country or one actor can just suddenly break all of these things, because someone posted the wrong stuff on the internet and allowed them to break it. So for stability or for our security, it would be dangerous to rely on these hardware-enabled mechanisms holding long term, because they can just fall apart completely.
Lennart Heim: Yeah, absolutely. It's a key thing I sometimes have to do when I talk to policymakers: they just always love hardware. It's like, "Great, it's just going to work." And I'm like, "No, actually, stuff is not secure. Have you seen the state of cybersecurity? It's terrible." The fact that an iPhone is as secure as it is right now, that's years of work. And I think there's probably an exploit out there right now for listening in on iPhone conversations, but it just costs you $100 million — so you only use it on literally really big targets; it's not something the random hacker on the street does.
I think it's really important that whenever you'd need to reinvent the wheel regarding security, you just don't do it. There's this saying — "never roll your own crypto" — just don't do it; use existing libraries for this kind of stuff. I think the same applies here: I mostly want to use existing mechanisms. There's still some research to be done, and that's part of the reason why we want to get rolling on this as fast as possible.
Another way to think about this is that a lot of these mechanisms can be hacked because you have physical access to the compute. We have not talked about it a lot yet, but compute does not need to sit under your desk for you to use it: you can just use cloud computing. I could right now access almost any compute in the world if people wanted me to. This is useful if I implement something in hardware and you'd need to physically tamper with the hardware to break it: you can't, because you're only accessing it virtually, right? And even if you would tamper with it via software, guess what? After your rental contract runs out, we just reset the whole thing: we reflash the firmware, we run some integrity checks, and here we go, here we are again.
So maybe to build on top of this, we previously talked about the semiconductor supply chain. People should think about the compute supply chain, which goes one step further. At some point your chips go somewhere, and the chips most of the time sit in large data centres owned by big cloud providers. So we definitely see that most AI labs right now are either a cloud provider or they partner with a cloud provider. So if we then think about choke points, guess what? Cloud is another choke point. This is a really nice way to restrict access, because right now I can give you access, you can use it — and if you start doing dangerous shit or I’m getting worried about it, I can just cut it off any single time.
This is not the same with hardware: once the chip has left the country and gone somewhere, I have a way harder time. So maybe the thing you want to build is some safe havens of AI compute — where you enable these mechanisms we just talked about; you can be way more sure they actually work; and even if somebody misuses the compute, at a minimum you can then cut off their access. So the general move towards cloud computing — which I think is happening anyway because of economies of scale — is probably favourable from a governance point of view, where you can just intervene and make sure this is used in a more responsible manner.
Rob Wiblin: Yeah, OK. So this is kind of an exciting and interesting point, that people or many organisations currently have physical custody of the chips that they’re using for computing purposes. If we came to think that any aggregation of significant amounts of compute was just inherently very dangerous for humanity, then you could have a central repository, where only an extremely trusted group had custody. I guess it probably would be some combination of a company that’s running this and the government also overseeing it — as you might get with, I suppose, private contractors who are producing nuclear missiles or something like that — and then you could basically provide compute to everyone who wants it.
And I suppose for civil liberties reasons, you would maybe want to have some restrictions on the amount of oversight that you’re getting. You’d have some balancing act here between wanting to not intervene on what people can do on computers, but also needing to monitor to detect dangerous things. That could be quite a challenging balancing act. But basically it is, in principle, possible in the long term to prevent large numbers of groups from having physical custody of enormous numbers of chips — and indeed it might be more economical for most people, for most good actors, to not have physical custody anyway; that they would rather do it through a cloud computing provider. Which then creates a very clear node where probably these hardware-enabled mechanisms really could flourish, because it would be so much harder to tamper with them.
Lennart Heim: Yeah, maybe you don't even need them there, because you just have somebody who's running it. And we definitely see a strong pivot towards the cloud. No AI lab has the servers they need sitting in a basement to run these systems. They're sitting elsewhere, somewhere close to a bunch of power and a bunch of water to run these systems. And if you could just make these facilities more secure and run them responsibly, I think this might be a pretty exciting point to go to.
You could even think about the most extreme example as a compute bank, right? We had a similar idea with nuclear fuel: just build a nuclear fuel bank. And here we just have a compute bank: there’s a bunch of data centres in the world, we manage them internationally via a new agency, and we manage access to this. And maybe again here we mostly wanted to talk about the frontier AI systems — like the big, big systems — you eventually then just want to make sure they are developed in a responsible and safe manner there.
Rob Wiblin: I imagine that some listeners will be very interested in this thread that we just started talking about, which is the civil liberties issues — perhaps that the level of monitoring that we’re talking about here, and the level of concentration that we’re talking about, raises concerns about authoritarianism or abuse of power, too much surveillance. I think these are super-important issues. I think we’ve had to put this one to the side for this interview, but we’ll come to it in some future interview, because it’s a huge conversation of its own.
And I think first we need to understand the situation. As a community of people listening to the show, and me hosting it, we need to understand the issue here, and understand how these mechanisms might work to begin with, before we start thinking about how exactly we would balance the implementation of this against other very broad social concerns. You’re nodding along with that?
Lennart Heim: I'm on the same page here as you are. I think we really need to be careful that implementing these mechanisms doesn't enable a surveillance state, because some of these mechanisms eventually would enable this to some degree. We really need to be careful here.
I think the notion is to understand that we’re talking about AI compute here — we’re not talking about all the compute. I think this gets you a really, really long way. And yeah, it seems worthwhile thinking about. And as I just said, trying to hint at some things, I’m not saying that a compute bank is a good idea. I think it’s good for people to have the ideas out there, and we really need to think carefully about, first of all, what is warranted — and if it’s warranted, how we would get there and if this would even be a good idea.
Rob not buying that any of this will work [01:39:28]
Rob Wiblin: Yeah. OK, let’s push on from hardware-enabled mechanisms specifically, because I want to do a section now where I explain why on some level I just don’t buy the idea that the above ideas are anywhere near sufficient for the problems that we’re facing.
The reason is that, as I began to explain earlier, on current trends, it seems like people are going to be able to train really dangerous models on their home computers in something like 10 years. So this stuff that’s focused on preventing massive server farms from falling into the wrong hands or being used to do dangerous stuff seems like that’s going to work for the era that we’re in right now, for the next couple of years. But it’s going to quickly become insufficient and I’m not sure what we’re going to do at that next stage.
Throughout this conversation, we’re going to be talking about these future models as if they’re agents that are going to be semi-autonomously going out into the world and taking actions on people’s behalf, and I guess possibly on their own behalf at some point. It’s possible that some listeners, especially people who are not paying as much attention to AI, are going to be a bit taken aback by this, because you might have the impression that GPT-4 is just like this fill-the-word-gap processor: it’s not an agent; it’s not out there in the world doing a whole lot of stuff.
But I’m going to assume that models in a couple of years’ time are going to be these agents, because I think it’s very clearly true — because people are working very hard to turn them into agents, and they have already so many of the latent capabilities that would be necessary for them to operate as staff members of a firm, basically engaging in a very wide suite of activities, potentially with supervision or with partial autonomy. I guess if you think that’s not so likely anytime soon, maybe just imagine that this is coming a little bit later, once we have figured out how to turn these models into more useful agents that are engaging in quite complex tasks on their own.
Let’s work through this bit by bit. As I understand it, Sam Altman has said that it cost over $100 million to train GPT-4. Do we know anything more than that at this point?
Lennart Heim: I’m not aware of any more information. There have been estimates on how much compute was used. I think this seems roughly the right ballpark. I would probably expect this is the whole development cost — not only a final training run — so the final training run is just a little bit less, maybe less than a half.
Rob Wiblin: We’re already on a side note here, but if it only cost $100 million, that’s a lot of money for you and me. It’s not a lot of money for the government or for tech companies. So couldn’t, like, this year, Alphabet or the US government just do a training run that’s 100 times larger, throw $10 billion at the problem and produce GPT-5?
Lennart Heim: Yeah. It's interesting. It sounds like a lot of money, but from a government perspective, it's not a lot of money.
Rob Wiblin: It’s a pittance.
Lennart Heim: It's just a question of what you get out of it, right? Do you think right now GPT-4 is worth more than $100 million? I don't know. I think we're seeing right now to what degree it's actually worth this much money. We're entering this new era of large-scale deployment of large language models, and we'll see how much money they're actually worth.
Could they eventually do it? You still need talent to do it. I'm pretty confident the government could not pull this off right now with the talent they have. And then they'd need to fight with others over the compute, right? OpenAI has contracts with Microsoft Azure; they have their clusters to do this. The government maybe has some supercomputers at the DoE, but I don't think they have the best chips in there to train such a system. And then you also just need the engineers to do it.
So there’s one thing: how much money you spend on it — this is just the compute, right? There’s everything about how much and which talent you need, and how good this talent eventually is. I think there’s a lot of tacit knowledge, and there’s a lot of tricks to eventually train these systems which are that good.
Rob Wiblin: Yeah. I guess a complication here is that, as I understand it, when you’re doing training runs trying to get better performance on these models, ideally you increase the amount of compute and the amount of data simultaneously. That’s kind of ideal. If you just massively increase the amount of data or the amount of compute without increasing the other one, then you get diminishing returns on each one of those inputs. And because we are getting kind of close to exhausting the kinds of text that GPT-4 was trained on, increasing the compute 100-fold wouldn’t bring as much improvement as you might expect, if you simply held the data constant as well.
Lennart Heim: Indeed. Yeah.
Rob Wiblin: OK, new question: To get the same boost in performance that we got from GPT-3 to GPT-4, do you know roughly how much the model will cost to train currently? Or maybe how long we’d have to wait for algorithmic and compute improvements to occur so that we could get the GPT-5 equivalent — the same boost in performance — for, again, around $100 million?
Lennart Heim: Yeah. There are multiple ways to think about this. One way to think about this is just like well, from GPT-3 to GPT-4, it required an order of magnitude more compute. So let’s say from GPT-4 to the next system is also another order of magnitude more compute.
Now, you could ask yourself: hardware gets better over time, so how many years do we need to wait until future hardware is good enough that you get the GPT-5 system for basically the same price as before? If hardware doubles every two years, or every 2.5 years — which is roughly the doubling time of hardware price performance — we then just need to wait roughly three doublings, which is like six to eight years or something, and then it's 10 times cheaper to train these systems. And then you could have GPT-5 for the GPT-4 cost, if you just go via hardware.
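A quick back-of-the-envelope check of that hardware-only arithmetic, using the doubling times quoted above:

```python
# How long until hardware price performance improves 10x, if it doubles every
# 2 to 2.5 years?
import math

compute_increase = 10                             # assumed: GPT-4 to "GPT-5" needs 10x compute
doublings_needed = math.log2(compute_increase)    # ~3.3 doublings

for years_per_doubling in (2.0, 2.5):
    print(round(doublings_needed * years_per_doubling, 1))  # ~6.6 and ~8.3 years
```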
But next to this, we have this whole notion of algorithmic efficiency: algorithms are becoming better at absorbing the data, and you get more capabilities for the same amount of resources. This is really messy to think about. We have some papers which look into the doubling of algorithmic efficiency in computer vision, and that's been roughly every year. So with this notion, if we just need to go 10x and we wait roughly three to four years, then algorithmic efficiency improvements alone get you the same thing.
But on top of this, you have a lot of other techniques. Just look at LLaMA, the model that was somewhat leaked from Facebook, and then we got Alpaca — this fine-tuned system which used GPT-3.5 or GPT-4 to make it better. That was way cheaper, and you got a lot of capabilities. I've not seen the full range of capabilities, but I think it points towards there being some kind of distillation effect: if there's a more powerful system already out there, you have an easier time building a system which gets a little bit closer to it. You probably won't surpass it, but it's easier to get to these kinds of systems, to these kinds of capabilities.
But overall, really hard to say. How do you measure performances? Is it as good across all these performances? We have some benchmarks on how to think about it. And this whole notion of algorithmic efficiency is really hard to wrap your head around, or like measure in a quantifiable way.
Rob Wiblin: OK, so I think a key thing that people should understand here is that in recent years we’ve seen very rapid improvements in algorithmic efficiency. So with the same amount of compute and data from one year to the next, you’re able to get significantly improved performance. Do you know roughly what is the doubling time, or I guess in this case, the halving time that we’re getting from algorithmic improvements?
Lennart Heim: For computer vision, according to the best analysis we have, it's roughly one year. Which is still slower than the rate we've been scaling up compute, right? We've been doubling training compute every six months. So basically we scaled up compute faster than we gained from algorithmic efficiency. And then of course, these gains stack on top of each other: you get more algorithmic efficiency, you throw more compute at it, so you basically get capabilities faster there.
I’m excited about what’s going to be happening in the future. It’s really hard to say. And this is only for computer vision. I hope we have more analysis there in the future on other domains where we can look into this, like what has been algorithmic efficiency there?
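To illustrate how those two exponentials stack, using the doubling times quoted above (purely illustrative):

```python
# If training compute doubles every 6 months (4x per year) and algorithmic
# efficiency doubles every year (2x per year), "effective compute" grows
# roughly 8x per year on these assumptions.
compute_growth_per_year = 2 ** (12 / 6)    # doubling every 6 months
algo_efficiency_per_year = 2 ** (12 / 12)  # doubling every 12 months

effective_growth = compute_growth_per_year * algo_efficiency_per_year
print(effective_growth)        # 8.0
print(effective_growth ** 3)   # ~512x over three years, on these assumptions
```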
Rob Wiblin: Yeah. Do you know if we’re getting close to running out of improvements to the algorithm? So that maybe a one-year halving time won’t be sustained long term?
Lennart Heim: That's guesswork. Are we running out of ideas? I don't know. Maybe it's true that ideas are getting harder to find; there's some reason to believe that. But maybe there are also just more people. Let's say there's some chance each time of finding a new idea, and that chance isn't great, but we have more people working on it — well, then we find more ideas in the future. We definitely have more interest in AI systems right now, so we'd expect more ideas going forward.
An interesting notion is maybe that we see more and more research happening in industry. And industry, as compared to academia, is less diverse in their research ideas. And maybe that’s bad; maybe there’s less diversity in research they do on AI, and they’re less likely to stumble across new ideas which are really groundbreaking, right? If you find something which is outside the current paradigm, this is more likely to happen in academia. And academia is maybe not seeing as big of an AI boost as the industry is currently doing. So it’s really hard to say, in particular where we’re just entering this new era where historically a bunch of research has been happening in industry. This was not the case before for other technologies, where a lot of the groundbreaking research was mostly coming out of academia.
Rob Wiblin: OK, so to get to my bottom line here: On current trends, when would an individual be able to train GPT-4 themselves, say, at home, using a bunch of hardware that they’ve bought for around $100,000?
Lennart Heim: I mean, we can just try to run with the previous numbers we were talking about, basically. There are two components which feed into this: compute gets cheaper over time, and we have some algorithmic efficiency gains on top.
So if we ask: When would you be able to train GPT-4 on your own? Let's say it costs you $100,000 instead of $100 million or something — we go down by three orders of magnitude. If we then put together that algorithmic efficiency doubles every year — so every year you need only half the amount of compute you used before to get to the same system — and that every two years you get two times more compute for the same price, then this should be around seven to eight years, if you think about it. But then we're still talking about $100,000, which is still not your local GPU at home, but maybe you could just rent that compute.
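A rough check of that timeline under the stated assumptions (algorithmic efficiency halving the compute needed every year, and hardware price performance doubling every two years); this is a sketch, not a forecast:

```python
# Rough check of the "$100 million down to $100,000" timeline.
import math

cost_reduction_needed = 1_000             # $100M -> $100K is three orders of magnitude
algo_factor_per_year = 0.5                # need half the compute each year
hardware_factor_per_year = 2 ** (-1 / 2)  # cost per FLOP halves every two years

cost_factor_per_year = algo_factor_per_year * hardware_factor_per_year  # ~0.35x per year
years = math.log(1 / cost_reduction_needed) / math.log(cost_factor_per_year)
print(round(years, 1))  # ~6.6 years on these assumptions, roughly in line with 7-8
```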
I do think this is probably not going to be the case. One reason, as I already said, is that I think it might be getting harder to find new algorithms to continue this. And I think it's really unclear whether this doubling time for algorithmic efficiency — which I was just talking about for computer vision — also applies to language models, or whatever the future of AI systems is.
The other reason is labs are starting to become more secret about the algorithms; they stopped publishing them. So it would be really hard for an individual to exactly know what the architecture is of these systems. We don’t know about the architecture of GPT-4; we don’t know about the architecture of PaLM 2, which is the recently released model from Google. And I think this just makes it significantly harder for people to replicate any of these efforts. The reason why open source has been so close so far to the cutting edge is mostly that these systems worked in the open, right? And this will definitely get harder over time.
Rob Wiblin: OK, so there’s quite a few pieces here. Let’s take them one by one. In terms of someone being able to produce a very powerful model on the amount of hardware that an individual or a small business might be able to buy: The compute is getting cheap at an unusually fast rate, but maybe then the limiting factor would become access to the enormous amounts of data that are required. So perhaps in six or seven years’ time, you might have enough compute in principle as a small business, but you wouldn’t have access to the entire corpus of data off the internet. Although, as we talked about at the very beginning, you can copy and paste this stuff surprisingly easily.
I guess inasmuch as you were doing multimodal training — like one mode is text, another mode is, say, video or images — then the amount of data that we’re talking about that you might be training a video system on might be a lot larger than the corpus of text, because it’s just so much harder to compress and store video than it is text. So that could end up being a limiting factor for a while: Do you even have the hard disks and the ability to move that amount of data back and forth for the training process, even if you can afford the chips involved?
Lennart Heim: And this also reminds me that when I talk about compute getting cheaper, that's a pretty narrow perspective. I talked about compute mostly in terms of floating point operations per second, but there's way more to compute: each chip has memory on it, and the chips need to talk to each other. And the growth in floating point operations per second, when I talk about doubling every two years, is about price performance — like how many FLOP per second I get per dollar, for example. But the memory capacity and the memory bandwidth on a GPU or an AI accelerator have been growing way more slowly.
So if these systems keep getting really large, your GPU might theoretically have enough FLOPS, but not enough memory to store the model — then you need to offload it to your hard disk, and you pay really, really large performance penalties. So you might be in a situation where a single GPU theoretically has enough computational performance, but you just don’t have the memory — and that’s the reason you need a tonne of them.
This is already what we’re seeing right now: memory is growing more slowly. These models are really large, and ideally you want to store all the weights in the onboard memory of the GPU. And that’s too big, right? A GPU has something like 40 to 80 gigabytes of memory nowadays, while the models are roughly half a terabyte up to a terabyte in size; you can’t fit them on a single GPU, so you just need a lot of them. Even though theoretically the computational performance would be sufficient, you need more memory. This drives up the costs overall, because you need the capacity just to store the thing. And this might be another factor which makes it harder for random people to train these systems.
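As a rough illustration of why the weights alone force you onto many chips, here is a tiny Python sketch. The 80 gigabytes of accelerator memory and the one-terabyte model size are the ballpark figures from the conversation, and the 4x training overhead is purely an illustrative assumption.

```python
import math

# Ballpark figures from the conversation; the 4x training overhead is an
# illustrative assumption, not a measured number.
gpu_memory_gb = 80        # onboard memory of a high-end AI accelerator (~40-80 GB)
model_weights_gb = 1000   # weights of a large model (~0.5-1 TB)

gpus_just_for_weights = math.ceil(model_weights_gb / gpu_memory_gb)
print(f"Just to hold the weights: ~{gpus_just_for_weights} GPUs")

# Training needs more than the bare weights: gradients, optimiser state, and
# activations can multiply memory requirements several times over.
training_overhead = 4
print(f"With training overhead: ~{gpus_just_for_weights * training_overhead} GPUs")
```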
Rob Wiblin: OK, so that’s one aspect of it. The other one that you were just alluding to is this issue of even if the cutting-edge labs are coming up with all of these great algorithmic improvements, will those be widely accessible?
It seems like until yesterday, everyone was just publishing everything. All of this incredibly important, very valuable, potentially dangerous information about how to do ML training was just online, basically anyone could grab it. That has started to change, and it’s almost certainly going to continue to get locked down for a whole lot of reasons, both commercial and potentially security focused.
So there’s this question of how far behind the cutting edge we might expect the open source, amateur side of ML to fall. On the one hand, the fact that they won’t necessarily be able to just copy all of the research that OpenAI is doing is definitely going to push things back. On the other hand, it does seem like the open source community is very enthusiastic about doing all of this work and doing their own research. People are doing all kinds of work on AI outside of the labs now, building on what is already available. A lot of people have a fire in their belly about making sure that AI isn’t concentrated in particular firms, because they’re worried about this concentration of power issue.
And as you were saying, initially the first person has to build this language model by training it on all of the text from the internet. But the copycats can potentially take a shortcut… Listeners might not have heard this one, but basically it’s possible to train a very good language model much faster than training it from scratch on an enormous text corpus: instead, you just put prompts into GPT-4, see what comes out, update the model a tonne based on that, and do that for enormous numbers of inputs and outputs. And that turns out to be enormously more efficient, because in a sense, GPT-4 has already done the work of distilling the wisdom from the original, much larger source data. GPT-4 has distilled it into a relatively small number of weights — of numbers in this enormous matrix.
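What Rob is describing is essentially knowledge distillation: a smaller ‘student’ model is trained to imitate the outputs of a larger ‘teacher’. Below is a minimal toy sketch of the idea in PyTorch, using tiny random networks and random inputs purely for illustration; a real setup would train a student on an actual large model’s responses to real prompts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: in practice the "teacher" would be a large model like GPT-4
# and the "student" a much smaller model trained on the teacher's outputs.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    prompts = torch.randn(64, 32)          # stand-in for a batch of prompts
    with torch.no_grad():
        teacher_logits = teacher(prompts)  # "ask the teacher model"
    student_logits = student(prompts)
    # Train the student to match the teacher's output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```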
So there’s some reasons why you might expect the open source people to keep up, and some reasons why you might expect them not to. Do you have any overall take here?
Lennart Heim: Yeah, there is a paper called “Social and governance implications of improved data efficiency,” and people should just read it and replace “data” with “compute.” Let me try to quickly explain it.
What we’ve mostly talked about is the so-called “access effect.” If compute efficiency goes up over time — so you basically get more capabilities for less compute — then, as we just talked about, the open source community catches up. They have pretty good capabilities; they’re not that far behind. So it basically gets easier for people to get access to these kinds of capabilities, right? Over time you get more access. That’s one effect.
The other effect is a “performance effect.” The performance effect applies to whoever is leading: they also get more out of the same compute budget, right? So their performance gets pushed up as well.
So these effects push the laggards forward, but also the leaders to some degree. And this is really important to think about, because while we might measure capabilities as benchmark percentages — like 80% or 90% — sometimes 99% is way, way better than 98%. I think the best example is self-driving cars. Right now maybe you have self-driving cars that are like 98% good. It’s not enough; we’re just not deploying them. Whoever gets the first self-driving car that’s at 100% just wins. There are winner-takes-all dynamics, maybe. So we have this whole notion that some people are leading at the frontier, and they benefit from the performance effect — getting more capabilities from the same compute — while the access effect makes the open sourcers’ lives easier.
And then an important thing to look at — if we think, for example, about malicious use by open sourcers, or by other actors — is how the offence and defence is eventually going to play out. You might imagine that the people who are leading are, let’s say, the “nice guys” — we can regulate them and then use their systems to defend against these other systems. And this is what we need to think about: what is the offence/defence balance here eventually? Can a GPT-5 — which only they can train, because it’s way more expensive — then defend against all the GPT-4s which are going to be in abundance on the internet in the future?
I think this is a really interesting research question. One thing to think about, and maybe one reason I’m somewhat optimistic, is that these future, more capable systems might just be way better defenders — way more likely to win against these adversaries, especially if they’re much more widely deployed.
Rob Wiblin: OK, let’s just back up one second here. The key thing that falls out of this sort of analysis is, as you were saying, that maybe an individual would be able to train GPT-4 at relatively low cost in seven years or something like that, if current trends continue. Now, probably that’s a little bit early, at least for the $100,000 budget, because there are various other challenges they might face in doing this. But when you’re dealing with a bunch of exponentials, it doesn’t push it out very long — maybe they need it to become four times easier, but then that’s just another year or two. So if it’s not seven years, it certainly will be within 15 years, I’d think — and maybe within 10, because people will find workarounds for all of these challenges that we’ve been talking about.
And 10 years is not very long! Ten years is coming along pretty soon, and conceivably it could be earlier. Now, some people might be thinking, well, GPT-4 is not that dangerous. GPT-4 is very cool, but that’s not the kind of thing that I’m most worried about misuse for. Probably we’re more concerned when we’re thinking about GPT-5 and GPT-6, which might have a 10x cost increase again, so that’s again pushing things out — but again, only a couple of years before we need to be concerned about those models as well.
And in making this point, I was slightly fighting with one hand tied behind my back when I said: what about a $100,000 budget? Because if it cost $1 million or $10 million, there’s an enormous number of actors out there in the world that have access to that level of compute and that level of budget. And also they can just wait longer; they could do the training run over a longer period of time if they can’t get that number of chips all concentrated in one place.
So this era in which you can restrict access to the ability to train what are, by today’s standards, very compute-intensive models — on current trends, with the way we do computation now — doesn’t seem like a very long era. We’re talking about a five- or 10-year window here, after which we either need to get substantially more heavy handed about denying people access to the hardware, or centralise it somewhere in order to monitor it all the time, or just accept that this stuff is proliferating massively to everyone. And hopefully by then we’ve figured out that it’s safe, or we’ve done various other things to make it safe.
Just to say five or 10 years is not very long, folks. Ten years ago, I moved to come and work at 80,000 Hours.
Lennart Heim: And here we are.
Are we doomed to become irrelevant? [01:59:10]
Rob Wiblin: OK, so: what are we going to do? You were starting to raise this issue of offence/defence balance, where you’re saying that maybe this compute stuff is not going to cut it forever; now we need to start thinking about a different approach. And that approach might be that, sure, the amateur on their home computer, or the small business, might be able to train quite powerful models, but we should still expect that enormous internet giants like Google or authorities like the US government should have substantially better models. Even if it’s impressive what I can access on my home computer, there’s no way that I’m going to have access to the best, by any stretch of the imagination.
So how might we make things safer on a more sustainable basis? Perhaps what we need is to use that advantage that the large players — hopefully the more legitimate and hopefully the well-intentioned players — have, in order to monitor what everyone else is doing or find some way to protect against the harmful effects that you might get from mass proliferation to everyone.
Maybe this does sound crazy to people, or maybe it doesn’t, but I feel like what we’re really talking about here is having models that are constantly vigilant. I guess I’ve been using the term “sentinel AIs” that are monitoring everything that’s happening on the internet and can spring into action whenever they notice that someone — whether it be an idiot or a joker or a terrorist or another state or a hostile state or something — is beginning to do something really bad with their AIs, and prevent it. Hopefully relying on the fact that the cutting-edge model that the US government has is far above what it’s going to be competing with.
But this is a world, Lennart, in which humans are these kind of irrelevant, fleshy things that can’t possibly comprehend the speed at which these AI combatants are acting. They just have this autonomous standoff/war with one another across the Earth… while we watch on and hope that the good guys win.
Sorry, that was an extremely long comment for me, but am I understanding this right?
Lennart Heim: I mean, we are speculating here about the future. So are we right? I don’t know. I think we’re pointing to a scenario which we can imagine eventually happening, right? And I’m having a hard time giving you exact answers — particularly if the bar for AI governance, or for my research, is “stop access forever” or something. I think that’s a really high bar to eventually meet.
What I’m pointing out is that we have this access effect, but we also need to think about the defence capabilities here, in particular if you think about regulating the frontier. And I think this is part of what makes me a bit more optimistic. You’ve just described one scenario where we have these AI defender systems and they’re fighting, they’re doing everything for us. Maybe this works well and we can just enjoy ourselves, right? Having a good time — it seems great. But maybe it’s also just more manual than that; it’s not really clear to me.
But I think the other aspect to keep in mind here is that we’re talking about — let’s just say this is a GPT-6 system that everybody can train, whatever — this future system, and maybe the system is, again, not dangerous. Or maybe there’s going to be a change in the game — again, where we go from 89% to 90% or something along those lines — which makes a big difference in capabilities, right? This dynamic gives the defender a big advantage. Maybe people don’t even have an interest in using all of these systems, because the other systems are just way better.
Now think about the malicious actors who are trying to do this. I would expect the majority of people not to want to do this. These are problems we already have right now, where people can just buy guns. And this goes wrong a lot of the time, but it’s not like every second person in the world wants to buy a gun and do terrible things with it. Maybe that’s the same in these kinds of futures. Maybe these defender systems are just sufficient to eventually fight these things off, in particular if you have good compute monitoring and general data centre monitoring regimes in place.
What’s important to think about here is that training compute has been doubling every six months. This might not continue forever, or it might continue for a long time. And all the other factors which reduce the compute threshold have not been growing that fast. So again, all I’m saying is it buys us a couple more years, right? More than this 10 — maybe 20 or 30. Maybe that’s what I’m pointing to.
But overall, what we try to do with AI governance is like: yeah, AI is coming, this might be a really really big deal. It will probably be a really big deal. And we need to go there in a sane, sensible, well-managed way with these institutions. And there are many open questions, as you just outlined, where we don’t have the answers yet — we don’t even know if this is going to be the case, but we can imagine this being the case. And we just need the systems in place to deal with this.
Rob Wiblin: Yeah. I think a key issue here that makes this feel so unnerving, and also makes the future that I was describing seem quite hard to avoid, is the fact that these ML systems act so much faster than humans — such that if we were relying on the police to respond, or law enforcement to react as human beings holding meetings and doing stuff, they would have lost by the time they could even figure out what was happening.
We’re kind of like sloths by comparison to computers. I’m not sure what the fundamental reason is here. I suppose the human brain operates with very little energy. What is it, like 20 watts or something? Something ludicrously small. So that’s one impediment that we face: we’re just not using very much energy relative to what a server farm would. The other thing is that biology has hit on a mechanism for signal transmission that is incredibly slow relative to what you can do down a piece of metal, let alone an optic fibre or whatever. So we’re just at this very severe disadvantage in terms of how quickly we can think and how quickly we can do stuff, I guess, especially relative to an incredibly specialised model that’s just trying to do one thing.
Lennart Heim: I don’t feel optimistic about the cyberpolice fighting off future AI systems with current technologies. Defenders eventually need to equip themselves with these systems, and the thing I’m hoping to enable is for the defenders to have the most capable systems. That seems like a pretty likely thing to happen in the future. In particular, if frontier systems are as powerful as we’re trying to imagine here, governments will come and intervene, right? And regulating the frontier will just help there.
Rob Wiblin: Right. I feel like this needs to be discussed more, or I don’t think that this has sunk into the mainstream at all as yet. The fact that humans are, from a security standpoint, going to be obsolete, pathetic flesh bags pretty soon — and that we’re going to be in the crossfire of this AI-versus-AI dominated future — if this is just what most people thinking about AI governance or AI security think, then I reckon maybe this needs to be passed on to regulators somewhat quickly. Because I don’t feel like we’re on track to put the necessary infrastructure in place for this to work well in time. Is that about right?
Lennart Heim: I think this seems about right. I mean, you’ve been talking about AI on this podcast for a long time, and I think the last couple of months have been a game changer there. Are we on track now? No, I don’t think so. I’ve not seen evidence of it yet. But I think we’re definitely moving in the right direction to make more progress there, and that overall makes me optimistic. If you see the degree to which people now buy into the kinds of things we just talked about, this has definitely changed. I think this eventually enables you to take the kinds of measures we were just talking about. There’s a bunch of things which just make me hopeful there. But yeah, clearly it’s not sufficient. I don’t think we’re on the ball, but I’m optimistic. We will hopefully build more in the future, but we are facing one of the biggest challenges we’ve ever faced.
Rob Wiblin: Yeah. Just to be clear: I’m not advocating for the outcome that I was just describing or saying that this is good. I’m just saying that when I project forward, I can’t exactly see an alternative. But of course, predicting the future is incredibly hard. So maybe I have a misguided picture here, or maybe you have a misguided picture here, and when we come back in five years’ time, we’ll kind of laugh a little bit about the stuff we expected to happen.
But to me it just feels a bit like the default, and I think more work needs to be done ahead of time to make that safe, because it sounds terrifying. Like, this is starting to set off my authoritarian alarms, and, I don’t know, maybe this is a sci-fi trope, but you need to empower the defensive AI systems to react extremely swiftly to attack and shut down things that are sufficiently hostile and sufficiently dangerous. But now you’re like giving the nuclear codes… Well, not literally, but…
Lennart Heim: Hopefully not. We’ll draw a line there.
Rob Wiblin: Yeah. Right. But you’re giving it incredibly powerful authority for these systems to act autonomously and in a violent way against adversaries.
Lennart Heim: Just against all the compute. You know?
Rob Wiblin: And I feel like you really want to at that point have specified, have tested very carefully, what these things do — because this is like the maximally dangerous kind of application. And yet not applying it this way is also so dangerous.
Lennart Heim: In the beginning, you asked me what I think about when I hear a new policy. Maybe this is a way to stress-test anything: OK, here are Rob’s wildest nightmares — how does this play out in the future? Will this policy eventually help with these wildest nightmares? Of course I discount it a bit, but that’s one test: how is it eventually going to turn out? We’ve just projected forward a bunch of things: where is the frontier going to be? Where are the laggards going to be? What access will the relevant actors or rogue states have — what can they do?
And then the whole question is: how will the offence/defence balance eventually go? There are many reasons it goes well. There are also many reasons it eventually does not go well — in particular if you think about certain types of attacks where the offender just has a very big advantage.
Rob Wiblin: Yes. Yes, I guess we’re talking about how, in this future, human reaction time would simply be too slow to be relevant. At a very different scale, you might think that our social reaction time — our reaction time as a government and as a society to these developments — is also just way too slow. That it’s taking months for people to digest what’s happening. And things that five years ago would have seemed like crazy reactions now seem like just obviously the sensible next policy step.
But if this stuff comes at us sufficiently quickly, we’re just not going to have time to make peace with what has to be done, and do the necessary work to make it happen safely, before we have to have this stuff in place or we’re screwed.
Lennart Heim: Yeah. I think that’s where this whole idea of slowing down comes from, right? It’s just obvious we’re not equipped for this — the current progress is just immense; what’s been happening caught everybody by surprise, right? Everybody’s updating on when and what is going to happen in the future. And just going more slowly and more sanely would be a great way forward. And I think to some extent, this is what AI governance is about: like, hey, let’s take a look at the state of things, what’s going on, and how should we move forward? Should we move forward? What should move forward? All these kinds of questions should be asked, and I think we’re clearly in a world right now where we discuss them. And this seems great.
Rob’s computer security bad dreams [02:10:22]
Rob Wiblin: So we’ve been talking a little bit about my nightmares and my bad dreams and where Rob’s imagination goes when he imagines how this is all going to play out. Maybe let’s talk about another one of these that I’ve been mulling over recently as I’ve been reading a lot about AI and seeing what capabilities are coming online, this time a bit more related to computer security specifically.
So the question is: if GPT-6 (or some future model that is more agentic than GPT-4) were instructed to hack into as many servers as possible, and then use the new compute available to it from having done that to run more copies of itself — copies that then also need to hack into other computer systems, and maybe train themselves in one direction or another, or use that compute to find new security vulnerabilities they can then use to break into other sources of compute, and so on, on and on — how much success do you think it might have?
Lennart Heim: I think it’s definitely a worry which a bunch of people talk about. As I just hinted at before, I think computer security is definitely not great. I do think computer security at data centres is probably better than at other places. And I feel optimistic about detecting it: it might be a cat-and-mouse game here, but eventually you can detect it.
Why is this the case? Well, every server only has finite throughput. That’s just the case, as we just talked about: there are only so many FLOPS that can be run. So there’s a limited number of copies that can run there, and data centres are trying to utilise their compute as efficiently as possible. Right now you can expect most data centres to run at something like 80% utilisation or more, because otherwise they’re just throwing money out of the window — nobody wants to do this.
So if this GPT-6 system — this bad worm — comes along and hacks into the system, there’s only so much spare compute available for it to use. Then it gets a bit tricky for it: it has to grab a bit of compute here and a bit there — it’s kind of a scheduling problem. And if it were to kick the other workloads out, somebody would notice: “I was running this science experiment and it never finished. What’s going on there?” And data centres are already monitoring for this.
I think the best example we’ve already seen in the real world is the malware that used people’s personal computers for crypto mining. The malware just runs on your computer and tries to use your processor to mine crypto for the hacker’s personal wallet. And people started noticing this, mostly as “My computer is a bit slower than normal.” So the attackers modified the malware to use only around 20% of the processor’s performance, so you wouldn’t detect it. But if it goes full throttle, literally your laptop fan turns on, and you wonder what’s going on. If people see their laptop’s utilisation going up to 100% without them doing anything, they should be suspicious. Probably you should reset the thing.
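The detection idea Lennart sketches boils down to watching for sustained utilisation that no known workload explains. Here is a toy Python sketch of that kind of monitor, using the psutil library to read CPU load. The threshold and window are arbitrary illustrative choices, and a real data centre would rely on proper telemetry (GPU counters, job schedulers, power draw) rather than anything this simple.

```python
import psutil  # third-party library for reading system utilisation

THRESHOLD_PERCENT = 90.0   # sustained load above this is treated as suspicious
WINDOW_SAMPLES = 12        # consecutive high samples before raising an alert
SAMPLE_SECONDS = 5         # how long each utilisation sample averages over

suspicious_streak = 0
while True:
    usage = psutil.cpu_percent(interval=SAMPLE_SECONDS)  # blocks while sampling
    suspicious_streak = suspicious_streak + 1 if usage > THRESHOLD_PERCENT else 0
    if suspicious_streak >= WINDOW_SAMPLES:
        # In a real setting you would cross-check against the job scheduler
        # before alerting; here we just print.
        print(f"ALERT: ~{usage:.0f}% utilisation sustained with no known workload")
        suspicious_streak = 0
```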
And I think it’s the same for data centres. It’s like, “Oh, there is a computer worm here. It’s doing something. Let’s try to kick it out.” And then you can imagine a cat-and-mouse game which might be a bit more complicated. And maybe this is part of the reason why I’m advocating for the thing no data centre provider wants, which is a big red off switch. Maybe I actually want this.
Because normally you’re trying to optimise uptime — that’s what you want to go for as a data centre provider. They definitely have different tiers there: the higher your uptime, the cooler your data centre. And here we want the opposite: “Gosh, we literally lost control here. Let me just turn off all of the things.” Maybe doing it at a virtual or software level — turning off virtual machines — is not sufficient, because it’s a really sophisticated system that’s already trying to escape. You’d literally just turn off the compute, do forensics on what’s been going on, and try to defend against it.
What gets exploited there are existing security bugs and holes, and usually we fix them once we figure out what they are. This takes a little bit of time, but at least with traditional software — compared to AI systems — we have some clue: we develop these systems in a way we understand, so we can try to fix them.
Rob Wiblin: Just to add a bit more colour to this scenario: I probably misspoke when I said if GPT-6 were instructed to do this — because it would be much more sensible to have a model that’s extremely specialised at hacking into all kinds of computer systems, which is a much narrower task than being able to deal with any input and output of language whatsoever. So it probably would be quite specialised.
And yes, I basically am describing a computer worm — which I think our youngest listeners might not have had much exposure to. But from what I understand, from the early days of the internet and networked computers through to about the period of Windows Vista, this was a regular occurrence: people would find some vulnerability within an operating system, or sometimes within email software, that would allow them to break into a computer and then email everyone a copy of the virus. And then it would spread to other computers until basically everything was shut down, in a cacophony of people passing this malware or virus between all of their computers.
You could Google this — the “largest computer worms” or the largest outbreaks. I remember when I was a kid there were a handful of times that enormous numbers of computers went down, basically for a day or two, until these vulnerabilities could be patched. You would have companies completely inoperable, more or less, because their computer systems had been infected with these worms. I think that stopped more or less because computer security got better. It’s still very bad, but it was so bad then that it’s not so easy now. There are now a lot more firebreaks that make it hard to put together all of the security vulnerabilities that you need for a worm like that to operate.
So why could this come back? In the worm case, it was just a person or a group that programmed this tiny piece of software to use just a handful of vulnerabilities, or maybe just a single vulnerability, in order to break into these computer systems one after another in a kind of exponential growth situation.
In this new world, we’re imagining an ML system that is extremely good at doing security research, more or less, and discovering all kinds of different vulnerabilities. It basically has all of the knowledge that one might need in order to be an incredibly effective hacker. And so it’s going to just keep finding new vulnerabilities that it can use. So you shut it down from one avenue and then now it’s discovered something else and it’s copying itself using this other mechanism. And potentially it could also self-modify in order to obfuscate its existence or obfuscate its presence on a computer system. So it’s quite hard to clear it out. So it can kind of lie idle for a very long time using very little compute and then come to life again and copy itself elsewhere using some new zero-day exploit, some new as-yet-unknown computer vulnerability that it’s picked up in the meantime.
So it seems like serious people worry that something along these lines could happen. I think jokers have already tried doing this with existing language models. Of course those models don’t yet have the capabilities required to pull this off — it’s not simple — so it actually hasn’t happened. But if the capabilities got to a sufficiently high level, then this could be something that we could observe.
Lennart Heim: Yeah, it seems like this is everybody’s computer security worst nightmare. Having thought a bit about, and worked in, information security, I was like: yep, information security is pretty bad. There are definitely different companies with different standards there. And as you just described, back in the day it used to be like the wild west, where literally kids were able to take down Myspace because they found some bugs, and then the thing was self-replicating.
Um, what do we do about this?
Rob Wiblin: Well, it seems like the most realistic way to defend against this is that you would expect the white hat people to have a larger budget than the pranksters or terrorists or ne’er-do-wells doing this. So why wouldn’t Google train the hacking model first, and then use that to detect all the vulnerabilities that such a model could possibly find and patch them all?
I think the way that this could have legs is, firstly, it might just be that no one’s on the ball, and so no one produces this ML hacking model for benevolent purposes first. So it might be that the bad people have this idea and put it into operation before the necessary security work has been done on the other side. It might also be that there’s an offence advantage here: attackers only have to find one vulnerability, whereas defenders have to patch everything. And even if you have a security model that can discover the vulnerabilities, actually patching them might be a hell of a lot of work in many cases. There just aren’t enough system operators in the entire world to do all of the necessary software updates.
So anyway, it might be a danger during this kind of intermediate stage. Is there anything you want to add to this?
Lennart Heim: Yeah, maybe there are two different notions we should try to disentangle here. One idea is that there’s this AI system which is self-replicating and spreading around. And the best way to defend against this is to train the system on an air-gapped server — one not connected to the internet — and evaluate it there for these dangerous, self-replicating capabilities. And if it has them, well, please don’t deploy it. This is the first thing.
Another notion is that you can use these models to help you code, and they help you produce new malware. And then, as we just described, it goes from server to server doing X — and how you would detect it really depends on what X is. An AI system self-replicating, going from place to place to make more copies of itself, is something different from just malware going around — malware is something we already see a lot of, right? We just expect these offensive capabilities — these script kiddie capabilities — to become significantly better in the near future.
But yeah, for these AI systems, that’s why we need these capability evals: people should really check whether these systems have any ability or tendency to self-replicate. And I expect this not to appear suddenly from one system to the next, but rather like: cool, maybe certain systems can theoretically do it — you can basically talk them into doing it — and in the future they might do it on their own. But we will see some prompts and some signs of this beforehand, and that’s where we should be really careful.
But this whole idea of having air-gapped servers really helps there. I think one of the worst things you can do with AI systems you don’t understand is deploy them on the internet. This seems really bad — the internet is just a wild west. And we should also defend our critical infrastructure from the internet. Just don’t hook everything up to the internet. It’s just a bad idea.
Rob Wiblin: I’ve been just banging my head against walls for the last five years watching everything get connected to the internet, and it’s like this is a completely centralised failure node now for everything — for the water, for electricity, for our cars. I think just based on common sense, given how bad computer security is, this has been a foolish move. There are benefits, but we’ve just been completely reckless I think in the way that we’ve connected essential services to the internet. At least, so far as I understand it, we haven’t connected the nukes to the internet, but that seems to be almost the only thing that we haven’t decided to make vulnerable.
Lennart Heim: Yeah, at least we leave that alone. This seems really good. But basically everything else is connected, which seems really bad, and I think we have not seen the worst yet, because nobody has deployed the capabilities yet. Most nation states have access to each other’s critical infrastructure. If they wanted to, they could pull the plug — and for good reasons they don’t — but if they wanted to, they could. And having AI systems doing this is definitely not great. Some things should simply not be connected to the internet.
It’s funny, as a technical guy I’ve always been the one who’s like, “Please let’s not hook it up to the internet.” Like this whole idea of internet of things is like… Don’t get me wrong, seems great — it’s a lot of fun having all of these fancy blue lights in your room — but oof.
Rob Wiblin: Yeah, we’re just going to lose all electronics simultaneously in a worst-case scenario, where someone sufficiently malicious, or an agent that’s sufficiently malicious is interested in basically shutting down society. And I mean, people would starve en masse. That’s the outcome of the way that we’re setting things up.
Lennart Heim: And we see this right now already with companies where ransomware gets deployed, right? Whole companies taken down. We’ve had this in our hospitals. But luckily, some ransomware operators are like, “Oh, sorry guys, we only meant to target financial corporations, not your hospitals. Here’s the encryption key. But sorry for taking your whole network down for a week or a month or something.”
They’re not well defended; nobody’s on the ball on cybersecurity — I feel pretty confident in that one statement. Some are just way further along than others. And this problem just compounds with AI systems if we don’t figure it out. Maybe we should leverage AI in the meantime to make more systems secure. And if we can’t, let’s just not hook them up to the internet — that would be great.
Rob Wiblin: I think, unfortunately, we’ve just completely lost on that one. On not hooking things up to the internet, there’s almost nothing left.
Lennart Heim: I think there’s like some critical infrastructure where we just don’t do it. I feel like I would expect some power facilities to not be hooked up with the Internet, but maybe I’m just wrong and naive, maybe too optimistic.
Rob Wiblin: Yeah. I think this raises an entire intervention class that I haven’t seen discussed very much among AI existential risk folks, which is that maybe a very valuable thing to do is to start a computer security business that uses ML models to find vulnerabilities, alert people, and try to get them to patch them — to try to get as much of a lead as you possibly can on just improving computer security in general against this broad threat of a new way that people can identify and take advantage of vulnerabilities.
Lennart Heim: Ideally the AI labs would do it. My colleague Markus had this idea that there needs to be some responsible disclosure. Where it’s like, “Hey, we’re an AI lab, we developed this system. Hello, society. Here are these vulnerabilities, and we think the system might be able to exploit them. We can only deploy the system if we patch the vulnerabilities which we know for sure the system can exploit.” Or else we should not deploy the system, right?
Rob Wiblin: Yeah. One model that has better incentives might be that they have to notify people about all of these ways that it could harm those folks, or that their systems are vulnerable to it. And then they say, “Well, we’re going to deploy this in a month, so you’ve got a month to fix this” — or you’ve got six months, or whatever it is.
Lennart Heim: Ideally you have more time, and ideally you can also say no. It’s like: oh gosh, you’re not the only one who decides. There are other people who eventually get a say — eventually it’s up to governments and democracies to decide what gets deployed.
Rob Wiblin: Yeah. I mean, currently if people don’t patch their computers, mostly that harms them, because maybe their money will be stolen or their data is going to be stolen.
Lennart Heim: I mean, there’s a harm to society too, right? It’s just like insurance for these kinds of things.
Rob Wiblin: Well, the direction I was going was saying right now I bear most of the costs. But in this new world where compute can be used for hostile purposes, it becomes a whole societal issue if people aren’t patching their servers, or people’s servers are vulnerable — such that it may be necessary to have much more serious regulation saying it’s unacceptable to have large amounts of compute hooked up to the internet that are vulnerable to infiltration. It’s just a threat to all.
Lennart Heim: Yeah, I think so. I think in general, one good policy addition is that data centres should have certain security norms. It’s that simple. Certain security norms regarding physical access, and certain security norms regarding cyber access for these kinds of systems.
Rob Wiblin: And they have to add a red button.
Lennart Heim: Ideally the red button. We have to be a bit careful about the design — again, it’s dual-use; maybe the wrong people push the red button, or the AI system could do it. But maybe that’s the thing we eventually want, where in this case the red button is favourable because it means fewer copies running, or something. But yeah, the degree to which this is a good idea should be explored in more detail. Having some failsafe switch for these kinds of systems — for when you see, like, “Oh my god, this AI system is going haywire” — yeah, that’d be good.
Rob Wiblin: I guess one thing is just to turn off the system, another thing that the red button could do — or I suppose, I don’t know, I guess it’s the yellow button next to the red button —
Lennart Heim: Let’s call it the yellow button, yeah.
Rob Wiblin: — I think at the moment it probably takes quite a bit of work to start up an entire set of compute again and reset it back to factory settings, or back to some known safe pre-infection state, and then turn it back on. Maybe that needs to be much more automated, so that you press this yellow button and the whole thing goes through some process of clearing everything out and resetting from a previous known safe state — because that reduces the cost of doing it, since you’re not denying people access to their email.
Lennart Heim: Yeah, but eventually there’s still some cost. That’s the whole idea behind the power supplies and batteries in these data centres: if you lose power, you never want your system to shut down, because you just lose data, right? So first they run off the battery, and if they still don’t get power back by then, they turn on their generators to generate their own power. These things are currently optimised to always stay up, to keep their uptime high.
We need some innovations there where we can think about these things in particular. A bunch of the things we’re now talking about are pretty speculative: AI systems self-replicating, going from data centre to data centre, or even malware of that sort. And I think the off switch is a failsafe, but there are a bunch of interventions in between where we could just detect this. Monitoring is the first idea: just knowing what’s going on in your data centres, what’s going in and what’s going out, and what is using your compute.
Concrete advice [02:26:58]
Rob Wiblin: Yeah, OK. We’ve talked for a while. We should begin to head towards the final stretch. Let’s talk a little bit about concrete advice for listeners — or I guess possibly for policymakers or other people who are listening — who might be able to contribute to making progress on all of the problems that we’ve been talking about today.
What sorts of people or skills do you think are most required to move forward this field of compute governance?
Lennart Heim: For compute governance, we definitely need more technical expertise. I think that’s just a big thing. I think that’s also the biggest part where I’ve been able to contribute as somebody who’s studied computer engineering a bit and just has some idea how the stack eventually works. Within compute governance, you have really technical questions, where it’s pretty similar to doing a PhD where you actually work on important stuff. Then we also have the whole strategy and policy aspect, which is maybe more across the stack.
On the technical questions, I think we’ve pointed out a bunch of them during this conversation. What about proof of learning, or proof of non-learning? How can we have certain assurances? Which mechanisms can we apply? How can we make data centres safer? How can we defend against all these cyber things we’ve just discussed? There’s a whole lot of things you can do there.
There are some questions we need computer engineers on, and some which are more of a software engineering type, and a bunch of them overlap with information security: how can you make these systems safe and secure if you implement these mechanisms? And a bunch of the work is also just cryptography — people thinking about these proofs of learning and all the aspects there. So software engineers, hardware engineers, everybody across the stack should feel encouraged to work on these kinds of things.
The general notion which I’m trying to get across is that up to a year ago, I think people were not really aware of AI governance. Like a lot of technical folks were like, “Surely, I’m just going to try to align these systems.” I’m like, sure, this seems great — I’m counting on you guys; I need you. But there’s also this whole AI governance angle, and we’re just lacking technical talent. This is the case in think tanks; this is the case in governments; this is the case within the labs, within their governance teams. There’s just a deep need for these kinds of people, and they can contribute a lot, in particular if you have expertise in these kinds of things.
You just always need to figure out, What can I contribute? Maybe you become a bit agnostic about your field or something. Like if you’ve been previously a compiler engineer: sorry, you’re not going to engineer compilers — that’s not going to be the thing — but you learn some things. You know how you go from software to hardware. You might be able to contribute. Compiler engineers and others across the stack could just help right now, for example, with chip export controls, like figuring out better ideas and better strategies there.
So a variety of things. But I’m just all for technical people considering governance — and this is, to a large extent, also a personal fit consideration, right? If you’re more of a people person, sure, go into policy; you’re going to talk to a lot of folks. If you’re more of a researchy person who wants to work alone, sure, you can do deep research there. You’re not going to solve the alignment problem itself, but you’re going to invent mechanisms which enable us to coordinate and buy us more time to do this whole AI thing in a safe and sane way.
Rob Wiblin: Yeah. Seems like one thing that stands out is you need technical people, but not just technical people. What you’re craving is technical plus understands policy, technical plus understands law, technical plus is a great communicator — that kind of thing.
Lennart Heim: Ideally. Though not all of those combinations at the same time.
Rob Wiblin: Sorry, yeah.
Lennart Heim: And sometimes you just learn it. I was just this tech guy: I developed embedded systems and I deployed them. I didn’t have a lot of exposure to policy besides me trying to get involved in politics at some point. Yeah, you can learn this.
Rob Wiblin: What counts as “technical” here? What are all of the things that would qualify as that?
Lennart Heim: Yeah, we can go literally all the way down to the semiconductor supply chain — somebody who knows how these things get manufactured. We need these people to implement export controls, because they know how the supply chain works, they know what is critical, and they know what’s going to be the next big thing in the future where you might want to explore export controls. So somebody already working on semiconductor stuff is literally useful, just in trying to understand this.
A lot of times you have a problem with engineers, where they’re just like deep down on one thing. Nobody sees the whole thing, right? Because everything is just so hard. You just have this one engineer who literally works on this one part. That’s what they do for their whole lifetime. And you need somebody who’s able to take a step back and think about this.
One question which I try to ask is like: Here are the specifications of a chip. Which one would you pick? Where would you set the threshold? And your desired goal is to dampen race dynamics. This is a long chain of things you need to think about there. And ideally, you can think through all of the steps — but you’re never alone, right? So ideally, join other people, join think tanks, so we can all work together.
I’ve been having a blast working together in an interdisciplinary way. Everybody always jokes about interdisciplinarity being this great thing that everybody wants to do. And then you get it in your studies when you work with — I don’t know, I did computer engineering and then I worked with a power engineering person — like, “Oooh, interdisciplinary.” No: now I’m working with people thinking about treaties, with lawyers — everything across the stack. And it’s really, really great. You don’t always need all of them for every type of research, but eventually you all come together.
And people pick their battles within that. You can be anywhere across the spectrum — you don’t need to do all of the things. It’s always better if you can do all of the things, but that always comes at some penalty to the depth of your knowledge. Sometimes you just need deep technical expertise, but you also need somebody you can tell it to who then translates it. I think a bunch of my work has just been literally translation work: I talk to the people in the basements thinking about all of this stuff, and then I think about what it means for governance. And then I go to the policy people, like, “I think that’s what we’re going to do.”
Rob Wiblin: What’s needed on the more policy side? I suppose you need bureaucrats, like, people to take interest in this in the civil service, people to take interest in this inasmuch as there are policy advisors to ministers or policy advisors to people running government departments, people advising folks in Congress. All of that stuff seems quite essential as well.
Lennart Heim: Yeah, all these things seem essential. I think people should always think about which governments matter the most regarding AI, and also which places within them matter the most regarding AI. Ideally, more and more places will matter regarding AI, but some more than others, right? If you literally work in the department which is responsible for AI, here we go — this seems good, for example. And in general, it’s the same story as on the technical side: these places just need more technical expertise, and for technical people to go in there. And they would love to have you; they really embrace technical people.
And there are great ways to upskill on this. There’s literally something called the TechCongress fellowship, where technical people get placed into Congress and work with people there. A lot of times I think they don’t end up staying, because the other options are just so nice, right? Compared to working at Google — where you’ve got your nice working hours, your nice cafeteria — it’s hard to work in government. But this is where it happens. And I would just really encourage people — the impact case is definitely there.
Rob Wiblin: If a listener was under 30 and wanted to help you and your colleagues out in some way, what sort of person should they go off and try to become, who might be able to really move the needle in future? Are there particular kinds of undergrad degrees or any kinds of backgrounds that you’re going to be excited to hire from in five years’ time?
Lennart Heim: I’ve been asked this question a lot. “It depends” is probably the answer. I would be excited about people just understanding how AI accelerators work to a good degree.
Rob Wiblin: Accelerators?
Lennart Heim: Accelerators. Just AI chips — GPUs, TPUs — but the broader term for them. And you would probably study electrical engineering for this, with a focus on computer engineering. Then you would do design, architecture, these kinds of things. You probably don’t need to understand how chips are manufactured in detail, but somebody who knows that in detail also seems good. Ideally, though, I’d want them in the US government right now, immediately, to do this kind of stuff. But skilling up there seems good.
And maybe another notion is I think it’s easier to go from learning technical stuff first and then learn governance stuff, than the other way around. So it’s probably good to start with a pretty STEM-y degree, learn these kinds of things. I think it just in general enables you to think well, just do a bunch of good stuff. Don’t get too attached to your specific knowledge. Mostly get attached to the way you think, the way engineers solve problems. These are the kinds of transferable skills which I’m excited about.
Your specific knowledge might be relevant from some classes. It’s hard to say what exactly, but maybe something like AI accelerator design, or understanding the semiconductor supply chain better — but also people who know how to train these systems. For example, I need somebody who knows: how big does your cluster need to be? What are the properties you’re actually looking for there — how many chips, how they are interconnected, how much energy do you need? These kinds of things are really important, because if I’m now writing policy, those are the numbers I actually need — and a lot of times it’s really hard to get information from the people at the labs, because there’s just a lot of confidentiality involved. So if somebody’s at the labs and wants to do governance, that seems great. Lots to do. I would love to get somebody like this on board.
Rob Wiblin: So that was people who were younger. I mean, all of this stuff is so obscenely urgent that I feel like this is maybe more a moment for people who have relevant expertise already and are over 30 and are closer to the peak of their career, in terms of their influence and their technological capabilities. I was actually looking at the age distribution of listenership to this show today, and I think about half are under 30 and half are over 30. So we could really use the over-30 people to jump into action right now. Do you want to talk to them?
Lennart Heim: So, over-30 people: work on impactful stuff is the general message — but that’s probably not a surprise if you’re listening to this podcast. People in technical jobs — be it as a software engineer or a hardware engineer — test your fit for AI governance. Just start reading, see what needs to be done, and pick open questions. A large part of what I’m trying to do is put open questions out there so people can work on them. Two months after the release of this episode, hold me accountable if I have not published my questions yet — let’s use this as a forcing function. For some people it’s like: hey, here are the questions which you might want to work on.
And then I think a great way to go about it is to test your fit with fellowships. Maybe take a sabbatical, maybe a longer vacation — try to test your fit. You should do something you eventually enjoy. And then think: do you want to work in governance? Maybe you want to do some kind of placement in a think tank. Maybe you want to do a placement in Congress or the Senate, or something along those lines. There are great opportunities out there to do this. And if you don’t like it, then you go back. But I definitely promise you, people will welcome you with open arms if you have any type of technical knowledge on these kinds of things, particularly right now.
Another way to test your fit is to join us at GovAI: we run three-month summer and winter fellowships, and the goal is just to have people upskill on AI governance. Then we can work together on a project and exchange technical knowledge for governance knowledge. And if we work together, we can probably do something good there.
Rob Wiblin: Yeah. I guess the challenge for people post 30, and especially maybe post 40, is they’ve already made commitments in terms of what they’re trained in. If you’re a lawyer and you’re 50, you’re probably not going to go back and do an undergrad degree in computer science.
A benefit here is that it seems like the surface area is very large: there are many different skills that are relevant, and there are going to be many different actors working on this. It feels like AI is a rocket ship — and AI governance is therefore also going to be a rocket ship, basically. So I would expect a lot of relevant roles in government, in policy, and in the labs coming up. And I suppose the technical side is a particularly useful thing to have, but so potentially is legal or legislative expertise.
Lennart Heim: Absolutely, yeah. Don’t get me wrong: if I’m overemphasising technical stuff, this is just where I’m coming from, where I feel like I want to make a dent. But in general, AI governance can absorb all kinds of talents. Technical AI safety is a bit harder there — it’s really hard to get started on this if you don’t have the relevant background knowledge. But my claim is whatever your background is, there’s probably something useful to do within AI governance.
Rob Wiblin: Economists as well, I imagine, are going to be relevant all over the place here. Thinking of my background.
Lennart Heim: Oh, yeah. Absolutely.
Rob Wiblin: Do you have any other suggestions for people who are inspired to take action based on this conversation? We didn’t talk about computer security there, which is a whole other thread that is adjacent to this.
Lennart Heim: Yeah, information security, computer security, big thing. Please work on this. I think recently there was actually a career review published on this. So this seems to be a great thing to take a look at and work on this. And there are more people trying to do this work at the labs. Once we’ve got the labs secure, we also need to secure people like me and others, because we also work with more and more confidential information. That seems important. Yeah, make the computers secure, make the labs secure, and also make the AI governance people secure.
Which other things would I like to say? When I got started, I didn’t work on compute, but I had a hardware background. I was like, “AI seems like a big deal. This technical stuff seems really hard. I don’t know how to contribute. I’m a hardware engineer. All I can do is build better AI chips. That seems bad.” Then I kind of dropped it. And at some point, I discovered that compute seems really important as an input to these AI systems — so maybe just understanding it would be useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s working on this. All these smart people — they’re on the ball, they’ve got it,” right? But no, they’re not. If you don’t see something covered, my cold take is like: cool, maybe it’s actually not that impactful, maybe it’s not a good idea. But whatever: try to push it, get feedback, put it out there, talk to people, and see if it’s a useful thing to do.
You should, in general, expect there are more unsolved problems than solved problems, particularly in such a young field, where we just need so many people to work on this. So yeah, if you have some idea of how your niche can contribute, go for it. And just because something hasn't been covered yet doesn't mean it's not impactful or not a good thing to go for. I encourage you to try it and put it out there. This has definitely been my path. I was just like, "Nobody's working on this. It seems important." And then I just went around asking, "Hey, why is nobody working on this?" And it was like, "Oh yeah, do you want to work on it?" Sure. Cool. Now I've been doing this and trying to get more people involved in these kinds of things.
Rob Wiblin: It’s really interesting that we had this perception that it’s probably covered, because a couple of years ago, the entire field of people concerned about AI governance was really small. I guess especially thinking from this somewhat longer-term point of view, we’re talking about at best dozens of people. What are the odds that one of those 30 people had a background in compute specifically, and also had the energy and entrepreneurialism to start a whole new research agenda? Of course it’s unlikely.
Lennart Heim: Indeed, yeah. I just didn't think that well about it. What should my prior have been there? But maybe it's also something like: at least somebody should have written it down as an open question or said something about it. I just didn't find anything. If you googled "AI and compute," there was not a lot, right? And at some point OpenAI put out their blog post, and I just saw this exponential line going up. Exponential lines going up? I don't know, somebody should think about this, right? And I just chased it, yeah.
Rob Wiblin: To look at it from another point of view, there probably were a lot of people in government, broadly speaking, thinking about it — I imagine that the groundwork required for these export restrictions on chips probably began many years earlier, and it was something that would have germinated in someone’s mind in the policy or national security space a long time before. So from that point of view, there probably were quite a lot of people thinking about this on some level.
I think it’s just that if you’re in government, if you’re in an agency trying to write legislation, if you’re at RAND or whatever, trying to come up with a policy proposal here, it’s quite challenging for those folks to think like five, 10, 15 years ahead. It feels like that group is often very focused on problems today and problems next year — stuff that people are talking about right now. And so you can find a lot more things being neglected if you’re thinking, well, what are the issues going to be in five years’ time? Which is kind of where you were able to grasp that, whereas someone who’s currently in Congress just doesn’t have the leisure for that.
Lennart Heim: Yeah, I think there’s a bunch of misconceptions you have when you’re relatively young. It’s like, “Oh, yeah. It seems like nobody’s doing this.”
I think another problem we have in AI governance is that there's just a bunch of work that is not publicly available; it lives in people's minds. And particularly right now, I think more work is happening but less of it is being shared, because people are busy: they just don't put ideas out there. They have thought about it, but they didn't dig deep into it. When I then learned about it, it was like people had looked into compute before, but nobody ever finished any projects there — because they kind of got stuck, or they got drawn into government and immediately worked on it, and then they stopped communicating with the outside world. In particular if it's a sensitive topic like this.
Rob Wiblin: Yeah. Another thing that has occurred to me recently: I guess as I’m getting older and I’m no longer feeling like I’m early in my career, I’m realising a mechanism by which really important stuff gets neglected. When I was 20, I was like, “Why aren’t people who are 50 working on problem X, Y, and Z?” And it’s like, they’re busy. Thirty years ago, they were at your stage, and they saw what stuff was coming up and what they thought was really important — and then they’ve been working on it since then, developing expertise. And now they’re flat out trying to solve the problems that they’ve specialised in solving. But now they don’t have the time to just be scoping out issues that aren’t a problem yet or speculating about what other issues there might be.
So they’re not stupid; it’s just that this is kind of the system: that to some extent you need young people to be doing this while they are at undergrad. Not completely — obviously there are mechanisms by which older people notice stuff that’s up and coming — but when you’re 20, you have a particular opportunity here to guess what is going to be important in future that is very difficult for the current director of an agency to be doing, because it’s very hard to be both implementing current, necessary functions while also anticipating what’s coming up a long time in the future.
Lennart Heim: Absolutely. I think this is the thing which has changed over the last months for me. Like, oof, when was the last time I just thought about big-picture questions for 10 or 20 years from now, compared to just reacting? A lot of the time you're just reacting, putting out fires every now and then. I think it's just getting worse over time.
And then in general, the more you progress in your career, you become more busy — and it just becomes harder to make time for these kinds of things and cover everything. As I just said, like me writing up all of these research questions: I’ve got them all in my head, right? I can say yes or no, but just actually taking time to write them up and put them out there… oof.
Rob Wiblin: Yeah, we’re getting to the point where we’ve committed to understanding and specialising in particular topics, and we’re probably no longer alert to what stuff is going to be important in 10 years’ time. That’s something that other people, who haven’t yet necessarily got full-time jobs, we need you to do that — and we’ll interview you in 10 years’ time.
Lennart Heim: This is part of why GovAI has a policy team which I’m part of. So I’m committing to work on the stuff right now and try to have the big-picture case in my mind, but [some of] my other colleagues are just like, “I’m thinking of what’s next in the future” or something, like take a step back. You want a portfolio of people who do that.
Rob Wiblin: Exactly.
Lennart Heim: Of course it’s always hard; I cannot decide the portfolio of the world on this right now. And I think right now a lot of people feel drawn to just act right now and act on policy. I think this might be true for a bunch of people, maybe not for everyone. Maybe the things people have done before were also useful, and maybe we just work on all the stuff right now. And then there is the next opportunity we learn five years from now, where we’re just like, damn, nobody saw this coming, right? Like people should always think about what’s the next big thing or something.
I think my claim is that when people thought OpenAI was a big deal, they should have also thought at the same time that compute is a big deal, because [OpenAI] just pretty openly bought into the scaling hypothesis. So if you think OpenAI might be a big deal in the future, at the same time somebody should have figured out compute governance, right? But again, there’s a lack of talent, lack of expertise for these kinds of things — and this is eventually why it just didn’t happen.
Rob Wiblin: Well, I guess people told me six or eight years ago that compute is going to be a massive deal. Like compute’s key.
Lennart Heim: And you bought Nvidia.
Rob Wiblin: And I bought Nvidia, exactly. So I got looped into it. But the issue is the people who had the expertise to notice that didn’t necessarily have the expertise to take action on it, because they’re not experts in compute. Those people were closer to being experts in algorithms who noticed, because they’re next to it, that compute was going to be a really big deal. Yeah. I don’t know. We end up with all kinds of strange stuff because of the finitude of the human mind and the need to specialise.
Lennart Heim: And sometimes you just also need to direct people to it, right? I have an easy time saying, well, maybe there were some arguments out there, but then it's not clear where a technical person should go, right? You don't do technical alignment, you don't really do governance: so what is the place to be if you want to think about these things? Then you have a hard time.
If you try to think about a new field, there are not that many places where you can do it. It does not really fit into academia. For the international relations people, I'm too technical; for the technical people, I'm too political. So there are not that many places where this kind of research can eventually happen. So I'm just really lucky to be at a place where I can think about these things and try to make them a bigger thing. Also, no mainstream think tank had tried to take this on as a field, as a domain to work on.
Rob Wiblin: Yeah. All right, final question: Life’s coming at us pretty fast. It can be a little bit stressful. It’s stressful for me thinking about this all the time, and I imagine sometimes it gets you down as well.
Lennart Heim: Oh, yeah.
Rob Wiblin: What do you do to feel better and unwind?
Lennart Heim: I’m a big fan of nature. Just going hiking, being out there. I live most of the time in Zurich, which is a beautiful place to just go swimming, go down the river with a boat, listening to good music. I think that’s what I’m doing.
And I’m actively now trying to look for people where I don’t need to talk about AI. So I’m actively trying to make this commitment, like, “Folks, no AI talk, please.” Because now that it’s mainstream; everybody wants to talk about it, right? It’s just like, “Ugh, I’ve already been doing this all day.”
Rob Wiblin: “Shut up, Mom.”
Lennart Heim: Like your favourite band suddenly becomes cool, and it’s like, “Oh, damn.”
The other thing is I just really like to be in nature. I was a lucky kid who grew up with a bunch of huskies. My dad had this weird hobby where we just had 20 sled dogs. There was not a lot of snow, but sometimes, every now and then, we had snow. We had a trike where I was just sitting in front and we were going out with the dogs. So I think it’s still a big thing for me to unwind. And also just a plan Z, I think. A plan Z that’s just like, literally just go to Alaska, Canada, get my 20 dogs and just, yeah — that’s it. Hooking nothing up to the internet, just to be safe. Just me and my dogs and a bunch of people. That’d be great.
So I think this still really grounds me. And just solving all the other problems in the world. Yeah, AI is a big thing, but like, come on. It’s kind of ridiculous what we’re claiming here, right, what we were just talking about. So sometimes just like, gosh, how did I get here? I started with saving the whales, and now I’m working on AI governance? It’s been a weird, bumpy road.
Rob Wiblin: Yeah. It’s enough to make you wonder whether you’re living in a simulation. My guest today has been Lennart Heim. Thanks so much for coming on The 80,000 Hours Podcast, Lennart.
Lennart Heim: Thanks for having me.
Information security in high-impact areas (article) [02:49:36]
Rob Wiblin: Hey everyone. Hope you enjoyed that interview.
In the conversation, we talked a bunch about the importance of information and computer security, and how that might be really critical in the development of extremely powerful artificial intelligence systems. And not completely coincidentally, 80,000 Hours recently published a new and updated career review on Information security in high-impact areas as a career choice that one might make in order to try to do a lot of good. That article received a big update from author Jarrah Bloomfield.
You can find that article on our website of course, by searching for “80,000 Hours and information security,” but I thought given its pertinence to this interview, it might be good to do an article reading and throw it on the end here. So I’ll have some final announcements in the outro before this all finishes, but before that, for your enjoyment, here is an article reading on “Information security in high-impact areas” by Jarrah Bloomfield.
Audio version of Information security in high-impact areas – Career review
Rob’s outro [03:10:38]
Rob Wiblin: Two notices before we go.
Would you like to write reviews of these interviews that Keiran, Luisa, and I read?
Well, by listening to the end of this interview you’ve inadvertently qualified to join our podcast advisory group. You can join super easily by going to 80k.link/pod and putting in your email.
We’ll then email you a form to score each episode of the show on various criteria and tell us what you liked and didn’t like about it, a few days after that episode comes out. Those reviews really do influence the direction we take the show, who we choose to talk to, the topics we prioritise, and so on. We particularly appreciate people who can give feedback on a majority of episodes because that makes selection effects among reviewers less severe.
So if you’d like to give us a piece of your mind while helping out the show, head to 80k.link/pod and throw us your email.
———
Also, just a reminder that if you're not already exhausted and would like more AI content from us, we've put together that compilation of 11 interviews from the show on the topic of artificial intelligence, including how it works, ways it could be really useful, ways it could go wrong, and ways you and I can help make the former more likely than the latter.
It should show up in any podcasting app if you search for "80,000 Hours artificial," so if you'd like to see the 11 we chose, just search for "80,000 Hours artificial" in the app you're probably using at this very moment. You can also check it out using one of the top menus on our website at 80000hours.org.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio mastering and technical editing by Milo McGuire, Dominic Armstrong, and Ben Cordell.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes