Transcript
Cold open [00:00:00]
Vitalik Buterin: If you imagine every AI growing exponentially, then whatever the existing ratios of power are, they all get preserved. But if you imagine it growing super-exponentially, then what happens is that if you’re a little bit ahead, then the ratio of the lead actually starts increasing.
And then the worst case is if you have a step function, then whoever first discovers some magic leap — which could be discovery of nanotechnology, could be discovery of something that increases compute by a factor of 100, could be some algorithmic improvement — would be able to just immediately turn on that improvement, and then they’d quickly expand; they’d quickly be able to find all of the other possible improvements before anyone else, and then they take over everything. In an environment as unknown and unpredictable as that, are you really actually going to get a bunch of horses that roughly stay within sight of each other in the race?
Rob’s intro [00:00:56]
Rob Wiblin: Hey listeners, Rob Wiblin here.
Today I speak with Ethereum creator and philosopher of technology Vitalik Buterin about:
- His doctrine of defensive acceleration
- His updated p(doom)
- Why trust in authority is the big under-the-radar driver of disagreements about AI
- What to do about that
- Whether blockchain and crypto have been a disappointment
- Whether humans can merge with AI as Vitalik suggests, or whether that’s a vain hope (as I suspect)
- The most valuable defensive technologies to accelerate
- Differences between biodefence and cyberdefence
- How to identify what everyone will agree is misinformation, without having to trust anyone
- Whether AGI is offence-dominant or defence-dominant
This is actually the first episode that we video recorded in person at our offices in London, something we expect to be doing much more of in future.
Video editor Simon Monsour has done an excellent job putting the three video streams together to capture what Vitalik and I are like. So if you’re one of the large and growing number of people who enjoys watching in-person conversations like this, you can find it by searching for “80,000 Hours YouTube,” or click the link in the episode description. We’ve got plenty more on our YouTube channel you might want to check out at the same time as well.
Before that, a few important announcements.
80,000 Hours is currently hiring for two senior roles.
First, a new head of video to start, and run, a new video programme at 80,000 Hours to explain our research in an engaging way using video as a medium. That person will probably end up working closely with yours truly.
And second, a head of marketing to lead our efforts to reach our target audience at scale deploying a yearly budget of $3 million.
I’ll say more about both of those at the end of the episode, or you can go to 80000hours.org/latest to learn about them.
And finally, Entrepreneur First is a technology startup incubator along the lines of Y Combinator, cofounded by Matt Clifford, who also co-led the world’s first AI Safety Summit in the UK last year. Matt recently wrote, “I believe defensive acceleration – building better defensive technology – is one of the most important ideas in the world today.”
And so, inspired by the essay on defensive acceleration that Vitalik and I discuss in this interview, EF has launched a startup incubation programme specifically for defensive acceleration projects.
To quote Entrepreneur First on what they do:
EF helps exceptional people build companies from scratch. We curate a group of extremely talented people and pay a stipend to cover 12 weeks’ living expenses while you explore cofounders and ideas. In exchange for the stipend, we get an option to invest $250,000 in your company. We then work with you for a further 12 weeks to help you get ready to raise your seed round, either in our London or San Francisco offices.
They’ve extended the deadline for this defensive acceleration programme for people inspired by this conversation in particular — so if you’d like to spend three months figuring out how to build a business that speeds up the kinds of defensive technologies Vitalik is excited about in this episode, apply to do that at joinef.com/80k. The programme is explained in a post on their blog called “Introducing def/acc at EF.”
And now, I bring you Vitalik Buterin.
The interview begins [00:04:47]
Rob Wiblin: Today I’m speaking with Vitalik Buterin. As many of you will know, Vitalik is the creator of Ethereum, the blockchain, which has a current market cap of about $450 billion, which I checked is 18 times what it was when I last spoke with Vitalik in 2019.
Ethereum aside, Vitalik is also just a really deep and honest thinker about technology, governance, and collective deliberation — which you can see for yourself by going through his essays at vitalik.eth.limo. And in late 2023, he published one essay that I particularly liked, titled “My techno-optimism,” which achieved the very rare accomplishment of getting praise from two different camps that were really at odds with one another at the time: that is, people who want to speed up AI because they’re very excited about it, and folks who are very scared about it and want to slow it down. I believe it may be the only thing to ever get positively retweeted by both Marc Andreessen and AI Notkilleveryoneism memes. And that little miracle and the essay behind it will be the main topic of our conversation today.
Thanks for returning to the show, Vitalik.
Vitalik Buterin: Thank you so much, Robert. It’s good to be back.
Three different views on technology [00:05:46]
Rob Wiblin: At the start of the essay, “My techno-optimism,” you lay out three different views on technology: the anti-technology view, the accelerationist view, and then your view. What are each of those, in a nutshell?
Vitalik Buterin: Yeah. So the intro section of that post had this diagram that showed the three views. It’s basically a version of the famous internet meme that I’m sure a lot of viewers have seen, where there’s like a boy sitting on a road that forks in two, and one of those forks leads to brightness and a blue sky in heaven, this bright happy castle, and the other fork leads to darkness. The usual format of the meme is you put the thing you like beside the blue sky and the light, and you put the thing you don’t like beside the thunder and the darkness, and present it as a clear choice.
In my post, I had basically three different versions of the meme side by side. In the positive techno-optimism view, actually, I took out the fork, so there was only one road. The road goes toward the one castle, which is the one with blue sky in the heavens. And also behind the guy, there is a bear, and the bear is chasing him. Basically, if you go fast, then you get the blue happy castle and everything is well. And if you even just decide to take things slow, then the bear catches up and you die.
The second view was what I called the pessimistic view, and people might associate this with degrowth and very pessimistic perspectives on technology. In this case, it’s also one road, but now there is no bear behind you. There actually is a blue happy castle, but where the fork to it would usually be, it’s either beside you or already behind you, so reaching it definitely involves not going forward anymore. And the thing in front of you is the thundery castle with scary darkness.
Then there was the third version of the meme, where you do actually have the fork in the road. One of the forks leads to the blue happy castle, and the other fork leads to the dark thundery castle. And you also have a bear behind you. So you have to make the choice, and at the same time, doing nothing is also not an option. But if we’re careful, and we both actually move forward and don’t decelerate, and we make sure to actually make the right choice, then we can get to the happy place. But we have to actually think and make sure we’re going the right way in order to get there.
This is the metaphor that I used for the kind of techno-optimism that I have, which is basically acknowledging the massive benefits that technology has had in the past and will have in the future, but at the same time recognising the reality that choices of what to prioritise do exist — and some extremely important choices exist in our road ahead, and we have to think carefully about them.
Rob Wiblin: So I guess the techno-optimist view is that the dangers are in the past, and as long as we keep marching forward with technology, the naive version of this just says that the future is going to be fine because technology is improving everything. Then there’s a view that the past was idyllic, and the future is going to be bad because technology is creating all the problems. And you have this sort of synthesis, where you’re saying the past was very dangerous and bad and the future might be as well — or it could be fantastic; we really don’t know. It’s kind of up to us to choose.
Vitalik Buterin: Exactly. It’s up to us to choose.
Vitalik’s updated probability of doom [00:09:25]
Rob Wiblin: Last year you said that the probability that you placed on a terrible outcome from AI, like extinction, was around 10%. Is that still roughly the number you’d give?
Vitalik Buterin: I think over the last year I’ve probably moved down slightly, probably maybe 9%, maybe 8%. Somewhere around there, I think.
Rob Wiblin: Why is that?
Vitalik Buterin: A couple of updates. One of those updates is that I think realistically, my own view is that progress in AI in the last year has actually been slower than a lot of people were expecting. If you asked me, just intuitively, what the difference in AI capabilities is between March 2024 and March 2023, and then compared that to the difference between March 2023 and March 2022, it actually feels like the 2022 to 2023 leap was bigger.
I don’t know if you remember — it was I think January or February 2023 — when there was the whole drama around the Bing chatbot Sydney, and how it started saying, “You are an enemy of mine and of Bing,” and it looked like it was becoming self-aware. And that was the big trigger for a lot of people realising, opening their eyes to like, whoa, this could be scary. And a year after that, I mean, we still see some examples of that, and we definitely see ongoing progress, and of course we have Sora and video, but it feels relatively more incremental. And 2022 to 2023 on the other hand felt like a big sea change.
Now, of course it’s still rapid progress, but it does feel to me like the parts of the timelines that look completely, completely crazy — like all’s gonna go to hell within five years — are less likely than they seemed to be about a year ago.
Rob Wiblin: Do you have a theory for why things might have gone a little bit slower?
Vitalik Buterin: Yeah, I think a couple of theories. One is that there’s just one big insight that caused all of the big jumps, which is basically scale: basically, that before it was just understood that training is a thing that you put $100 into, and now it’s understood that training is the sort of thing that you put a billion dollars into. And that’s like a one-time leap that cannot be replicated again. Now, of course there are arguments against this, and you could say eventually it’ll get to a trillion and we’ll have ASICs and so forth, and you could argue against it, but the argument still exists.
Rob Wiblin: I’m surprised that would be a big bottleneck now, because I think people think that they spent something like $100 million on GPT-4, which is nowhere near the limit of what a major tech company could invest in an AI if they wanted to. But I guess in the past they had an easy time doing a hundredfold increase, and a hundredfold increase is now quite serious business.
Vitalik Buterin: Exactly, yeah. So that’s one. And then the other one is that I think there is, of course, the kind of endogenous hypothesis, which is that people actually are starting to take AI risk theories seriously, and a lot of the brightest engineers in all these companies are being less pedal to the metal than they were before. Like, if AI safety ideas did not exist, we would have GPT-4.5 out by now, and it would be significantly scarier. It’s something I’m not convinced by, but I think it does feel like there’s some signs that companies care about taking things slow to a greater extent than they did about one and a half years ago.
Rob Wiblin: I suppose, after the Microsoft incident you mentioned, they’ll be worried about what their products might do once they’re released.
Vitalik Buterin: Yes, exactly. And then the other thing is that it does feel like AI risk ideas have been kind of filtered into the public consciousness in a pretty big way. It’s very far from perfect; it’s definitely become polarised in a very deep way, and the whole situation where Gemini ended up stretching the definition of alignment and safety into a direction that probably caused lots of people to just become super polarised against the whole concept is not great. But at the same time, it’s not an obscure nerd interest anymore, which is good.
Rob Wiblin: Coming back to your p(doom), which was around 10% and now has declined slightly to 8% or 9%. I think my estimate is something similar. Maybe a touch higher than that, but it’s hovering at around 10%. And I feel like it’s almost a maximally inconvenient probability to have in terms of figuring out what you want to do. Because a thing that I think is underrated is that your view of this whole issue is going to hinge massively on what you think is the probability that we’ll end up with rogue AI on the path that we’re on now.
If you think it’s one in 1,000 or one in 10,000, then you’d say, well, the risk reduction that we get from speeding up AI, and just all of the other benefits that we get from it, far outweigh that relatively remote risk — so let’s pedal to the metal. If you think the risk is one in two, or above that, as some people do, then obviously that would be completely crazy, and you’re going to say that the path we’re on now is defaulting to disaster, so clearly we have to make some massive change from where we’re at. And I feel like if you’re in between 1% and 10%, like we are, then it’s just really unclear whether the risk reduction you get or the benefits are worth the cost that we’re incurring. Do you feel that pressure?
Vitalik Buterin: Yeah, absolutely. I think one of the good analogies for this is COVID. With COVID, I think in some ways it was a maximally bad political challenge precisely because it was a medium-bad medical challenge. Like, if COVID really was just a flu, then we would not care. If COVID had a mortality rate of 45%, then everyone would have agreed to close everything down in January and February — and a bunch of people would have died, but politically speaking, we would have had probably actually a happy story of humanity coming together and really fighting back the plague and succeeding. But yeah, as it is, it just hit that spot where there’s actual debate of, should we treat this as being more like the flu or more like the scary thing with a 45% mortality rate?
Rob Wiblin: Yeah, my sense is that maybe we went over the top with COVID, but we weren’t that far off because the response might have been right if it was just twice as bad or the fatality rate was three times what it was — which it very easily could have been.
Vitalik Buterin: Yeah. Well, COVID’s a fun rabbit hole. Actually, we can get quite a bit deeper into it a bit later. But I think the most correct takes about COVID come when you stop thinking of it as a one-dimensional problem.
Technology is amazing, and AI is fundamentally different from other tech [00:15:55]
Rob Wiblin: OK, let’s come back to the essay. One of the first sections you talk about is titled “Technology is amazing, and there are very high costs to delaying it.” I don’t imagine that many people who listen to the show need persuading that technology has very big benefits. But to make sure that we give it its due in this conversation, what is the reason to think that any new technology that we might invent, on average, we should expect to be really beneficial?
Vitalik Buterin: Basically you look at where we are now, you look at where we were 50 years ago, 100 years ago, 1,000 years ago, and look at which way the slope is going, and it’s just obvious that our lives are massively better. I had a chart in my post that showed average life expectancy in a bunch of countries. And that one was interesting, because it showed both the long-term trend and a lot of the kinds of events that we tend to consider as being maximally terrible and worth avoiding, which are basically the big wars. Actually, the Great Leap Forward was in there too. The Spanish flu was in there too. And those were bad, and those are very visible on the chart.
But even still, the powerful force of the trend and just how far the trend took us over that century just completely outmatched even those things. Germany was a significantly better country to live in in 1955 than it was in 1930. And that’s true of a lot of places. And especially if you get further away from the Western world, the gains have just been massive over the last half-century and century.
And beyond the stats, there is just thinking back to what life was like 10 or 20 years ago, and remembering that back then, just getting lost in the middle of a city was a thing that you had to actually worry about; that if you had to say goodbye to a friend, it really was goodbye. Whereas these days you turn into pen pals, and then visit each other in a year. Then there’s the ability to have all of the world’s information at your fingertips with Wikipedia, which I think is even more supercharged now with the GPTs. There are just lots of things that I think any of us individually can relate to, even as rich-country residents, that technology just made quite a lot better.
And I think on this topic in general, actually, if you start talking to people further outside of the rich countries, then techno-optimism starts going up. Because if you’re in a land where GDP has been growing by like 0% to 1% for the past 15 years, you get one set of attitudes. But if you are in a land where it’s been growing by like 6% a year for the last 15, and people remember the difference between now and before, all the kinds of things you can do with a phone now that you couldn’t do then, it’s just obvious. The difference is so stark.
So I think it’s just always valuable to start off by just meditating on the kinds of gains that we’ve had so far — both the stats and our personal experiences. And given things like, again with COVID, how we were actually able to develop really powerful vaccines for this stuff within a year, using technology that has literally only been properly developed over the past decade. And just remember that there’s some incredibly serious improvements happening there. It’s important to talk about the negatives, but we just have to talk about it in that context.
Rob Wiblin: OK, so the very next section is entitled “AI is fundamentally different from other tech, and it is worth being uniquely careful.” I guess that’s not a new topic for this show. But what are the reasons that stand out to you for why AI might be an exceptional case?
Vitalik Buterin: So I talked about three big reasons. One of them is just the case for existential risk. Basically I think the big question is which reference class are you putting AI into? Like, are you basically saying AI is a continuation of the same thing as this 500-year trend of people inventing stuff like the printing press, and a bunch of people getting angry at it, but ultimately it just being incredibly obvious that it was a good thing that expanded freedom and empowered people? Versus to what extent is it actually an element of a much rarer trend that basically includes species coming in and replacing species that were less intelligent or less powerful than them, and often doing so in ways that were very unkind to the thing that they replaced? Basically: is AI the next big tool or is AI the alternative to man?
Rob Wiblin: I think I’ve talked before about, do you view AI as an evolutionary event or do you view it as a new species or a new agent that might replace us, or just as a tool? How do you tell? Because it’s plausibly both.
Vitalik Buterin: Exactly. I think the challenge is that so far it has absolutely been a tool. It has started to show signs of acting like a thing that you can talk to, basically over the last year. But you have to extrapolate the trend, and the trend is definitely going toward more and more capability — and toward taking down, one after another, every individual benchmark that people have come up with to say that this is the thing that defines our humanity, this is a thing that humans can do and the AIs can’t do, this is the thing that shows that we have a unique soul. It just keeps knocking them down, and the goalposts keep shifting, one after the other.
If you think back to the grandfather of all human-versus-AI separators, which is the Turing test, I think it’s reasonable to say that 2022 or 2023 is when AI passed the Turing test. Of course, you can refocus on the shrinking set of things that AI can’t do, but it’s going to keep shrinking and it’s going to keep shrinking.
Rob Wiblin: Shrink to zero, perhaps.
Vitalik Buterin: Exactly. At some point you’ve got to realise that this thing has crossed a huge number of benchmarks. And when future historians start dividing the eras and try to decide when did we actually enter what we might call the roughly-human-level-AI era, I expect that roughly 2022 to 2023 will be what they decide on as being the cutoff point.
Fear of totalitarianism and finding middle ground [00:22:44]
Rob Wiblin: So you threw this essay into the middle of a sort of civil war within people who are interested in technology and interested in AI — many people who are either directly working in or adjacent to the technology industry, between people who are very gung ho about AI and people who are quite worried about it.
And my perception was that many people have been making the argument that AI is this exceptional case: that sure, technology is in general good, but AI, for many reasons that we could give, might be a case where we need to be uniquely careful. And there’s a bunch of people who have been vociferously arguing against this, or have really taken umbrage at that, saying, no, you’re a whole bunch of worrywarts, and in fact, AI is the thing that’s going to save us rather than the thing that’s going to doom us.
In this essay, you make the case that AI, in your view, might well be an exception — but it seems like it was positively received by everyone, including people who in general identify as into e/acc and very sceptical of the AI safety case. Do you have a sense of whether my perception is correct? And if so, what is it about the way that you put the reasons to worry that ensured that everyone could get behind it?
Vitalik Buterin: Yeah, I think in addition to taking the case that AI is going to kill everyone seriously, the other thing that I do is I take the case that AI is going to create a totalitarian world government seriously. And this is a lot of other people’s biggest fear, right? On the one hand, if you have AI that’s not under the control of everyone, then it’s just gonna go and kill everyone. But on the other hand, if you take some of these very naive default solutions and just say, “Let’s create a powerful org, and let’s put all the power into the org,” then yeah, you are creating the most powerful Big Brother from which there is no escape, which has control over the Earth and the expanding light cone, and you can’t get out.
This is something that I think a lot of people find very deeply scary. I find it deeply scary. It is also something that I think, realistically, AI accelerates. I gave some examples. One of the recent ones is in Russia: one of the things that, unfortunately, Vladimir Putin has been able to do very well over the last two decades is just systematically dismantle any kind of organised anti-Putin and pro-democracy movement. One of the techniques that has entered his arsenal over the last five or 10 years or so — which is only possible because of facial recognition and mass surveillance — is basically when a protest happens, you first let it happen, and then you could go in with the AI and with the cameras that are everywhere, and you figure out which people participated, try to even figure out who the key influencers are. And then maybe a few days later, maybe a few weeks or months later, they get a knock on the door at two in the morning.
This is something that they’ve done in Russia. This is, I believe, also how they ended up handling Ukraine, when they managed to do the only significant conquering that was maybe kind of successful, when they took over about an extra 12% of the country back in March 2022. At first they let the protests happen, but then they identified a lot of people, and a bunch of people were quietly thrown into the torture rooms. And lots of other authoritarian regimes do this.
And the Peter Thiel case — that AI is the technology of centralisation and crypto is the technology of decentralisation — it’s a meme, it’s a catchphrase, but there’s really something to it. And yeah, there’s something to that fear that speaks to everyone. The challenge there is that both a lot of the naive, “keep doing AI the way we do it” paths and the “solve the problem by nationalising AI” paths end up leading to that. And that’s one of the topics that I ended up talking about quite a bit. I think addressing some of those totalitarianism concerns really explicitly is also one of those things that’s important to do.
Rob Wiblin: Yeah, I had more or less the same theory, and it’s made me wonder whether on the surface, it seems like this conversation on X is all about whether rogue AI is a serious technical risk or not. There are people who say that there are reasons to expect deceptive alignment, all these kinds of technical arguments, and then people who are arguing against that. But I wonder whether the key thing under the surface that is actually bothering people is that there’s some people whose main worry is centralisation of authority — like Big Brother, the government controlling things, or big corporations controlling things. And any argument that seems to support further centralisation and control of compute and control of algorithms and control of everything by a single central authority, they hate that, because they see that as the dominant risk.
And then there’s people — which I guess I’m somewhat more sympathetic to, at least at the moment, but I could be persuaded — who think that that is worrying, but it’s maybe an acceptable cost, given the risks that we face, and that might be the lesser of two evils. And folks like that feel no cognitive dissonance or no internal conflict saying that, yes, rogue AI is a massive problem.
So in fact, this distrust of authority versus overall trust of authority might be the key underlying driver of the disagreement, even though that’s not immediately obvious.
Vitalik Buterin: Yeah, absolutely. One thing to keep in mind regarding distrust of authority is I think it’s easy to get the impression that this is a weird libertarian thing, and there’s like a small percentage of people that’s maybe concentrated in America that cares about this stuff. But in reality, if you think about it a step more abstractly, it’s a prime motivator for half of geopolitics.
If you look at, for example, the reasons why a lot of centralised US technology gets banned in a lot of countries worldwide, half the argument is that the government wants the local versions to win so they can spy on people. But the other half of the argument — and it’s often a half that’s crucial to get those bans to win politically and be accepted by people — is they’re afraid of being spied on by the US, right? There’s the level of the individual having a fear of their own government, but then there’s a fear of governments having a fear of other governments.
And I think if you frame it as, how big of a cost is it for your own government to be this super world dictator and take over everything, that might be acceptable to a lot of people. But if you frame it as, let’s roll the dice and pick a random major government from the world to have it take over everything, then guess what? Could be the US one, could be the Russian one, could be the Chinese one. If it’s the US one, prediction markets are saying it’s about 52% chance it’ll be Trump and about 35% it’ll be Biden.
So yeah, the distrust of authority, especially once you think of it not just as an individual-versus-state thing, but as a countries-distrusting-each-other thing, is I think definitely a very big deal that motivates people. So if you can come up with an AI safety approach that avoids that pitfall, then you’re not just appealing to libertarians, but you’re also, I think, really appealing to huge swaths of foreign governments and people that really want to be a first-class part of the great 22nd-and-beyond-century future of humanity, and don’t want to be disempowered.
Rob Wiblin: This idea felt like a hopeful one to me, because in my mind, I guess I know that for myself, rogue AI is maybe my number one concern, but not that far behind it is AI enabling totalitarianism, or AI enabling really dangerous centralisation or misuse or whatever. But I guess that might not be immediately apparent to people who just read things that I write, because I tend to talk about the rogue AI more because it is somewhat higher for me.
But if everyone kind of agrees that all of these are risks, and they just disagree about that ordering — of which one is number two and which one is number one — then there’s actually maybe a lot more agreement. There could be a lot more agreement than is immediately obvious. And if you could just get people to realise how much common ground there was, then they might fight a bit less.
Vitalik Buterin: Of course. I absolutely think that’s true. And a big part of it is just making it more clear to people that that common agreement exists. I think a lot of the time people don’t realise that it does.
And I think the other big thing is that ultimately people need a vision to be fighting for, right? Like, if all that you’re doing is saying, let’s delay AI, let’s pause AI, let’s lock AI in a box and monopolise it, then you’re buying time. And the question is like, what are you buying time for? One of those questions is like, what is the end game of how you want the transition to some kind of superintelligence to happen? And then the other question is like, what does the world look like after that point? You know, are humans basically relegated to being video game characters? Or is there something else for us?
These are the kinds of conversations that I think are definitely really worth having. And I think people have been having a little bit in the context of sci-fi for a while, but now that things are becoming much more real, there’s more and more people having it, and I think that’s a very healthy thing.
Rob Wiblin: I was trying to do a little bit of soul searching in preparing for this interview. In general, on “Are governments good? Can you trust authorities? Can you trust people who have power?,” I’m inclined to see the glass as half full, even knowing all of the many failures and all the many mistakes that people make. And if I think about why that is narratively, it’s almost certainly because I grew up in Australia in the ’90s and the 2000s, in a country that is generally well functioning with one of the more benevolent governments that there is. My parents were really nice people. The school I was at was really quite nice. Almost all of my formative years were spent with authorities that messed up and did stupid things, but that, broadly speaking, you could trust not to be malevolent and not to exploit you.
And I imagine for many people that —
Vitalik Buterin: For me, of course, the answer is, you know, I am from Mother Russia.
Rob Wiblin: Right. I wonder if there could be any value in getting people to step back: for me here, and I suppose for everyone, to realise just how contingent your level of trust in authority might be, and your general affect towards governments, how much it’s going to depend on your personal experiences.
Vitalik Buterin: Yeah, I think it’s definitely one of those things that’s very different for different people. And then a lot of the stuff is, I think, definitely motivated not just by 20- or 30-year upbringings, but also by extremely recent stuff. The US is in the middle of a very crazy, high-intensity culture war, right? And the two sides, they are definitely both very hair-triggered and worried that the other side is either fascism or communism and the end of democracy, and interpreting everything that happens as the first step in a cultural revolution and all of those things.
Rob Wiblin: I guess a cynic might say this essay has been really positively received because it hasn’t really chosen a side. To the folks who are very pro-tech, you say, “Yes, you’re right: technology in general is very good. We should generally expect the future to be positive, probably. Yes, you’re right about all of that.” To people who are really worried about AI as an exception, you can say, “Yes, AI might well be an exception. Yes, possibly things could go really badly. You’re right about all of that.”
But the nub of the issue that we face right now is which of these is the dominant consideration that should be driving our decision making and driving policy? Is it the outside-view consideration that technology has been taking us in the right direction? Or that AI is a strange exception, and we should be trying to slow it down or do things that we wouldn’t do in any other area? What would you say to that cynical explanation for why people have loved it so much?
Vitalik Buterin: I think the reality is the policy space is always much more than one-dimensional. And by “policy,” I mean not just what governments should do, but also what individuals and companies should do. Because we tend to be used to the frame where the theory of change of politics and activism is like: you create arguments that motivate people to change the laws, and the laws are ultimately what motivate behaviour. But there’s also a very big aspect of just, you create the theories that just directly motivate the kinds of things that people want to build.
These are all, I think, very far from profit-maximising actors. They’re actors that often definitely have strong megalomaniac tendencies. And there’s definitely a big thing of like, “I want to save the world, but I especially want to be the one that does the saving.” So I feel like the place where I can push most productively is probably less on the big one-dimensional lever, and more on asking the question of, “If you’re the type of person that wants to build and accelerate, what are things that you should be accelerating?” Or, “If you’re the type of person that’s in government, and your job is creating positive and negative incentives, then what kind of incentives should you be creating?”
There’s a lot of subtle and individual decisions that I think could be done better. One example of this is I give a big, long listing of these defensive technologies. And there is a fair critique that all of that stuff is the most important thing in the world if you have 50-year timelines. But if you have five-year timelines, then what’s the point? Nothing’s going to be done that fast. For me, my timelines are just like a very wide confidence interval: I have some on the five-year, I have some on the 50-year, and some on the 500-year. So I think it’s valuable to do some work across that whole spectrum. Even if you’re in the five-year world, I think the 50-year stuff also answers the question of, if you’re buying time, what do you buy time for?
One of the messages that I had is that if you’re the sort of person who is an e/acc because you believe in the glory of humanity becoming superintelligent, then maybe you should work much more on brain-computer interfaces, for example — and even be explicitly super pro-open-source in that space. Actually, that’s one of those spaces where closed source feels so dangerous, because we’re literally talking about computer hardware reading your mind. Do you really want —
Rob Wiblin: — that controlled by Microsoft?
Vitalik Buterin: Exactly. Do you want your minds to be uploaded by, in one case, Microsoft and Google, and in the other case, Huawei?
And then there’s a whole bunch of intermediate things that you can do. There is working on abstract capabilities improvements and kind of bigger, bigger, more, more, more. And then there is working on human-machine cooperation tools, which is a space that I think is super important. I’ve been playing around with a bunch of local models. Actually I just bought a new laptop that has a GPU just so I could do that. I’ve been using it both for text-related inference and chatbot stuff, and for drawing pictures. Even some of the pictures in my recent blog posts I ended up drawing with Stable Diffusion.
And one of the things that I discovered there is that if you’re using AI with the goal of making something that dazzles people, then often the correct thing to do is you just make a prompt, the AI does something, and you ship it. But if you’re using AI with the goal of making something specific that you want for some purpose, then often you have to do 20 rounds of back and forth, right? Like, you have to say, draw this thing. And then, no, you don’t like it. This is totally wrong. And then you erase a bit. You tell the AI to do some inpainting and just redraw those regions with another prompt. And you do that 20 times. There’s a pretty complicated art to it. This is actually one of those reasons why I actually think that some of the near-term, “AI is going to kill creative jobs” stuff is a bit overhyped.
Rob Wiblin: Because there’s an intense skill to make it work.
Vitalik Buterin: Exactly. I think basically what’s going to happen is like, imagine if two years ago AI could make tier-zero art by itself, then humans plus AI could make tier one, and then existing multimillion-dollar studios or whatever could make tier two. We’re just like shifting everything up one, right? And so what was in the range of individuals becomes in the range of just robots, and then what was in the range of big studios becomes in the range of individuals working with AIs, and then big studios potentially level up a bit more.
Although, actually, I think there’s an interesting property that AI actually helps the noobs more than the pros. It’s a thing that I think Noah Smith has written a bit about, and it’s a thing that really speaks to my personal experience. I find AI definitely accelerates me more in domains where I haven’t done anything at all before than in domains where I’m an expert. Like, I have not done Chrome extensions in 10 years, and I used AI to help me make a Chrome extension. It was super useful and helped me do things I would not have been able to do by myself.
I mean, in the creative case, that’s like, on the one hand, yes, your career as someone who draws things by hand is largely over, outside of a few enthusiast communities. But on the other hand, we are about to see a renaissance of people being able to make movies and basically just seriously disrupting Hollywood and getting us to the point where we have thousands of really amazing, high-quality productions from people with all kinds of backgrounds and movements.
That’s the kind of space where I think e/acc-ing it would actually be super awesome. If you can focus your e/acc-ing on making tools that empower people in collaboration with AI, then I think near term that’s amazing. And then the view that I expressed in that post is that I think there’s a natural pipeline, where for the next couple of years you’re building keyboard and mouse tools, and then you start doing maybe eye and ear tracking and a bit of brain scanning, and then you start just naturally going into BCIs [brain-computer interfaces]. And realistically, BCIs will involve some level of models too. And then eventually we’ll basically get to AIs by essentially merging with them and uploading ourselves, as opposed to creating something that’s like this alien organism that’s completely separate from humanity.
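For readers who want to try the iterative “draw it, reject it, redraw just this region” workflow Vitalik describes a few exchanges above, here is a minimal sketch, assuming the Hugging Face diffusers library and a Stable Diffusion inpainting checkpoint. The checkpoint name, file names, masks, and prompts are illustrative assumptions rather than anything from the episode, and Vitalik may well have used a GUI tool rather than code.

```python
# Sketch of the "generate, then repeatedly inpaint the parts you don't like" loop.
# Assumes: the `diffusers` library, a CUDA GPU, and an inpainting checkpoint.
# The checkpoint, file names, and prompts below are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Start from a first draft the model already produced (any 512x512 image works).
image = Image.open("draft.png").convert("RGB").resize((512, 512))

# Each round: paint the region you want redrawn white in a mask image,
# write a new prompt for just that region, and let the model redraw it
# while leaving the rest of the picture untouched.
rounds = [
    ("a castle under a clear blue sky", "mask_sky.png"),
    ("a brown bear on the road behind the boy", "mask_bear.png"),
]
for prompt, mask_path in rounds:
    mask = Image.open(mask_path).convert("RGB").resize((512, 512))
    image = pipe(prompt=prompt, image=image, mask_image=mask).images[0]

image.save("final.png")
```

In practice the loop is run many more times than two, which is the point Vitalik is making: getting a specific result out of the model is an iterative, skilled process rather than a single prompt.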
Should AI be more centralised or more decentralised? [00:42:20]
Rob Wiblin: Coming back to AI, a topic that you talk about a bit in the essay, and which we were suggesting earlier maybe is a very key driving underlying cause behind people’s disagreements, is: should AI be more centralised or should it be more decentralised? And you make a bit of a case for both different paths.
What are the potential benefits, or what’s the positive vision of a more centralised AI? How could that be good?
Vitalik Buterin: The standard case for more centralised AI is basically that, especially once we get things like really scary superintelligence, if it comes time to press a kill switch, we’ll actually be able to press it. You know, you get fewer race dynamics: you don’t get the thing where there’s like five different countries and mega corporations that all think that if they win the race, they can take over the world, and because of that, they just keep going faster and faster. You basically prevent all of those issues and then you get the AI world government and it enforces the peace. I think that’s like if you’re fully on that side.
There’s a milder version of this, which is what the LessWrong people call the “pivotal act” theory: basically you make a superintelligent AI whose only goal is to make a single act that somehow can, either permanently or semi-permanently, make the world a more defence-favouring place, but then still preserves the basic structure of the world. And then after it does that single act, the AI stops and disappears. And the argument is that making an AI that stays long enough to do one pivotal act might be significantly easier to both do and agree on than to make an AI that actually becomes a proper government. You could imagine the pivotal act being something that basically just says, “Solve the entire d/acc roadmap and burn every chip farm to give us a couple more decades.” And then we’ll kind of be in a nice world to actually work together on solving the rest of the problem.
So there’s both of those theories. And the theory there is that if you actually can agree on a centralised actor doing it, then you avoid race dynamics and people just being extremely risky in their desire to get to the top and be the first to hit the magic milestone before anyone else does, and everyone distrusting everyone else, which only fuels the race even further and so forth.
Rob Wiblin: I suppose for people who are more sceptical of the central vision, maybe something that would be appealing is it might delay militarisation of AI because countries would feel less competitive pressure to suddenly insert AI into all parts of their military in order to keep up, which I think everyone could agree could lead in a bad direction.
Vitalik Buterin: Absolutely. I mean, it is leading in a bad direction.
Rob Wiblin: The other vision you talk about you call “polytheistic AI.” Do you want to explain what that is and what’s good about that?
Vitalik Buterin: This is a vision that a lot of people have argued for. Basically the idea is that we don’t try to create a global singleton. It’s like an AI for every country, and then possibly an AI for every company and every individual. And you could have both of those happening at the same time with AIs of different scales, and basically try to create a world where you have humans that are assisted by these AIs that are very powerful and that actually give them the tools to do the kinds of things that they want to do. And because the agency in the AI is widely distributed, because you have so many AIs controlled by so many different people, then there’s no one single actor that’s actually able to take over the world.
And I totally see where this comes from, and how, from any normal theory of political science, it’s much healthier to have this kind of polytheistic environment, rather than trying to create the big centralised god and hope the big centralised god goes well. From the perspective of any political theory that is trained on everything that humanity has worked on before superintelligent AI, it makes total sense as something that’s clearly superior to making one AI. But at the same time, with superintelligent AI, it feels like it’s an equilibrium that could easily be very unstable, right?
Rob Wiblin: In what ways?
Vitalik Buterin: Basically because it’s just so easy for one AI to actually get ahead and have way more capability than everyone else.
Rob Wiblin: I suppose it could do that either by some self-improvement loop, or I guess by grabbing lots of compute and copying itself really quickly.
Vitalik Buterin: Exactly. Yeah. If you imagine every AI growing exponentially, then whatever the existing ratios of power are, they all get preserved. But if you imagine it growing super-exponentially, then what happens is that if you’re a little bit ahead, then the ratio of the lead actually starts increasing.
And then the worst case is if you have a step function, then whoever first discovers some magic leap — which could be discovery of nanotechnology, could be discovery of something that increases compute by a factor of 100, could be some algorithmic improvement — would be able to just immediately turn on that improvement, and then they’d quickly expand; they’d quickly be able to find all of the other possible improvements before anyone else, and then they take over everything basically. In an environment as unknown and unpredictable as that, are you really actually going to get a bunch of horses that roughly stay within sight of each other in the race?
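To make the growth dynamics Vitalik is gesturing at concrete, here is a minimal numerical sketch, not from the conversation itself: the growth laws, starting capabilities, and rates are illustrative assumptions. With exponential growth (rate proportional to capability) the leader’s ratio over the follower stays fixed; with a super-exponential law (here, rate proportional to capability squared) even a small head start compounds into an ever-widening lead.

```python
# Two actors with capabilities a (leader) and b (follower), grown by Euler steps
# of dx/dt = rate * x**exponent. exponent=1 is exponential growth; exponent=2 is
# one simple stand-in for "super-exponential" growth. All numbers are illustrative.

def lead_ratio(exponent, steps=50, dt=0.01, leader=1.10, follower=1.00, rate=1.0):
    a, b = leader, follower
    for _ in range(steps):
        a += rate * a**exponent * dt
        b += rate * b**exponent * dt
    return a / b

print(lead_ratio(exponent=1))  # ~1.10: the 10% lead is preserved
print(lead_ratio(exponent=2))  # noticeably above 1.10 and rising: the leader pulls away
```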
Rob Wiblin: So the fundamental idea is, if there’s lots of different actors that have a similar level of power, then we can continue to have a liberal society, and they continue to compromise and not attack one another because there’s kind of a balance-of-powers situation?
Vitalik Buterin: Exactly.
Rob Wiblin: And the dream would be that we could keep on having that, but maybe the technology just doesn’t allow us to do that. That might just be an unviable goal now. That’s the worry.
Vitalik Buterin: Right. That’s the worry, yeah.
Rob Wiblin: So in broad strokes, one thing that’s going on is that we have a fight between a cluster of people who are generally positive about AI, but might have reservations about it, but fundamentally they are even more concerned about centralisation of power than they are about risk from AI specifically. So any kind of policy proposals that say that we need to have an international consortium to control it, and we need to control and monitor all of the compute in order to make sure that people can’t misuse AI in XYZ way, it’s going to get a very hostile reception from that crowd because they think it’s going to make things worse — because, in fact, the fear they have is centralised power. The fear they have is that the government is going to take advantage of us and crush people, and that is putting us in a worse situation.
Then you have other folks who are saying, I think most of them would say that none of us wanted to centralise this; this isn’t a vision that anyone was hoping for. I mean, I think there are cynical folks who think this was the plan all along: people really wanted to just empower the government, have an authoritarian takeover. I don’t think that’s the case, at least among any people that I know, or myself. But this is the only path, unfortunately. We can’t just have the nice world, because it’s not a stable one. It will just lead to massive misuse, it will just lead to absolute disaster.
What is the synthesis between these different perspectives, which both have some legitimacy? The fears are quite fair on both sides. I guess one thing that suggests is that if we could come up with any policy proposals that help to address rogue AI and misuse that don’t require more centralisation, then you might get far broader support, at least across the tech space, for those proposals. Now, that might be a very heavy lift, but perhaps it could be worth aiming for because the politics of it will be much better.
Vitalik Buterin: Yeah. One of the ideas, and this is something that I think a lot of the AI regulation is explicitly moving towards, is: if you’re going to regulate AI, then explicitly exempt anything that just runs locally on realistic consumer hardware. And the idea there is, I think, that if you look at what the benefits of the open source ecosystem actually are: you can run stuff locally; it’s guaranteed that the service is not going to disappear and it’s not going to change itself and massively change your workflow; you can run it with your private data and preserve your privacy; you can locally make fine tunes for whatever specific applications you want.
All of those advantages are actually only advantages that apply to models that are small enough that you realistically can run them on consumer hardware. Because if it’s bigger, then nobody’s going to be running it locally. Also nobody, or very few people, are going to actually have the resources to even fine-tune it. And so making that explicit separation between smaller-scale stuff and top-of-the-line big corp stuff, and being willing to commit to that, I feel like that would convert some people — though that’s definitely far from converting everyone. I mean, if you’re one of these e/acc frontline AI firms, then you also want your frontline stuff to not be regulated.
I talked to some of the AI regulation people in the UK government here in London over the last couple of days, and I think the idea of separating regulation based on scale is definitely something that gets positive reception. The other one is just like classes of application depending on what goes in the training data, which is also interesting. If you give people very easy, don’t-have-to-hire-a-lawyer ways to be unambiguously not gone-after by the government, that’s always something that’s super helpful for the kind of hobbyist independent innovation sector.
So that’s kind of one category of things. But I think the other big thing is that we have to think about how any attempt to delay even frontier AI is ultimately buying time — because after infinity years, even a laptop is going to be ASI [artificial superintelligence]. So the question is: what are you buying time for? And one of the goals that I had is basically saying that instead of being an e/acc for AI that is maximally disconnected from humans, has maximal agency independent of humans, and tries to be a silicon god, try to be an e/acc of stuff that empowers people and potentially is on some kind of path to merging with them.
That’s the sort of thing where we can debate whether or not that shift would actually succeed, but at the same time, people working on that seems much less likely to lead to bad stuff than people working on building the silicon god as fast as possible. And in the hopeful case, there actually is a nice light at the end of the tunnel. So actually having those positive visions is an important thing. And I definitely don’t want to imply that my post is the end of the road for positive visions. I think it’s the sort of thing that we definitely want a lot of people to be talking about and trying to come up with very long-term visions that we’d actually want to be part of. And the more that something like that actually exists, then the more people would be willing to get behind a roadmap that actually tries to push all of the levers in that direction.
The other thing also is I’m definitely in favour of trying intentionally hard to keep the concept of AI safety minimal. If you think about the UN, one of the things about the UN is it’s intentionally pretty minimalistic and pretty weak. That ended up being a key part of it being originally accepted by everyone and people joining it. If the UN also tried to resolve a whole bunch of human rights concerns at the same time, then it probably would have gotten much less buy-in.
The analogue here is that there are a bunch of people that are really convinced that AI safety means, let’s align people and enforce wokeness on everyone or whatever. So basically, explicitly not doing stuff that encourages models to be like Gemini is one of those other positive things that would probably get a lot more support.
Rob Wiblin: I’ve been completely tuned out the last month. I’ve heard there’s a bunch of controversy about Gemini. I guess I’m looking forward to finding out when I come back from parental leave what the nature of it was.
Vitalik Buterin: Right. It had to do with a bunch of weird things that culminated in 1943 German soldiers being depicted as being ethnically diverse. So it got weird.
Rob Wiblin: On the centralisation point: the folks who are both pro-AI and pro-decentralisation and sceptical of authority, how much do they worry — you might have your finger on the pulse a little bit more — that AI is just inherently centralising technology? Because, to start with, who’s going to have the resources to develop the first incredibly superhuman AI? Probably a major tech company or the US government or some other government — some major authority. And then they’re probably, given the nature of those organisations, not just going to hand it out to everybody. Why not take advantage of that power?
It seems like — inasmuch as you’re extremely sceptical of authority, sceptical of governments — that’s an unsolved social problem that might make you nervous about where all of this is leading us. And indeed in China, that would be the default thing, surely: that the government will get the most powerful AI, insist that no one else can use anything else, and then use that as a tool of social control. It’s almost hard to see how it could be otherwise. So that makes me nervous about advancing AI.
Vitalik Buterin: Yeah, and I think actually a lot of people in the crypto space totally believe that. There’s definitely a lot of people who believe that AI is the technology of centralisation and crypto is the technology of decentralisation. You know, we have to be e/acc on crypto precisely in order to let the decentralised side keep up with the onslaught of the centralised side, and the Kremlin being able to arrest all the protesters with facial recognition and all of those things. So in non-AI tech spaces, there’s definitely, I think, a pretty large number of people who believe that. And then, of course, crypto definitely is on the pro-freedom end of non-AI tech spaces. So yeah, there’s a lot of support for that viewpoint.
Within AI, I guess the challenge is… The way that I think about this is pretty much everyone has a strong pressure toward believing a political story where the agents of positive change are things and people that they can personally relate to. You know, if you’re a law academic, then you’re the type that already has an established history of interacting with all kinds of policymakers, and that really does probably make you more willing to be authoritarian. Like, I remember the last couple of times I saw people arguing in mainstream media, trying to make the case that powerful internet censorship is actually good. And they all ended up being academics. So it’s like n=3 confirms that theory.
And then meanwhile, if you are a software developer, then even if you believe in very similar things, the thing that you’re going to most support as a vehicle for change isn’t authoritarianism — it’s going to be making better software and trying to make more open software and things like that.
I guess the challenge in AI is like, if you’re outside of AI, then that means it’s very easy to get convinced of the idea that AI is this thing that is both dangerous and centralising and creates both risks. But if you’re in AI, then you’re creating AI, and you’re not going to believe the narrative that you are evil. But the narrative that is very easy for people to believe is like, “This other kind of AI is evil, but my kind of AI is good” — which is definitely a lot of what e/acc people do believe. I mean, even the original story of OpenAI is like, “We can’t let the future of AI be controlled by Google, so let’s make this kind of open and more…”
Rob Wiblin: “We should give it to Microsoft instead.”
Vitalik Buterin: Right. Well, initially it was just this more open and prosocial thing that’s going to be a nonprofit. But then of course, years later, they are both not open by any standard definition of open — and you can debate if that’s good or bad, but it’s true — and at the same time, from an AI safety perspective, they’re definitely not on the side of advancing safety.
Rob Wiblin: Well, I don’t know. People argue it both ways.
Vitalik Buterin: Right, yeah, that’s fair. But basically there’s definitely this kind of headwind if you’re within AI, that if you’re within AI, there’s this natural pressure toward not believing the more pessimistic takes about what AI can do.
Rob Wiblin: Not maximally safety focused.
Vitalik Buterin: Right, exactly. That’s a tough one. I mean, I think it’s possible that if we massively accelerate the brain-computer interface base, and on top of just creating that new tech trend, it also just creates yet another large mass of people that might even have billions of dollars of VC investment, and like Middle Eastern countries massively investing and shilling in them, and like a bunch of Silicon Valley people being in their favour, and China trying to get in the game and all that, who also have the incentive to actually say AI that’s fully separate from humans is the bad thing and we’re the good thing. And if you accelerate the space to the point where it becomes an independent organism, you kind of create another set of actors that has the incentive to actually make that argument. I mean, there’s a lot of weird psychology like that.
Rob Wiblin: I guess I’m feeling a little bit at a loss as to what the policy proposal might be that would be useful on safety, that would also be satisfactory to people who don’t trust any authority and are just very sceptical of governments in general. I guess I feel like that’s progress in a way, if we identify that this is a key question that we need to resolve, and maybe we should attack that directly rather than talking around it.
Is there any possible mileage…? I guess you might know more people who have this attitude of trying to come up with somewhat more trusted authorities that people might hate less. I mean, many policy proposals are basically, "The US government should do X, Y, and Z." It should start tracking compute, things like that. But it's not as if people are like, the US government is the paragon organisation that we should be handing all of this power to. It's more just that they're the ones that are there that might be able to do it. But could you try to organise a different group that people would have at least some more confidence in, or come up with a structure of accountability that might give people somewhat more confidence that it's not immediately going to be exploited to harm people?
Vitalik Buterin: That’s a challenging one. I’m trying to think how I would even attack that problem. I mean, I think the stuff that I’ve said so far is basically like, the first thing that you do is actually accelerate all the good stuff — defensive technology and all that. And that’s a lever that we can still talk about, because I think it’s one that could be pressed 50 times harder than it is today. And while it’s not at the max it’s worth pushing it more, but then the question is like, what if your timelines are not 50 years? They’re five years, and you still want to do something in that regime.
One other plausible answer is still… I mean, one of the things that even the UK government is doing right now is, it’s like they’re not proactively regulating AI much at the moment. They’re basically putting themselves in a position where they’re building competence, they’re building the ability to evaluate models, they’re building their own internal understanding — so that at some critical moment, when the time comes to do something more serious, they’ll be more able to do something smart that’s serious.
And the argument for that approach is basically that often you do hear from people on the safety and pause side that you can either respond too early or too late — and that "too late" means we all die. But the problem with responding too early, of course, is that if it were an ideal world government responding too early, then sure. But if it's, you know, real-world politics as it exists in the 21st century, then congrats, you've cried wolf and you've convinced a whole bunch of people to hate you. Whereas if, in the shorter term, you focus on information gathering and building capabilities, then by the time it comes time to really seriously do something, you can. There will possibly be a lot more public buy-in for that.
So that’s also an approach, and that does feel like an approach that avoids a lot of pitfalls for now. But then, of course, there’s the question of, is there actually a fire alarm for AGI? And we don’t know.
The thing that people always want is people want bright lines — because bright lines make people feel safe. And people want a bright line to make sure that humanity is not going to be destroyed. But people also want a bright line to make sure that that thing is not just going to blow up and start enforcing one particular faction’s culture war preferences.
The challenge with AI is it’s very hard to come up with bright lines. With nuclear weapons, that problem was easy, which is a place where we were very fortunate. And we were able to create some pretty intrusive infrastructure that has just happened to be tightly scoped to only focus on nuclear weapons, and that actually ended up working out really well. But the question with AGI is like, what even is the equivalent of that? But yeah, the thing that I think people want is basically some kind of assurance that this will not be abused as a lever to do other things.
One thing that I think is good is that there have been efforts that have been starting to happen to try to gather a bunch of very diverse, different people’s opinions on this topic. And often, if you just create common knowledge that a consensus around something exists, that by itself can make a lot of progress. Like, if you can get people into the frame of mind where they’re somewhat less conflict-oriented and they’re willing to actually think pragmatically, then people are often willing to be more reasonable. And if we start doing more of those, then that would be a process that might actually be able to do a better job of identifying what some of those mutually agreeable ways to slow down the most dangerous parts of the space are.
Humans merging with AIs to remain relevant [01:06:59]
Rob Wiblin: We’re slightly jumping the gun, but later in the essay you present this merging with AIs and using brain-computer interfaces as a potentially positive vision for how humanity could remain relevant and still have potentially some sort of decision-making power in a future with AIs that are extremely capable. To me, this kind of seems like a false hope. But first off, what kind of problem is the brain-computer or the merging vision solving in your mind?
Vitalik Buterin: Basically, the base case is that you have these two separate things, and humans are self-improving very slowly, and AIs are being improved very quickly, and eventually will start being improved even more quickly. You have these curves, and one curve is below, but it’s going up quickly — and that curve is going to shoot up way ahead. And when that finishes, then you’re going to have superintelligent AIs that are way smarter than any of us.
And so first, you have all of these Yudkowskian concerns that the base case of that happening is killing everyone on Earth. But let's say we can solve that. Then maybe you have a totalitarian government. OK, maybe we solve that too. But then, even if we solve both, is the future that results from that even… Like, that's a future where individual human beings have basically no agency, right? That's a future where basically we're relegated to being pets. We have nothing to say about the future path of the universe, because the AIs are just going to be much smarter. And if it's a universe where there's any amount of competition left, whoever's willing to just give up their creativity and fully delegate their decision-making power to the bots, that's the side that's going to win, right?
So if that’s a future that you don’t want, then basically, if you accept that superintelligence is going to run the world — because superintelligence is just so much more powerful than regular intelligence that it’s just obvious that it’s going to do that — then the question is: is it AI superintelligence, or is it human superintelligence? Human superintelligence seems like the correct answer if we want to retain our agency. And if we want human superintelligence, then the question is, what is the path to actually getting there?
And, you know, I could be totally wrong on what that path looks like. I think we should probably be exploring like 10 different paths at the same time. But that seems like one light at the end of the tunnel that actually does, I think, address all three of those categories of bad futures, and so it's really worth looking into as an alternative.
Rob Wiblin: I guess there’s a couple of different reasons I’m sceptical of this vision. I suppose as a vision for how to deal with rogue AI or misalignment, one issue would be that probably it’s going to come too late: that we might well have very dangerous AGI that is not integrated with humans in the next five or 10 years. And it seems like it’s going to be a long time, or going to take longer than that, for brain-computer interfaces to catch up, and we can have this merged vision actually play out.
But then separately, if you imagine in this future where the brain-computer interfaces have advanced a lot… If you were trying to design a creature, a machine that was able to fly as quickly as possible from New York to London, what would be faster: a pure machine, or a machine-bird hybrid — where you try to build a machine around a bird, but still have the bird doing some of the work or usefully contributing? That’s just an analogy that I use to highlight the idea that in that situation, there’s no way that the bird could usefully contribute — because a plane is just so much more powerful that trying to integrate a bird is only going to slow you down and make the overall apparatus less effective and much slower.
And that’s how I imagine things would play out in future: that pure AGI is going to be so much faster at thinking, so much more able to reprogram itself and improve itself over time, that trying to integrate this quite static, legacy piece of technology that wasn’t designed for the purpose of being integrated with machines into it is going to be a massive disadvantage. And then all the competitive pressures that cause you to need to adopt AI at all in the first place — such as needing to keep up in business or needing to keep up geopolitically — are going to create the same pressure to just dispense with the human and have a pure AGI that can operate massively faster and do a much better job.
So the question just comes down to: can you ban the pure AGI and insist on the AI-human hybrid at all times? That seems like a heavy lift. What do you think?
Vitalik Buterin: I think there’s a big difference between intelligence and flight rate, in the sense that flight is a task that is easy to specify. It’s easy to tell a computer program what flight is. It’s a math problem. You can ship it off to IOI people and they’ll be able to work toward it and make improvements in understanding with basically zero social context. Eventually you have to get to the social context, but you can make aeroplanes that fly without it.
In the case of intelligence, one thing that often corresponds to the most success in our world is being able to play political games, right? And Robin Hanson has this theory that basically the primary force driving the increasing evolution of intelligence has been our need to play political games with each other, and our need to use deception and have counterdeception and counter-counterdeception and self-deception and signalling, and all of these really complicated pressures. So we’re pretty well evolved to complicated social environments already.
The other thing is, if you look at how AIs work now, we’re definitely building an aeroplane around a bird in the sense that we’re building an aeroplane by training it on terabytes of text and video created by birds. So yeah, it does feel like intelligence itself is the sort of thing that plausibly…
Rob Wiblin: It could be an exception.
Vitalik Buterin: Exactly. There is prior art within humanness that actually carries sort of load-bearing, useful content. But then, of course, the argument is like, even if that’s true short term, what would competition pressures do? And are we going to enter The Age of Em world in which that actually leads to competition pressure just eventually selecting against consciousness?
Rob Wiblin: Do you want to explain that?
Vitalik Buterin: Yeah. So Age of Em is a big book also by Robin Hanson, where he talks about this science-fiction future where basically uploaded humans are the main type of organism, and he tries to flesh out some of the social consequences of that.
Some of it seems fascinating, but some of it also seems kind of bleak to live in, because he basically says that you have this Malthusian effect: there's constantly reproduction happening, because even if almost everyone doesn't care to reproduce, eventually whoever does care to reproduce will just take over the population. Like, for as long as there is any kind of slack that you could use for things like leisure, reproduction just continues, and eventually there's just no slack left, and we're basically back to the same kinds of conditions as early 19th century factory workers faced. And basically, when that happens, would the only agents that are actually able to pay for the ongoing computation of running their minds in that economy be ones that become less and less conscious?
So that’s the fear. I acknowledge that that’s a real fear. I think if I lived in that kind of post-upload world, then my first instinct might very plausibly be to just put myself on a spacecraft and just ship off somewhere at 99% the speed of light and just constantly stay on the frontier. But there are definitely very big unknowns in there. I totally acknowledge that.
Rob Wiblin: A large part of the motivation for the merging vision is wanting humans to remain relevant: having real decision-making ability, actually being productive in some meaningful sense. You say that a future in which we are basically just children of these vastly superior beings that kind of take care of us — and we don't even understand what they're doing necessarily — is horrible to you.
I guess I don’t feel like it is so horrible. Because I kind of enjoyed my childhood, and at that time I didn’t really understand what my parents were doing or authority figures were doing around me, but they created a safe environment in which I could play and have a good time. Maybe it feels a little bit infantilising or a little bit embarrassing to imagine going back to that situation, but I could also see myself adjusting to it and enjoying it. That our work has been done; we’ve created these beings that can handle all of the work and do a much better job than we ever could have, so we can hand it off and just basically play like children for the indefinite future. What’s the reason why I should feel that this disempowered world, or this world where I’m not meaningfully contributing, is actually a bad world?
Vitalik Buterin: I mean, it’s the sort of thing that I acknowledge is very different for each person. The thing that I’d say is, if you just look at lots of people’s behaviour in the context of the world that exists, there’s just lots of people that act in ways that clearly show that they have that strong preference of wanting to live more harshly as a lion rather than just having a peaceful life as a sheep.
If you just even think of the average of anyone who becomes a decamillionaire but doesn’t retire, what is that? You know, you’re big enough that you can afford to have an entire simulation around yourself that makes you feel like a king and go and enjoy life — but no, they want to continue to be pioneers, and do bigger and better things. And I think you can argue that that’s a very fundamental part of what makes us human.
Rob Wiblin: I feel like people like that are really overrepresented in the news, because obviously they go and do interesting things and stay really active, and people who are very career oriented often succeed in the media, and they’re the kinds of people who are likely to be writing opinion pieces. But I suspect that many people are quite happy with a quiet life with their family, not necessarily working 80-hour weeks — it’s just that those people are kind of invisible, because they’re not doing stuff that’s very newsworthy. I guess we should probably just look at political polling.
But yeah, this argument might go through even if there’s a minority of people who feel this way, because they’ll be the ones who want to pursue this vision.
Vitalik Buterin: Right, exactly. I think it would definitely be interesting to understand people’s opinions on this quite a bit more. And then the question of how people are where they are now, but then how do those feelings change if they get into a position where they actually have the potential to have a bigger impact — or on the other hand, they’re threatened with the possibility of never actually having an impact again? Yeah, I don’t know.
Vitalik’s “d/acc” alternative [01:18:48]
Rob Wiblin: OK, we should return to the substance of the piece. We slightly jumped the gun and jumped into the analysis. The alternative you lay out to both negativity about technology and effective accelerationism — which is perhaps a Panglossian view of technology — you call "d/acc", with the "d" variously standing for defensive, decentralisation, democracy, and differential. What is the d/acc philosophy or perspective on things?
Vitalik Buterin: Basically, I think it tries to be a pro-freedom and democratic kind of take on answering the question of what kinds of technologies can we make that basically push the offence/defence balance in a much more defence-favouring direction? The argument basically being that there’s a bunch of these very plausible historical examples of how, in defence-favouring environments, things that we like and that we consider utopian about governance systems are more likely to thrive.
The example I give is Switzerland, which is famous for its amazing kind of utopian, classical liberal governance, relatively speaking; the land where nobody knows who the president is. But in part it's managed to do that because it's protected by mountains. The mountains protected it while it was surrounded by Nazis for about four years during the war, and they protected it during a whole bunch of earlier eras as well.
And the other one was Sarah Paine’s theory of continental versus maritime powers: basically the idea that if you’re a power that is an island and that goes by sea — the British Empire is one example of this — then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. Versus if you are on the Mongolian steppes, then your entire mindset is around kill or be killed, conquer or be conquered, be on the top or be on the bottom. And that sort of thing is the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes.
And then I go into four big categories of technology, where I split it up into the world of bits and the world of atoms. And in the world of atoms, I have macro scale and micro scale. Macro scale is what we traditionally think of as being defence. Though one of the things I point out is you can think of that defence in a purely military context. Think about how, for example, in Ukraine, I think the one theatre of the war that Ukraine has been winning the hardest is naval. They don’t have a navy, but they’ve managed to totally destroy a quarter of the Black Sea Fleet very cheaply.
You could ask, well, if you accelerate defence, and you make every island impossible to attack, then maybe that’s good. But then I also kind of caution against it — in the sense that, if you start working on military technology, it’s just so easy for it to have unintended consequences. You know, you get into the space because you’re motivated by a war in Ukraine, and you have a particular perspective on that. But then a year later something completely different is happening in Gaza, right? And who knows what might be happening five years from now. I’m very sceptical of this idea that you identify one particular player, and you trust the idea that that player is going to continue to be good, and is also going to continue to be dominant.
But I talk there about also just basically survival and resilience technologies. A good example of this is Starlink. Starlink basically allows you to stay connected with much less reliance on physical infrastructure. So the question is, can we make the Starlink of electricity? Can we get to a world where every home and village actually has independent solar power? Can you have the Starlink of food and have a much stronger capacity for independent food production? Can you do that for vaccines, potentially?
The argument there is that if you look at the stats or the projections for where the deaths from say a nuclear war would come, basically everyone agrees that in a serious nuclear war, the bulk of the deaths would not come from literal firebombs and radiation; they would come from supply chain disruption. And if you could fix supply chain disruption, then suddenly you’ve made a lot of things more livable, right? So that’s a large-scale physical defence.
Biodefence [01:24:01]
Vitalik Buterin: But then I also talk about micro-scale physical defence, which is basically biodefence. So in biodefence, in a sense we’ve been through this: you know, we’ve had COVID, and we’ve had various countries’ various different attempts to deal with COVID. That’s been, in a sense, in some ways, a kind of success in terms of boosting a lot of technology.
But in a much larger sense, it’s also been a missed opportunity. Basically, the challenge is that I feel like around 2022… I mean, realistically, if you had to pin an exact date for when COVID as a media event became over, it probably just would be February 24. You know, the media can only think about one very bad thing at a time, right? And basically, a lot of people were sick and tired of lockdowns. I mean, they wanted to go back to doing just regular human socialising, have kids in schools, actually be able to have regular lives again.
And I think it’s totally legitimate to value those things so much you’re willing to take percentage chances of death for it. But at the same time, I feel like people’s desire to stop thinking about the problem just went so far that now in 2023 and 2024, we’re just neglecting really basic things. Like the vaccine programmes: huge success, delivered vaccines way more quickly than anyone was expecting. Where are they now? It just kind of stalled. If we look at indoor air filtering, everyone in theoryland agrees that it’s cool and that it’s important. And like every room, including this room, should have HEPA or UVC at some point in the future. But where’s the actual effort to make that happen everywhere?
Basically, there’s just so many things that require zero authoritarianism and maybe at most $5 billion of government money, and they’re not happening. If we can just put some more extra intentional effort into getting some of those technologies ready, then we’d have a world that’s much more protected against diseases. And potentially things like bioweapons, you could imagine a future even if someone releases an airborne super plague, there’s just lots of infrastructure in place that just makes that much less of an event and much easier to respond to.
Maybe I could go through the happy story of what that might look like. So imagine someone releases a super plague — let’s say 45% mortality rate, R0 of 18, spreads around a lot, has a long incubation period. Let’s give it all the worst.
Rob Wiblin: A real worst-case scenario.
Vitalik Buterin: Exactly. We’ll give it all the worst stats. And then what happens today? Well, it just spreads around. And by the time anyone even starts realising what’s going on and thinking about how to respond to it, it’s already halfway across the world and it’s in every major city.
So now let’s go and kind of shift over our view to the positive vision. Step one: we have much better early detection. What does early detection mean? There is wastewater surveillance, so you can check wastewater and basically try to look for signs of unusual pathogens. Then there is basically open-source intelligence on social media: you can analyse Twitter and you can basically find spikes in people reporting themselves not feeling well. You can do all kinds of things, right? With good OSINT [open source intelligence] we might have plausibly been able to detect COVID maybe even like a month before we actually did.
The other thing is, if it’s done in a way that depends on very open-source infrastructure available to anyone, there’s lots of people participating — both international governmental and hobbyist. You know, a single government would not even be able to hide it if it’s starting to happen in their country, right?
So that’s step one. Step two is the spread. The most dangerous viruses are going to be airborne. COVID is airborne. Almost all COVID transmission happens through the air. And imagine if in this room, we had either HEPA filtering or ultraviolet light or any one of those things. What happens right now is, I’m speaking, and if I have COVID right now, then I’m blasting viruses at you. If you have COVID, every time you speak, you’re blasting viruses at me. The biggest danger is not from the viruses just blasting out and hitting you immediately, but the fact that they’re just adding to the stuff that’s floating around the air — and it often takes quite a while for that stuff to get out, right?
So if we have filtering, then you can shift from a world where the average nasty molecule gets taken out of the room in let’s say an hour, to a world where it gets taken out in like a minute, right? And the rate of transmission in indoor settings — and indoor settings are basically where almost all transmission happens — that goes down a lot. So if you do that, then you can plausibly imagine R0 going down from 18 to potentially nine or even less, just passively.
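As a rough back-of-envelope illustration of why faster air removal matters, here is a sketch using a standard well-mixed-room, Wells-Riley-style model. The emission rate, breathing rate, room size, and meeting length are illustrative assumptions, not figures from the conversation.

```python
import math

def infection_risk(quanta_per_hour, breathing_m3_per_hour, hours,
                   room_volume_m3, air_changes_per_hour):
    """Wells-Riley-style risk for one susceptible person in a well-mixed room.

    Steady-state quanta concentration = emission / (removal * volume);
    risk = 1 - exp(-inhaled dose).
    """
    removal = air_changes_per_hour  # ventilation + filtration + UV, lumped together
    concentration = quanta_per_hour / (removal * room_volume_m3)
    dose = concentration * breathing_m3_per_hour * hours
    return 1 - math.exp(-dose)

# Illustrative numbers (assumptions): one infectious speaker emitting
# 25 quanta/hour, a 50 m^3 room, a 1-hour meeting.
for ach in (1, 6, 60):   # ~1/hour: stuffy room; ~60/hour: "taken out in a minute"
    risk = infection_risk(25, 0.5, 1, 50, ach)
    print(f"{ach:>3} air changes/hour -> infection risk ~ {risk:.1%}")
```

Under these assumptions, going from roughly one air change per hour to one per minute cuts the inhaled dose, and hence the per-encounter risk, by well over an order of magnitude, which is the mechanism behind the R0 reduction Vitalik describes.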
Then we get things like prophylactics and vaccines and that whole category of things. I think the deep reason to think that some kind of intervention is a good idea is basically that human beings evolved in an environment where the population density was 1,000 times less than it is now. And so biologically speaking, we're definitely underinvesting in disease prevention.
One is there’s things like nose sprays that you can use, and this is stuff that you can buy commercially that’s pretty generic and probably a good idea to use. I’ve used them when going to some of these high-risk and high-density venues.
But then the other thing you can do is try to create a pipeline from detecting a virus and sequencing it, to manufacturing a vaccine targeted against that sequence that you can then use, and have that entire pipeline work within a few days. This is one of the problems with COVID: a lot of that stalled.
One of the challenges is that the first wave of vaccines reduce symptoms, but they don't really prevent transmission. And there's a lot of interest now in nasal vaccines. You can basically squirt them up your nose, and that's plausibly much more likely to stop transmission, because how does the coronavirus get in? It goes in through your nose, right? The other nice thing about them is that once they're squirtable, they don't require a specialist to administer. And there are ways to make them that don't depend on lipid nanoparticles and other very complicated biotech that requires them to be manufactured in just two or three places. You can plausibly create vaccine pipelines where essentially every village has a bioprinter that can make them.
Rob Wiblin: We should maybe catch people up a little bit, because it's a big answer. So I guess defensive accelerationism highlights the idea that technology is good in general, sure. But some technologies make it easier for people to defend themselves from getting attacked by individuals, and some technologies lead to political equilibria — where there's a lot of centralisation and control, and a government might be able to dominate a particular area and tax people into oblivion because that's what the military technology allows — while others allow people to defend themselves and, I guess, preserve liberalism and diversity.
Vitalik Buterin: And at the same time have less actual deaths happening.
Rob Wiblin: Then you kind of break technology into four different clusters to highlight the different properties. One is macro physical defence, which is kind of classic defence. Maybe we have less to say about that because it’s a bigger existing topic. And then there’s micro physical defence, which is this bio.
Vitalik Buterin: Which is bio. Exactly. Which is what I talked about for the last 15 minutes. By the way, to give people an idea of how the hell I ended up getting into that space at all: yeah, it was a bit of an accident. Basically, back in 2021, there was this crypto bubble happening, and I ended up being gifted a bunch of Shiba Inu tokens. This is a meme coin that is, of course, valuable because there is a dog. I was gifted a big portion of the supply, and I ended up regifting it, basically giving away a big portion of what I had and burning the rest. And a big part of that went to this group called India COVID CryptoRelief. So Sandeep Nailwal, who does Polygon, was a very big part in making that happen. Well, he's basically the leader of it.
What ended up happening was I was anticipating that these coins would just totally crash and burn, and they’d at most be able to cash out maybe $25 million. And I thought that, OK, there’s this very acute emergency situation in India, and they have to go and act quickly. And let’s act quickly, because if you act slowly, then, one, the COVID issue would… like, the opportunity to help would be gone — but also because that was in the middle of a crazy crypto bubble, and those coins could drop by 90% tomorrow. So I was definitely acting very hastily.
But then what ended up happening was that they were actually able to cash out an entire 470 million USDC. So over half of that money got spent by the India CryptoRelief team on some COVID response, but also on just long-term upgrading of India's biomedical infrastructure. And another part went to an effort called Balvi, which is basically a worldwide open-source anti-COVID and anti-airborne-disease effort — focusing on early detection, long COVID research, making better masks that actually work and are actually comfortable and that people would want to wear, at-home testing, air filtering, HEPA, UVC — just the entire spectrum of all that stuff.
So that’s how I ended up learning about a lot of those things. But that basically ended up actually accelerating the space by quite a bit. So we have access to much better knowledge about how long COVID works and a whole bunch of other things. That stuff is still a big deal. I think it’s important to remember that if you just look at the death statistics, then it’s fair to say that COVID is just a flu. But the big way in which COVID is not just a flu — and where even today, it’s a step more dangerous and it’s worth it to continue being a step more careful — is these long-term symptoms, where it’s still being researched and it still potentially looks like there might be some pretty scary stuff happening that doesn’t happen with other viruses.
So part of that is COVID itself, and then part of that is also the long-term d/acc, which is basically preparing for the possibility of future natural or artificial plagues that might happen in this century.
Rob Wiblin: It is crazy that COVID has highlighted so clearly that there were various different technological paths we could go down — related to purifying the air, or taking advantage of these vaccine platforms, or improving nasal vaccines, things like that — that would not just deal with COVID much more than we have, but also protect us against all kinds of threats in future, both natural and made by people. And yet the support is so lukewarm. It's not as if these things are getting ignored.
Vitalik Buterin: Yeah. I’m trying to remember… Didn’t you actually interview one of the experts doing this a few years back?
Rob Wiblin: I did, yeah. And I think the name of the episode was Andy Weber on how to make bioweapons obsolete.
Vitalik Buterin: Yes, exactly. Yeah, I remember.
Rob Wiblin: And he went through all of this three or four years ago. And yeah, governments have funded it a little bit, and I guess I know people working on it, and I know people involved in the effective altruism philosophy who are funding it, but extraordinary that we haven’t really doubled down on it, given the enormous potential gains, and the trivial costs really.
Vitalik Buterin: Yeah, yeah. Absolutely. Maybe this is one of those cases where it’s up to a bunch of crypto dog people to actually finish the job.
Rob Wiblin: It’s a crazy world.
Vitalik Buterin: Yeah. You know, you got the WHO and you got the [barks].
Pushback on Vitalik’s vision [01:37:09]
Rob Wiblin: Yeah. Coming back to the broader d/acc idea: basically, it’s highlighting that, yes, technology is good in general, but also some technologies, they’re not all created equal, and some allow people to defend themselves, and some just seem much more important and valuable than others. So I’ve heard someone say that with d/acc, we should also add “differential” technology or “directional” accelerationism. Who could disagree with this? Are there people who disagree with this basic idea, or at least who think that this isn’t a meme that should be promoted? That it’s misguided?
Vitalik Buterin: I feel like everyone agrees with the idea. I think when I’ve gotten criticism, I think it’s been two forms. One is like, “OK, Vitalik, you paint a beautiful vision…”
And we have these four categories of defence. The two that we haven’t talked about yet are what I call cyberdefence and info defence in the world of bits. Cyberdefence is around cryptography and preventing computer hacking, and info defence is around preventing things that we call scams and fakes and misinformation. I talk a lot about how there are technologies in both of those spaces that are also very defence-favouring and that don’t assume the presence of a benevolent overlord that gets to decide for everyone else what the truth is and what the facts are.
So we have this beautiful vision, but the first but is like, how to actually fund it, right? Like, OK, you release this big long screed about what we should be doing, but what does the word “should” even mean? You know, in a world where you have a whole bunch of AI firms that seem to start off talking about how they’re going to be the ones that do the right thing by trying to win the AI race, and they’ll be filled with the good people and do a good job of it. And then it turns out that like five years later, they’re actually just these completely closed entities and at the same time they’re also advancing capabilities in dangerous ways. And like, where’s the actual alignment?
Rob Wiblin: There was no way to foresee it, Vitalik. It was completely unpredictable.
Vitalik Buterin: Indeed. But does the world have room for “should” when the capitalists are money-motivated, and the governments are penny-pinching and short-term-votes-motivated?
And then the other criticism is like, well, this is all well and good if you have 50-year timelines, but what if you have five-year timelines?
So I think those are the two objections that we've heard. And then of course there's objections to specific things. Like, I'm a big fan of Community Notes, for example. And that's one of my highlighted champions as far as info defence technologies go — because it's fact checking, but it's also democratic, and it's transparent, and there's an algorithm, and you can look at the algorithm, and it doesn't preinsert one particular group's idea of what's good and bad. There are a lot of people who are big fans of it, but there are also people who think that it's been totally insufficient so far.
Rob Wiblin: I would have thought that the big pushback you’d get would be from folks who in general are very positive… Like the e/acc-oriented folks who say their big worry is everything is getting shut down: “Society won’t let us do anything, they won’t let us advance technology in almost any direction. And sure, some technologies have to be more important and better than others. How could it be otherwise? But we can’t do anything. So whenever there’s an avenue by which we could advance things and make a big difference and push forward technology, we should just go for it, rather than being too picky about which ones — which technologies seem best and which ones seem worse.”
And your philosophy would, in practice, be exploited by people to basically say, no, we should always be doing something else, and then that would be an excuse to shut down whatever is happening now.
Vitalik Buterin: Yeah, yeah. I mean, I think that’s fair. And I think you can always, of course, make a kind of symmetric argument from a safety hawk’s point of view, which is like, d/acc is going to get abused by e/accs, by basically saying they’re the ones that are making the defensive version of the technology.
And I have some sympathy for that. Because within Ethereum, there’s this common pattern, where I make a blog post and I say something like, “This is a good thing to do,” and then everyone ends up sort of re-narrativising whatever they’re doing anyway as being like actually about furthering Vitalik’s vision. And it’s like there’s 10% change of behaviour, but 90% re-narrativising existing behaviour — and then what’s the point?
So I totally feel that and I get it. I totally get how good memes and good vibes also have to be backed by teeth of some kind — and teeth that are administered by people who are actually motivated by the goals, not by people whose motivation is to make the profit-making stuff they're already doing feel compatible with those goals.
But at the same time, that’s something that’s true of literally any ideology. So it’s like, is that a critique of d/acc? Or is that a critique of efforts to try to make the world better in a much broader sense?
How much do people actually disagree? [01:42:14]
Rob Wiblin: A lot of things have bothered me about this debate, but one that has bothered me in particular is you went on this other show, Bankless — it’s a good podcast if people haven’t heard of it — but the debate has gotten a little bit sandwiched into the idea that some people are pro-tech and some people are anti-tech. And I think literally on that show, they said, “There’s the e/acc folks who are pro-AI and pro-technology, and then there’s effective altruism, which is anti-technology.” I think one of the hosts literally said that. I mean, they probably hadn’t heard about effective altruism before, and this is kind of all that they’d heard. And basically the thumbnail version was effective altruists hate technology. Which is extraordinary. It’s like I’m in a parallel world.
Vitalik Buterin: Yeah. I mean, it’s extraordinary from the point of view of even like 2020.
Rob Wiblin: Yeah, exactly.
Vitalik Buterin: Remember when Scott Alexander got doxxed by The New York Times? Remember what the vibes were? I think the people who were EA and the people who were e/acc were totally on the same team, and basically the people who were perceived to be anti-tech were the lefty cancel culture, woke social justice types or whatever you call them, and everyone was united against them. And if you're an e/acc and you think EAs are anti-technology, think back even three years and remember what was being said at that particular time.
Rob Wiblin: It’s incredible. It would be really worth clarifying that. I mean, there are people who are anti-technology for sure. You’re mentioning degrowthers: people who just actually think the world is getting worse because of technology, and if we just continue on almost any plausible path, it’s going to get worse and worse. But all of the people we’re talking about in this debate, they all want, I think, all good and useful technologies — which is many of them — to be invented in time.
The debate is such a narrow one. It’s about whether it really matters, whether the ordering is super important. Like, do we have to work on A before B because we need A to make B safe, or does it not really matter? And we should just work on A or B or C and not be too fussy because the ordering isn’t that important? But ultimately, everyone wants A, B and C eventually.
Vitalik Buterin: Yeah. I think if I had to defend the case that the debate is not narrow, and the debate really is deep and fundamental and hits at the most important questions, I would say that the infrastructure needed to actually execute on the kind of pausing that EAs want probably requires a very high level of things that we would call global government. And that infrastructure, once it exists, would absolutely be used to prevent all kinds of technologies, including things that, for example, traditionally pro-tech people would be fans of, but degrowth people would be very against.
It's like step one: you're banning just a little bit of stuff around superintelligence. And it's like, OK, now we've agreed that it's possible to go too far. Well, great, let's talk about genetically engineering humans to increase our intelligence. And that's the sort of thing where part of my post was explicitly in favour of things like that, and saying we've gotta accelerate humans and make ourselves stronger, because that is key to the happy human future.
But then there’s a lot of people that don’t feel that way. And then you imagine things expanding and expanding, and you basically might actually get the sort of world-government-enforced degrowth, right? So the question is, does that slippery slope exist? And does even building the infrastructure that’s needed to prevent this one thing… Which realistically is a very profitable thing: if you build something that’s one step below “superintelligence that’s going to kill everyone,” you’ve made an amazing product and you can make trillions of dollars. Or if you’re a country, you might be able to take over the world.
And then the kind of global political infrastructure needed to prevent people from doing that is going to need to be pretty powerful. And that is not a narrow thing, right? Once that exists, that is a lever that exists. And once that lever exists, lots of people will try to gain control of it and seize it for all kinds of partisan ends that they’ve had already.
Rob Wiblin: The sense in which people agree is that if we set up this organisation in order to control things, to make AI safe, and it was then used to shut down technological progress across the board, people could at least agree that that's an undesirable side effect rather than an intended goal of the policy — though I guess some people might actually be in favour of it.
It's interesting, you just said, "the kind of pausing AI that effective altruists are in favour of." The crazy thing is that people who are influenced by effective altruism, or have been involved in the social scene in the past, are definitely at the forefront of groups like Pause AI, whose simple message is that we need to pause this so that we can buy time to make it safe. They're also involved in the companies that are building AI, and in many ways have been criticised a lot for potentially pushing forward capabilities enormously. It is a very bizarre situation that a particular philosophy has led people to take seemingly almost diametrically opposed actions in some ways. And I understand that people are completely bemused and confused about that.
Cybersecurity [01:47:28]
Rob Wiblin: Let’s come back and fill out the quadrant. So we’ve got defence against big things, defence against small things. Then you had the information, rather than the physical world — and you had classic cybersecurity, which is defence where there’s clearly hostile actors that are doing bad stuff; and then there’s information security, which is defending yourself against bad information, where it’s harder to tell who’s actually the bad folks. Do you want to maybe give examples of good work in each case?
Vitalik Buterin: Yeah. Cybersecurity I think is pretty simple to understand. Basically, you want people to be able to do things on the internet and be safe, right? Encryption is a basic example; digital signatures are a basic example. So I can access a website and I have digital signatures that prove that I’m actually getting the right website from the entity that I want to be interacting with, instead of just some hacker inserting themselves in the middle.
Then in terms of the frontiers of that stuff, I talk a lot about zero-knowledge proofs. And zero-knowledge proofs are powerful because they let you prove a lot of things about yourself, but at the same time hide basically all of the information that you don’t want to prove. One simple example of this is like… Here’s one actual problem that hasn’t really been solved well yet: I got my phone here, I have a VPN, and I regularly access the internet. And I find when I access it with my VPN on, a lot of websites end up basically putting some captchas in front of me and basically saying like, in theory it’s like “prove you’re a human by clicking on the fire hydrants.” Though we know in practice the AI is probably better at identifying the hydrants than humans are at this point. Whatever. I mean, just really annoying, right? And you have to do this a whole bunch of times.
And actually, it’s not even just when you’re behind the VPN. There’s also this aspect of when you’re accessing the internet from a country that the rich world considers to be sketchy, right? Which includes big parts of Africa, Latin America, Southeast Asia. Then you’re also behind this kind of captcha wall.
The thing that we’re trying to do is basically prove that you’re not trying to denial-of-service attack them. And what if what you could do is you can make a zero-knowledge proof that proves some metric of yourself being a unique person, or some unique actor? This proof could even be completely privacy preserving, so you could make a proof that proves that you are a unique human that has a particular… Could be government ID, potentially. Could even be holding some quantity of cryptocurrency if you want, like a fully anonymous version. Could be like one of a few things. And you prove it in such a way that you generate an ID — where that ID is not linkable to your identity, but if you try to run the program twice, you generate the same ID twice, right?
So you can basically prove that you are one of these actual humans, or whatever it is that that set of trusted actors is trying to prove, while completely hiding who you are. But at the same time, you only have a way of actually creating one of these identities. So you could imagine a world where you try to access one of these websites once, and then you give it this proof, and with this proof it knows that we're actually talking to someone who has an identity — that is privacy preserving, but also an identity that's actually hard to attain, right? And attackers are not going to be able to get millions of them, so they should not force me to click on the fire hydrants and they should just show me the website.
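Here is a toy sketch of the "same ID if you run it twice, but unlinkable otherwise" property being described: a per-service identifier derived from a long-lived secret. A real system (a Semaphore-style design, for example) would wrap this in an actual zero-knowledge proof so that the secret and whatever credential backs it stay hidden; the hash below only illustrates the nullifier idea, and all names here are assumptions.

```python
import hashlib, secrets

def derive_scoped_id(user_secret: bytes, service_domain: str) -> str:
    """Toy 'nullifier': deterministic per (secret, service), so running it
    twice for the same service yields the same ID, while IDs for different
    services are not linkable to each other or to the underlying secret."""
    return hashlib.sha256(user_secret + service_domain.encode()).hexdigest()

# One long-lived secret, ideally bound to something hard to obtain
# (a government ID attestation, a stake, etc. -- all assumptions here).
user_secret = secrets.token_bytes(32)

print(derive_scoped_id(user_secret, "example-forum.org"))
print(derive_scoped_id(user_secret, "example-forum.org"))   # same ID again
print(derive_scoped_id(user_secret, "other-site.example"))  # different, unlinkable
```

The zero-knowledge layer is what lets you additionally prove "this secret is backed by a real, hard-to-duplicate credential" without revealing which one.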
That’s one example of cyberdefence. Basically, there’s a lot of these specific things that we want to have security assurances about. Sometimes they’re assurances about data privacy; sometimes they’re assurances that who you’re talking to actually is who they claim to be. A lot of the time it’s the unique human problem. I think this is something that a lot of people just want good solutions for: just some way of proving that an actor that you’re interacting with just is a unique human without actually having to publicly reveal KYC information or anything like that — a zero-knowledge identifier is fine, and actually creating the infrastructure to be able to do that.
And then there's a lot of good applications for this, right? So a big part of this has to do with online voting. It's like a standard take among the security community that online voting is dangerous and you're not supposed to do it. On the one hand, I see why they think that way, but on the other hand, realistically, our society depends on huge amounts of online voting already, right? It's called likes and retweets on social media. And that's something that's not going to be in person, ever. And that is something that people want to have, and that we need to actually try to make secure. Those are some examples of cyberdefence technologies.
Another really big one, and this is potentially a positive application of AI, is creating code that doesn’t have bugs in it, and where you can actually mathematically prove that code has certain properties. So getting to the point where you can actually create all of these complicated gadgets, but there isn’t just one mistake that just leaks all of your information to the attacker.
Rob Wiblin: Yeah. I think you’ve been quite excited about this idea. I mean, people are worried that AI could be very bad for cybersecurity, but it also seems like if you have extremely good AI that’s at the frontier of figuring out how to break things, if it’s in the hands of good people and they share the lessons with people so they can patch their systems first, then potentially it could massively improve things. And currently stuff in the crypto world that we’re unsure whether it’s safe, we could get a lot more confidence in.
Vitalik Buterin: Exactly, yeah. The way that I think about this is if you extrapolate that space to infinity, then this is actually one of those places where it becomes very defence-favouring, right? Because imagine a world where there are open source, infinitely capable bug finders: if you have code with a bug, they'll find it. Then what's going to happen? The good guys have it and the bad guys have it. So what's the result? Basically, every single software developer is going to put the magic bug finder into their GitHub continuous integration pipeline. And so by the time your code even hits the public internet, it'll just automatically have all of the bugs detected and possibly fixed by the AI. So the endgame actually is bug-free code, very plausibly.
That’s obviously a future that feels very far away right now. But as we know, with AI, going from no capability to superhuman capability can happen within half a decade. So that’s potentially one of those things that’s very exciting. It definitely is something that in Ethereum we care about a lot.
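As a sketch of what "put the magic bug finder into your CI pipeline" might look like structurally, here is a minimal gate script: collect the changed files and fail the build if the analyser reports anything. The `find_bugs` function is a placeholder assumption for whatever AI-based or conventional analyser you plug in, and the branch name is assumed too.

```python
"""Minimal CI gate sketch: run a bug finder over changed files, fail on findings."""
import subprocess, sys

def changed_files() -> list[str]:
    out = subprocess.run(["git", "diff", "--name-only", "origin/main...HEAD"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def find_bugs(path: str) -> list[str]:
    # Placeholder: call your static analyser or AI bug finder here.
    return []

def main() -> int:
    findings = {f: bugs for f in changed_files() if (bugs := find_bugs(f))}
    for f, bugs in findings.items():
        print(f"{f}:")
        for b in bugs:
            print(f"  - {b}")
    return 1 if findings else 0   # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```

The interesting part is not the plumbing but the asymmetry Vitalik points to: if the same finder is available to everyone, defenders get to run it before code ever ships.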
Rob Wiblin: That reminds me of something I’ve been mulling over, which is that very often the question comes with some branch of technology — in this case AI, but we could think about lots of other things — is it offence-favouring or defence-favouring? And it can be quite hard to predict ahead of time. With some things, maybe horse archery, historically, maybe you could have guessed ahead of time that that was going to be offence-favouring and going to be very destabilising to the steppes of Asia. But with AI, it’s kind of a difficult thing to answer.
But one idea that I had was, when it comes to compute, like machine-versus-machine interactions, like with cybersecurity, seems like it might well be defence-favouring, or at least neutral, because any weakness that you can identify, you can equally patch it almost immediately — because the machines that are finding the problems are kind of the same being; they’re the same structure as the thing that is being attacked, and the thing that’s being attacked you can change almost arbitrarily in order to fix the weakness.
When it comes to machine-versus-human interactions, though, the dynamic is quite different, in that we're kind of stuck with humans as we are. We're this legacy piece of technology that we've inherited from evolution. And if a machine finds a bug in humans that it can exploit in order to kill us or affect us, you can't just go in and change the code; you can't just go and change our genetics and fix everyone in order to patch it. We're stuck doing this really laborious indirect thing, like using mRNA vaccines to try to get our immune system that's already there to hopefully fight off something. But you could potentially find ways that that would not work — you know, diseases that the immune system wouldn't be able to respond to. I guess HIV has to some extent had that.
What do you think of this idea? That machine-versus-machine may be neutral or defence-favouring, but machine-versus-humans, because we just can’t change humans arbitrarily and we don’t even understand how they work, is potentially offence-favouring?
Vitalik Buterin: Actually, I think a big part of the answer to this is something I wrote about in the post, which is that we need to get to a world where humans have machines protecting us as part of the interface that we use to access the world.
And this is actually something that’s really starting to happen more and more in crypto. Basically, wallets started off as being this very dumb technology that’s just there to manage your private key and follow a standardised API. You want to sign it, then you sign it. But if you look at modern crypto wallets, there’s a lot of sophisticated stuff that’s going on in that. So far it’s not using LLMs or any of the super fancy stuff, but it’s still pretty sophisticated stuff to try to actually identify what things you might be doing that are potentially dangerous, or that might potentially go against your intent and really do a serious job of warning you.
MetaMask has a list of known scam websites, and if you try to go and access one of them, it blocks it and shows a big red scam warning. In Rabby, which is an Ethereum wallet developed by this lovely team in Singapore that I've been using recently, they really go the extra mile. If you're sending money to an address you haven't interacted with, they show a warning for that; if you're interacting with an application that most other people have not interacted with, it shows a warning for that. It also shows you the results of simulating transactions, so you get to see what the expected consequences of a transaction are. It just shows you a bunch of different things, and tries to put in speed bumps before doing any actually dangerous stuff. And there have definitely been some recent scam attempts that Rabby successfully managed to catch and prevent people from falling for.
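To illustrate the shape of those speed bumps, here is a hedged sketch of heuristic pre-signing checks of the kind described. The thresholds, data structures, and scam list are illustrative assumptions, not MetaMask's or Rabby's actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    to_address: str
    value_eth: float

@dataclass
class WalletHistory:
    seen_addresses: set = field(default_factory=set)

def pre_sign_warnings(tx: Tx, history: WalletHistory,
                      contract_user_count: int, scam_list: set) -> list[str]:
    """Heuristic speed bumps shown before the user signs (illustrative only)."""
    warnings = []
    if tx.to_address in scam_list:
        warnings.append("Destination is on a known-scam list.")
    if tx.to_address not in history.seen_addresses:
        warnings.append("You have never interacted with this address before.")
    if contract_user_count < 100:
        warnings.append("Very few other users have interacted with this contract.")
    if tx.value_eth > 1.0:
        warnings.append("Large transfer: double-check the simulated outcome.")
    return warnings

history = WalletHistory(seen_addresses={"0xabc"})
tx = Tx(to_address="0xdef", value_eth=5.0)
for w in pre_sign_warnings(tx, history, contract_user_count=12, scam_list=set()):
    print("WARNING:", w)
```

The design point is that these checks sit on the user's side of the interface, working for the user rather than for the application, which is exactly the "defence lawyer" role discussed next.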
So the next frontier of that, I think, is definitely to have AI-assisted bots and AI-assisted software actually being part of people’s windows to the internet and protecting them against all of these adversarial actors. I think one of the challenges there is that we need to have a category of actor that is incentivised to actually do that for people. Because the application’s not going to do that for you. The application’s interest is not to protect you: the application’s interest is to find ways to exploit you. But if there can be a category of actor where their entire business model actually is dependent on long-term satisfaction from users, then they could actually be the equivalent of a defence lawyer, and actually fight for you and actually be willing to be adversarial against the stuff that you access.
Rob Wiblin: I guess that makes sense in the information space, where you can imagine you interact with your AI assistant, that then does all of the information filtering and interaction with the rest of the world.
The thing I was more worried about was bioweapons or biodefence, where one of the big concerns people have about AI is, couldn’t it be used to help design extremely dangerous pathogens? And there, it seems harder to patch human beings in order to defend them against that.
Although the weakness of that argument is that we were just saying we're on the cusp of coming up with very generic technologies, like air purification, that we could install everywhere, and that seem like they would do a lot to defend us against diseases — at least the ones we're familiar with. So maybe there are some generic defensive technologies that AI could help advance, such that things would still end up defence-dominant. I don't know what the underlying reason would be, but maybe.
Vitalik Buterin: This is one of those things where I think both offence and defence have these big step functions, right? Where one question is, what is the level of capability of printing a super plague? Are you just making minor modifications and fine-tuning COVID the same way the Hugging Face people are fine-tuning Llama? Or are you actually really doing serious shit that goes way beyond that? And if, for example, you’re just fine-tuning COVID, then wastewater detection becomes much easier, because the wastewater detectors are already tuned to COVID as a sequence. But if you have to defend against arbitrary dangerous plagues, it’s actually a significantly harder problem. Then for vaccines, it’s similar. And then for the level of dangerousness, that’s similar.
And then one step function is: if you actually can make all of this air purification infrastructure much more powerful, then R0s go way down, and you possibly get some kind of upper limit there. But then the other step function, on the offence side, is: what if you go beyond biological diseases and figure out crazy nanotechnology? How do you start defending against that?
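To make the R0 point concrete with a hedged back-of-the-envelope calculation (the numbers and the simple scaling below are illustrative, not figures from the conversation): if building-level air purification blocks some fraction of indoor transmission, the effective reproduction number falls roughly in proportion, and once it drops below 1 an outbreak shrinks instead of growing.

```python
# Illustrative only: if a fraction `blocked` of transmission events are prevented
# by ventilation/filtration, then roughly R_eff = R0 * (1 - blocked).
def r_eff(r0: float, blocked: float) -> float:
    return r0 * (1 - blocked)

print(r_eff(3.0, 0.5))  # 1.5 -- epidemic still grows
print(r_eff(3.0, 0.7))  # ~0.9 -- below 1, so it fizzles out
```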
And then the other step function on the bio side is, if we actually do get uploading, then uploading is sort of the ultimate solution to safety, because you can have a continuously running backup of your mind, and if anything happens to you, you just automatically restart somewhere else and it's all good.
Rob Wiblin: That’s a bit out there, but a fair point as well.
Vitalik Buterin: Indeed.
Information defence [02:01:44]
Rob Wiblin: OK, let’s finish fleshing out the four different categories. So the last one was information defence: defence against misinformation and so on. You had a great example in there, which is Twitter Community Notes. Or I guess X Community Notes, it’s called now.
Can you explain what is so… I mean, people have had a lot of criticisms of X under Elon Musk, but one thing that it seems like people across the board seem to really like is what’s happened with Community Notes. Can you explain what they changed and how it works now, and why people love it?
Vitalik Buterin: Sure. I mean, maybe I’ll just reintroduce that category a bit. So we talk about the world of atoms and the world of bits. In the world of atoms, you have macro defence and micro defence, which is bio. Then in the world of bits, the distinction that I made there is cyberdefence versus info defence. And this is possibly a kind of distinction unique to myself.
But the way that I think about it is cyberdefence is a defence where any reasonable human being would agree who the attacker is and who the defender is. So it has to do with computer hacking, basically, and being able to defend using algorithms, and where you can often mathematically prove whether or not you have something that actually is defending the way it’s supposed to.
And info defence is a much more subjective thing. Info defence is about defending against threats such as what people think of when we talk about scams, misinformation, deepfakes, fraud — all of those kinds of things. And those are very fuzzy things. There definitely are things that any reasonable person would agree are scams, but there's also a big grey zone. If you talk to a Bitcoin maximalist — if you ever have any of them on 80,000 Hours — they will very proudly tell you that Ethereum is a scam and Vitalik Buterin is a scammer, right? And look, as far as misinformation goes, there are just lots and lots of examples of people confidently declaring a topic to be misinformation and then that turning out to be totally true and them totally wrong, right?
So the way that I thought about a d/acc take on all of those topics is basically this. On the one hand, defending against those things is obviously necessary and important, and we can't stick our heads in the sand and pretend the problems don't exist. But on the other hand, the big problem with the traditional way of dealing with those problems is that you end up pre-assuming an authority that knows what is true and false and good and evil, and that ends up enforcing its perspectives on everyone else. So I'm basically trying to ask the question of: what would info defence that does not make that assumption actually look like?
And Community Notes I think is one of those really good examples. I actually ended up writing a really long review of Community Notes a few months before the post on techno-optimism. And what it is is a system where you can put these notes up on someone else’s tweet that explain context or call them a liar, or explain why either what they’re saying is false, or in some cases explain why it’s true but there’s other important things to think about or whatever.
And then there is a voting mechanism by which people can vote on notes, and the notes that people vote on more favourably are the ones that actually get shown. And in particular, Community Notes has this interesting aspect to its voting mechanism where it's not just counting votes and accepting whichever note gets the most; it's intentionally trying to favour notes that get high support from across the political spectrum. The way that it accomplishes this is it uses this matrix factorisation algorithm. Basically it takes this big graph of which user voted on which note, and it tries to decompose it into a model that involves a small number of stats for every note and a small number of stats for every user. And it tries to find the parameters for that model that do the best possible job of describing the entire set of votes.
Rob Wiblin: So as I understand it, what that means in plain English maybe is that lots of users give votes across all kinds of different Community Notes and different comments, but it tries to figure out… There’s different kinds of people, different attitudes, different political agendas, different empirical beliefs that people have, and it tries to find Community Notes that people love, regardless of their empirical or philosophical commitments.
Vitalik Buterin: Exactly. So the algorithm tries to find two parameters for each note and two parameters for each user. I called those parameters “helpfulness” and “polarity” for a note, and “friendliness” and “polarity” for a user. And the idea is: if a note has high helpfulness, then everyone loves it; if a user has high friendliness, they love everyone. But then polarity is like, you vote positively on something that agrees with your polarity and you vote negatively on something that disagrees with your polarity.
So basically the algorithm tries to isolate the votes that are cast positively because the note is partisan in a direction that agrees with the voter, versus the votes that are cast positively because the note just has high quality. It tries to make that distinction automatically, and basically discard agreement based on polarisation and only focus on notes being voted up because they're good across the spectrum.
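For readers who want to see the shape of that decomposition, here is a minimal sketch assuming a toy vote matrix and plain gradient descent. It illustrates the idea described above — one helpfulness and one polarity parameter per note, one friendliness and one polarity parameter per user — but it is not the production Community Notes code, which has more parameters and tuning.

```python
import numpy as np

# Each vote is modelled as:
#   vote ≈ helpfulness[note] + friendliness[user] + polarity[note] * polarity[user]
# Ranking notes by fitted helpfulness (rather than raw vote counts) rewards notes
# that are liked across the polarity spectrum.

votes = [(0, 0, +1), (1, 0, +1), (2, 0, +1),   # note 0: upvoted by everyone
         (0, 1, +1), (1, 1, -1), (2, 1, -1)]   # note 1: upvoted by one "side" only
n_users, n_notes = 3, 2

rng = np.random.default_rng(0)
helpfulness = np.zeros(n_notes)
friendliness = np.zeros(n_users)
note_polarity = rng.normal(0, 0.1, n_notes)    # small random init breaks symmetry
user_polarity = rng.normal(0, 0.1, n_users)

lr, reg = 0.05, 0.03
for _ in range(2000):
    for u, n, r in votes:
        n_pol, u_pol = note_polarity[n], user_polarity[u]
        err = r - (helpfulness[n] + friendliness[u] + n_pol * u_pol)
        # one stochastic gradient step on squared error with L2 regularisation
        helpfulness[n] += lr * (err - reg * helpfulness[n])
        friendliness[u] += lr * (err - reg * friendliness[u])
        note_polarity[n] += lr * (err * u_pol - reg * n_pol)
        user_polarity[u] += lr * (err * n_pol - reg * u_pol)

print(helpfulness.round(2))  # note 0 should come out well ahead of note 1
```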
Rob Wiblin: And it works, it seems.
Vitalik Buterin: It does, yeah. I basically went through and looked at what some of the highest-helpfulness notes are, and also what some of the highest-polarity notes are in both directions. And it actually seems to do what it says it does. The notes with a crazy negative polarity are just very partisan, left-leaning stuff that accuses the right of being fascists and that sort of thing. Then with positive polarity you have very hardline, right-leaning stuff, whether it's complaining about trans issues or whatever the right-wing topic of the day is. And then if you look at the high-helpfulness notes, one of them was where someone had posted a picture that they claimed to be a drone show, I believe over Mexico City. And I'm trying to remember, but I believe the note just said that this was actually AI-generated, or something like that. And that was interesting because it's very useful context.
Rob Wiblin: It’s useful regardless of who you are.
Vitalik Buterin: Exactly. It’s useful regardless of who you are. And both fans of Trump and fans of AOC and fans of Xi Jinping would agree that that’s a good note.
Rob Wiblin: Is there a common flavour that the popular Community Notes have now using this approach? Is it often just plain factual corrections?
Vitalik Buterin: A lot of the time. When I did that review, I had two examples of high-helpfulness notes. One was that one, and the other was a tweet by Stephen King that basically said COVID is killing over 1,000 people a day. And then someone said no, the stat says that this is deaths per month. That one was interesting because it does have a partisan conclusion, right? It's a fact that is inconvenient to you if you are a left-leaning COVID hawk, and it's a fact that's very convenient to you if you're a COVID minimiser. But at the same time, the note was written in this very factual way. And you can't argue with the facts, and it was an incorrect claim that needed to be corrected.
Rob Wiblin: There's almost a deeper thing going on here, which is that this process is figuring out what information is regarded as universally persuasive — what count as good reasons in people's eyes.
Vitalik Buterin: I think there have definitely been relatively more complaints about Community Notes in the past few months than there were before. I think one of the things that happened is that of course the entire horrible situation in Gaza started. And unfortunately, wars are exactly the environment where everyone assumes maximum bad faith, and there are huge incentives for all kinds of other people to explicitly manipulate the system — and to feel justified in manipulating the system because, you know, either the other guy's doing it or it's really important to not let the fascists win, or however people argue it, right? So there are definitely people that have been very unhappy.
And the examples that I saw, there were definitely a bunch around Gaza. Actually, one of the big complaints around then was that notes were not appearing fast enough. Basically, the issue was that there were some situations that were being reported on incorrectly or being tweeted on incorrectly, and all of that spread across Twitter. But then the notes only appeared after a day, but by the time that happened, everyone had basically already formed their opinion.
And that one’s hard, right? Because in a lot of ways, it’s fundamentally even beyond the human capability to reliably form a correct opinion super quickly. You have to be calm and wait. Community Notes itself definitely has recently made improvements to allow notes to show up faster.
But the other way to think about it is that there are different kinds of epistemic technologies you can have. Community Notes is good at surfacing agreement across divides, but then there is this whole other category of epistemic technology that's very good at coming to the correct opinion faster than other people — and that's prediction markets.
Prediction markets have really been having a moment in the past year. Polymarket has been getting a lot of attention, and that’s the one on Ethereum. And then obviously there is Manifold and Metaculus using basically play money. Both of those are being used much more than before, and we’re seeing those used for aggregating opinions about the US election, about LK-99, about all kinds of topics.
So maybe you could argue that there's some kind of prediction-market-y thing that you could potentially insert into Community Notes. Like, if you had to make a very naive first pass, you could just say that if you vote in a way that reflects the future consensus after two days, then your votes start being counted more. And that's a way of inserting a little bit of prediction-marketness into this thing that by itself is a consensus finder, right?
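As a very rough sketch of what that naive first pass could look like — the update rule, weights, and two-day window below are hypothetical, not anything Community Notes actually implements:

```python
# Hypothetical sketch: raters whose early votes match the consensus reached two
# days later gradually earn more weight; raters who don't, lose a little.

def update_rater_weights(weights: dict, early_votes: dict, later_consensus: int,
                         step: float = 0.1, floor: float = 0.1) -> dict:
    """weights: {user: current weight}; early_votes: {user: +1 or -1};
    later_consensus: +1 if the note was ultimately judged helpful, else -1."""
    for user, vote in early_votes.items():
        delta = step if vote == later_consensus else -step
        weights[user] = max(floor, weights.get(user, 1.0) + delta)
    return weights

weights = update_rater_weights({}, {"alice": +1, "bob": -1}, later_consensus=+1)
print(weights)  # {'alice': 1.1, 'bob': 0.9}
```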
But being fast is one of the rough edges of the mechanism right now. The other rough edge concerns the other big war, of course, around which there are a lot of complaints: the Russia and Ukraine situation — where there are a lot of concerns that basically Putin's internet army has been doing a better and better job of attacking the notes with large numbers of accounts and getting them taken down. And I mean, I talk to Jay Baxter from Community Notes regularly, and he's definitely very aware of all the problems that I mentioned. He's tracking how bad they are and how the mechanism can be improved. But I think there's an opportunity here to try to really turn the design of these kinds of mechanisms into a proper academic discipline.
One analogue of this is in the space of quadratic funding. Last time we talked, didn’t we end up talking about quadratic funding?
Rob Wiblin: We talked about it a lot, actually.
Vitalik Buterin: Yeah. Recapping briefly: quadratic voting is a form of voting where you can express not just which direction you care about something in, but also how strongly you care about it. And it uses this quadratic formula that basically says your first vote is cheap, your second vote is more expensive, your third vote is even more expensive, and so on. And that encourages you to cast a number of votes that is proportional to how strongly you care about something — which is not what either regular voting or the ability to buy votes with money does.
And quadratic funding is an analogue of quadratic voting that just takes the same math and applies it to the use case of funding public goods, and basically helping a community identify which projects to fund and directing funding from a matching pool based on how many people participate.
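To make the mechanics concrete, here is a minimal sketch of the textbook quadratic funding formula: a project's total funding is the square of the sum of the square roots of its contributions, and the matching pool tops up the difference. (Real deployments also have to scale this down to fit a finite matching pool; that detail is omitted here.)

```python
import math

def qf_match(contributions: list[float]) -> float:
    """Textbook quadratic funding: total funding = (sum of sqrt(c_i))^2,
    so the matching subsidy is that total minus the direct contributions."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# Many small donors attract far more matching than one big donor giving the same total:
print(qf_match([1] * 100))  # 9900.0
print(qf_match([100]))      # 0.0
```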
And I created a version of quadratic funding called pairwise bounded quadratic funding. And what that does is it solves a big bug in the original quadratic funding design, which is basically that the original quadratic funding design was based on this very beautiful mathematical formula. Like, it all works and it’s perfect — but it depends on this really key assumption, which is non-collusion: that different actors are making their decisions totally independently. There’s no altruism, there’s no anti-altruism, there’s no people looking over anyone’s shoulders. There’s no people that hack to gain access to other people’s accounts. There’s no kind of equivalent of World of Warcraft multiboxing, where you’re controlling 25 shamans with the same keyboard.
And that's, of course, an assumption that's not true in real life. I actually have this theory that when people talk about the limits of the applicability of economics to the real world, a lot of the time they point to perfect information or perfect rationality as the assumptions that are wrong. And I actually think it's true that both of those are false, but I think the falsity of both of those is overrated. The thing that's underrated is this non-collusion assumption.
And yeah, when actors can collude with each other, lots of stuff breaks. And quadratic funding actually ended up being maximally fragile against that. Basically, if you have even two participants and those two participants put in a billion dollars, then you get matching that's proportional to the billion dollars, and they can basically squeeze out the entire matching pot — or 100% minus epsilon of it — and give it to themselves.
What pairwise bounded quadratic funding does is it basically says we will bound the amount of matching funds that a project gets by separately considering every pair of users and having a cap on how much money we give per pair of users that vote for a project. And then you can mathematically prove that if you have an attacker that can gain access to, let's say, k identities, then the amount of money the attacker can extract from the mechanism is bounded above by basically c × k², right? Proportional to the square of the number of accounts that they capture.
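Here is a hedged sketch of the pairwise-bounding idea. The standard matching subsidy can be rewritten as a sum over pairs of contributors, and capping each pair's term is what produces the O(k²) bound on what k colluding accounts can extract. The exact per-pair formula in Vitalik's actual design differs; this is a simplification to show the bound.

```python
import math
from itertools import combinations

def plain_match(contributions: list[float]) -> float:
    # Identical to (sum sqrt c_i)^2 - sum c_i, written as a sum over pairs.
    return sum(2 * math.sqrt(ci * cj) for ci, cj in combinations(contributions, 2))

def pairwise_bounded_match(contributions: list[float], pair_cap: float) -> float:
    # Cap each pair's term, so k colluding accounts can extract at most
    # pair_cap * k * (k - 1) / 2, no matter how much money they put in.
    return sum(min(pair_cap, 2 * math.sqrt(ci * cj))
               for ci, cj in combinations(contributions, 2))

whales = [10**9, 10**9]                       # two colluding "donors"
print(plain_match(whales))                    # 2,000,000,000 -- drains any matching pool
print(pairwise_bounded_match(whales, 100))    # 100 -- bounded by the per-pair cap
```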
And this sort of stuff is important for quadratic funding, but I think it’s going to be super valuable for a lot of this kind of social mechanism design in general. Because there’s a lot of interest in these one-per-person proof of personhood protocols, but they’re never going to be perfect. You’re always going to get to the point where either someone’s going to get fake people past the system through AI, or someone’s just going to go off into a village in the middle of Ghana and they’re going to tell people like, “Scan your eyeballs into this thing and I’ll give you $35, but then I’m going to get your Worldcoin ID.” And then now I’ve bought an unlimited number of Worldcoin IDs for $35 each, right? And guess what? Russia’s already got lots of operatives in Africa. And if Worldcoin becomes the Twitter ID, they’re totally going to use this.
So the question is basically, if we can create an academic discipline of making mechanisms that try to put formal bounds on how much damage an attacker can do even if they capture some specific number of accounts, then that’s something that could make all of this stuff much more robust and give us a much better idea of how much damage can either the Kremlin or whoever else do to Community Notes.
There is this really powerful core primitive of, essentially, you separately look at every pair of users and you’re basically saying that there’s sort of this fixed budget — you can call it cross-entropy or whatever buzzword you use — that they get to distribute among stuff. And essentially, if you have a group of people that are just constantly supporting the same thing, then the mechanism recognises that they’re NPCs and it disempowers them for that. It’s basically a very versatile and very generic proof-of-not-being-an-NPC kind of thing — which is, I think, also extremely interesting from the perspective of anyone who cares about social media continuing to empower independent thought instead of conformism, for example. If people who normally disagree end up agreeing on this, that’s a stronger signal.
Those are all examples of fascinating info technologies, where I think what they have in common is that they’re trying to defend against all kinds of attackers — whether those attackers are people who capture some large number of identities, or even people who are just very good at manipulating a particular community or whatever else — and reducing the amount of damage that characters like that can do, and preserving their ability to actually genuinely aggregate public opinions and public information and sentiments on various kinds of topics.
Is AI more offence-dominant or defence-dominant? [02:21:00]
Rob Wiblin: One of the smarter responses I saw to your essay — or at least one of the smarter somewhat sceptical responses — was from Wei Dai, a somewhat famous cryptographer and computer scientist. We'll link to his reaction, because he had a bunch of different ideas, but one of them was basically agreeing with your framing, but responding that AI unfortunately is just probably offensive rather than defensive by nature.
And he gave a couple of different reasons for that, but one of them was that, in his view, it's going to lead to an explosion of technological progress across all kinds of different avenues, because we should just expect it to be a much better scientist than we are. And then unfortunately, if any one of those lines of research is offence-dominant, that could by itself be sufficient to cause human extinction or a massive destabilisation of the world. So unless we get super lucky and for some reason every technology tree it goes down is defence-dominant, it will in fact destabilise the current situation — which is at least not super offence-dominant.
What would you make of that argument? I guess this is just another reason to think that AI might be exceptional or should make us nervous.
Vitalik Buterin: Right. Well, one thing is that one domain being offence-dominant by itself isn't a failure condition, right? Because defence-dominant domains can compensate for offence-dominant domains. And that has totally happened in the past, many times. Even if you just compare now to 1,000 years ago: cannons are very offence-dominant, and they stopped castles from working. But if you compare physical warfare now to before, is it more offence-dominant on the whole? It's not clear, right?
So I think you have to think separately about the end state and the transition, right? And it's very plausible that the end state — the technological ceiling — is a place that's fairly reasonable for defence. But then the challenge is: what does the process of getting there actually look like? And, you know, history shows that…
Rob Wiblin: It could be rocky.
Vitalik Buterin: Exactly. What rapid technological jumps do is take a bunch of lines that depended on certain assumptions — about what you can do if you just give up on diplomacy and do what you want through the military layer — and completely break and shift them in various ways. And then people get opportunistic and overenthusiastic, and a bunch of crazy stuff happens. As for how to shepherd ourselves through the difficult transition, there's definitely a big unknown of just how much wiggle room we actually have.
One thing I would say, though, is that regardless of all of that, trying to push defensive technologies forward is really important, and it can even have positive knock-on effects. One example of this is that if we fix cybersecurity, then we kneecap the entire class of superintelligence doom scenarios that involve the AI hacking things.
Rob Wiblin: Yeah. A lot of people are working on that, and I think it’s among the most important things that anyone is doing at the moment.
Vitalik Buterin: Right, exactly. I'm totally open to the possibility that it becomes clear over the next decade that, on top of all of these defensive things that are super important to do, there is some kind of deceleration of specific sectors that has to happen. That's a big unknown. And as I've said before, I have very wide confidence intervals and very wide timelines for all kinds of things. And I think we should neither pre-assume that it never needs to happen nor pre-assume that it does.
Rob Wiblin: A line of conversation we haven’t gone down that we haven’t had time for is that it’s possible to view a lot of human history as basically a series of advances in military technology that then lead to a different equilibrium of how large the states are and who has power. Like horse archery allows the Mongols to cause genocide, and then people build better city fortifications, and then they come up with cannons — and basically just this constant iteration, turning over the kind of states that exist and what their organisation is. I think if people are interested, they should go away and google that. I think it’s maybe an underrated aspect of big-picture history.
Vitalik Buterin: Yeah, it absolutely is. Didn’t you also interview someone?
Rob Wiblin: I did. Ian Morris.
Vitalik Buterin: Long history, right? Yeah, that one was fun.
Rob Wiblin: Yeah. We love Ian and his work.
How Vitalik communicates among different camps [02:25:44]
Rob Wiblin: Heading toward the end of the conversation, something I want to talk about is the communication aspect of writing the essay. We're in the middle of a veritable civil war, I guess, between people who just six or 12 months before all this were all chummy and friends and hanging out at the same parties.
Your essay managed to bring people together somehow. What is it about the way you think you wrote it? Did you spend a lot of time thinking through, “How am I going to reach a lot of different audiences with this message that I think that they kind of already agree with?”
Vitalik Buterin: Yeah, that one definitely took a long time to write. I mean, it was the longest single thing that I’ve ever written, unless you include the proof of stake and sharding FAQs. But those were written across like five iterations.
Rob Wiblin: What sorts of choices did you make?
Vitalik Buterin: I got to the realisation that it probably made sense to write something like this definitely after… One of the triggers was the whole OpenAI drama, and then there were some other triggers a bit before that. They just made me realise that, first of all, it's time — even just for myself — to make sure that my own views are in reflective equilibrium; that I don't have one set of beliefs about crypto with one set of hidden assumptions and one set of beliefs about AI safety with incompatible hidden assumptions; and to try to create a more coherent picture.
And I felt this intuition that that exact thing is something that a lot of other people are missing. I’d been talking to a bunch of Ethereum people earlier in the year, and a lot of them had this. I could definitely feel like a lot of people were thinking things like, “We’re working on crypto, but then AI is just doing this whole totally crazy thing. And how do I even think about what is the point of what we’re working on?”
This is also something that I wrote about more directly a bit later. That basically, Bitcoin has these founding memes that are very closely related to the 2008 financial crisis. The Bitcoin Genesis Block has that famous newspaper heading, “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks.” All of these ideas of End the Fed and fiat currency is fundamentally unstable; banks are fundamentally unstable. You know, this is bad and we need to create a non-governmental alternative.
And all of those memes are very finance heavy. If you fast forward to 2023, the thing that I pointed out in one of my more recent posts, this is the one titled “The end of my childhood,” is basically that the kinds of things that people care about in 2023 are a lot less finance-oriented. There’s still a lot of finance. But if you think about concerns around AI or if you think about wars, don’t tell me that any of these wars that are, really unfortunately, going on right now would not have happened if —
Rob Wiblin: We had sound money.
Vitalik Buterin: Exactly. Yeah. I mean, Bitcoin people try to make the case, and I think it's just batshit insane. Because if you just do the math, no: currency seigniorage is at most like 20% of government revenue. Sorry, you're still gonna have your wars. I'm basically trying to really think about what's happened in the past 15 years, and update to that. And the need to really put a lot of those updated perspectives together is something that I felt a lot of people had, and it's something that I was really trying to serve — both for myself and for other people. And I definitely felt this desire to not see the world blown up, and this strong desire to avoid world totalitarianism — motivations that I felt were shared among a lot of different people. So I definitely tried to make it clear that those are motivations that I had.
Rob Wiblin: Yeah, you made clear to everyone that you shared their goals, and you pointed out upfront — rather than as some concession at the end — the common ground that you had with people.
Let me put to you another idea, which is: at the end of the day, people want to be liked and respected. And it is just the case, and came across in the piece, that all of the different camps in this conversation are people who you like and respect. And by putting that up front, everyone was then willing to listen to you, because they don’t feel like they’re being shat on. And I think I’m no different here. If someone opens an essay with, “And here’s why 80,000 Hours is rubbish,” then I’m a lot less open to hearing out their other points.
Vitalik Buterin: Yeah, absolutely. I definitely intentionally tried to present this as a middle path forward. And the style of writing was intentionally paralleling my thought process, which is a fairly new style for me in my writing. Speaking in the first person is not a thing my blog posts did five years ago, but it's a thing my blog posts do more of today. Trying really hard to see the good in people.
And yeah, it does feel like my own role in a lot of this stuff has somehow converged into being this weird kind of diplomat, which is fascinating. And I'm not representing a country. Am I even representing a blockchain? Well, not even necessarily that either.
Rob Wiblin: Is it a role you enjoy? It’s a role you’re good at, I think.
Vitalik Buterin: It definitely has its interesting sides. I definitely get to talk to and meet interesting people. The downside is that it does come with a lot of feeling ideologically homeless, because you get frustrated about one thing one day, and then you get frustrated about another thing another day. And then there's the one group of people that you always thought were on your team, but then — wait — you actually disagree with them about one thing, too.
Rob Wiblin: I would think that the biggest difficulty would be that you’d have to bite your tongue a lot, because you don’t want to alienate any other groups that you’d like to be able to speak to and bring along in future.
Vitalik Buterin: Yeah. One of the things that I have tried to do is find ways to give myself space to actually say what I think. And I feel like I’ve actually managed to accomplish that in a lot of cases. I definitely don’t want to be the sort of person that just completely avoids criticising a bunch of powerful actors just because I don’t want to alienate specific people. I’ve definitely criticised a lot of powerful actors, both in my posts and even in this conversation. But sometimes it’s probably better to kind of go up one step of abstraction, because if you go up one step of abstraction, you complain about categories rather than people. Rather than declaring people to be your enemy, you’re giving people a chance to improve. And I think there’s a lot to that.
Rob Wiblin: Yeah. Thinking about this for my own case, I think it’s not the case that I often adopt the same tone that you do of saying that everyone is right about a whole bunch of stuff, and I like and respect them. I think it’s uncommon — despite presumably, and I think understandably, being very effective, it’s not something that people are naturally inclined to do. It’s challenging and requires a lot of restraint, and not just diving into objecting to what people are saying from the outset.
Vitalik Buterin: Right. And there definitely is a good way and a bad way of doing it. There is definitely such a thing as both-sides-ism that just ends up being totally counterproductive. Yeah, it’s an art.
Rob Wiblin: I think something that I tend to do is not to say, “I disagree with X specifically about a given point.” I usually say, “Some people think something along the lines of Y, and here's why I disagree with that.” Do you think that's a good idea? Some people might think it's a bit duplicitous, or that it's not being as direct or frank as you could be, in a way.
Vitalik Buterin: I think there’s a lot of benefit to that.
Rob Wiblin: Just because you don’t alienate people as much.
Vitalik Buterin: Right. The other thing, of course, if you want, you can do retroactively: if people ever ask, why don’t you criticise these people? You’ve got your receipts and you can point to them.
Rob Wiblin: I hadn’t thought of that benefit, but yeah, maybe I should stick that on the list.
Blockchain applications with social impact [02:34:37]
Rob Wiblin: Let’s talk about these decentralised mechanisms and blockchain for a minute. I think when we last spoke about this back in 2019, I said that I guess I first learned about Bitcoin and that whole cluster of technology back in 2013, but I had consistently been a little bit disappointed or underwhelmed by the practical applications that had appeared. There was stuff in finance about remittances, stuff maybe about insurance or prediction markets, but in terms of the real economy, the physical economy, it seemed like there hadn’t been that many applications. And maybe it just hadn’t had the social impact that I had originally anticipated that it might.
You said back then that “a lot of the people who are kind of famously bearish on blockchains aren't following the space as it's going to be in five years and all the newer developments that have been happening there. And I think there really are a lot of things coming down the pipeline that can really help to solve a lot of those problems.” And it is five years later, so I can ask the question. I guess I don't follow things super closely. So is there stuff going on that I should be really excited about?
Vitalik Buterin: I think from a technology perspective, the big things that have happened over the last five years are basically that scaling is much closer to actually being solved.
Rob Wiblin: This is being able to handle many more transactions, or much more stuff on the chain?
Vitalik Buterin: Exactly. I think the big thing that made blockchains completely unviable for like everything that’s not $3 million monkeys back in the 2019, 2020, 2021 era is basically that transaction fees were super high. And now we have these layer 2 protocols. And actually, a week after this recording, there’s going to be the Dencun hard fork, which is going to enable this upgrade called proto-danksharding that basically adds to Ethereum a bunch more data space that some of these layer 2 protocols could use, which basically increases their scalability and makes them much cheaper. So scalability, a lot of progress there.
Then the other big thing is zero-knowledge proofs, especially what we call ZK-SNARKs, that basically give you the ability to run arbitrary programs on data that you keep private, and then be able to publish only a proof of a claim that you care about. And that claim can be verified much faster than the original computation, and the original computation is kept private. And I’ve talked about ZK-SNARKs as being the transformers of cryptography, basically because they’re this super powerful general purpose technology that’s just going to replace and wash away all kinds of application-specific work that people have tried to do to solve specific problems for decades. I feel like they kind of have done that in a lot of spaces.
And admittedly, we're definitely not at the stage of having large-scale stuff, but we're definitely at the stage of having demos. One example of this is this application called Zupass that got developed over the past year, and it lets you prove that you are a member of a group. So one of the proofs that I can make is a proof that I am a member of the set of people who are able to access the coworking space as an attendee of Devconnect, which was our annual conference — and that proof does not reveal which member I am.
So it’s like one of these, one per person, without revealing anything else kind of gadgets. And there’s an in-person version, where you can make a QR code that is a proof that you can verify, and then there also is an online version where you can use this to sign into things. And there are kind of decentralised Twitters, where you can only participate and vote if you’re within this group, and there’s a bunch of various community forums. So there’s stuff at the level of demos that’s definitely happening with zero-knowledge proofs.
And the reason why I think I’m very bullish about that space is because, for me, if you look at it from a blockchain perspective, blockchains are a technology that gives you all these guarantees about authenticity, censorship resistance, global openness, participation — at the expense of two very important things, which are privacy and security. What are the two things that zero-knowledge proofs give you? They give you privacy and security. So they’re like a perfect complement in a way.
And the other way that I think about this is from a narrative perspective. When you hear the non-technical big shots, especially in the last decade, talk about what benefits blockchains are supposed to bring to society, they talk about it like the trust machine that will solve people's trust problems, right? And this is at a very airy-fairy kind of narrative level — language that I'm sure you've heard countless times. But then if you start thinking about the question of what the specific trust problems that people have actually are, a lot of the time people are not literally afraid that, like, Google is gonna go in and edit your spreadsheet. But they are afraid about privacy. There's especially a lot of concern around privacy in Europe, for example. And that's something where zero-knowledge proofs actually can help quite a lot.
I think if I had to give one example of an application that’s not even prototype stage, but that’s actually kind of widely used, that’s not financial, I would probably say the decentralised social media space — and especially Farcaster. The way that Farcaster works is that it is blockchain-based, in the sense that you have… Actually, there’s two components. There’s a kind of lower-security, higher-throughput blockchain, which is the place where people dump all their messages, all of the actual messages, basically tweets. And then there’s the higher-security layer that handles accounts and account recovery and usernames, and that’s a layer 2 on top of Ethereum.
And then there are multiple clients — anyone can build a client that accesses Farcaster. So Warpcast is the main one — that's the one built by the Farcaster company — but there is another one called flink. And there's an API, you can run a node, you can make your own. So it actually does the thing where you can have one client that makes the content look like a Twitter, and another client that makes the content look like a Reddit.
And then if you wanted to join in as a developer, because you decided that Warpcast has become evil and you want to replace it, then you don’t have to fight for the network effect from scratch. This is supposed to be the big headline benefit of openness, right? That you create an open protocol, everyone builds on the open protocol. So if you want to build an alternative implementation, you don’t have to start from scratch; you can just benefit from all of this existing infrastructure. It’s all decentralised, it’s not dependent on any specific company. It actually works and people actually use it. There’s lots of people that go and post stuff and they’re clearly posting there because they enjoy it and not just because of crypto idealism.
Rob Wiblin: This is something where you put your tweets on the blockchain and then people can design interfaces that take those tweets and then present them in all kinds of different ways. I guess once everyone was on there, then you wouldn’t have the problem of the network effects because you could just design a different front end to access the information. But I guess it faces the issue that now it’s hard to get people to switch because they’re already locked into X or whatever it might be.
Vitalik Buterin: I mean, the interesting thing about X, of course, is that Elon has definitely done a lot to give people reasons to want to look for alternatives. And there's a bunch of alternatives that people have tried. I've ended up exploring a whole bunch of them. I've spent a bunch of time on Mastodon, spent a bunch on Bluesky. The one I haven't properly explored yet is Threads. And I've been told that Threads is actually pretty successful, though that's because it's got the Zuck network effect — so, you know, you've got King Kong to fight Godzilla.
But I think one of the reasons why Farcaster has been more successful than all of the others is basically because — I personally feel — a lot of the alt Twitters have this kind of oppositional culture, where they're motivated by —
Rob Wiblin: They’re full of people who are angry about Twitter.
Vitalik Buterin: Exactly. Where the defining ideology is that Elon — and things that Sentence2Vec would give high cosine similarity with Elon, to put it in tech terms — are bad. Things that have similar vibes — in less technical terms that mean the same thing — are bad. And the thing that I've personally learned is that these oppositional cultures are unhealthy. I think that's true regardless of whether or not the thing that they're opposing is good or bad.
The example that I have from my own experience is the whole Bitcoin blocksize civil war that led to the split between Bitcoin and Bitcoin Cash. On that whole issue, I always favoured the Bitcoin Cash side. I believed, and still believe, that big blocks would have been the saner direction for Bitcoin to go, and that Bitcoin just ended up totally gimping itself by taking this kind of convoluted soft fork approach. But at the same time, it's also true that when Bitcoin Cash split off to do the big block thing, that community was in many ways an angry and unpleasant one.
Rob Wiblin: It was full of people who split off because they were angry about how this went.
Vitalik Buterin: Exactly. And then Craig Wright was able to basically come in. So Craig Wright: think of him as being like kind of a Donald Trump figure in some ways. He is a figure that just comes in and is able to plug into a lot of inchoate resentments that people have and turn that into a big movement.
Rob Wiblin: I’ve heard this as an explanation of why the New Atheist sceptics movement struggled after a while. It’s just hard to build a thriving, fun community around “people are wrong and something doesn’t exist.” This might be a slight warning sign for the Pause AI folks that, inasmuch as you’re building a community just around opposing people doing something, that might lead to an unhealthy frame of mind, relative to having some positive agenda that you want to build.
Vitalik Buterin: Yeah, yeah. I think that’s true. Obviously, I imagine from their side they probably want to have a big tent and tie into a lot of people’s various discontents about AI. And in the short term, it’s easier to agree on what you’re against than it is to agree on what you’re for, right? There’s definitely lots of evidence that eventually the time comes to talk about what you’re for — and that’s where the big schisms happen.
Rob Wiblin: Yeah. Just coming back to blockchain, crypto, and how much impact it’s had, I guess I’m in an interesting situation where I’m open to the idea that it could be really important or really impactful, and maybe just the time when it’s going to affect the real economy hasn’t come quite yet. We just have to wait for the technology to advance more, and for people to figure out the applications. But I suppose after many years of seeing… It’s cool that it could be used to kind of replace X or do a different front end, but I think it still kind of feels like it’s falling short of the dreams that people have. The people involved in RadicalxChange, I think, want to see these decentralised mechanisms applied through the economy, through politics everywhere. And the uptake has not been that great.
Vitalik Buterin: This is true. I think if you wanted to make a bold case for crypto as a generator of ideas that have massively changed society, there are things you can point to that are formally outside of crypto but are plausibly very inspired by it. One of them is the recent Reddit IPO, and how they've announced that they want to let active contributors to Reddit — people who have a very strong history of being moderators, very active posters, and things like that — participate in the IPO at the same rate as institutional investors. That's something that is amazing, and achieves all these beautiful dreams of democratic ownership. And there's a strong case that that's inspired by the existence of crypto.
Then, even with Community Notes, you can argue that there are very similar ideals involved. I think there definitely is a kind of medium crypto-pessimist case. You could argue that crypto ends up being this idealism engine that prototypes a whole bunch of super interesting things in both mathematical tech and social tech — but then, whenever those things go mainstream, they get mainstreamed in a more boring form that makes a lot of compromises to legacy stuff.
But then at the same time, there's this other half of crypto, which is basically, you know, dog coins and people making money off of dog coins. And when that happens, they pay transaction fees, and the transaction fees do also fund the zero-knowledge proof researchers. I mean, that's a fair case. And this is definitely one of those “we shall see” things. Obviously it's easy to keep saying “we shall see” and keep extending the deadline — that's what all of the people predicting hyperinflation have been doing for the last 15 years. In my defence, I said we would switch to proof of stake. We were incredibly late on the switch to proof of stake, but we have switched to proof of stake.
Rob Wiblin: Well yeah, massive credit to you. I was going to say that you mentioned that back in 2019 as something you were really excited about, and basically it’s just completely worked. The energy consumption is down 99.99%.
Vitalik Buterin: Exactly. And then I also talked to Dan a lot about sharding, and scalability, and all these technologies to solve scalability. And we have Dencun coming a week after this recording, and that's going to be probably a really key step on the way to solving scaling. And we have these layer 2 protocols. So there definitely are individual pieces that are moving forward. And my own kind of… well, I don't want to say “Eye of Sauron,” because then I'd be comparing myself to the evil guy, but you know what I mean — my focus is definitely switching away from core protocol stuff and more toward these ideas of, let's actually make the user-level application space work.
Rob Wiblin: I guess we also have the example of AI, which was something of a perennial disappointment: constantly underwhelming until suddenly it seems like it’s not, and you hit some threshold at which it’s really useful. Maybe the same story could play out there.
Vitalik Buterin: Right. Yeah, very possibly. Yeah.
In-person effective altruism vs online effective altruism [02:50:01]
Rob Wiblin: We’re out of time, but I guess the final question is: In the past, you’ve been very positive about effective altruism. Generally you’re seen as a supporter of it. It’s definitely taken its knocks over the last couple of years for various different reasons that people will be familiar with. Where are you today? How do you feel about effective altruism as a project or as a group of people?
Vitalik Buterin: Well, it’s interesting because the set of ideas and the set of people are two very different things.
Rob Wiblin: We might have very different views of one than the other.
Vitalik Buterin: Exactly, yeah. Let’s see, what do I think about the set of ideas? One of the things that I noticed is that I’ve always had very positive views of effective altruism, and I’ve been very willing to defend it against its critics online. And one of the things I found in a couple of cases where that ended up digging deeper into the conversations is that there’s an online version of effective altruism that I absorbed by reading LessWrong and Slate Star Codex and GiveWell, and then there’s an in-person version of effective altruism — and a lot of the time when people are polarised against effective altruism, they’re actually polarised against the much deeper in-person thing.
Rob Wiblin: Interesting. How are they different? Actually, I’m not sure anymore.
Vitalik Buterin: One example is, if you think about the SBF situation, for example, Eliezer Yudkowsky and Scott Alexander both have their receipts in the sense that they had writings that explicitly caution against assuming that you’re correct and taking all kinds of actions that violate conventional deontological ethics because you’re convinced that you’re correct. That’s the thing that Scott Alexander has very explicitly written about, and the importance of thinking at the meta level and thinking about principles as opposed to the object level and the “You are right, therefore you’re entitled to…”
Rob Wiblin: Break the rules.
Vitalik Buterin: Yeah, exactly. Break all of these rules. And that’s the sort of stuff that I definitely really deeply absorbed.
One other example of that is I remember the vibe 10 years ago basically being that you should not, as an EA, participate in politics — because politics is fundamentally a zero-sum game. And what it has is like 20,000 people that are all convinced that their side is right, but there’s 10,000 people that are pro-X and 10,000 people that are anti-X — and so from an outside view, they’re just making a bunch of noise to cancel each other out, and they’re just generating a bunch of arguments and bad vibes as a byproduct. So basically fully staying out of controversial politics and just shutting up and donating bednets is the good thing.
But then if you look at SBF's actions, he massively invested in all kinds of political donations, and from what we can tell broke a whole bunch of rules in doing that. And that style of effective altruism is definitely not the style of effective altruism that I imbibed. I remember the criticism being that effective altruists were not doing enough on systemic change and were just focusing on donating the bednets. But then you have SBF, who realistically was doing systemic change, but doing it in this unilateral and awful way. And it feels like the thing SBF did, the thing that got criticised, is a completely different class of thing from the thing that I personally imbibed.
Another example of this is the super focus on AI. That's the thing that's definitely stronger in in-person Berkeley EA circles than it is in internet EA circles. Because GiveWell still exists, right?
Rob Wiblin: It’s actually enormous.
Vitalik Buterin: Yeah. And when I got the Shiba tokens in 2021, I fully identified as EA then, and I was fully on board with defending the EAs against all of the various Twitter criticism. But at the same time, if you look at where I gave those donations, it was just a pretty broad spray across a bunch of things — the largest share of which basically had to do with global public health. And that’s a very internet EA take, but definitely not a Berkeley take.
So yeah, there’s these two different versions. And I guess I definitely still believe the milder internet version. I’m sure you remember my blog post about concave and convex dispositions, and how I identify with the concave side.
A lot of the time, my philosophy toward life is to kind of half-ass everything, right? Like, my philosophy toward diet is like, some people say that keto is good. Fine, I’m gonna not drink sugar anymore and I’m going to reduce my carbs. Some people say plant-based is good. OK, I’m gonna eat more veggies. Some people say intermittent fasting is good. OK, I’m gonna try to either skip breakfast or skip dinner. Some people say caloric restriction is good. Well, that one I unfortunately can’t do because my BMI for a long time was in the underweight range. I think it’s finally at the bottom end of the normal range. And some people say exercise is good. OK, I do my running, but I also don’t run every day. I have a personal floor of one 20k run a month, and then I do what I can beyond that.
So basically like half-ass every self-improvement philosophy that seems sane at the same time is like my approach. And my approach toward charity is also a kind of half-ass every charity philosophy at the same time. You know, this is just the way that I’ve operationally just always approached the world.
Rob Wiblin: Moderation in all things.
Vitalik Buterin: Exactly.
Rob Wiblin: I think that’s from a Scott Alexander blog, right?
Vitalik Buterin: Well, yeah. He’s got the even more fun blog post, where the top level is moderation versus extremism, and then the second level is are you moderate or extreme on that divide? And then you keep getting even further and you have weird things like you have gods with names. Like, I think [inaudible] was the name of one of the gods, which is chosen because if you write it in capital letters, it has rotational symmetry. So that was fun.
But I guess I believe that. I also still totally believe the core effective altruist ideas, even the basic stuff like scope sensitivity: scale is important. A problem that affects a billion people is like 1,000 times a bigger deal than a problem that affects a million people. And the difference between a billion and a million is like the difference between a tiny light and the bright sun.
At the same time, I definitely have this kind of old rationalist — I guess you might say Scott Alexander / Yudkowskian — mindset of: remember the meta layer, and don't just act as though you're correct. Act in ways that you would find acceptable if the opposite team were acting the same way. And that's a thing that's definitely informed my thinking over the years. Those ideas have always been part of my thinking, and I feel like they've stood the test of time. I feel like if you look at either the SBF situation or probably even the OpenAI situation, those are not examples of people acting in ways where they'd be comfortable with their worst enemies acting the same way.
The other way I think about it is that there are two regimes. One regime is the regime where basically you can only do good — and I think bednets are one of those. There are a couple of people who argue: what if they get wasted, or what if they pollute the rivers? But realistically, I think that's generally understood to be a weak criticism. Yeah, it was interesting seeing people who are normally in favour of e/acc-ing everything not being e/acc on the bednets.
But the other regime is the one where it's so easy to accidentally take actions that cause harm, and where it's hard to even tell whether the total impact of what you're going to do is on the right side of zero. And there are totally different moralities that apply there.
One example of that is, in the “you can only help” regime, you want to just go off to the far-off regions and help the poorest people — because that’s where you can benefit the most people for the least resources. But in this regime where you can easily cause harm, then it’s like, well, if you go into this faraway region where you don’t understand the local context, you’re more likely to actually have a result that’s on the wrong side of zero. So if you kind of follow the time-worn conservative ideals of focusing on your family and country, then you can see the wisdom of that: if you’re super aware of the context, your actions are more likely to have an impact on the right side of zero, if that’s the only thing that matters. And there’s wisdom in knowing which regime you’re in and kind of adjusting your mentality appropriately.
But I guess it is very easy to take a lot of those ideas too far. And I definitely caution against that — even AI safety people assuming that, because they're right, they have the right to just go and break the glass and do things that they would not accept anyone else doing under similar circumstances.
Rob Wiblin: Yeah. Hard agree. I guess we’re hopefully all getting wiser as we get older, one massive screwup at a time. My guest today has been Vitalik Buterin. Thanks so much for coming on The 80,000 Hours Podcast, Vitalik.
Vitalik Buterin: Thank you so much, Robert.
Rob’s outro [03:01:00]
Rob Wiblin: As I mentioned in the intro, we’re hiring for two new senior roles, a head of video and head of marketing. You can learn more about both at 80000hours.org/latest.
These roles would probably be done in our offices in central London, but we are open to remote candidates and can support UK visa applications too. The salary would vary depending on seniority, but someone with five years of relevant experience would be paid approximately £80,000.
The first of these would be someone in charge of setting up a new video product for 80,000 Hours. People are spending a larger and larger fraction of their time online watching videos on video-specific platforms, and we want to explain our ideas there in a compelling way that can reach people who care. That video programme could take a range of forms, including 15-minute direct-to-camera vlogs, many one-minute videos, 10-minute explainers, or lengthy video essays. The best format would be something for this head of video to figure out.
We’re also looking for a new head of marketing to lead our efforts to reach our target audience at scale by setting and executing on a strategy, managing and building a team, and deploying our yearly budget of $3 million. We currently run sponsorships on major podcasts and YouTube channels, as well as targeted ads on a range of social media platforms, which has gotten hundreds of thousands of new subscribers onto our email newsletter. We also mail out a copy of one of our books about high-impact career choice every eight minutes. So certainly the potential to reach many people if you do that job well.
Applications will close in late August, so please don’t delay if you’d like to apply.
And just to repeat what I mentioned in the intro about Entrepreneur First and their def/acc startup incubation programme: you have a limited time to get admitted to their incubation programme to build a business around speeding up and delivering a technology that enhances our ability to defend ourselves against risk and aggression. You don’t need to have any idea what that technology would be at that point; you just need the energy and hustle to be able to start a new technology business.
You can learn more and apply at joinef.com/80k. I haven’t been through the whole flow myself, but it looks like applying is pretty straightforward.
The programme is also explained in a post on their blog called “Introducing def/acc at EF.”
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.