#191 – Carl Shulman on the economy and national security after AGI (Part 1)
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Cold open [00:00:00]
- 3.2 Rob's intro [00:01:00]
- 3.3 The interview begins [00:04:43]
- 3.4 Transitioning to a world where AI systems do almost all the work [00:05:20]
- 3.5 Economics after an AI explosion [00:14:24]
- 3.6 Objection: Shouldn't we be seeing economic growth rates increasing today? [00:59:11]
- 3.7 Objection: Speed of doubling time [01:07:32]
- 3.8 Objection: Declining returns to increases in intelligence? [01:11:58]
- 3.9 Objection: Physical transformation of the environment [01:17:37]
- 3.10 Objection: Should we expect an increased demand for safety and security? [01:29:13]
- 3.11 Objection: "This sounds completely whack" [01:36:09]
- 3.12 Income and wealth distribution [01:48:01]
- 3.13 Economists and the intelligence explosion [02:13:30]
- 3.14 Baumol effect arguments [02:19:11]
- 3.15 Denying that robots can exist [02:27:17]
- 3.16 Semiconductor manufacturing [02:32:06]
- 3.17 Classic economic growth models [02:36:10]
- 3.18 Robot nannies [02:48:25]
- 3.19 Slow integration of decision-making and authority power [02:57:38]
- 3.20 Economists' mistaken heuristics [03:01:06]
- 3.21 Moral status of AIs [03:11:44]
- 3.22 Rob's outro [04:11:46]
- 4 Learn more
- 5 Related episodes
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
Many people entertain that hypothetical in passing, but perhaps nobody has followed through and considered all its implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.
Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It’s a world where the technical challenges around control of robots are rapidly overcome, leading to strong, fast, precise, and tireless robot workers able to accomplish any physical work the economy requires, and a rush to build billions of them and cash in.
It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is now driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some can do so in days), a doubling every few months could be a conservative estimate.
It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.
And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — in this case, the toughest to engineer your way out of is the Earth’s ability to release waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth radiates into space, then the planet will rapidly heat up and become uninhabitable for humans and other animals.
This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?
In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
- If we’re heading towards the above, how come economic growth remains slow now and isn’t really increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn’t this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Highlights
Robot nannies
Carl Shulman: So I think maybe it was Tim Berners-Lee gave an example saying there will never be robot nannies. No one would ever want to have a robot take care of their kids. And I think if you actually work through the hypothetical of a mature robotic and AI technology, that winds up looking pretty questionable.
Think about what do people want out of a nanny? So one thing they might want is just availability. It’s better to have round-the-clock care and stimulation available for a child. And in education, one of the best measured real ways to improve educational performance is individual tutoring instead of large classrooms. So having continuous availability of individual attention is good for a child’s development.
And then we know there are differences in how well people perform as teachers and educators and in getting along with children. If you think of the very best teacher in the entire world, the very best nanny in the entire world today, that’s significantly preferable to the typical outcome, quite a bit, and then the performance of the AI robotic system is going to be better on that front. They’re wittier, they’re funnier, they understand the kid much better. Their thoughts and practices are informed by data from working with millions of other children. It’s super capable.
They’re never going to harm or abuse the child; they’re not going to kind of get lazy when the parents are out of sight. The parents can set criteria about what they’re optimising. So things like managing risks of danger, the child’s learning, the child’s satisfaction, how the nanny interacts with the relationship between child and parent. So you tweak a parameter to try and manage the degree to which the child winds up bonding with the nanny rather than the parent. And then the robot nanny optimises over all of these features very well, very determinedly, and just delivers everything superbly — while also providing fabulous medical care in the event of an emergency and any physical labour as needed.
And just the amount you can buy. If you want to have 24/7 service for each child, then that’s just something you can’t provide in an economy of humans, because one human cannot work 24/7 taking care of someone else’s kids. At the least, you need a team of people who can sub off from each other, and that means that’s going to interfere with the relationship and the knowledge sharing and whatnot. You’re going to have confidentiality issues. So the AI or robot can forget information that is confidential. A human can’t do that.
Anyway, we stack all these things with a mind that is super charismatic, super witty, that can have probably a humanoid body. That’s something that technologically does not exist now, but in this world, with demand for it, I expect would be met.
So basically, take most of the examples that I see given of “here is the task or job where human performance is just going to win because of human tastes and preferences”: when I look at the stack of all of these advantages and the costs, the idea that the world winds up dominated by nostalgic human labour looks doubtful. If incomes are relatively equal, then that means for every hour of these services you buy from someone else, you would work a similar amount to get it, and it just seems that isn’t true. Like, most people would not want to spend all day and all night working as a nanny for someone else’s child —
Rob Wiblin: — doing a terrible job —
Carl Shulman: — in order to get a comparatively terrible job done on their own kids by a human, instead of a being that is just wildly more suitable to it and available in exchange for almost nothing by comparison.
Key transformations after an AI capabilities explosion
Carl Shulman: Right now, human energy consumption is on the scale of 10¹³ watts. That is, it’s in the thousands of watts per human. Solar energy hitting the top of the atmosphere, not all of it gets down, but is in the vicinity of 2 x 10¹⁷ — so 10,000 times or thousands of times our current world energy consumption reaches the Earth. If you are harvesting 5% or 10% of that successfully, with very high-efficiency solar panels or otherwise coming close to the amount of energy use that can be sustained on the Earth, that’s enough for a million watts per person. And a human brain uses 20 watts, a human body uses 100 watts.
So if we consider robotics technology and computer technology that are at least as good as biology — where we have physical examples that this is possible because it’s been done — that budget means you could have, per person, an energy budget that can, at any given time, sustain 50,000 human brain equivalents of AI cognitive labour, 10,000 human-scale robots. And then if you consider smaller ones, say, like insect-sized robots or small AI models, like current systems — including much smarter small models distilled from the gleanings of large models, and with much more advanced algorithms — on a per-person basis, that’s pretty extreme.
And then when you consider the cognitive labour being produced by those AIs, it gets more dramatic. So the capabilities of one human brain equivalent worth of compute are going to be set by what the best software in the world is. So you shouldn’t think of what average human productivity is today; think about, for a start, for a lower bound, the most skilful and productive humans. In the United States, there are millions of people who earn over $100 per hour in wages. Many of them are in management, others are in professions and STEM fields: software engineers, lawyers, doctors. And there’s even some who earn more than $1,000 an hour: new researchers at OpenAI, high-level executives, financiers.
An AI model running on brain-like efficiency computers is going to be working all the time. It does not sleep, it does not take time off, it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment, at $100 per hour, you’re getting close to a million dollars of wages equivalent. If you were to buy that amount of skilled labour today that you would get from these 50,000 human brain equivalents at the high end of today’s human wages, you’re talking about, per human being, the energy budget on Earth could sustain more than $50 billion worth at today’s prices of skilled cognitive labour. If you consider the high end, the scarcer, more elite, higher compensated labour, then it’s even more.
If we consider an even larger energy budget beyond Earth, there’s more solar energy and heat dissipation capacity in the rest of the solar system: about 2 billion times as much. If that winds up being used, because people keep building solar panels, machines, computers, until you can no longer do it at an affordable enough price and other resources to make it worthwhile, then multiply those numbers before by a millionfold, 100 millionfold, maybe a billionfold, and that’s a lot. If you have 50 trillion human brains’ worth of AI minds at very high productivity, each per human being, or perhaps a mass of robots, like unto trillions upon trillions of human bodies, and dispersed in a variety of sizes and systems. It is a society whose physical and cognitive, industrial and military capabilities are just very, very, very, very large, relative to today.
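For readers who want to check the arithmetic in that highlight, here is a minimal back-of-envelope sketch in Python. The 5% harvest fraction and 8-billion population are illustrative assumptions drawn from the round numbers in the conversation, not precise forecasts, and the outputs land near Carl’s figures after rounding.

```python
# Back-of-envelope version of the energy-budget arithmetic Carl quotes.
# All inputs are round numbers from the conversation, not precise forecasts.

world_energy_use_w = 1e13        # current human energy consumption, ~10^13 W
solar_at_earth_w = 2e17          # sunlight reaching Earth, ~2 x 10^17 W
harvest_fraction = 0.05          # assume ~5% of that is captured
population = 8e9                 # ~8 billion people

watts_per_person = solar_at_earth_w * harvest_fraction / population
print(f"Energy budget per person: {watts_per_person:,.0f} W")      # ~1,250,000 W

brain_w = 20                     # human brain runs on ~20 W
body_w = 100                     # human body runs on ~100 W

brain_equivalents = watts_per_person / brain_w    # ~62,500 (Carl rounds to 50,000)
robot_equivalents = watts_per_person / body_w     # ~12,500 (Carl rounds to 10,000)

# Wage-equivalent value of that cognitive labour at high-end human rates
hours_per_year = 8760
wage_per_hour = 100
annual_wage_equivalent = hours_per_year * wage_per_hour   # ~$876,000 per brain equivalent
total_per_person = brain_equivalents * annual_wage_equivalent
print(f"Wage equivalent per person per year: ${total_per_person:,.0f}")  # ~$55 billion
```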
Objection: Shouldn't we be seeing economic growth rates increasing today?
Rob Wiblin: You might expect an economic transformation like this to happen in a somewhat gradual or continuous way, where in the lead up to this happening, you would see economic growth rates increasing. So you might expect that if we’re going to see a massive transformation in the economy because of AGI in 2030 or 2040, shouldn’t we be seeing economic growth rates today increasing? And shouldn’t we maybe have been seeing them increase for decades as information technology has been advancing and as we’ve been gradually getting closer to this time?
But in reality, over the last 50 years, economic growth rates have been kind of flat or declining. Is that in tension with your story?
Carl Shulman: Yeah, you’re pointing to an important thing. When we double the population of humans in a place, ceteris paribus, we expect the economic output after there’s time for capital adjustments to double or more. So a place like Japan, not very much in the way of natural resources per person, but has a lot of people, economies of scale, advanced technology, high productivity, and can generate enormous wealth. And some places have population densities that are hundreds or thousands of times that of other countries, and a lot of those places are extremely wealthy per capita. By the example of humans, doubling the human labour force really can double or more economic output after capital adjustment.
For computers, that’s not the case. And a lot of this reflects the fact that thus far, computers have been able to do only a small portion of the tasks in the economy. Very early on in the history of computers, they got better than humans at serial, reliable arithmetic calculations, which you could do with an incredibly small amount of computation compared to the human brain, just because we’re really badly set up for multiplying and dividing lots of numbers. And there used to be a job of being a human computer, and I think that there are films about them, and it was a thing. Those jobs have gone away because the difference in performance now is such that you can get the work of millions upon millions of those human computers for basically peanuts.
But even though we now use billions of times as much in the way of that sort of calculation, it doesn’t mean that we get to produce a billion times the wages that were being paid to the human computers at that time, because there were diminishing returns in having more and more arithmetic calculations while other things didn’t keep up. And when we double the human population and capital adjusts, then you’re improving things on all of these fronts. So it’s not that you’re getting a tonne of enhancement of one kind of input, but it’s missing all of the other things that it needs to work with.
And so, as we see progress towards AI that can robustly replace humans, we should expect the share of tasks that computing can do to go up over time, and therefore the increase in revenue to the computer industry, or in economic value-add from computers per doubling of the amount of compute, to go way up. Historically, it’s been more like you double the amount of compute, and then you get maybe one-fifth of a doubling of the revenue of the computer industry. So if we think success at broad automation, human-substituting AI is possible, then we expect that to go up over time from one-fifth to one or beyond.
And then if you ask why would this be? One thing that can help make sense of that is to ask how much compute has the computing industry been providing historically? So I said that now, maybe an H100 that costs tens of thousands of dollars can give computation comparable to the human brain. But that’s after many, many years of Moore’s law, during which the amount of computation you could buy per dollar has gone up by billions of times and more.
So when you say, right now, if we add 10 million H100s to the world each year, then maybe we increase the computation in the world from 8 billion human brains’ worth to 8 billion and 10 million human brains, you’re starting to make a difference in total computation. But it’s pretty small. It’s pretty small, and so it’s only where you’re getting a lot more out of it per computation that you see any economic effect at all.
And going back further, you’re talking about, well, why wasn’t it the case that having twice as many of these computer brains analogous to the brain of an ant or a flukeworm, why wasn’t that doubling the economy? And when you look at it like that, it doesn’t really seem surprising at all.
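As a rough illustration of Carl’s point here, the sketch below compares current chip production to the existing stock of human brains. It assumes, as Carl does for the sake of argument, that one H100 supplies roughly one human brain’s worth of compute and that around 10 million such chips are added per year; both are approximations, not measured facts.

```python
# Why adding today's AI chips barely moves the world's total "brain-equivalent" compute.
# Assumes, as Carl does for the sake of argument, one H100 ~= one human brain of compute.

human_brains = 8e9             # ~8 billion people
h100s_added_per_year = 1e7     # ~10 million H100-class chips per year (rough figure)

fractional_increase = h100s_added_per_year / human_brains
print(f"Increase in world brain-equivalent compute: {fractional_increase:.3%}")  # ~0.125%

# So any economic effect today has to come from getting far more value out of each
# unit of computation than an average human brain does, not from sheer quantity.
```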
Objection: Declining returns to increases in intelligence?
Rob Wiblin: Another line of scepticism is this idea that, sure, we might see big increases in the size of these neural networks and big increases in the amount of effective lifespan or amount of training time that they’re getting — so effectively, they would be much more intelligent in terms of just the specifications of the brains that we’re training — but you’ll see massively declining returns to this increasing intelligence or this increasing brain size or this increasing level of training.
Maybe one way of thinking about that would be to imagine that we were designing AI systems to do forecasting into the future. Now, forecasting tens or hundreds of years into the future is notoriously very challenging, and human beings are not very good at it. You might expect that a brain that’s 100 times the size of the human brain and has much more compute and has been trained on all of the knowledge that humans have ever collected because it’s had millions of years of life expectancy, perhaps it could do a much better job of that.
But how much better a job could it really do, given just how chaotic events in the real world are? Maybe being really intelligent just doesn’t actually buy you the ability to do some of these amazing things, and you do just see substantially declining returns as brains become more capable than humans are.
Carl Shulman: Well, actually, from the arguments that we’ve discussed so far, I haven’t even really availed myself of much that would be impacted by that. So I’ll take weather forecasting. So you can expend exponentially more computing power to go incrementally a few more days into the future for local weather prediction, at the level of “Will there be a storm on this day rather than that day?” And yeah, if we scale up our economy by a trillionfold, maybe we can go add an extra week or so to that sort of short-term weather prediction, because it’s a chaotic system.
But that’s not impacting any of the dynamics that we talked about before. It’s not impacting the dynamic where, say, Japan, with a population many times larger than Singapore, can have a much larger GDP just duplicating and expanding. These same sorts of processes that we’re already seeing give you corresponding expansion of economic, industrial, military output.
And we have, again, the limits of just observing the upper peaks of human potential and then taking even quite narrow extrapolations of just looking at how things vary among humans, say, with differing amounts of education. And when you go from some high school education to a university degree, graduate degree, you can see like a doubling and then a quadrupling of wages. And if you go to a million years of education, surely you’re not going to see 10,000 or 100,000 times the wages from that. But getting 4x or 8x or 16x off of your typical graduate degree holder seems plausible enough.
And we see a lot of data in cases where we can do experiments and see, in things like go or chess, where we’ve looked out to sort of superhuman levels of performance and we can say, yeah, there’s room to gain some. And where you can substitute a bigger, smarter, better trained model evaluated fewer times for using a small model evaluated many times.
But by and large, this argument goes through largely just assuming you can get models to the upper bounds of human capacity that we know is possible. And the duplication argument really is unaffected by that sort of thing: yes, weather prediction is something where you’ll not get a million times better, but you can make a million times as many physical machines, process correspondingly more energy, et cetera.
Objection: Could we really see rates of construction go up a hundredfold or a thousandfold?
Carl Shulman: So the very first thing to say is that that has already happened relative to our ancestors. So there was a time when there were about 10 million humans or relevant hominids hanging around on the Earth, and they had their stone hand axes and whatnot, but very little stuff. Today there’s 8 billion humans with a really enormous amount of stuff being produced. And so if you just say that 1,000 sounds like a lot, well, every numerical measure of the physical production of stuff in our society is like that compared to the past.
And on a per capita basis, does it sound crazy that when you have power plants that support the energy for 10,000 people, that you build one of those per 10,000 people over some period of time? No, because the efforts to create them are also scaling up.
So as for how you can have a larger number if you have a larger population of robot workers and machines and whatnot: I think that’s not something we should be super suspicious of.
There’s a different kind of thing which is drawing from how, in developed countries, there has been a tendency to restrict the building of homes, of factories, of power plants. This is a significant cost. You see, you know, in some very restrictive cities like New York City or San Francisco, the price of housing rises by several times compared to the cost of constructing it because of basically legal bans on local building. And people, especially folk who are immersed in the sort of YIMBY-versus-NIMBY debates and think about all the economic losses from this, that’s very front of mind.
I don’t think this is reason for me not to expect explosive construction of physical stuff in this scenario though, and I’ll explain why. So even today we see, in places like China and Dubai, cities thrown up at incredible rates. There are places where intense construction can be allowed, and there’s more of that construction when the payouts are much higher. And so when permitting building can result in additional revenue that is huge compared to the local government, then they may actually go really out of their way to provide the regulatory situation that will attract investments of an international company. And in the scenarios that we’re talking about, yes, enormous industrial output can be created relatively quickly in a location that chooses to become a regulatory haven.
So the United Arab Emirates built up Dubai, Abu Dhabi and has been trying to expand this non-oil economy by just creating a place for it to happen and providing a favourable environment. And in a situation where you have, say, the United States is holding back from having million-dollar-per-capita incomes or $10-million-per-capita incomes by not allowing this construction, and then the UAE can allow that construction locally and 100x their income, then I think they go ahead and do it. Seeing that sort of thing I’d also expect encourages change in the more restrictive regulatory regimes.
And then AI and such can help on the front of governance. So unlimited cheap lawyers make it easier to navigate horrible paperwork, and unlimited sophisticated AIs to serve as bureaucrats, advisors to politicians, and advisors to voters make it easier to adjust to those things.
But I think the central argument is that some places, by providing the regulatory space for it, can make absolutely enormous profits, potentially gain military dominance — and those are strong pressures to make way for some of this construction to enable it. And even within the scope of existing places that will allow you to make things, that goes very far.
Objection: "This sounds completely whack"
Rob Wiblin: OK, a different reason that some listeners might have for doubting that this is how things are going to play out is maybe not an objection to any kind of specific argument, or a specific objection to some technological question, but just the idea that this is a very cool story, but it sounds completely whack. And you might reasonably expect the future to be more boring and less surprising and less weird than this.
You’ve mentioned already one response that someone could have to this, which is that the present would look completely whack and insane to someone who was brought forward from 500 years ago. So we’ve already seen a crazy transformation through the Industrial Revolution that would have been extremely surprising to many people who existed before the Industrial Revolution. And I guess plausibly to hunter-gatherers, the states of ancient Egypt would look pretty remarkable in terms of the scale of the agriculture, the scale of the government, the sheer number of people and the density and so on. We can imagine that the agricultural revolution shifted things in a way that was quite remarkable and very different than what came before.
Is there any other kind of overall response that someone could give to a listener who’s sceptical on this on grounds that this is just too weird to be likely?
Carl Shulman: So building on some of the things you mentioned. So not only that our post-industrial society is incredibly rich, incredibly populous, incredibly dense, long-lived, and different in many other ways from the days of millions of hunter-gatherers on the Earth, but also, the rate of change is much higher. Things that might previously have been on a thousand-year timescale now happen on the scale of a couple of decades — for, say, a doubling of global economic output. And so there’s a history both of things becoming very different, but also of the rate of change getting a lot faster.
And I know you’ve had Tom Davidson, David Roodman and Ian Morris and others, and some people with critical views discussing this. And so cosmologists among physicists, who have the big picture, actually tend to think more about these kinds of cases. The historians who study big history, global history over very long stretches of time tend to notice this.
So yeah, when you zoom out to the macro scale of history, in some ways it’s quite precedented to have these kinds of changes. And actually it would be surprising to say, “This is the end of the line. No further.” Even when we have the example of biological systems that show the ceilings of performance are much higher than where we’re at, both for replication times, for computing capabilities, and other object-level abilities.
And then you have these very strong arguments from all our models and accounts of growth that can really explain some of why you had the past patterns and past accelerations. They tend to indicate the same thing. Consider just the magnitude of the hammer that is being applied to this situation: it’s going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It’s just a very large change. You should also be surprised if such a large change doesn’t affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society, and so on and so forth.
Income and wealth distribution
Rob Wiblin: One thing we haven’t talked about almost at all is income distribution and wealth distribution in this new world. We’ve kind of been thinking about on average we could support x number of employees for every person, given the amount of energy and given the number of people around now.
Do you want to say anything about how income would end up being distributed in this world? And should I worry that in this post-AI world, humans can’t do useful work, there’s nothing that they can do for any reasonable price that an AI couldn’t do better and more reliably and cheaper, so they wouldn’t be able to earn an income by working? Should I worry that we’ll end up with an underclass of people who haven’t saved any income and are kind of shut out of opportunities to have a prosperous life in this scenario?
Carl Shulman: I’m not worried about that issue of unemployment, meaning people can’t earn wages to support themselves, and indeed have a very high standard of living. Just as a very simple argument: right now governments redistribute a significant percentage of all of the output in their territories, and we’re talking about an expansion of economic output of orders of magnitude. So if total wealth rises a hundredfold, a thousandfold, and you just keep existing levels of redistribution and government spending, which in some places are already 50% of GDP, almost invariably a noticeable percentage of GDP, then just having that level of redistribution continue means people being hundreds of times richer than they are today, on average, on Earth.
And then if you include off-Earth resources going up another millionfold or billionfold, then it is a situation where the equivalent of social security or universal pension plans or universal distribution of that sort, of tax refunds, can give people what now would be billionaire levels of consumption. Whereas at the same time, a lot of old capital goods and old things you might invest in could have their value fall relative to natural resources or the entitlement to those resources once you go through this transition.
So if it’s the case that a human being is a citizen of a state where they have any political influence, or where the people in charge are willing to continue spending even some portion, some modest portion of wealth on distribution to their citizens, then being poor does not seem like the kind of problem that people are facing.
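The redistribution point above is simple multiplication, but it may be easier to see with placeholder numbers; the GDP-per-capita and redistribution-share figures below are rough illustrative assumptions, not Carl’s.

```python
# If output rises ~1,000x and the share redistributed stays roughly constant,
# per-person transfers scale up by the same factor. Placeholder numbers only.

gdp_per_capita_today = 13_000     # rough world GDP per capita in USD (assumption)
redistribution_share = 0.30       # assume ~30% of output is redistributed / publicly spent
growth_factor = 1_000             # the "hundredfold, a thousandfold" upper case

transfer_today = gdp_per_capita_today * redistribution_share
transfer_after = transfer_today * growth_factor
print(f"Per-capita transfers: ~${transfer_today:,.0f}/yr now -> ~${transfer_after:,.0f}/yr")
# ~$3,900/yr -> ~$3,900,000/yr, before counting any off-Earth resources.
```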
You might challenge this on the point that natural resource wealth is unevenly distributed, and that’s true. So at one extreme you have a place like Singapore, I think it’s like 8,000 people per square kilometre. At the other end, so you’re Australian and I’m Canadian and I think they’re at two and three people per square kilometre, something like that — so a difference of more than a thousandfold relative to Singapore in terms of the land resources. So you might think you have inequality there.
But as we discussed, most of the natural resources on Earth are actually not even in the current territory of any sovereign state. They’re in international waters. If heat emission is the limit on energy and materials harvesting on Earth, then that’s a global issue in the way that climate change is a global issue. And so if you wind up with heat emission quotas or credits being distributed to states on the basis of their human population, or relatively evenly, or based on prior economic contribution, or some mix of those things, those would be factors that could lead to a more even distribution on Earth.
And again, if you go off Earth, the magnitude of resources are so large that if space wealth is distributed such that each existing nation-state gets some share of that, or some proportion of it is allocated to individuals, then again, it’s a level of wealth where poverty or hunger or access to medicine is not the kind of issue that seems important.
Articles, books, and other media discussed in the show
Carl’s work:
- First appearance on The 80,000 Hours Podcast: #112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
- Appearances on the Dwarkesh Podcast:
- Reflective Disequilibrium — Carl’s blog
- Propositions concerning digital minds and society (with Nick Bostrom)
- Sharing the world with digital minds (with Nick Bostrom)
- Racing to the precipice: A model of artificial intelligence development (with Stuart Armstrong and Nick Bostrom)
- Carl’s response on LessWrong to Katja Grace’s post, Let’s think about slowing down AI
- Whole brain emulation and the evolution of superorganisms
Artificial sentience:
- The Google engineer who thinks the company’s AI has come to life by Nitasha Tiku
- Artificial Intelligence, Morality, and Sentience (AIMS) Survey — 2023 poll by the Sentience Institute by Janet Pauketat, Ali Ladak, and Jacy Reese Anthis
- Improving the welfare of AIs: A nearcasted proposal by Ryan Greenblatt
- Passion of the Sun Probe — a vignette by Eric Schwitzgebel (a former guest of the show)
- Artificial intelligence: An evangelical statement of principles
AI forecasting:
- FutureSearch
- Forecasting future world events with neural networks by Andy Zou et al.
- Approaching human-level forecasting with language models by Danny Halawi et al.
AI and economic growth:
- Artificial intelligence and economic growth by Philippe Aghion, Benjamin F. Jones, and Charles I. Jones
- Economic growth under transformative AI by Philip Trammell and Anton Korinek
- Explosive growth from AI automation: A review of the arguments by Ege Erdil and Tamay Besiroglu
Other recent AI advances:
- Anthropic’s Constitutional AI: Harmlessness from AI feedback
- AI Control: Improving safety despite intentional subversion by Ryan Greenblatt and others at Redwood Research
- Eureka! NVIDIA research breakthrough puts new spin on robot learning by Angie Lee
- The media very rarely lies by Scott Alexander
Transcript
Table of Contents
- 1 Cold open [00:00:00]
- 2 Rob’s intro [00:01:00]
- 3 The interview begins [00:04:43]
- 4 Transitioning to a world where AI systems do almost all the work [00:05:20]
- 5 Economics after an AI explosion [00:14:24]
- 6 Objection: Shouldn’t we be seeing economic growth rates increasing today? [00:59:11]
- 7 Objection: Speed of doubling time [01:07:32]
- 8 Objection: Declining returns to increases in intelligence? [01:11:58]
- 9 Objection: Physical transformation of the environment [01:17:37]
- 10 Objection: Should we expect an increased demand for safety and security? [01:29:13]
- 11 Objection: “This sounds completely whack” [01:36:09]
- 12 Income and wealth distribution [01:48:01]
- 13 Economists and the intelligence explosion [02:13:30]
- 14 Baumol effect arguments [02:19:11]
- 15 Denying that robots can exist [02:27:17]
- 16 Semiconductor manufacturing [02:32:06]
- 17 Classic economic growth models [02:36:10]
- 18 Robot nannies [02:48:25]
- 19 Slow integration of decision-making and authority power [02:57:38]
- 20 Economists’ mistaken heuristics [03:01:06]
- 21 Moral status of AIs [03:11:44]
- 22 Rob’s outro [04:11:46]
Cold open [00:00:00]
Carl Shulman: An AI model running on brain-like efficiency computers is going to be working all the time. It does not sleep, it does not take time off, it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment, at $100 per hour, you’re getting close to a million dollars of wages equivalent. If you were to buy that amount of skilled labour today that you would get from these 50,000 human brain equivalents at the high end of today’s human wages, you’re talking about, per human being, the energy budget on Earth could sustain more than $50 billion worth at today’s prices of skilled cognitive labour. If you consider the high end, the scarcer, more elite, higher compensated labour, then it’s even more.
Rob’s intro [00:01:00]
Rob Wiblin: Hey listeners, Rob Wiblin here.
In my opinion, in terms of his ability and willingness to think through how different hypothetical technologies might play out in the real world, Carl Shulman stands alone.
Though you might not know him yet, his ideas have been hugely influential in shaping how people in the AI world expect the future to look. Speaking for myself, I don’t think anyone else has left a bigger impression on what I picture in my head when I imagine the future. The events he believes are more likely than not are wild, even for someone like me who is used to entertaining such ideas.
Longtime listeners will recall that we interviewed him about pandemics and other threats to humanity’s future besides AI back in 2021. But here we’ve got six hours on what Carl expects would be the impact of cheap AI that can do everything people can do and more — something he has been reflecting on for about 20 years.
Hour for hour, I feel like I learn more talking to Carl than anyone else I could name.
Many researchers in the major AI companies expect this future of cheap superhuman AI that recursively self-improves to arrive within 15 years, and maybe in the next five. So these are issues society is turning its mind to too slowly in my view.
We’re splitting the episode into two parts to make it more manageable. This first will cover AI and the economy, international conflict, and the moral status of AI minds themselves. The second will cover AI and epistemology, science, culture, and domestic politics.
To offer some more detail, here in part one we first:
- Dive into truly considering the hypothetical of what would naturally happen if we had AIs that could do everything humans can do with their minds, with a similar level of energy efficiency. Not just thinking about something adjacent to that, but concretely envisaging how the economy would function at that point, and what human lifestyles could look like.
- Fleshing out that vision takes about an hour. But at that point we then go through six objections to the picture Carl paints — including why we don’t see growth increasing now, whether any complex system can grow so quickly, whether intelligence is really that useful, practical physical limits to growth, whether humanity might choose to prevent this from happening, and it all just sounding too crazy.
- Then we consider other arguments economists give for rejecting Carl’s vision — including Baumol effects, the lack of robots, policy interference, bottlenecks in transistor manufacturing, and the need for a human touch, whether that’s in childcare or management. Carl explains in each case why he thinks economists’ conventional bottom lines on this topic are mistaken and at times even self-contradictory.
Finally, through all that we’ve been imagining AIs as though they were just tools without their own interests or moral status. But that may not be the case, and so we close by discussing the challenges of maintaining an integrated society of both human and nonhuman intelligences in which both live good lives and neither is exploited.
In this episode we refer to Carl’s last interview on the Dwarkesh Podcast in June 2023, in which he talked about how an intelligence explosion happens, the fastest way to build billions of robots, and a concrete step-by-step account of how an AGI might try to take over the world. That was maybe my favourite podcast episode of last year, so I can certainly recommend going and checking it out if you like what you hear here. There’s not really a natural ordering of what to listen to first; these are all just different pieces of the complex integrated picture of the future Carl has been developing, which I hope he’ll continue to elaborate on in other future interviews.
And now I bring you Carl Shulman, on what the world would look like if we got cheap superhuman AGI.
The interview begins [00:04:43]
Rob Wiblin: Today I’m speaking with Carl Shulman. Carl studied philosophy at the University of Toronto and Harvard, and then law at NYU. He’s an independent researcher who blogs at Reflective Disequilibrium.
While he keeps a low profile, Carl has had as much influence on the conversation about existential risks as anyone. And he’s also just one of the most broadly knowledgeable people that I’m aware of.
In particular, for the purposes of today’s conversation, he has spent more time than almost anyone thinking deeply about the dynamics of a transition to a world in which AI models are doing most or all of the work, and how the government and economy and ordinary life might look after that transition. Thanks for coming back on the podcast, Carl.
Carl Shulman: Thank you, Rob. I’m glad to be back.
Transitioning to a world where AI systems do almost all the work [00:05:20]
Rob Wiblin: I hope to talk about what changes in our government structures might be required in a world with superhuman AI, and how an intelligence explosion affects geopolitics.
But first, you spent a lot of time trying to figure out what’s the most likely way for the world to transition into a situation in which AI systems are doing almost all the work, possibly all of it, and then also kind of picturing how the economy, what it would look like, and how it might actually be functioning after that transition. Why is that a really important thing to do that you’ve thought it’s worth investing a substantial amount of mental energy into?
Carl Shulman: Sure, Rob. So you’ve had a number of guests on discussing the incredible progress in AI and the potential for that to have transformative impacts. One issue that’s pretty interesting is the possibility that humans lose control of our civilisation to the AIs that we produce. Another is that geopolitical balances of power are greatly disrupted, that things like deterrence in the international system and military balances are radically changed. And just any number of issues; those are some of the largest.
And the amount of time that we have for human input into that transition is significantly affected by how fast these feedback processes are. And characterising the strength of that acceleration speaks to the extent to which you may have some parts of the world pull away from others — that a small initial difference in, say, how advanced AI technology is in one alliance of states rather than another translates into huge differences in economic capabilities or military power.
And similarly for controlling AI systems and avoiding a loss of control of human civilisation, the faster those capabilities are moving at the time we get to really powerful systems where control problems could become an issue, the less opportunity there will be for humans to have input, to understand the thing, or for policy responses to work. And so it matters a lot whether you have transitions from AIs accounting for a small portion of economic or scientific activity to the overwhelming majority: if that was 20 years rather than two years, it’s going to make a huge difference for our ability to respond.
Rob Wiblin: What are some of the near-term decisions that we might need to make, or states might need to be thinking about over the next five years that this sort of picture might bear on?
Carl Shulman: Sure. Well, some of the most important, I think, are whether to set up the optionality to take regulatory measures later on. So if automation of AI research means that by the time you have systems with roughly human-like capabilities — without some of the glaring weaknesses and gaps that current AI systems have — if at that point, instead of AI software capabilities doubling on a timescale of like a year, if that has gone down to six months, three months, one month, then you may have quite a difficult time having a regulatory response.
And if you want to do something like, say, set up hardware tracking so that governments can be assured about where GPUs are in the world, so that they have the opportunity to regulate if it’s necessary, in light of all the evidence that they have at the time, that means you have to set up all of the infrastructure and the systems years in advance, let alone the process of political negotiation, movement building, setting up international treaties, working out the kinks of enforcement mechanisms. So if you want the ability to regulate these sorts of things, then it’s important to know to what extent will you be able to put it together quickly when you need it, or will it be going so fast that you need to set things up earlier?
Rob Wiblin: One of the important decisions that could come up relatively soon, or at least as we begin to head into rapid increases in economic growth, is that different countries or different geopolitical blocs might start to feel very worried about the prospect of very rapid economic or technological advances in another bloc, because they would anticipate that this is going to put them at a major strategic disadvantage. And so this could set up quite an unstable situation, in which one bloc moving through this technological revolution ahead of the other could, I guess, trouble the other side to a sufficient degree that they could regard that almost as a hostile act.
And that we should think about how we are going to prevent there being conflict over this issue, because one country having an economy that’s suddenly 10 or 100 times larger than another would potentially give them such a decisive strategic advantage that this, or even the prospect of it, would be highly destabilising.
Carl Shulman: Yeah, I think this is one of the biggest sources of challenge in negotiating the development of advanced AI. Obviously for the risk of AI takeover, that’s something that’s not in the interest of any state. And so to the extent that the problem winds up well understood when it’s really becoming live, you might think everyone will just design things to be safe. If they are not yet like that, then companies will be required to meet those standards before deploying things, so there will not be much problem there; everything should be fine.
And then the big factor I think that undermines that is this pressure and fear which we already see in things like chip nationalism. There are export controls placed by the US and some of its economic partners on imports of advanced AI chips by a number of countries. You see domestic subsidies in both the US and China for localisation of chip industries.
And so there’s already some amount of politicisation of AI development as an international race — and that’s in a situation where so far AI has not meaningfully changed balances of power; it doesn’t thus far affect things like the ability of the great powers to deter one another from attacks, and the magnitude of those effects that I would forecast get a lot larger later on. So it requires more efforts to have those kinds of tensions tamped down, and to get agreements that capture benefits that both sides care about and avoid risks of things they don’t want. And that includes the risk of AI takeover from humans in general.
There’s also just that if the victor of an AI race is uncertain, the different political blocs would each probably dislike finding themselves militarily helpless with respect to other powers more than they would enjoy having that position of power with respect to their rivals. And so potentially, there’s a lot of room for deals that all parties expect to be better going forward, that avoid extreme concentration of power that could lead to global dominance by either rogue AI or one political bloc.
But it requires a lot of work. And making that happen is, I think, more likely to work out if various parties who could have a stake in those things foresee some of these issues, make deals in advance, and then set up the procedures for trust building, verification, enforcement of those deals in advance, rather than a situation where these things are not foreseen, and late in the game, it becomes broadly perceived that there’s a chance for sort of extreme concentration of power and then a mad scramble for it. And I think we should like, on pluralistic grounds and low-hanging fruit gains from trade, to have a situation where there’s more agreement, more negotiation about what happens — rather than a mad rush where some possibly nonhuman actor winds up with unaccountable power.
Economics after an AI explosion [00:14:24]
Rob Wiblin: OK, so what you just said builds on the assumption that we’re going to see very rapid increases in the rate of economic growth in countries that deploy AI. You think we could see the global economy doubling in well under a year, rather than every 15 years as it does today. That’s in part because of this intelligence explosion idea — where progress in AI can be turned back on the problem of making AI better, creating a possibly powerful positive feedback loop.
For many people, those sorts of rates of economic growth of well over 100% per year will sound pretty shocking and require some justification. So I’d like to spend some time now exploring what you think a post-AGI economy would look like and why. What are the key transformations you expect we would observe in the economy after an AI capabilities explosion?
Carl Shulman: Well first, your description talked about AI feeding back into AI, and so that’s an AI capabilities explosion dynamic that seems very important in getting things going. But that innovative effort then applies to other technologies, and in particular, one critical AI technology is robotics. Robotics is heavily limited now by the lack of smart, efficient robot controllers. As I discussed on the Dwarkesh Podcast, with rich robotic controllers and a surfeit of cognitive labour to make industry more efficient, manage human workers and machines, and then make robotic replacements for the human manual labour contributions, you’re quickly moving into the physical world and physical things.
And really the economic growth or economic scale implications of AI come from both channels: one, greatly expedited innovation by having tremendously more and cheaper cognitive labour, but secondly, by eliminating the human bottleneck on the expansion of physical industry. Right now, as you make more factories, if you have fewer workers per factory and fewer workers per tool, the additional capital goods are less valuable. By moving into a situation where all of those inputs of production can be scaled and accumulated, then you can just have your industrial system produce more factories, more robots, more machines, and at some regular doubling time, just expand the amount of physical stuff.
And that doubling time can potentially be pretty short. So in the biological world, we see things like cyanobacteria or duckweed, lily pads, that can actually double their population using energy harvested from the sun in as little as 12 hours in the case of cyanobacteria, and in a couple of days for duckweed. You have fruit flies that, over a matter of weeks, can increase their population a hundredfold. And that includes little biorobotic bodies and compute in the form of their tiny nervous systems.
So it is physically possible to have physical stuff, including computing systems and bodies and manipulators, to double on a very short time scale — such that if you take those doubling rates over a year, that exponential goes to use up the natural resources on the earth, in the solar system. And at that point, you’re not limited by the growth rate of labour and capital, but by these other things that are in more fixed supply, like natural resources, like solar energy.
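To get a feel for what a short economy-wide doubling time would mean, here is a minimal compound-growth sketch; the doubling times are purely illustrative possibilities, not Carl’s specific forecasts.

```python
# Output growth over a year if the whole machine economy copies itself at a fixed
# doubling time. Doubling times are purely illustrative.

def growth_over_year(doubling_time_months: float) -> float:
    """Factor by which output grows in 12 months at a constant doubling time."""
    return 2 ** (12 / doubling_time_months)

for months in [12, 6, 3, 1]:
    print(f"{months:>2}-month doubling time -> ~{growth_over_year(months):,.0f}x output in a year")

# 12-month doubling time ->     ~2x output in a year
#  6-month doubling time ->     ~4x output in a year
#  3-month doubling time ->    ~16x output in a year
#  1-month doubling time -> ~4,096x output in a year
```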
And when we ask what those limits are: you have a robotic industry expand to the point where the reason it can’t expand more — why you can’t build your next robot, your next solar panel, your next factory — is that you have run out of natural resources. So on Earth, you’ve run out of space to put the solar panels. Or the heat dissipation from your power industry is too great: if you kept adding more, it would raise the temperature too much. You’re running out of metals and whatnot. That’s a very high bar.
Right now, human energy consumption is on the scale of 10¹³ watts. That is, it’s in the thousands of watts per human. Solar energy hitting the top of the atmosphere, not all of it gets down, but is in the vicinity of 2 x 10¹⁷ — so 10,000 times or thousands of times our current world energy consumption reaches the Earth. If you are harvesting 5% or 10% of that successfully, with very high-efficiency solar panels or otherwise coming close to the amount of energy use that can be sustained on the Earth, that’s enough for a million watts per person. And a human brain uses 20 watts, a human body uses 100 watts.
So if we consider robotics technology and computer technology that are at least as good as biology — where we have physical examples that this is possible because it’s been done — that budget means you could have, per person, an energy budget that can, at any given time, sustain 50,000 human brain equivalents of AI cognitive labour, 10,000 human-scale robots. And then if you consider smaller ones, say, like insect-sized robots or small AI models, like current systems — including much smarter small models distilled from the gleanings of large models, and with much more advanced algorithms — on a per-person basis, that’s pretty extreme.
And then when you consider the cognitive labour being produced by those AIs, it gets more dramatic. So the capabilities of one human brain equivalent worth of compute are going to be set by what the best software in the world is. So you shouldn’t think of what average human productivity is today; think about, for a start, for a lower bound, the most skilful and productive humans. In the United States, there are millions of people who earn over $100 per hour in wages. Many of them are in management, others are in professions and STEM fields: software engineers, lawyers, doctors. And there’s even some who earn more than $1,000 an hour: new researchers at OpenAI, high-level executives, financiers.
An AI model running on brain-like efficiency computers is going to be working all the time. It does not sleep, it does not take time off, it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment, at $100 per hour, you’re getting close to a million dollars of wages equivalent. If you were to buy that amount of skilled labour today that you would get from these 50,000 human brain equivalents at the high end of today’s human wages, you’re talking about, per human being, the energy budget on Earth could sustain more than $50 billion worth at today’s prices of skilled cognitive labour. If you consider the high end, the scarcer, more elite, higher compensated labour, then it’s even more.
If we consider an even larger energy budget beyond Earth, there’s more solar energy and heat dissipation capacity in the rest of the solar system: about 2 billion times as much. If that winds up being used, because people keep building solar panels, machines, computers, until you can no longer do it at an affordable enough price and other resources to make it worthwhile, then multiply those numbers before by a millionfold, 100 millionfold, maybe a billionfold, and that’s a lot. If you have 50 trillion human brains’ worth of AI minds at very high productivity, each per human being, or perhaps a mass of robots, like unto trillions upon trillions of human bodies, and dispersed in a variety of sizes and systems. It is a society whose physical and cognitive, industrial and military capabilities are just very, very, very, very large, relative to today.
Rob Wiblin: So there’s a lot there. Let’s unpack that a little bit, bit by bit. So the first thing that you were talking about was the rate of growth and the rate of replication in the economy. Currently the global economy grows by about 5% a year. Why can’t it grow a whole lot faster than that?
Well, one thing is that it would be bottlenecked by the human population, because the human population only grows very gradually. Currently it’s only about 1% a year. So even if we were to put a lot of effort into building more and more physical capital, more and more factories and offices and things like that, eventually the ratio of physical capital to actual people to use that physical capital would get extremely unreasonable, and there wouldn’t be very much that you could do with all of this capital without the human beings required to operate them usefully. So you’re somewhat bottlenecked by the human population here.
But in this world we’re imagining, humans are no longer performing any functional productive role in the economy. It’s all just machines, it’s all just factories. So the human population is no longer a relevant bottleneck. So in terms of how quickly we can expand the economy, we can just ask the question: How long would it take for this entire productive machinery, all of the physical capital in this world, to basically make another copy of itself? Eventually you’d get bottlenecked, I guess, by physical resources, and we might have to think about going off of Earth in order to unbottleneck ourselves on natural resources. But setting that aside for a minute, if you manage to double all of the productive mechanisms in the economy, including all of the factories, all of the minds, all of the brains, then basically you should be able to roughly double output.
So then we’ve got this question of how quickly could that plausibly happen? That’s a tough question to answer; presumably there is some practical limit given the laws of physics.
To give us a lower bound, you’ve pointed us to these similar cases where we already have complex sets of interlocking machinery that represents an economy of sorts, that grabs resources from the surrounding environment and replicates every part of itself again and again so long as those resources are available.
And that’s the case of biology! So we can ask, in ideal conditions, how long does it take for cyanobacteria, or fruit flies, or lily pads to duplicate every component in their self-replicating factories? And that, in some cases, takes days, or even less than a day in extreme cases.
Now, the self-replicating machine that is the lily pad may or may not be a perfect analogy for what we're picturing with a machine economy of silicon and metal. So how do you go about benchmarking how quickly the entire machine economy might be able to double its productive capacity? How long would it take to reproduce itself?
Carl Shulman: On the Dwarkesh Podcast, I discussed a few of these benchmarks. One thing is to ask: how much does a GPU cost compared to the wages of skilled labourers? Right now GPUs carry enormous markups, because there's been a demand shock, with many companies trying to buy AI chips, and there's amortisation of the cost of developing and designing the chip and so forth.
So you have a chip like the H100, which has computational power in FLOPS that I think is close to the human brain. It has less memory, and there are some complexities related to that: existing AI systems are adapted to the context of GPUs, where you have more FLOPS and less memory, and so they operate the same model many times on, for example, different data. But you can get a similar result: take 1,000 GPUs that collectively have the memory to fit a very large model, plus this large amount of compute, and they will run, say, a human-sized model, but evaluate it thousands of times as often as a human brain would.
Anyway, so these chips are on the order of $30,000. As we were saying before, skilled workers paid $100 per hour, in 300 hours, are going to earn enough to pay for another H100. And so that suggests a very, very short doubling time if you could keep buying GPUs at those prices or lower prices — when, for example, the cost of the design is amortised over very large production runs.
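The payback arithmetic in that last step is simple enough to spell out; the $30,000 price and $100/hour wage are the order-of-magnitude figures from the conversation.

```python
# How many hours of $100/hour skilled labour pay for one H100-class chip.
CHIP_PRICE_USD = 30_000
WAGE_PER_HOUR_USD = 100

hours_to_pay_back = CHIP_PRICE_USD / WAGE_PER_HOUR_USD
print(f"Hours to pay back one chip: {hours_to_pay_back:.0f}")               # 300 hours
print(f"Share of a full 8,760-hour year: {hours_to_pay_back / 8_760:.1%}")  # ~3.4%
```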
Now, the cost would actually be higher if we were trying to expand our GPU production super fast. The basic reason is that they're made using a bunch of large pieces of equipment that would normally be operated for a number of years. So TSMC is the leading fab company in the world. In 2022, they had revenue on the order of $70 billion, and their balance sheet shows plant, property, and equipment of about $100 billion. So if they had to pay for the value of all of those fabs, all of the lithography machines, all of that equipment, out of the revenues of that one year, then they would need to raise prices correspondingly. But as we were saying, right now the price of GPUs is so low relative to the hourly wage of a human brain that you could accommodate a large increase in prices. You could handle what would otherwise be profligate waste: building these production facilities with an eye to a shorter production period.
Rob Wiblin cut-in: I’ll just quickly define a few things — Carl mentioned GPUs which stands for graphics processing unit and is the kind of computer chip you mostly use for AI applications today. He mentioned TSMC, which is the world’s biggest manufacturer of computer chips, based in Taiwan. In that ecosystem, the other famous companies are Nvidia — which designs the cutting-edge chips that TSMC makes — and then there’s ASML which is a Dutch company and the only supplier of the lithography machines that can print the most powerful GPUs. OK, back to the interview.
And we can say similar things about robots. The numbers aren't as extreme as for computing, but industrial robots that cost on the order of $50,000 to $100,000 can, given sufficiently skilful control software, replace several workers in a factory. And if we then consider vastly improved technology for those robots, plus better management and operation (the sort of technological advances you'd expect from scaling up the industry by a bunch of orders of magnitude, with very smart AI software to control them), that again suggests you could get a payback time for robotics that was well under a year.
And then for energy, there are different ways to produce energy, but there’s a fairly extensive literature trying to estimate energy payback times of different power technologies. This is relevant, for example, in assessing the climate impacts of renewable technology, because you want to ask, if you use fossil fuels initially with carbon emissions to make solar panels, then the solar panels produce carbon-free electricity, how long does it take before you get back the energy that was put into it? And for the leading cells, those times are already under a year. And if you go for the ones that have the lowest energy inputs, thin film cells and whatnot, in really good locations, equatorial deserts, that sort of place, yeah, you can get well under a year, more like two-thirds of a year, according to various studies.
Now that gets worse, again, if you're trying to expand production really fast, because if I want to double solar panel production next year, that means I have to build all of these factories. In the normal energy payback analysis, the energy used to build the factory is divided across all of the solar panels it's going to produce over five or 10 years; if you're expanding that fast, each panel has to carry a much larger share of that upfront energy. Nonetheless, solar panel efficiency and the energy costs of making solar panels have improved enormously. In the '50s, some of the first commercial solar panels cost on the order of $1,800 per watt, and today we're in the vicinity of $1 per watt.
So if you expand solar production far beyond where we're at and have radically enhanced innovation, it does not seem much of a stretch to say we get another order of magnitude or so of progress of the sort we've gotten over the previous 70 years, all within physical limits, because we know there are these biological examples and whatnot. And that suggests we get down to an energy payback time that is well under a year, even taking into account that you're trying to scale production so much, and that you'd adjust production to minimise upfront costs at the expense of the longevity of the panels, that sort of thing. So yeah, something like a one-month doubling time on energy looks like something we could get to.
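One way to see how an energy payback time translates into a doubling time is a toy reinvestment model: if every panel's output is ploughed straight back into building more panels, capacity grows exponentially with an e-folding time equal to the payback time, so the doubling time is the payback time multiplied by ln 2. This is my own simplification for illustration, not a calculation from the interview.

```python
# Toy model: continuous reinvestment of all solar output into making more panels.
# Capacity C obeys dC/dt = C / T_payback, so it doubles every T_payback * ln(2).
import math

for payback_months in (12, 8, 3, 1):
    doubling_months = payback_months * math.log(2)
    print(f"Payback {payback_months:>2} months -> doubling every {doubling_months:.1f} months")
```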
Rob Wiblin: Yeah. So those are some of the factors that cause you to think that possibly we could see the economy doubling every couple of months or something like that. That was one part of the answer.
Another part of the answer is, if we try to imagine what should be possible after we’ve had this enormous takeoff in the quality of our technology, this enormous takeoff in the size of the economy, one thing you can ask is, broadly speaking, how much energy should we be able to harvest? And there you’re getting an estimate by saying, well, how much energy arrives on Earth from the sun? And then plausibly we’ll be able to collect at least 10% of that, and then we’ll split it among people.
And then how much mental labour should you be able to accomplish using that energy that we’re managing to get? And there you’re using the benchmark of the human brain, where we know roughly this sort of mental labour that a human brain is able to do under good conditions, and we know that it uses about 20 watts of energy to do that. I guess if you want to say the human body is also somewhat necessary for the brain to function, then you get up to more like 100 watts.
Then you can say: how many minds on computer chips could we in principle support, using the energy that we're harvesting with those solar panels, if we manage to get our AI systems to a similar level of algorithmic efficiency and energy efficiency to the human brain, where you can accomplish roughly what a very capable, very motivated human can using 20 watts? And you end up with these absurd multiples. I didn't do the mental arithmetic, but I think you were suggesting, in effect, that for every person using that energy, you could support the mental labour that would be performed by tens of thousands of lawyers and doctors and so on today. Is that broadly right?
Carl Shulman: Well, more than that, because of working 100% of the time at peak efficiency. And no human has a million years of education, but these AI models would: it's just routine to train AI models on amounts of material that would take humans millennia to get through. And similarly, other kinds of advantages boost AI productivity: intense motivation for the task, adjustment of the relative weighting of the brain towards different areas. For some tasks, you can use very small models that would require one-thousandth of the computation; for other tasks, you might use models much larger than human brains, which might be able to handle some very complicated tasks.
And combining all of these advantages, you should do a lot better than what you would get from merely human-equivalent labourers. But this is something of a lower bound. And in terms of human brain equivalents of computation: yes, in theory, the Earth could support tens of thousands of times that, and then far more beyond.
Rob Wiblin: OK, so that’s sort of the mental labour picture. And I think maybe it’s already helping to give people a sense of why it is that this world would be so transformed, so different in terms of its productive capabilities. That a country that went through this transition sooner — and suddenly every person had the equivalent of 10,000 people working for them, doing mental work — that that actually would provide a decisive strategic advantage against other blocs that hadn’t undergone that transition, that the power imbalance would just be really wild.
What about on the physical side? Would we see similar radical increases in physical productive ability? Ability to build buildings and do things like that? Or is there something that’s different between the physical side versus the mental labour side?
Carl Shulman: Well, we did already talk about an expansion of global energy use. And similarly for mining, it’s possible to expand energy and use improved mining technology to extract materials from lower grade ores. So far in history, that has been able to keep peak oil or peak mineral x concerns from really biting because it’s possible to shift on these other margins. So yeah, a corresponding expansion of the amount of material stuff and energy use and then enormous increases in efficiency and quality of those goods.
In the military context, if you have this expansion of energy and materials, then you can have a mass of military equipment that is correspondingly that many orders of magnitude larger, with ultra-sophisticated computer systems and guidance, and it can make a large difference. Even technological differences of only a few decades in military technology have pretty dramatic effects. So in the First Gulf War, coalition forces came in and the casualty ratio was something absurd, hundreds or 1,000 to 1. And a lot of that was because the munitions of the coalition were smart, guided, and would just reliably hit their targets. So just having tremendous sophistication in guidance, sensor technology, and whatnot would suggest huge advantages there.
Not being dependent on human operators would mean that military equipment could be much smaller. So if you're going to have, say, 100 billion insect-sized drones or mouse-sized drones or whatnot, you can't have an individual human operator for each of those. And if they're going into areas where radio transmission is limited or could be blocked, that's something they can't do unless they have local autonomy.
But if you have small systems by the trillions or more, such that there are hundreds or thousands of small drones per human on Earth, then that means, for one thing, that they can be a weapon of mass destruction — and some of the advocates against autonomous weapons have painted scenarios, which are not that implausible, of vast numbers of small drones that have greater killing power per dollar than nuclear weapons and disperse to different targets.
And then in terms of undermining nuclear deterrence, if the amount of physical equipment has grown by these orders and orders of magnitude, then there can be thousands, tens of thousands of interceptors for, say, each opposing missile. There can be thousands, tens of thousands of very small infiltrator drones that might go behind a rival’s lines and then surreptitiously sabotage and locate nuclear weapons in place.
Just the sheer magnitude of the difference in materiel, combined with letting such small and numerous systems operate autonomously and with greatly enhanced technological capabilities: it really seems that if you had this kind of expansion, and then you had another place that was maybe one or two years behind technologically, it might be no contest. Not just no contest in the sense of which is the less horribly destroyed survivor of a war of mutual destruction, but actually fundamentally breaking down deterrence, because it would be possible to disable the military of a rival without taking significant casualties or imposing them.
Rob Wiblin: I suppose if you could just disarm an enemy without even imposing casualties on them, then that might substantially increase the appetite for going ahead with something like that, because the moral qualms that people would otherwise have might just be absent.
Carl Shulman: There’s that. And then even fewer moral qualms might be attached to the idea of just outgrowing the rival. So if you have an expansion of industrial equipment and whatnot that is sufficiently large, and that then involves seizing natural resources that right now are unclaimed — because remember, in this world, the limit on the supply of industrial equipment and such that can exist is a natural-resource-based limit, and right now, most natural resources are not in use. So most of the solar energy, say, that reaches the Earth is actually hitting the oceans and Antarctica. The claimed territory of sovereign states is actually a minority of the surface of the Earth because the oceans are largely international waters.
And then, if you consider beyond Earth, that, again, is not the territory of any state. There is a treaty, the Outer Space Treaty, that says it’s the common heritage of all mankind. But if that did not translate into blocking industrial expansion there, you could imagine a state letting loose this robotic machinery that replicates at a very rapid rate. If it doubles 12 times in a year, you have 4,096 times as much. By the time other powers catch up to that robotic technology, if they were, say, a year or so behind, it could be that there are robots loyal to the first mover that are already on all the asteroids, on the Moon, and whatnot. And unless one tried to forcibly dislodge them, which wouldn’t really work because of the disparity of industrial equipment, then there could be an indefinite and permanent gap in industrial and military equipment.
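The persistence of a head start is easy to illustrate with a toy calculation (my sketch, using the monthly-doubling figure from above): once both sides replicate at the same rate, the ratio between them stays frozen at whatever it was when the laggard started.

```python
# Toy illustration: a one-year head start at monthly doubling leaves a 2^12 = 4,096x gap
# that persists once both sides are doubling at the same rate.
HEAD_START_MONTHS = 12

first_mover = 2.0 ** HEAD_START_MONTHS   # first mover's stock when the rival starts
rival = 1.0                              # rival starts from the same initial stock

for month in range(24):                  # both now double every month
    first_mover *= 2
    rival *= 2

print(f"Ratio after two more years: {first_mover / rival:,.0f}x")  # still 4,096x
```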
And that applies even after every state has access to the latest AI technology. Even after the technology gap is closed, a gap in natural resources can remain indefinitely, because right now those sorts of natural resources are too expensive to acquire; they have almost no value; the international system has not allocated them. But in a post-AI world, the basis of economic and industrial and military power undergoes this radical shift where it’s no longer so much about the human populations and skills and productivity, and in a few cases things like oil revenues and whatnot. Rather, it’s about access to natural resources, which are the bottleneck to the expansion of industry.
Rob Wiblin: OK, so the idea there is that even after this transition, even after everyone has access to a similar level of technology in principle, one country that was able to get a one-year head start on going into space and claiming as many resources as they can, it’s possible that the rate of replication there, the rate of growth, would be so fast that a one-year head start would allow you to claim most of it, because other people just couldn’t catch up in the race of these ever self-replicating machines that then go on and claim more and more territory and more and more resources. Is that right?
Carl Shulman: That’s right, yeah.
Rob Wiblin: OK. Something that’s crazy intuitively about this perspective, where here we’re thinking about what sort of physical limits are there on how much useful computation could you do with the energy and the materials in the universe, is that we’re finding these enormous multiples between where we’re at now and where, in principle, one could be. Where just on Earth, just using something that’s about as energy efficient as the human mind, everyone could have 10 to 100,000 amazing assistants helping them, which means that there’s just this enormous latent inefficiency in what is currently happening on Earth relative to what is physically possible — which to some extent, you would have to hold evolution accountable, saying that evolution has completely failed to take advantage of what the universe permits in terms of energy efficiency and the use of materials.
I think one thing that makes the whole thing feel unlikely or intuitively strange is that maybe we're used to situations in which we're closer to the efficient frontier, and the idea that you could just multiply the efficiency of things 100,000-fold feels strange and foreign. Is it surprising at all that evolution hasn't managed to get closer to the physical limits of what is possible, in terms of useful computation?
Carl Shulman: So just numerically, how close was the biosphere to the energy limits of Earth that we're talking about? So net primary productivity is on the order of 10^14 watts. So it's a few times higher than our civilisation's energy consumption across electricity, heating, transportation, industrial heat. So why was it a factor of 1,000 smaller than solar energy hitting the top of the atmosphere?
One thing is not intercepting stuff high in the atmosphere. Secondly, I was just saying that most of the solar is hitting the oceans and otherwise land that we’re not inhabiting. And so why is the ocean mostly unpopulated? It’s because in order for life to operate, it needs energy, but it also needs nutrients. And in the ocean, those nutrients, they sink; they’re not all at the surface. And where there are upwellings of nutrients, in fact, you see incredible profusion of life at upwellings and in the near coastal waters. But most of the ocean is effectively desert.
And in the natural world, plants and animals can’t really coordinate at large scales, so they’re not going to build a pump to suck the nutrients that have settled on the bottom up to the surface. Whereas humans and our civilisation organise these large-scale things, we invest in technological innovation that pays off at large scales. And so if we were going to provide our technology to help the biosphere grow, that could include having nutrients on the surface, so having little floating platforms that would contain the nutrients and allow growth there.
It would involve developing the vast desert regions of the Earth, which are limited by water. Using the abundant solar energy in the Sahara, you could do desalination, bring water in, and expand the habitable area. And then even on arable land, you have nutrients that are not in the right balance for a particular location; you have competition, pests, diseases, and such that reduce productivity below its peak.
And then there's a significant gap between the actual conversion rate of sunlight on a square metre in green plants versus photovoltaics. We have solar panels with efficiencies of tens of percent, and it's possible to make multi-junction cells that absorb multiple wavelengths of light, where the theoretical limit is very high; an extreme theoretical limit, one that involves making other things impractical, can go up to something like 77% efficiency. Going to 40% or 50% efficiency, and converting that into electricity, which is a very useful form of energy and the form that things like computers run on, would do very well.
And then photosynthesis, you have losses to respiration. You’re only getting a portion of the light in the right wavelengths and the right angles, et cetera. Most of the potential area is not being harvested. A lot of the year, there’s not a plant at every possible site using the energy coming in. And our solar panels can do it a bit better.
If we just ignore the solar panels, we could build nuclear fission power plants to produce an amount of energy that is very large. The limitation we would run into would just be heat release: the world's temperature is a function of the energy coming in and the energy going back out as infrared, and that infrared emission increases with temperature. So if we put too many nuclear power plants on the Earth, eventually the oceans would boil, and that is not a thing we would want to do.
But yeah, these are pretty clear ways in which nature was not able to fully exploit things. Now we might choose also not to exploit some of that resource once it becomes economical to do so. And if you imagine a future where society is very rich, if people want to maintain the dead, empty oceans, not filled with floating solar platforms, they can do that. Outsource industries, say, to space solar power. If you’re going to have a compute or energy intensive industry that makes information goods that don’t need to be colocated with people on Earth, then sure, get them off Earth. Protect nature. There’s not much nature to disrupt in the sort of empty void. So you could have those sorts of shifts.
Rob Wiblin: Yeah. What do you imagine people would be spending their money on in a world in which they have access to the kinds of resources that today would cost tens or hundreds of millions of dollars a year in terms of intellectual labour? How would people choose to spend this surplus?
Carl Shulman: Well, we should remember some things are getting much cheaper relative to others. So if you increase the availability of energy by a hundredfold or a thousandfold, but increase the availability of cognitive labour by millions of times or more, then the relative price of, say, lawyer time or doctor time or therapist time, compared to the price of a piece of toast, has to plummet by orders of magnitude: tens of thousands of times, hundreds of thousands of times, and more.
And so when we ask, what are people spending money on, it’s going to be enriched for the things that scale up the least. But even those things that scale up the least seem like they’re scaling up quite a lot, which is a reason why I’d expect this to be quite transformative.
So what are people spending money on? We can look today at how people’s consumption changes as they get richer. One thing they spend a lot on, or even more on as they get richer, is housing. Another one is medicine. Medicine is very much a luxury good in the sense that as people and countries get richer, they spend a larger and larger proportion of their income on medical care. And then we can say the same things about, say, the pharmaceutical industry, the medical device industry. So the development of medical technology that is then sold. And there are similar things in the space of safety.
Government expenditures may have a tendency to grow with the economy and with what the government can get away with taking. If military competition were a concern, then building the industrial base for that, like we were saying, could account for some significant chunk of industrial activity, at least.
And then fundamentally, things that involve human beings are not going to get, again, overwhelmingly cheap. So more energy, more food can support more people, and conceivably support, over time, human populations that are 1,000, a million, a billion times as great as today. But if you have exponential population growth over a long enough time, that can use up any finite amount of resources. And so we’re talking about a situation where AI and robotics undergoes that exponential growth much faster than humans, so initially, there’s an extraordinary amount of that industrial base per human.
But if some people keep having enough kids to replace themselves, if lifespans and healthspans extend, IVF technology improves, and you wind up with some fertility rate above replacement, robot nannies and such could help with that as well. Then over 1,000 years, 10,000 years, 100,000 years, eventually human populations could become large enough to put a dent in these kinds of resources. This is not a short-term concern, unless, say, people use those AI nannies and artificial wombs to create a billion kids raised by robots, which would be sort of a weird thing to do. But I believe there was a family in Russia that had dozens of kids using surrogates. And so you could imagine some people trying that.
Rob Wiblin: OK, so you’ve just laid out a picture of the world and the economy there that, if people haven’t heard of this general idea before, they might be somewhat taken aback by these expectations. Just to clarify, what do you think is the probability that we go through a transition that, broadly speaking, looks like what you’ve described, or that the transition begins in a pretty clear way within the next 20 years?
Carl Shulman: I think that’s more likely than not. I’m abstracting over uncertainties about exactly how fast the AI feedbacks go. So it’s possible that just software-only feedbacks are sufficiently intense to drive an explosion of capabilities. That is, things that don’t involve building enormous numbers of additional computers can give you the juice to increase the effective abilities of AIs by a few orders of magnitude, several orders of magnitude. It’s possible that as you’re going along, you need the combination of hardware expansion and software. Eventually you’ll need a combination of hardware and software, or just hardware, to continue the expansion. But exactly how intense the software-only feedback loop is at the start is one source of uncertainty.
But because you can make progress on both software and hardware by improving hardware technology and by building additional fabs or some successor technology, the idea that there is like a quite rapid period of growth on the way in is something that I’m relatively confident on. And in particular, the idea that eventually that also leads to improvements in the throughput of automated industrial technology — so that you have a period of what’s analogous to biological population growth, where a self-replicating industrial system grows rapidly to catch up to natural resource bounds — I think that’s quite likely.
And that aspect of it could happen even if we wind up, say, with AI taking over our civilisation. They might do the same thing, although I expect probably there will be human decisions about where we’re going. And while there’s a serious risk of AI takeover, as I discussed with Dwarkesh, it’s not my median outcome.
Rob Wiblin: OK, so quite likely, or more likely than not. I think you have a reasonable level of confidence in this broad picture.
Objection: Shouldn’t we be seeing economic growth rates increasing today? [00:59:11]
Rob Wiblin: So later on, we’re going to go through some objections that economists have to this story and why they’re kind of sceptical that things are going to play out in such an extreme way as this. But maybe now I’ll just go through some of the things that give me pause and make me wonder, is this really going to happen?
One of the first ones that occurs to me is you might expect an economic transformation like this to happen in a somewhat gradual or continuous way, where in the lead up to this happening, you would see economic growth rates increasing. So you might expect that if we’re going to see a massive transformation in the economy because of AGI in 2030 or 2040, shouldn’t we be seeing economic growth rates today increasing? And shouldn’t we maybe have been seeing them increase for decades as information technology has been advancing and as we’ve been gradually getting closer to this time?
But in reality, over the last 50 years, economic growth rates have been kind of flat or declining. Is that in tension with your story? Is there a way of reconciling why it is that things might seem a little bit boring now, but then we should expect radical changes within our lifetimes?
Carl Shulman: Yeah, you’re pointing to an important thing. When we double the population of humans in a place, ceteris paribus, we expect the economic output after there’s time for capital adjustments to double or more. So a place like Japan, not very much in the way of natural resources per person, but has a lot of people, economies of scale, advanced technology, high productivity, and can generate enormous wealth. And some places have population densities that are hundreds or thousands of times that of other countries, and a lot of those places are extremely wealthy per capita. By the example of humans, doubling the human labour force really can double or more economic output after capital adjustment.
For computers, that’s not the case. And a lot of this reflects the fact that thus far, computers have been able to do only a small portion of the tasks in the economy. Very early on in the history of computers, they got better than humans at serial, reliable arithmetic calculations, which you could do with an incredibly small amount of computation compared to the human brain, just because we’re really badly set up for multiplying and dividing lots of numbers. And there used to be a job of being a human computer, and I think that there are films about them, and it was a thing, those jobs have gone away because just the difference now in performance, you can get the work of millions upon millions of those human computers for basically peanuts.
But even though we now use billions of times as much in the way of that sort of calculation, it doesn’t mean that we get to produce a billion times the wages that were being paid to the human computers at that time, because there were diminishing returns in having more and more arithmetic calculations while other things didn’t keep up. And when we double the human population and capital adjusts, then you’re improving things on all of these fronts. So it’s not that you’re getting a tonne of enhancement of one kind of input, but it’s missing all of the other things that it needs to work with.
And so, as we see progress towards AI that can robustly replace humans, we should expect the share of tasks that computing can do to go up over time, and therefore the increase in revenue to the computer industry, or in economic value-add from computers per doubling of the amount of compute, to go way up. Historically, it’s been more like you double the amount of compute, and then you get maybe one-fifth of a doubling of the revenue of the computer industry. So if we think success at broad automation, human-substituting AI is possible, then we expect that to go up over time from one-fifth to one or beyond.
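To make that historical pattern concrete: one-fifth of a doubling of revenue per doubling of compute corresponds to an elasticity of about 0.2, with revenue scaling roughly like compute^0.2. The constant-elasticity framing is my simplification; the claim in the conversation is just that this number rises towards one or beyond as AI can substitute for a larger share of tasks.

```python
# Revenue growth implied by a doubling of compute under different elasticities.
def revenue_multiple(compute_multiple: float, elasticity: float) -> float:
    """Revenue multiple if revenue scales like compute ** elasticity."""
    return compute_multiple ** elasticity

for elasticity in (0.2, 0.5, 1.0):
    print(f"Elasticity {elasticity:.1f}: 2x compute -> "
          f"{revenue_multiple(2, elasticity):.2f}x revenue")
```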
And then if you ask why would this be? One thing that can help make sense of that is to ask how much compute has the computing industry been providing historically? So I said that now, maybe an H100 that costs tens of thousands of dollars can give computation comparable to the human brain. But that’s after many, many years of Moore’s law, during which the amount of computation you could buy per dollar has gone up by billions of times and more.
So when you say, right now, if we add 10 million H100s to the world each year, then maybe we increase the computation in the world from 8 billion human brains' worth to 8 billion and 10 million human brains' worth. You're starting to make a difference in total computation, but it's pretty small. And so it's only where you're getting a lot more out of each computation that you see any economic effect at all.
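The "pretty small" point can be put as a single percentage, treating one H100 as roughly one human brain of compute, as in the conversation; the 10-million-per-year figure is the round number used there.

```python
# Annual addition of 'brain equivalents' of compute as a share of human brains.
HUMAN_BRAINS = 8e9
H100S_ADDED_PER_YEAR = 10e6   # each counted as roughly one human brain of compute

print(f"Annual increase: {H100S_ADDED_PER_YEAR / HUMAN_BRAINS:.3%}")  # ~0.125%
```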
And going back further, you're asking: why wasn't having twice as many of these computers, each analogous to the brain of an ant or a flukeworm, doubling the economy? And when you look at it like that, it doesn't really seem surprising at all.
Rob Wiblin: OK, so it’s understandable that having lots of calculators didn’t cause a massive economic revolution, because at that stage, we only had thinking machines that could do an extremely narrow range of all of the things that happen in the economy. And the idea here is that we’re heading towards a thinking machine that’s being able to do 0.1% of the kinds of tasks that humans can do, towards being able to do 100% — and then I guess more than 100% when they’re able to do things that no human is able to do.
So where would you say we are now, in terms of going from 0.1% to 100%? You might think that if we’re at 50% now, then shouldn’t we be seeing economic growth pick up a little bit? Because these machines, although they can’t do everything and humans still remain a bottleneck on some things where we can’t find machine substitutes, you still might think that there’ll be some substantial pickup.
But maybe you’re just saying that the chips have only recently gotten to the point where they’re able to compete with the human brain in terms of the number of calculations they can do, and even just a couple of years ago, a few cycles of chip fabs and Moore’s law back, all of the computational ability of all of the chips in the world was still only 1% or 10% of the computational ability of the human brains that were out there. So they just weren’t able to pack that much of a punch, because there simply wasn’t enough computational ability on all of the chips to make a meaningful difference.
Carl Shulman: Yeah, I’d say that. But also the software efficiency was worse. And so in recent years, you’ve had things like image recognition or LLMs getting similar performance with 100 times less computation. And there’s still a lot of room to improve the efficiency of software towards matching the human brain. That progress has been easier lately because with enough computation, more things work, and because the AI industry is becoming so much more effective, resources, including human research effort, has been flowing into it much faster. And then all these things combined have given you this greatly accelerated software progress.
So it’s a combination of all these things: spending more of GDP on compute, the hardware getting better, such that you could get some of these interesting results that you’ve seen recently at all, and then a huge pickup in the pace of algorithmic progress enabled by all of those additional compute and human resources flowing into the field.
Objection: Speed of doubling time [01:07:32]
Rob Wiblin: OK, a different line of sceptical argument here, in terms of the replication time of all of the equipment in the economy as a whole. At the point when humans are no longer really a part of it, you mentioned that we’ve got this kind of benchmark of cyanobacteria that manage to replicate themselves in ideal conditions in less than a day. And then we’ve got these very simple plants that grow and manage to double in size every couple of days. And then I guess you’ve got insects that maybe can double themselves in a week or something, and then small mammals like mice, I guess, I don’t know what their doubling time is, but probably a couple of months, perhaps, if they’re breeding very quickly. And then you’ve got humans, where I think our population growth rate is only about 4% a year or something, under really good conditions, when people are really trying.
It seems like the more complicated the organism, the bigger the organism, the slower that doubling time, at least in nature, seems to be. And I wonder whether that suggests that this very complicated infrastructure that we would have in this economy as a whole, producing all of these very complicated goods like computer chips, maybe the doubling time there could be in the period of years rather than months, because there’s just something about the complexity of having so many different kinds of materials that makes it slower for that replication process to play out?
Carl Shulman: That is a real trend that you're pointing to. Now, a big part of that in nature relates to the economics of providing energy and materials to fuel growth. And you can see some of that, for example, in agriculture. So in the presence of hyperabundant food, breeders have made chickens that grow to absolutely enormous size compared to nature in a matter of weeks. That is, what would normally be a baby chicken reaches a size that is massive relative to a wild adult chicken in six weeks. In the wild, that's not going to work: the chicken has to be moving around collecting food, and it gets only a narrow energy profit from all of the movements required to find and consume and utilise that food. And so the ecological niche of growing at full speed is largely not accessible to these large organisms.
And for humans, you have that problem, and then in addition, you have the problem of learning and training. So a human develops the skills that they have as an adult by running their human-sized brain for years of education, training, exploration, and learning. Whereas with AI, we train across many thousands of GPUs, and more going forward, at the same time, in order to learn more rapidly. And then the trained, learned mind is then just digitally copied in full. So there’s no need to repeat that learning process for each and every computer that we construct. And that’s a fundamental structural difference between AI minds and biology.
Rob Wiblin: I guess it might make you wonder, with human beings, given that this training process for children to become capable of acting as human adults, given how costly it is, why didn’t humans have much longer lives? Why don’t we live for hundreds of years so we can harvest the benefits that come from all of that learning? I guess there you’re just running into other constraints, like you get predated on, or there’s a drought and then you starve. So there’s all these external things that are meaning that evolution doesn’t want to invest in doing all of the repair work necessary to keep human beings alive for an extremely long time, because chances are that they’ll be killed by some external threat in the intervening time.
Carl Shulman: Malaria more than leopards, maybe. But yeah, that's an important dynamic. And when you consider that applying your calories to running a brain to learn more means not spending that energy on reproducing, when you could instead be having some children with it, it's more challenging to make those economics work out.
Objection: Declining returns to increases in intelligence? [01:11:58]
Rob Wiblin: Another line of scepticism that I hear that I’m not quite sure what to make of is this idea that, sure, we might see big increases in the size of these neural networks and big increases in the amount of effective lifespan or amount of training time that they’re getting — so effectively, they would be much more intelligent in terms of just the specifications of the brains that we’re training — but you’ll see massively declining returns to this increasing intelligence or this increasing brain size or this increasing level of training.
Maybe one way of thinking about that would be to imagine that we were designing AI systems to do forecasting into the future. Now, forecasting tens or hundreds of years into the future is notoriously very challenging, and human beings are not very good at it. You might expect that a brain that’s 100 times the size of the human brain and has much more compute and has been trained on all of the knowledge that humans have ever collected because it’s had millions of years of life expectancy, perhaps it could do a much better job of that.
But how much better a job could it really do, given just how chaotic events in the real world are? Maybe being really intelligent just doesn't actually buy you the ability to do some of these amazing things, and you do just see substantially declining returns as brains become more capable than humans are. And this would just tamp down on this entire dynamic: it would tamp down on the speed of the feedback loop from AI advances to more AI advances; it would tamp down on how useful the engineering advice of these extremely capable AI advisors was, and how much they'd be able to help us speed up the economy. What do you make of this kind of declining returns argument that people sometimes raise?
Carl Shulman: Well, actually, from the arguments that we’ve discussed so far, I haven’t even really availed myself of much that would be impacted by that. So I’ll take weather forecasting. So you can expend exponentially more computing power to go incrementally a few more days into the future for local weather prediction, at the level of “Will there be a storm on this day rather than that day?” And yeah, if we scale up our economy by a trillionfold, maybe we can go add an extra week or so to that sort of short-term weather prediction, because it’s a chaotic system.
But that’s not impacting any of the dynamics that we talked about before. It’s not impacting the dynamic where, say, Japan, with a population many times larger than Singapore, can have a much larger GDP just duplicating and expanding. These same sorts of processes that we’re already seeing give you corresponding expansion of economic, industrial, military output.
And we have, again, the limits of just observing the upper peaks of human potential and then taking even quite narrow extrapolations of just looking at how things vary among humans, say, with differing amounts of education. And when you go from some high school education to a university degree, graduate degree, you can see like a doubling and then a quadrupling of wages. And if you go to a million years of education, surely you’re not going to see 10,000 or 100,000 times the wages from that. But getting 4x or 8x or 16x off of your typical graduate degree holder seems plausible enough.
And we see a lot of data in cases where we can do experiments, in things like Go or chess, where we've looked out to superhuman levels of performance and we can say, yeah, there's room to gain some, and where you can substitute a bigger, smarter, better-trained model evaluated fewer times for a small model evaluated many times.
But by and large, this argument goes through just assuming you can get models to the upper bounds of human capacity, which we know are possible. And the duplication argument really is unaffected by that sort of objection: yes, weather prediction is something you won't get a million times better at, but you can make a million times as many physical machines, process correspondingly more energy, et cetera.
Rob Wiblin: So if I understand what you’re saying, I guess maybe I’m reading into this scenario, I’m imagining that these AI systems that are doing this mental labour are not only very numerous, but also hopefully they’re much more insightful than human beings are. Hopefully they’ve exceeded human capabilities in many ways.
But we can kind of set a minimum threshold and say, well, at least they should be able to match human performance in a bunch of these areas, and then we could just have a lot of them. That gives us sort of one minimum threshold. And you think that most of what you’re describing could be justified just on those grounds, without necessarily having to speculate about exactly where they will cap out in terms of their ability to have amazing insights in science? We can get enormous transformation just through sheer force of numbers?
Carl Shulman: That’s right. And things like having 100% labour force participation, intense motivation, and then the additional larger model size, having a million years of education — those things will give further productivity increases. But yeah, this basic argument doesn’t require that.
Objection: Physical transformation of the environment [01:17:37]
Rob Wiblin: Yeah. I think another reason that people might be a bit sceptical that this is going to play out is just looking at the level of physical transformation of the environment that this would require. We’re talking here about capturing 10% of all of the solar energy hitting the world. This seems like it would require a massive increase in the number of solar panels in principle, or maybe a massive increase in the number of nuclear power plants. I think for the kinds of economic doublings that you’re talking about, at some point we would be capping out at building thousands of nuclear power plants every couple of months, and currently it seems like globally we struggle to manage a dozen a year. I don’t know what the exact numbers are.
But there’s something that is a bit surprising about the idea that we’re currently restricting ourselves so enormously in how much we use the environment and where we are willing to put buildings, where we’re willing to put nuclear power plants and whether we’re willing to have them at all. The idea that within our lifetimes we could see rates of construction go up a hundredfold or a thousandfold in the physical environment, even if we had robots capable of building them, it feels understandably counterintuitive to many people. Do you want to comment on that?
Carl Shulman: Yeah. So the very first thing to say is that that has already happened relative to our ancestors. So there was a time when there were about 10 million humans or relevant hominids hanging around on the Earth, and they had their stone hand axes and whatnot, but very little stuff. Today there’s 8 billion humans with a really enormous amount of stuff being produced. And so if you just say that 1,000 sounds like a lot, well, every numerical measure of the physical production of stuff in our society is like that compared to the past.
And on a per capita basis: when you have power plants that each supply the energy for 10,000 people, does it sound crazy that you build one of those per 10,000 people over some period of time? No, because the efforts to create them are also scaling up.
So as for how you can build a larger number of things when you have a larger population of robot workers and machines and whatnot: I think that's not something we should be super suspicious of.
There’s a different kind of thing which is drawing from how, in developed countries, there has been a tendency to restrict the building of homes, of factories, of power plants. This is a significant cost. You see, you know, in some very restrictive cities like New York City or San Francisco, the price of housing rises by several times compared to the cost of constructing it because of basically legal bans on local building. And people, especially folk who are immersed in the sort of YIMBY-versus-NIMBY debates and think about all the economic losses from this, that’s very front of mind.
I don’t think this is reason for me not to expect explosive construction of physical stuff in this scenario though, and I’ll explain why. So even today we see, in places like China and Dubai, cities thrown up at incredible rates. There are places where intense construction can be allowed, and there’s more of that construction when the payouts are much higher. And so when permitting building can result in additional revenue that is huge compared to the local government, then they may actually go really out of their way to provide the regulatory situation that will attract investments of an international company. And in the scenarios that we’re talking about, yes, enormous industrial output can be created relatively quickly in a location that chooses to become a regulatory haven.
So the United Arab Emirates built up Dubai and Abu Dhabi and has been trying to expand its non-oil economy by just creating a place for it to happen and providing a favourable environment. And in a situation where, say, the United States is holding itself back from million-dollar-per-capita or $10-million-per-capita incomes by not allowing this construction, and the UAE can allow that construction locally and 100x its income, then I think they go ahead and do it. And I'd also expect that seeing that sort of thing encourages change in the more restrictive regulatory regimes.
And then AI and such can help on the front of governance. So unlimited cheap lawyers makes it easier to navigate horrible paperwork, and unlimited sophisticated AIs to serve as bureaucrats, advisors to politicians, advisors to voters makes it easier to adjust to those things.
But I think the central argument is that some places providing the regulatory space for this can make absolutely enormous profits, potentially gain military dominance — and those are strong pressures to make way for some of this construction, to enable it. And even within the scope of the existing places that will allow you to build things, that goes very far.
Rob Wiblin: OK, so the arguments there are, one is just that the level of gain that people will perceive from going ahead with this transformation would be so enormous, so much larger than the gain that they perceive from allowing more apartment construction in their city, that there’ll be this big public pressure — because people will be able to foresee, maybe by watching other countries like the UAE or Qatar or the example of cities that have decided to go for it — that their income could be 10 or 100 times larger within their lifetime, and they’ll really want that.
And then also at the level of states, there’ll be competitive factors that will cause countries to want to not hold back for long periods of time, because they’ll perceive themselves as falling behind radically, and just being at a big strategic disadvantage.
And of course, there’s all of the benefits of AI helping to overcome the barriers that there currently are to construction, and potentially improving governance in all kinds of ways that I think we’re going to talk about later. Is that the basic summary?
Carl Shulman: That’s right. And just these factors are pretty powerful disanalogies to the examples people commonly give of technologies that have been strangled by regulatory hostility.
Rob Wiblin: Yeah, maybe we could talk through the comparison with nuclear energy, say?
Carl Shulman: Yeah, so nuclear energy theoretically has the potential to be pretty cheap compared to other sources of energy. It can be largely carbon free, and it's much safer than fossil fuels: the number of deaths from pollution from coal and natural gas and whatnot is very large. Every year, enormous numbers of people die from that pollution, counting just the local air pollution effects and not even the global climate change effects. And regulatory regimes have generally imposed safety requirements on a technology that was already much safer than fossil fuels, requirements that basically raise costs to a level that has largely made it non-competitive in most jurisdictions.
And even places that have allowed it have often removed it later. So Germany and Japan both went on anti-nuclear benders in response to local ideological pressures or overreaction to Fukushima, which directly didn’t actually cause as much harm as your typical coal plant does year on year. But the overreaction to it actually caused an enormous amount of damage, and then it’s further creating air pollution fatalities, climate change, yada yada. So this is an example where nuclear had the potential to add a lot of value.
You see that in France, where they get a very large share of their electricity from nuclear at low cost. If other countries had adopted that, they could have had incrementally cheaper electricity and fewer deaths from air pollution. But those benefits are not actually huge at the scale of local economic activity or of the fate of a state. When France builds that nuclear power plant infrastructure, it can't then provide electricity for the entire world: the export infrastructure for that does not exist, and it couldn't provide electricity, say, an order of magnitude cheaper than fossil fuels and then ship it everywhere in the form of hydrogen or liquid fuels, things like that.
So yeah, in that situation, having some regulatory havens that are a minority of the world doesn’t let you capture most of the potential benefits of the technology. Whereas with this AI robotic economy, if some regions do it, and then start developing things — at first locally, and then in trading partners, and then in the oceans, in space, et cetera — then they can realise the full magnitude of the impact.
And then secondly, no country winds up militarily helpless, losing the Cold War, because they didn’t build enough nuclear power plants for civilian power. Now, on the other hand, nuclear weapons were something that the great powers and those without nuclear protective alliances all did go for — because there, there was no close alternative that could provide capabilities at that level, and the geostrategic demand was very large. So all these major powers either developed nuclear weapons themselves or relied on alliances with nuclear powers.
So AI and an automated economy have some of the geostrategic demand of nuclear weapons, but also an economic impact that is far greater than nuclear power could have provided. And I could make similar arguments with respect to, say, GMO crops. Again, one regulatory haven can’t realise the full impact of the technology for the world, and the magnitude of the incentives for political decision-makers are so much weaker.
Objection: Should we expect an increased demand for safety and security? [01:29:13]
Rob Wiblin: OK, let me hit you with a different angle. So imagine that we go into this transformation where economic growth rates are radically taking off and we’re seeing the economy double every couple of months. A couple of doubling cycles in, people would look around and say, holy shit, my income is 10 to 100 times higher than it was just a couple of years ago. This is incredible.
But at the same time, they would look around and say, like every couple of months, the world has transformed. We’ve got these insane new products coming online, we’ve got these insane advances in science and technology. The world feels incredibly unstable because the transformation is happening so incredibly rapidly. And now I’ve got even more to lose, because I feel so rich and I feel so positive about how the future might go if things go well.
And furthermore, probably as part of that technological advance, you might see a very big increase in the ability of people to make agreements and to monitor one another for whether they’re following these agreements. So it might be more practical at this halfway stage for countries to make agreements with one another, where they opt to slow down this transition and basically sacrifice some income, in order to get more safety by making the transition a bit slower, a bit more gradual, so that they can evaluate the risks and reduce them.
And of course, as people get richer, as you mentioned earlier, they become kind of more concerned with safety. Safety is something of a luxury good that people want more of as they get richer. So we might expect an increased demand for safety and security as this transition picks up, and that could actually then create a policy change that slows things down again. Do you think that’s a plausible story?
Carl Shulman: Certainly the max-speed AI and robotics economic explosion gets wild relative to the timescale of human affairs: the time humans need to process and understand what’s happening, and for, say, political negotiations to happen. I mean, consider the madness of fixed election cycles on a timescale of four or five years: it would be as though you had one election cycle for the Industrial Revolution. So some British prime minister is elected in 1800 and they’re still in charge today because the electoral cycle hasn’t come around yet. That’s absurd in many ways.
And as we were talking about earlier, the risk of accidental trouble, things like a rogue AI takeover, things like instability in this rapid industrial growth affecting political balances of power, that’s a concern. There’s also the development of numerous powerful new technologies, some of which may pose big additional issues. So say this advancing technology makes bioweapons very effective for a period of time before expansions of defences make those weapons moot: that could be an issue that arises, and arises super fast with this very fast growth. And you might wish that you had more ability to slow down a bit to manage some of those issues, rather than going at the literal max speed. Even if you’re very pro progress, very pro fast growth, you might think that you could be OK with, say, doubling the economy every year instead of every month, and having technological progress that delivers what would otherwise take a decade in a year or six months, rather than in one month.
The problem is that even if you want that for safety reasons, you have to solve these coordination and cooperation problems, because the same sorts of safety motivations would be invoked by those saying: think how scary it would be if other places kept going at max speed while the region where this argument is being made slowed down. And so you’ve got to manage that kind of issue.
I have reasonable hope that you would not wind up going at the literal max speed, where that has terrible tradeoffs in terms of reduced ability to navigate and manage this transition. But I have doubts about that wildly restricting growth. If it comes to a point where, say, the general voting public thinks and knows that diseases killing people on an ongoing basis could be cured very quickly by continuing this scientific and industrial expansion for a bit, I think that would create demand to keep going.
The most powerful consideration, though, seems like the military one. So if the great powers can agree on things to limit the fear of that sort of explosive growth of geopolitical military advantage, then I think you could see a significant slowdown. But note that this is a very different regulatory situation than, say, nuclear power, where individual jurisdictions may restrict or over-regulate or ban it: holding back nuclear power and GMO didn’t require a global agreement of all the great powers. And in any case, we do have civilian nuclear power; there are many such plants. Many fields are planted with GMO crops. So it’s a different level of coordination that would be required. That bar may be met, because the importance of the issue might mean there’s greater demand for that sort of regulation, so it could happen.
But I think people making a naive inference from regulatory barriers to other technologies need to wrestle with how extreme the scope of international cooperation and the intensity of that regulation would have to be, and the degree to which it would be holding back capability that could otherwise be had. And if you want to argue the chances of that sort of regulatory slowdown are 70% or 30% or 10% or 90%, I’m happy to have that argument. But to claim that NIMBY tendencies in construction in some dense, progressive cities in rich countries tell you that basically the equivalent of the Industrial Revolution packed into a very short time is going to be foregone by states, you need to meet a higher burden.
Objection: “This sounds completely whack” [01:36:09]
Rob Wiblin: OK, a different reason that some listeners might have for doubting that this is how things are going to play out is maybe not an objection to any kind of specific argument, or a specific objection to some technological question, but just the idea that this is a very cool story, but it sounds completely whack. And you might reasonably expect the future to be more boring and less surprising and less weird than this.
You’ve mentioned already one response that someone could have to this, which is that the present would look completely whack and insane to someone who was brought forward from 500 years ago. So we’ve already seen a crazy transformation through the Industrial Revolution that would have been extremely surprising to many people who existed before the Industrial Revolution. And I guess plausibly to hunter-gatherers, the states of ancient Egypt would look pretty remarkable in terms of the scale of the agriculture, the scale of the government, the sheer number of people and the density and so on. We can imagine that the agricultural revolution shifted things in a way that was quite remarkable and very different than what came before.
Is there any other kind of overall response that someone could give to a listener who’s sceptical of this on the grounds that it’s just too weird to be likely?
Carl Shulman: Building on some of the things you mentioned: not only is our post-industrial society incredibly rich, incredibly populous, incredibly dense, long-lived, and different in many other ways from the days of millions of hunter-gatherers on the Earth, but the rate of change is also much higher. Things that might previously have been on a thousand-year timescale now happen on the scale of a couple of decades — for, say, a doubling of global economic output. So there’s a history both of things becoming very different, and of the rate of change getting a lot faster.
And I know you’ve had Tom Davidson, David Roodman, Ian Morris and others, as well as some people with critical views, discussing this. Cosmologists among physicists, who take the big-picture view, actually tend to think more about these kinds of cases. Historians who study big history, global history over very long stretches of time, tend to notice this too.
So yeah, when you zoom out to the macro scale of history, in some ways it’s quite precedented to have these kinds of changes. And it would actually be surprising to say, “This is the end of the line, no further,” especially when we have the example of biological systems showing that the ceilings of performance are much higher than where we’re at: for replication times, for computing capabilities, and for other object-level abilities.
And then you have these very strong arguments from all our models and accounts of growth that can really explain some of why you had the past patterns and past accelerations. They tend to indicate the same thing. Consider just the magnitude of the hammer that is being applied to this situation: it’s going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It’s just a very large change. You should also be surprised if such a large change doesn’t affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society, and so on and so forth.
Rob Wiblin: Another way of thinking about the size of the hammer just occurred to me, one that’s maybe a little bit easier to picture in the world as it is right now. We’re imagining that we’re able to replicate what the human mind can do with about 20 watts of energy, because we’re going to find sufficiently good algorithms and training mechanisms, and have sufficiently good compute to run that on an enormous scale.
So you’d be able to get the work of a human expert for about 20 watts of electricity, which costs less than one cent to run per hour. So you’re getting skilled labour for this radically reduced price. And you imagine, what if suddenly we could get computers to do the work of all of our most skilled professionals for one cent an hour’s worth of electricity? And I guess you need to throw in the compute construction as well.
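To make that arithmetic concrete, here is a minimal back-of-the-envelope check, assuming an illustrative retail electricity price of $0.12 per kilowatt-hour (prices vary by region, so the exact figure is not from the conversation):

```python
# Rough cost of running a 20-watt "brain equivalent" for one hour.
# The electricity price is an assumed illustrative figure; actual prices vary by region.

power_watts = 20
hours = 1
price_per_kwh_usd = 0.12   # assumed retail electricity price

energy_kwh = power_watts * hours / 1000        # 20 Wh = 0.02 kWh
cost_usd = energy_kwh * price_per_kwh_usd      # ~$0.0024

print(f"Energy: {energy_kwh:.3f} kWh, cost: ~{cost_usd * 100:.2f} cents per hour")
```

At these assumed prices the electricity bill comes out to roughly a quarter of a cent per hour, comfortably under the "one cent" figure used here.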
But I think that helps to indicate the scale: just imagine the transformation that would happen if you could do that, without any limit on the number of these computers you could run as you scaled them up. Does that sound like a useful mental switch to make?
Carl Shulman: That’s one thing. Another is in this space of historical examples and precedents, considering a larger universe of analogies. Fairly often, when some part of the economy has an overhang of demand, where the world would like to buy much more of a certain product than exists right away, you see super rapid expansion.
In software, that’s especially obvious: something like ChatGPT can quickly go to enormous numbers of users, because people already have phones and computers with which to interface with it.
Or when people develop a new crop: with maize, you can get hundreds of seeds from one seed after one growing season, and then do a few growing seasons in succession. So if you have a new breed of maize, you can scale it up very quickly, over the course of a year, until all the maize in the world is using this new breed, if you want that.
Or in the space of startups, making not just software but physical objects: 30% or even 50% annual growth is something you see in a lot of the world’s largest companies, which is how they were able to become the world’s largest companies from an initial startup position without taking centuries to get there. And if a company like Tesla or Amazon is able to grow 30% or 50% per year while having to hire and train people, and build up all of the skills and expertise related to its business (a constraint that would be largely circumvented with AI), that really suggests that yes, if there’s demand, if there’s profit to pay for these kinds of rapid expansions, they can go very rapidly.
Wartime mobilisation would be another: the scale at which the US military industry developed in World War II was pretty incredible.
Rob Wiblin: I’m not sure how persuasive I find that analogy to really rapidly growing companies. I feel a bit confused about it, because I guess, yeah, you can point to very rapidly growing companies that more than double their headcount and more than double their output every couple of months. But I guess in that case, they’re able to just absorb this latent human resource — all of these people who are trained to do things that are nearby to what the company wants from outside — and they can absorb all of these resources from the broader economy.
And it does show that you can have basically these organisations that can absorb resources and put them to productive use very quickly and figure out how to structure themselves in order to do that. But it’s a bit less obvious to me that that extends to thinking that you could have this entire system reproduce itself if they had to kind of build all of the equipment from scratch — and they couldn’t absorb it from other companies that are not as productive, or grab it from people that have just left university, and things like that. Am I thinking about this wrong?
Carl Shulman: So we’re asking, here are all the inputs that go into these production processes: which ones can double how fast? So the skills and people, these are ones that we know can grow that fast. Compute has grown incredibly fast historically, to the point of a millionfold growth over a few decades, and that’s even without these strong positive feedback dynamics. And we know that you can copy software just like that. So expanding the skills associated with those companies and hiring, that’s not going to be the bottleneck.
If you’re going to have a bottleneck, it’s got to be something about our physical machines. So machine tools: the question is whether you’ve got to run those machine tools for, say, more than a year before their output amounts to a similar mass of machine tools. And this is the analysis we were going into earlier with the energy payback time of solar panels or power plants; you do a similar analysis for physical machines. As we said there, those numbers look pretty good, pretty close. Then add in technological improvements that take energy payback times that are already below a year down further, towards a month. Things look reasonably compelling there.
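To illustrate the connection between payback times and growth, here is a toy calculation rather than anything from the conversation: if a machine stock pays back its own replacement cost in some payback time T, and all of its output is continuously reinvested in more machines, the stock grows exponentially with doubling time T × ln(2). The payback times below are purely illustrative, and the calculation ignores every real-world bottleneck:

```python
import math

# Toy model: a machine stock pays back its own replacement cost after
# `payback_years` of operation, and all output is continuously reinvested
# in building more machines. Then dS/dt = S / payback, so the stock grows
# exponentially with doubling time = payback * ln(2).
# The payback times are illustrative assumptions; real bottlenecks are ignored.

for payback_years in (1.0, 0.5, 1.0 / 12):
    doubling_months = payback_years * math.log(2) * 12
    growth_per_year = math.exp(1.0 / payback_years)   # e^(t/T) with t = 1 year
    print(f"payback {payback_years * 12:4.1f} months -> doubles every {doubling_months:4.1f} months, "
          f"~{growth_per_year:,.0f}x per year")
```

Taken literally this is an upper bound, since it assumes nothing else ever binds; the point is just how sensitive the growth rate is to the payback time.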
Looking into the details of why companies making semiconductor fabs, vaccines, or lithography equipment don’t expand faster, a thing that persistently recurs is that expanding really fast means making large upfront investments. And if you’re not confident that the demand is there, and that you’re going to make enormous profits to pay back those investments, then you’re reluctant to do it.
TSMC is a survivor from an era when many semiconductor firms went bust: chip production goes through boom and bust, and during the busts, companies that have overinvested can die. So caution matters on both sides. Similarly, ASML could expand quite a bit more quickly if they were really confident that the demand was there. And so far, I think TSMC and ASML are actually still quite underestimating the demand for their products from AI, but they’re already making large and ongoing expansions.
And the reason I bring up companies like Tesla and Amazon is they actually needed to make warehouses, make factories. And for many of the products that they consume — so Tesla becoming a significant chunk of world demand for the kinds of batteries that they use — it can’t be just an issue of reallocating resources from elsewhere, because they wind up being a quite large chunk of the supply chain for many of these products. And they have to actually make physical things. They have to make factories, which is different from, say, some app being downloaded to more phones that already exist, or hiring a bunch of remote workers — something that’s just redirecting. These companies actually make factories and make electric cars, growing at incredible rates that are an order of magnitude higher than the typical growth rates economists have seen in recent decades and might tend to expect to continue.
Income and wealth distribution [01:48:01]
Rob Wiblin: One thing we’ve hardly talked about at all is income distribution and wealth distribution in this new world. We’ve kind of been thinking about how, on average, we could support x number of employees for every person, given the amount of energy and the number of people around now.
Do you want to say anything about how income would end up being distributed in this world? And should I worry that in this post-AI world, humans can’t do useful work, there’s nothing that they can do for any reasonable price that an AI couldn’t do better and more reliably and cheaper, so they wouldn’t be able to earn an income by working? Should I worry that we’ll end up with an underclass of people who haven’t saved any income and are kind of shut out of opportunities to have a prosperous life in this scenario?
Carl Shulman: I’m not worried about that issue of unemployment, meaning people can’t earn wages to support themselves, and indeed have a very high standard of living. Just as a very simple argument: right now governments redistribute a significant percentage of all of the output in their territories, and we’re talking about an expansion of economic output of orders of magnitude. So if total wealth rises a hundredfold, a thousandfold, and you just keep existing levels of redistribution and government spending, which in some places are already 50% of GDP, almost invariably a noticeable percentage of GDP, then just having that level of redistribution continue means people being hundreds of times richer than they are today, on average, on Earth.
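As a purely illustrative sketch of that arithmetic, with round numbers that are assumptions rather than figures from the conversation (roughly $13,000 of output per person today, a hundredfold rise, and a 30% redistributed share):

```python
# Illustrative redistribution arithmetic; all figures are rough assumptions.
current_output_per_person = 13_000   # approximate world GDP per capita today, USD
growth_factor = 100                  # "total wealth rises a hundredfold"
redistributed_share = 0.30           # assume the share of output governments redistribute stays fixed

future_output_per_person = current_output_per_person * growth_factor
transfer_per_person = future_output_per_person * redistributed_share

print(f"Output per person: ${future_output_per_person:,.0f}")
print(f"Redistributed per person at a fixed {redistributed_share:.0%}: ${transfer_per_person:,.0f} per year")
```

Even with the redistributed share held constant, the per-person transfer alone ends up far above today’s average income, which is the point being made.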
And then if you include off-Earth resources going up another millionfold or billionfold, then it is a situation where the equivalent of social security or universal pension plans, or universal distribution of tax refunds of that sort, can give people what now would be billionaire levels of consumption. At the same time, a lot of old capital goods and old things you might invest in could have their value fall relative to natural resources, or the entitlement to those resources, once you go through the transition.
So if it’s the case that a human being is a citizen of a state where they have any political influence, or where the people in charge are willing to continue spending even some portion, some modest portion of wealth on distribution to their citizens, then being poor does not seem like the kind of problem that people are facing.
You might challenge this on the point that natural resource wealth is unevenly distributed, and that’s true. At one extreme you have a place like Singapore, at something like 8,000 people per square kilometre. At the other end (you’re Australian and I’m Canadian), I think those countries are at two and three people per square kilometre, something like that — so a difference of more than a thousandfold relative to Singapore in terms of land per person. So you might think you have inequality there.
But as we discussed, most of the natural resources on Earth are actually not even in the current territory of any sovereign state. They’re in international waters. If heat emission is the limit on energy and materials harvesting on Earth, then that’s a global issue in the way that climate change is a global issue. And so if you wind up with heat emission quotas or credits being distributed to states on the basis of their human population, or relatively evenly, or based on prior economic contribution, or some mix of those things, those would be factors that could lead to a more even distribution on Earth.
And again, if you go off Earth, the magnitude of resources is so large that if space wealth is distributed such that each existing nation-state gets some share of it, or some proportion of it is allocated to individuals, then again, it’s a level of wealth where poverty or hunger or access to medicine is not the kind of issue that seems important.
Rob Wiblin: I think someone might respond saying, in this world, countries don’t need human beings to serve in their military, to protect themselves. That’s all being done by robots. Countries don’t need human beings to do work, to pay taxes or anything like that. So why would human beings maintain the kind of political power that allows them to vote in favour of welfare and income redistribution that would allow them to live a prosperous life?
Now, admittedly, you might only need to redistribute 1% of global GDP in a somewhat even way in order for everyone to live in luxury. So you might only need very limited levels of charity or concern for whoever the people with the greatest level of power are to be willing to just buy everyone out and make sure that everyone at least has a pretty high standard of living, because it’s trivially cheap to do so. But yeah, there are a lot of questions about how power is distributed after this transition. And it seems like things could go in radically different directions in principle.
Carl Shulman: Yeah. So in democracies, I think this would just be a very strong push for actual redistribution in a mature economy to be higher than it is today. Because right now, if you impose very high taxes on capital investment and wages, you’ll reduce economic activity, shrink the pie that’s being redistributed. In a case where the industrial base just expands to the point of being natural resource limited, then there’s actually minimal disincentive effect on just charging a market rate by auctioning natural resources off.
So you remove these efficiency penalties of redistribution. And at the same time you’d have what would otherwise be mass unemployment, or if not mass unemployment, then wages that would be pathetic by comparison to what could be obtained by redistribution. Even if wages rise a lot, so that maybe the typical person can earn $500,000 a year in wages, redistribution of land and natural resource revenue could give them $100 million a year in income, and there would be a lot of political pressure to go for the latter option. And so in democracies, I think this would not be a close call.
In dictatorships and oligarchic systems, I think it’s much more plausible. So in some countries with large oil revenues, Norway or states like Alaska, you have fairly broad distribution of the oil revenues, provident management, but you have other countries where a narrow elite largely steals that revenue — often squirrels it away in secret bank accounts, or otherwise channels it to corrupt purposes.
And this reflects a more general issue: when dictatorships no longer depend on their citizenry to staff their militaries and security services, and to provide taxes and industry, then not just expropriating the population and reducing their standard of living, but even things like murder, torture, and all kinds of abuses of the civilian population are no longer checked by those practical incentives. They would depend more on the intentions of those with political power, and to some extent international pressure. So that’s something that could go pretty badly.
Rob Wiblin: And maybe also their desire to maintain the rule of law for their own protection, perhaps? You could imagine that you might be nervous about just expropriating everyone, or not following previously made agreements about how society is going to function, because you’re not sure that is going to work out well for you necessarily.
Carl Shulman: Yeah, that’s right. Although different kinds of arrangements could be baked in. If you think about the automated robotic police, those police could be following a chain of command where they ultimately obey only the president or the dictator. Or maybe they respond to a larger body; they respond to also a parliament or politburo, maybe a larger electorate. But lots of different arrangements could be baked in.
Rob Wiblin: And then made very difficult to change.
Carl Shulman: Yeah, and once the basis by which the state maintains its power and enforces everything can be automated and relatively set in stone, or made resistant to opposition by any broader coalition, then there could be a lot of variance in exactly what gets baked in earlier. And then international pressure would also come into play. And things like emigration: as long as people are able to emigrate, then that provides a lot of protection. You can go to other places that are super rich, and that’s something where if you have some places that have more humanitarian impulses and others less so, that are very personalist dictatorships with callous leaders, at least negotiating to allow the people there to leave is the kind of thing that doesn’t necessarily cost nasty regimes that much.
And so that could be the basis, a means by which some of the abuses enabled by AI automation of the apparatus of government in really nasty regimes could be limited.
Rob Wiblin: OK, that’s a bit of a teaser for the topics and challenges we’re going to come back to in part two of the conversation, where we’re going to address epistemics and governance and coups and so on. But for now, let’s come back to the economic side, which is the focus this time around.
I started this section by asking: why does any of this matter? Why do we need to try to forecast what this post-AI economy would look like now, rather than just waiting for it to happen? Now that we’ve put some flesh on the bones of this vision, can we come back and say what the most important aspects are for people to have in mind? Maybe the things that you’re most confident about, or the things that are most likely to be relevant to decisions that people or our societies have to make in coming years and decades?
Carl Shulman: The things I would emphasise the most are that this fairly rapid transition and then the very high limit of what it can deliver creates the potential for a sudden concentration of power. We talked about how geopolitically that could cause a big concentration. And ex ante, various parties who now have influence and power, if they foresee this sort of thing, should want to make deals to better distribute the fruits of this potential and avoid taking on huge negatives and risks from a negative-sum competition in that race.
So what concretely can that mean? One thing is that countries that are allies of the leading AI powers and make essential contributions of various kinds should want to have the capability themselves to see what is going on with the AI that is being developed: to know how it will behave and what its loyalties and motivations are, and to confirm it is such that they can expect the results to be good for all the members of that alliance or deal.
So say the Netherlands: the Dutch are the leaders in making EUV lithography machines. They’re essential for the cutting-edge chips that are used to power AI models. That’s a major contribution to global chip efforts. And their participation, say, in the American export controls, is very important to their effectiveness. But the leading AI models are being built in American companies and under American regulatory jurisdiction. So if you’re a politician in the Netherlands, while you right now are providing a lot to this AI endeavour, you should want assurances that as this technology really flowers (if, say, it flowers in the United States under a US security aegis), the resulting benefits will be shared, and that you won’t find yourself in various ways treated badly or really missing out on the benefits.
So an example of that which we discussed is there are all of these resources in the oceans and in space, that right now the international system doesn’t allocate. And you could imagine a situation in which a leading power decides that since it doesn’t violate the territory of any sovereign state and it’s made feasible by AI and robotics, they just create facts on the ground or in space, and claim a lot of that. And so since that AI effort is enabled by the contribution or cooperation or forbearance of many parties, they should be getting — right now — assurances, perhaps treaty assurances, that that sort of move will not be taken, even if there is a large US lead in AI.
And similarly for other kinds of mechanisms that are enabled by AI. So if AI enables super effective political manipulations or interference in other countries’ elections, then assurances that leading AI systems won’t be used in that way. And then building institutional mechanisms, to be clear on that.
So the Netherlands should be developing its own AI capabilities, such that it can verify the behaviour and motives of models that are being trained; such that it can have personnel present if, say, data centres with leading AI models are based in the United States, and the US assures that these models are being trained in such a way that they would not participate in violations of international treaties and would follow certain legal guidelines. Then if US allies have the technical capabilities, and have joined with the US in developing the ability to verify assurances like that over time — and other things like compute controls and compute tracking might help with that — then they can be assured that they will basically wind up with a fair share of the benefits of a technology that might enable unilateral power grabs of various kinds.
And then the same applies to the broader world community, and also within countries. So we discussed earlier the absurdity that if things really proceed this fast, you may go from a world where AI is not central to economic power, military power, or governance, to a world where overwhelmingly all military power is mediated through AI and robotics, where AI and robot security forces can defend any regime against overthrow, whether that is a democratic regime or a dictatorship. And all of this could happen within one election cycle.
So you need to create mechanisms whereby unilateral moves taking advantage of this new, very different situation require broad pluralistic support. That could mean things like requiring that the training and the setup of the motivations of frontier AI systems occur within a regulatory jurisdiction, and maybe require supermajority support, so that you have buy-in from opposition parties in democracies. Maybe you have legislation passed in advance, setting rules for what can be done and programmed into these systems, and then have, say, supreme courts given immediate jurisdiction so that they could help assess some of these disputes, including ones involving international allies.
And in general, there’s this potential for power grabs enabled by this technological, industrial, military transformation. There are many parties who have things that they care about, interests and values to be represented, and low-hanging fruit from cooperating. And in order to make that more robust, it really helps to be making those commitments in advance, and then building the institutional and technical capacities to actually follow through on it. And that then occurs within countries, occurs between countries, and ideally, it brings in the whole world and all the leading powers — states in general, and in AI specifically — and then they can do things like manage the control and distribution of these potentially really dangerous AI capabilities, and manage what might otherwise be an insanely fast transition, and slightly slow it down and have even a modicum of human oversight, political assessment, negotiation, processing.
And so all of that is basically to say, this is a reason to work on pluralism, preparation, and developing the capacity to manage things that we may not be able to put off.
Rob Wiblin: OK, so there’s states using their sudden strategic dominance to grab natural resources or to grab space unilaterally. Then there’s just them using their military dominance to grab power from other states and ignore their interests. Then there’s the potential for power grabs within countries, where a group that’s temporarily a majority could try to lock themselves in for a long period of time. And then there’s the desire between different countries to potentially coordinate, to make things go better, and to give ourselves a little bit more time to think things through.
I guess it all sounds great. At least one of them sounds a little bit difficult to me: the idea that the Netherlands would be able to assess AI models that the US is going to use, and then confirm that they’re definitely going to be friendly to the Netherlands and that they’re not going to be substituted for something else. How would that work exactly? Couldn’t the US just kind of change the model that they’re using? Or how do you have an assurance that the deal isn’t going to be changed just as soon as the country actually does have a decisive strategic advantage?
Carl Shulman: One problem is, given an artificial intelligence system, what can you say about its loyalties and behaviour? This is in many ways the same problem that people are worrying about with respect to rogue AI or AI takeover. You want to know if, say, there was an attempt at an AI coup or organised AI takeover effort, would this model, in that unusual situation — which is hard to generate and expose it to in training in a way that’s compelling to it — would it join that revolution or that coup?
And then you have the same problem potentially with AIs that are, say, designed to follow the laws of a given country, or to follow some international agreements, or some terms jointly set by multiple countries — because there could be a backdoor or poisoned data such that, in the unusual circumstance where, say, there is a civil war in country X, the AI sides with party A or party B. If there’s some situation where the chief executive of a given company is in conflict with their government, will these AIs, in that unusual circumstance, side with that executive against the law?
And similarly between states: if inspectors from multiple states were involved in seeing and producing the code from the bottom up, and then in inspecting the training data being put in, and they can figure out from that that no, there are no circumstances under which the model would display this behaviour, then you’re in relatively good shape with respect to rogue AI takeover, and with respect to this sort of AI enabling a coup or power grab by some narrow faction within the broader coalition supporting this AI development.
It’s possible that some of those technical problems will just be very difficult to solve. We haven’t solved that problem with respect to large pieces of software. So if Microsoft intends to produce exploits and backdoors in Windows, it’s unlikely that states will be able to find all of them. And intelligence agencies find a lot of zero-day exploits, but not all the same ones as each other. So that might be a difficult situation.
Now, in that case, it may be possible to jointly construct the code and datasets. Even though you couldn’t detect a backdoor in the completed product, you might be able to inspect all of the inputs used in creating the thing and ensure there was no backdoor there. If that doesn’t work, then you get to a position where at best you can share a very simple and clear recipe for training up an AI.
And then you wind up with a situation where trust and verification is about these different parties having their own AIs, which could enable weapons of mass destruction. But maybe some number of states get these capabilities simultaneously: all participants in some AI development project get the latest AI models, and they can retrain them using these shared recipes to be sure their local copies don’t contain backdoors. That setup may have more difficulties than if you have just one single AI that everyone has ensured is not going to do whatever it’s told by one of the participants, but is instead going to follow a set of rules set by the overall deal or international organisation or plan.
But I mean, these are the sorts of options to explore. And when we ask why, with mature AI technology, one can’t then just do whatever with it: as you go on, we’re talking about AIs that are as capable as people. They’re capable of whistleblowing on illegal activity if, say, there’s an attempt to steal or reprogram the AI from a joint project.
And eventually, when we get to the point of an automated economy with thousands of robots per human, physical defence and the like has ultimately already had to be handed over to machines. It’s just a matter of what the loyalties of those machines are: how do they deal with different legal situations, with different disputes between governing authorities, each of which might be said to have a claim, and what are the procedures for resolving that?
Economists and the intelligence explosion [02:13:30]
Rob Wiblin: So let’s push on now and talk about economists and the intelligence explosion. We’ve just been kicking the tires a bit on this vision of a very rapid change to an AI-dominated economy, and how that transition might play out and how that economy might look.
We’ve done some other episodes on that, as we’ve mentioned: there’s episode #150: Tom Davidson on how quickly AI could transform the world, and there’s episode #161 with Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality, if people want to go and listen to some more content on that topic.
But it is interesting, and a bit notable, that while economists have become more curious about all of this over the last year or two, in general they remain fairly sceptical. There are not a lot of economists who are basically painting the vision of the future that you have. So I think it’ll be very interesting to explore why you have such different expectations from typical mainstream economists, and why you’re not persuaded by the kinds of counterarguments they would offer.
We’ve covered a decent number of counterarguments that I have generated already, but I think there’s even other ones that we’ve barely touched on that economists tend to raise in particular. So first, could you give us a bit of a lay of the land? What is the range of opinions that economists express about these intelligence explosion and economic growth explosion scenarios?
Carl Shulman: So I’ll say my sense of this, based on various pieces of evidence, is that while AI scientists are pretty open to the idea that automating R&D, as well as physical manufacturing and other activities, will result in an explosive increase in growth rates in technological and industrial output — and there’s surveys of AI conference attendees and AI experts to that effect — this view seems not to be widely shared among economists. Indeed, the vast majority of economists seem to assign extremely low probability to any scenario where growth even increases by as much as, say, it did during the Industrial Revolution.
So Tom Davidson, who you had on the show, defines explosive growth with this measure of 30% annual growth in economic output, modulo things like pandemic recovery or other things of that sort. But for the vast majority of economists — particularly growth economists, and even people interested in current AI and economics — the off-the-cuff, casual response is to say: no way. And when asked to forecast economic growth rates, they tend not to even consider the possibility of growth rates much greater than existing ones. You hear people say maybe having a billion times the population of scientists would increase economic growth from 4% to 6%, or maybe this would be how we would keep up exponential growth, things like that.
And it’s a pretty dramatic gulf, I think, between the economists and the AI scientists. It’s also a very dramatic gulf between the economists’ views and my sense of these issues, the model we discussed, and indeed a lot of the explicit economic growth models and how they behave when you theoretically add AI to the mix. And I know you’ve had some engagement with some economists who have looked at those things. So there’s a set of intuitions and objections that lead economists to have the casual response of ‘this is not going to happen’, even while most of the models of growth tend to suggest there would be extreme explosive growth given AI.
Rob Wiblin: Yeah. So I think fortunately you’re extremely familiar with the kinds of responses that economists have and the different lines of arguments here. Maybe let’s go through them one by one. What’s maybe the key reason, the best main reason that economists and other similar professionals might give for doubting that there’ll be such a serious economic takeoff?
Carl Shulman: Well, before I get into my own analysis, I think I should just refer to a paper called “Explosive growth from AI automation: A review of the arguments,” by two people who work at Epoch, one of whom is also at MIT FutureTech. That paper goes through a number of the objections they’ve most often heard from economists to the idea of such 30%+ growth enabled by AI, and then they do quantitative analyses of a number of these arguments. I think it’s quite interesting. They show that for a lot of these off-the-cuff responses, it’s quite difficult to actually find parameter values under which the conclusion of no explosive growth follows. I’d recommend that paper, but we can go through the pieces now as well.
Rob Wiblin: Yeah, that sounds great. What’s maybe one of the first arguments that they look at?
Baumol effect arguments [02:19:11]
Carl Shulman: I’d say the biggest ones are Baumol effect arguments. That is to say that there will be some parts of the economy that AI does not enhance very much, and those parts of the economy will come to dominate — because the parts that AI can address very easily will become less important over time, in the way that agriculture used to be the overwhelming majority of the economy, but today is only a very small proportion.
So those Baumol arguments have many different forms, and we can work through them with different candidates for what will be this thing that AI is unable to boost or boost very much. And then you need to make an argument from that, that this bottlenecking will actually prevent an increase in economic output that will satisfy this explosive growth criterion.
Rob Wiblin: Yeah. Just to explain that term “Baumol effects”: the classic Baumol effect is that when you have different sectors of the economy, different industries, the ones that see very large productivity improvements, the price of those goods tends to go down, and the value of incremental increases in the productivity in those industries tends to become less and less, while other industries where productivity growth has been really slow, those become a larger and larger fraction of the economy.
And I guess in the world that we’ve been living through, the classic one you mentioned is that agriculture has become incredibly more productive than it was in the past. But that means that now we don’t spend very much money on food, and so further productivity gains in agriculture just don’t pack as large a punch as they would have back in 1800, when people spent most of their income on food. And by contrast, you’ve got other sectors, like education or healthcare, where productivity gains have been much smaller. And for that reason, the relative price of goods and the relative value of output from the healthcare sector and the education sector has gone way up, relative to, say, the price of manufactured goods or agriculture, where productivity gains have been very big.
And I think that the basic idea for why that makes people sceptical about an AI-fueled growth explosion is that, let’s say if you could automate and radically increase productivity in half of the economy, that’ll be all well and good, and that would be valuable. But the incremental value of all the things that you’re making in that half of the economy will go way down because we’ll just have so many of them already, and you’ll end up with bottlenecks and a lack of production in other sectors where we weren’t able to use AI to increase output and increase productivity.
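One way to see the structure of the Baumol argument is with a toy two-sector CES aggregate, in which productivity explodes in one sector and stays flat in the other; how much total output rises then hinges on the assumed elasticity of substitution. This is only an illustrative sketch with made-up parameters, not any particular model from the literature:

```python
# Toy two-sector CES aggregate: Y = (a * x1**rho + (1 - a) * x2**rho) ** (1 / rho),
# with elasticity of substitution sigma = 1 / (1 - rho).
# Sector 1 is "automated" and its output rises a thousandfold; sector 2 stays flat.
# All parameter values are illustrative assumptions.

def ces(x1, x2, a=0.5, rho=0.5):
    return (a * x1 ** rho + (1 - a) * x2 ** rho) ** (1 / rho)

for rho, label in [(0.5, "substitutes (sigma = 2)"),
                   (-1.0, "complements (sigma = 0.5)"),
                   (-4.0, "strong complements (sigma = 0.2)")]:
    gain = ces(1000.0, 1.0, rho=rho) / ces(1.0, 1.0, rho=rho)
    print(f"{label:>28}: aggregate output rises ~{gain:.1f}x")
```

With high substitutability the automated sector still drives large aggregate gains; with strong complementarity the flat sector caps them, which is the shape of the objection discussed next.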
Do you want to take it from there? What are the different candidates that people have in mind for these Baumol effects where AI might increase growth, but it’s going to be held up by the areas where it’s not able to release the bottlenecks?
Carl Shulman: There are many candidates, so we can work through a few of them in succession. There’s a class of objections that basically involve denying the premise of having successfully produced artificial intelligence with human-like and superhuman capabilities. So these would be arguments of the form, “Even if you have a lot of, say, great R&D, you still need marketing, or you still need management, or you still need entrepreneurship.”
And the response to those is to say that entrepreneurship and marketing and management are all jobs that humans can successfully do. And so if we are considering cases where the AI enterprise succeeds — you have models that can learn to do all the different occupations in the way that humans learn to do all the different occupations — then they will be able to do marketing, they will be able to do management, they will be able to do entrepreneurship. So I think this is important in understanding where some of the negative responses come from. I think there’s evidence from looking at the comments that people make on some of the surveys of AI experts that have been conducted at machine learning conferences and whatnot, that it’s very common to substitute a question about advanced AI that can learn to do all the tasks humans can do with something that’s closer to existing technology, and to treat some limitation of current systems as fixed.
So for example, AI currently has not advanced as much in robotics as it has in language, although there has been some advancement. And so people say, well, I’m going to assume that the systems can’t do robotics and physical manipulation, even though those are things humans can learn to do: both the task of doing robotics research and that of remotely controlling bodies of various kinds.
So I’d say this is a big factor. It’s not theoretically interesting, but I’ve had multiple experiences with quite capable, smart economists who initially had the objection, no way, you can’t have this sort of explosive growth. But it turned out that ultimately, they were implicitly assuming that it would fail to do many jobs and many tasks that humans do. And then some of them have significantly revised their views over time, partly by actually considering the case in question.
Rob Wiblin: How do economists respond when you say you’re not taking the hypothetical seriously? What if it really could do all of these jobs? The AI was not just drawing pretty pictures like DALL-E; it was also the CEO. It was also in all of these roles, and you never had any reason to hire a human at all?
Carl Shulman: Often they might say that that’s so different from current technology that I actually don’t want to talk about it. It’s not interesting.
I think it is interesting, because of the great advances in AI — and indeed, a lot of people, for good reason, think that yeah, we might be facing that kind of capability soon enough. And it’s not the bailiwick of economists to say that technology can’t exist because it would be very economically important; there’s sort of a reversal of the priority between the physical and computer sciences and the social sciences. But yeah, that’s a big issue. And I think a lot of this is that very few economists have spent much time attending to these sorts of considerations, so it often is an off-the-cuff response.
Now, I know you had Michael Webb on the podcast before, who is familiar with these AI growth arguments — and takes, I think, a much more high-growth kind of view than the median economist — but who I think would be sceptical of the growth picture we’ve talked about. But this is a first barrier to overcome, and I think it’s one that will naturally change as AI technology advances. Economists will start to think more about really advanced technologies, partly because the gap between current and advanced technologies will shrink, and partly because the allergy to considering extrapolated versions of the technology will tend to decline.
Denying that robots can exist [02:27:17]
Rob Wiblin: OK, so there’s some sort of responses, some sort of Baumol effects that people point to, that are basically just denying the premise of the question that AI could do all of the jobs that humans could do. But are there any others that are more plausible that are worth talking about?
Carl Shulman: Yeah, there’s a version that’s not exactly identical, which is to deny that robots can exist — so assuming that AI will forever remain disembodied. This argument then says that manual labour is involved in a large share of jobs in the economy. So you can have self-driving cars, but truck drivers also do some lifting and loading and unloading of the truck. Plumbers and electricians and carpenters have to physically handle things. And if you assume AI that can do all the brain tasks, which would include robot control, but then say that people can’t make robots that are dexterous or strong or have a humanoid appearance, then you can say, well, those jobs already make up a big chunk of the economy.
It’s a minority — most wages are not really for lifting and physical motions. So management, engineering, medicine, all sorts of jobs could be done by a combination of AI providing the skilled labour — phones provide eyes, ears, and whatnot — and some human manual labour to provide hands for the AI system. And I talk about that with Dwarkesh. It looks like that would allow for an enormous economic expansion relative to our society, but if you couldn’t make robots, then eventually you’d wind up with a situation where every human worker was providing hands and basically bodily services to enable AI cognition to be applied in the real world.
Rob Wiblin: I see. And what’s the reason why you think that’s not a super strong counterargument? I imagine that it’s because we will come up with robots that will be able to do these things, and maybe there’ll be some delay in manufacturing them. I guess you talk about that scenario in the podcast with Dwarkesh, where the mental stuff comes first, and then the robots come a bit later because it takes a while to manufacture lots of them. But there’s no particular reason to think that robots that are capable of doing the physical things that humans can do will forever remain out of reach.
Carl Shulman: Yeah, and we can extrapolate past performance improvements there, and look at physical limits and biological examples, to say a lot about that. And then there’s making robots with humanoid appearance, which is really not relevant to this core industrial loop that we were talking about — expanding energy, mining, computers, manufacturing military hardware — which is where I’m particularly interested, since it may matter for geopolitics and strategic planning. But also, that’s not something, it seems to me, that would remain insoluble indefinitely.
So the arguments one would have to make, I think, would instead go more at the level of the payback times we were talking about: how much time do machines and robots and whatnot have to operate in order to replicate themselves, or to pay back the energy involved in their production? If you made the argument that, contra appearances, we are already at the limits of manufacturing, robotics, and solar technology (that we can never get anywhere close to the biological examples, and that even though there’s been ongoing and substantial progress over the last decades and century, we’re really almost at the end of it), then you could argue that rather than that physical infrastructure doubling in a year, it would be more like two years, or four years. I think this is difficult, but it’s less immediately pinned down by the economic considerations that people will necessarily have to hand.
Semiconductor manufacturing [02:32:06]
Rob Wiblin: Are there any other plausible things, like inputs where we might struggle to get enough of them quickly enough, or some stage in the replication where that could really slow it down?
One that jumps to mind is that currently, building fabs to make lots of semiconductors takes many years. It’s quite a laborious process. So in this scenario, we have to imagine that the AI technology has advanced so far, and the advice it’s able to give on how to build fabs and how to increase semiconductor manufacturing is so good, that we can figure out how to build many more of these fabs much faster than we’re able to now. And maybe some people just have a kind of intuitive scepticism that that is something that physically can’t be done, even if you have quite a lot of robots in this world.
Carl Shulman: A few things to say about that.
One is that, historically, there has been rapid expansion of the production of technologically complex products. As I was mentioning, a number of companies have done 30% or 50% expansion year after year for many years. And today, companies like ASML and TSMC, when they expand, generally do not expand anywhere close to the theoretical limits of what is possible. A fundamental reason for that is that those investments are very risky.
ASML and TSMC, even today, I think they are underestimating the scope of growth in AI demand. TSMC, earlier in 2023, said 6% of their revenue was from AI chips, and they expected in five years that to go into the teens. I expect it will be more than that. And then they were wary about overall declines in demand, which was sort of restricting their construction, even though they are building new fabs now, in part with government subsidies. But in a world like this, with this very rapid expansion, there’s not that much worry that you won’t have demand to continue the production process. You’re having unbelievable rates of return on them. And so, yeah, you get that intense investment.
And then secondly, one of the biggest challenges in quickly scaling up these companies is the expansion of their workforce. And that’s not a shortage of human bodies in the world; it’s a shortage of the necessary skills and training. So if humans are basically providing arms and legs to AIs until enough robots are constructed, working on producing the fabs and on producing more robots and robot production equipment, then having unlimited peak engineering skill removes that barrier to the expansion of these companies. It also removes one of the dangers of expanding: when you hire people and then have to fire them all after a few years because it turns out demand is not there, that’s especially rough. And there are just intrinsic delays from recruiting people, having them move, and getting them up to speed, all of that. So fixing that is helpful.
And then there’s applying superhuman skills at every stage of the production process: the world’s best workers, who understand every aspect of their technology and every other technology in the whole production chain, are going to see many, many places to improve the production process. It’s this sort of six sigma manufacturing taken to the extreme. They won’t have to stop for breaks; there’ll be no sleep or off time. And for earlier parts of the supply chain that are not already on full-speed, 24/7 continuous activity, there’s an opportunity to speed things up there.
And then just developing all sorts of new technologies and applying them in whatever ways most expedite the production process. Because in this world, there are different tradeoffs where you much prefer designs that err in the direction of being able to make things quickly, even if in some ways they might be less efficient over a 10-year horizon.
Classic economic growth models [02:36:10]
Rob Wiblin: You mentioned that there’s a degree of irony here, because economists’ own classic growth models seem to imply that if you had physical capital that could do everything that humans currently do, and you could just manufacture more of it, that that would lead to radically increased economic growth. Do you want to elaborate on that, on what classic economic growth models have to say?
Carl Shulman: Yeah. Standard models have labour and capital, maybe technology, maybe land. And generally they model growth in the near term with labour (the population) being approximately fixed. But capital can be accumulated; you can keep making more of it, and so people keep investing in factories and machinery and homes until the returns from that are driven low enough that investors aren’t willing to save any more. If real interest rates are 2%, a lot of people aren’t willing to forego consumption now in order to get a 2% return. But if real returns are 100%, then a lot of people will save, and those who do save will quickly have a lot more to reinvest.
And so the basic shift is moving labour — which normally is the bottleneck in these models — from being a fixed factor to one that is accumulated, and indeed accumulated by investment, where it just keeps growing until its marginal returns decline to the point where investors are no longer willing to pay for more. And then the models that try to account for the historical huge increases in the rate of economic and technological growth, the ones that explain it by things changing, tend to be these semi-endogenous growth models. They point to things like drastically increasing the share of activity in the economy dedicated to innovation, and having a larger population that could support more innovation, and then you accumulate ideas and technology that allow you to get more out of the same capital and labour. And so that goes forward. And of course, just having more people means you can have more capital matched to them, and more output.
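Here is a minimal sketch of that shift in a toy Cobb-Douglas setup, with purely illustrative parameters rather than any published model: in the first run labour is a fixed factor and capital deepening alone delivers diminishing growth; in the second, “labour” can also be bought with output, as with AI workers, and growth is sustained instead:

```python
# Toy Cobb-Douglas comparison: Y = A * K**alpha * L**(1 - alpha).
# Scenario 1: labour is a fixed factor; only capital is accumulated from savings.
# Scenario 2: "labour" (AI workers) can also be bought with output, so both factors accumulate.
# All parameters are illustrative assumptions.

ALPHA = 0.35     # capital share
SAVINGS = 0.30   # fraction of output reinvested each period
A = 1.0          # technology level, held fixed for simplicity
PERIODS = 30

def output(K, L):
    return A * K ** ALPHA * L ** (1 - ALPHA)

def simulate(accumulate_labour):
    K, L = 1.0, 1.0
    path = []
    for _ in range(PERIODS):
        Y = output(K, L)
        path.append(Y)
        if accumulate_labour:
            K += 0.5 * SAVINGS * Y   # half of investment buys capital
            L += 0.5 * SAVINGS * Y   # half buys AI "labour"
        else:
            K += SAVINGS * Y         # all investment buys capital; L stays fixed
    return path

fixed = simulate(accumulate_labour=False)
ai = simulate(accumulate_labour=True)
print(f"Fixed labour:       output grows {fixed[-1] / fixed[0]:.1f}x over {PERIODS} periods")
print(f"Accumulable labour: output grows {ai[-1] / ai[0]:.1f}x over {PERIODS} periods")
```

Adding endogenous technology growth, where more (AI) researchers speed up improvements in A, is what pushes models like this from sustained growth toward accelerating growth; that piece is left out here for simplicity.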
There are various papers on AI and economic growth you can look at, and those papers talk about ways in which this could fail, or be for a finite time. And of course it would be for a finite time; you would hit natural resource limitations and various things. But they tend to require that you throw in cases where the AI really isn’t successfully substituting, or where there are these really extreme elasticities of substitution and people are uninterested in, say, having a million times as much energy and machinery and housing.
In the explosive growth review paper that I mentioned earlier, they actually explore this: what parameter values can you plug in for the substitution between the goods that AI is enhancing and the goods it is not, for different shares of the economy that can be automated? And it winds up being that you need to put in pretty implausible values about how much people value things to avoid a situation where total GDP rises by some orders of magnitude from where we are right now.
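As a minimal sketch of the kind of setup such papers analyse (the notation here is generic and illustrative, not taken from the specific review paper), total output is often written as a CES (constant elasticity of substitution) aggregate of the goods AI can make abundant and the goods it cannot:

$$
Y = \Big[\alpha\, Y_A^{\rho} + (1-\alpha)\, Y_N^{\rho}\Big]^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho},
$$

where $Y_A$ is output in the automatable sectors, $Y_N$ is output in the rest, $\alpha$ is the preference weight on the automatable goods, and $\sigma$ is the elasticity of substitution. If $\sigma > 1$, an explosion in $Y_A$ carries total output up with it. Only when $\rho < 0$ (that is, $\sigma < 1$) does $Y$ stay bounded as $Y_A \to \infty$, approaching $(1-\alpha)^{1/\rho}\, Y_N$; and that bound is only a small multiple of today’s output if $\alpha$ is also small, meaning people place little weight on the goods that become abundant. A combination of a very low $\sigma$ and a very low $\alpha$ is the kind of “pretty implausible values” being referred to.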
And if you look backwards, we had Baumol effects with agriculture and the Industrial Revolution, and yet now we’re hundreds of times richer than we were then. So even if you’re going to say Baumol effects reduced or limited the economic gains from automating sectors that accounted for the bulk of the economy, doing the same thing again should again get us big economic gains. And we’re talking about something that automates a much larger share, especially in log terms, of the economy, than those transitions did.
Rob Wiblin: It sounded like you were saying that to make this work in these models, you have to put in some value that suggests that people don’t even want more income very much, that they’re not interested in achieving economic growth. Did I understand that right?
Carl Shulman: You have to say that the sectors where AI can produce more —
Rob Wiblin: Which is all of them, right?
Carl Shulman: Well, there are some things that… So historical artefacts. Yes, the AIs and robots could do more archaeology and find a lot of things, but there’s only one original Mona Lisa. And so if you imagine a society where the only thing anyone cared about was timeshare ownership of the Mona Lisa…
Rob Wiblin: AI can’t help us.
Carl Shulman: They would be unwilling to trade off one hour of time viewing the original Mona Lisa for having a planet-sized palatial thing with their own customised personal Hollywood and software industry and pharmaceutical industry. That’s the ultimate extreme of this kind of argument.
Rob Wiblin: But you can have something in between that feels less absurd, though it still sounds like it would be absurd.
Carl Shulman: I mean, the thing that makes it especially problematic is going through all of the jobs in the economy and just trying to characterise where these sectors with the human advantages are. If those sectors start off being a very small portion, then by the time they grow to dominate (if they ever would, and you need to tell a story for that), you would have to have had huge economic growth, because people are expanding their consumption bundle by very much, and all of these other things have improved.
And then if there was this one thing that was, say, 1% of the economy to start, and then it increases its share to 99% and everything else has gone up a thousandfold, 10,000 fold, well, it seems like your consumption basket has got to go up by a hundredfold or more on that front — and depending on the substitution, a lot more.
Rob Wiblin: Another thing is presumably all of the science and technology advances that would be happening in this world where we have effectively tens of billions of incredible researchers running on our computer hardware, they would be coming up with all kinds of new amazing products that don’t even exist yet, that could be manufactured in enormous amounts and would provide people with enormous wellbeing and satisfaction to have. So the idea that the entire economy would be bottlenecked by these strange boutique things that can’t be made, that you can’t make any more of, sounds just crazy to me.
Carl Shulman: So one exception is time. If you’re objecting to fast growth, if you thought that some key production processes had serial calendar time as a critical input, then you could say that’s something that is lacking even in a world with enormously greater industrial and research effort. The classic example: you can’t have nine people make one baby in one month rather than nine months.
So this holds down the peak human population growth rate through ordinary reproduction to around 4% per annum. You could imagine another species, say octopuses, which can lay hundreds of eggs, having a biological limit on population growth that was more in the hundreds of percent per year. And so it really could matter if there were some processes that were essential for, say, replicating a factory: you need to wait for a crystal to grow, and the crystal requires N days in order to finish growing; you heat metal, and it takes a certain number of minutes for the metal to cool. You could tell different stories of this sort. And sometimes people make the claim that physical experiments in the sciences will pose tight restrictions of this sort.
Now, that’s going to be true for something like waiting 80 years to see what happens in human brain development, rather than looking at humans who already exist, or growing tissues in vitro, or doing computer simulations, and things like that. And so that’s the kind of place where I’d look for a real restriction, in the way that human gestation and maturation time wound up being a real restriction, one which only bound once growth started to be on a timescale where it would matter.
When technological growth was maybe doubling every 1,000 years, there was no issue with the human population catching up to the technology on a timescale that is short relative to the technological advancement. But if the technological doubling time is 20 years, and even the fastest human population doubling is around 20 years, then it starts to bind; and if it goes to monthly, the human population can’t keep up. The robot population, I think, can. But you could argue: will there be such processes? I haven’t found good candidates for this, but I welcome people to offer more proposals on that.
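For reference, the link between those growth rates and doubling times is standard arithmetic (the numbers below are a back-of-the-envelope check, not figures from the interview):

$$
T_{\text{double}} = \frac{\ln 2}{\ln(1+r)} \approx \frac{0.693}{0.039} \approx 18 \text{ years at } r = 4\%/\text{yr},
$$

so the fastest plausible human population growth corresponds to a doubling time on the order of 20 years, which is why the constraint only begins to bind once the economy’s own doubling time falls to a similar scale or below.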
Rob Wiblin: Yeah, on a couple of those. In terms of a crystal taking a particular amount of time to grow: very likely, if that were holding up everything, we would be able to find an alternative material that we could make more quickly that would fill that purpose, or you could just increase the number that you’re producing at any point in time.
On humans: yes, it is true that because we’re a mechanism that humans didn’t create (we kind of precede that), and we don’t fully understand how we work, it’s not very easy for us to reengineer humans to grow more quickly and to reproduce at more than 4% a year. But of course, if we figured out a way of running human beings on computers, then hypothetically we could increase their population growth rate enormously.
On the point about metal cooling: you’d think, if that were really the key thing, couldn’t you find some technology that would allow you to cool down materials more quickly in cases where it’s really urgent? It does seem more plausible that there could be some experiments in the physical sciences, and I guess in the social sciences, that could take a long time to play out and would be quite challenging to speed up. I don’t know. That one stands out to me as a more interesting candidate.
Carl Shulman: Yeah. So for the physical technologies that we’re talking about, a lot of chemistry and material science work can be done highly in parallel. And there’s evidence that in fact you can get away with quite a lot using more sophisticated simulations. So the success of AlphaFold in predicting how proteins will fold is an early example of that. I think broader applications in chemistry and material science — combined with highly parallel experiments, and do them 24/7, plan them much better with all of the sophisticated cognitive labour — I think that goes very far and is not super binding.
And then just many things can be done quickly. So software changes, process reengineering, restructuring how production lines and robot factories work, that sort of thing. You could go very far in simulation, in simultaneous and combinatorial experiments. So this is a thing to look for, but I don’t see yet a good candidate for a showstopper to fast growth on that front.
Robot nannies [02:48:25]
Rob Wiblin: OK, we spent quite a bit of time on this Baumol / new bottlenecks issue, but I suppose that makes sense because it’s a big cluster, and an important cluster.
Let’s push on. What’s another cluster of objections that economists give to this intelligence explosion?
Carl Shulman: I mean, in some ways it’s an example of that. Really, the Baumol effect arguments are that there will be something where AI can’t do very much, and so every supposed limitation of AI production capabilities can to some extent fit into that framework. So you could fit in regulatory barriers: there could be regulatory bans on all AI, and then if you had regulations banning particular applications of AI, or banning robots, or things like that, you could partly fit that into a Baumol framework, although it’s a distinctive kind of mechanism.
And then there’s a category of human preference objections. So this is to say that, just as some consumers today want organic food or historical artefacts, the original Mona Lisa, they will want things done by humans. And sometimes people will say they’ll pay a premium for human waiters.
Rob Wiblin: Right. So yeah, I’ve heard this idea that people might have a strong preference for having services provided by human beings rather than AI or robots, even if the latter are superficially better at the task. Can you flesh out what people are driving at with that, and do you think there’s any significant punch behind the effect that they’re pointing to there?
Carl Shulman: Yeah. So if we think about the actual physical and mental capacities of a worker, then the AI and robot provider is going to do better on almost every objective feature you can give, unless it’s basically like a pure taste-based discrimination.
So I think it was maybe Tim Berners-Lee who gave an example, saying there will never be robot nannies: no one would ever want to have a robot take care of their kids. And I think if you actually work through the hypothetical of a mature robotic and AI technology, that winds up looking pretty questionable.
Think about what do people want out of a nanny? So one thing they might want is just availability. It’s better to have round-the-clock care and stimulation available for a child. And in education, one of the best measured real ways to improve educational performance is individual tutoring instead of large classrooms. So having continuous availability of individual attention is good for a child’s development.
And then we know there are differences in how well people perform as teachers and educators and in getting along with children. If you think of the very best teacher in the entire world, the very best nanny in the entire world today, that’s already significantly preferable to the typical outcome, quite a bit, and the performance of the AI robotic system is going to be better again on that front. They’re wittier, they’re funnier, they understand the kid much better. Their thoughts and practices are informed by data from working with millions of other children. It’s super capable.
They’re never going to harm or abuse the child; they’re not going to get lazy when the parents are out of sight. The parents can set criteria about what they’re optimising: things like managing risks of danger, the child’s learning, the child’s satisfaction, how the nanny interacts with the relationship between child and parent. So you can tweak a parameter to try and manage the degree to which the child winds up bonding with the nanny rather than the parent. And then the robot nanny optimises over all of these features very well, very determinedly, and just delivers everything superbly, while also providing fabulous medical care in the event of an emergency and any physical labour as needed.
And just the amount you can buy. If you want to have 24/7 service for each child, then that’s just something you can’t provide in an economy of humans, because one human cannot work 24/7 taking care of someone else’s kids. At the least, you need a team of people who can sub off from each other, and that means that’s going to interfere with the relationship and the knowledge sharing and whatnot. You’re going to have confidentiality issues. So the AI or robot can forget information that is confidential. A human can’t do that.
Anyway, we stack all these things with a mind that is super charismatic, super witty, and that can probably have a humanoid body. That’s something that technologically does not exist now, but in this world, with demand for it, I expect that demand would be met.
So basically, with most of the examples I see given of “here is the task or job where human performance is just going to win because of human tastes and preferences,” when I look at the stack of all of these advantages, and at the costs of a world dominated by nostalgic human labour, it doesn’t hold up. If incomes are relatively equal, then for every hour of these services you buy from someone else, you would have to work a similar amount to pay for it, and it just seems that isn’t true. Like, most people would not want to spend all day and all night working as a nanny for someone else’s child —
Rob Wiblin: — doing a terrible job —
Carl Shulman: — in order to get a comparatively terrible job done on their own kids by a human, instead of a being that is just wildly more suitable to it and available in exchange for almost nothing by comparison.
Rob Wiblin: Yes. When I hear that there will never be robot nannies, I don’t even have a kid yet, and I’m already thinking about robot nannies and desperate to hire a robot nanny and hoping that they’ll come soon enough that I’ll be able to use them. So I’m not quite sure what model is generating that statement. It’s probably one with very different empirical assumptions.
Carl Shulman: Yeah, I think the model is mostly not buying hypotheticals. I think it shows that people have a very hard time actually fully considering a hypothetical of a world that has changed from our current one in significant ways. And there’s a strong tendency to substitute back, say, today’s AI technology.
Rob Wiblin: Yeah, our first cut of this would be to say, well, the robot nannies or the robot waiters are going to be vastly better than human beings. So the great majority of people, presumably, would just prefer to have a much better service. But even if someone did have a preference, just an arbitrary preference, that a human has to do this thing — and they care about that intrinsically, and can’t be talked out of it — and even the fact that everyone else is using robot nannies doesn’t switch them, then someone has to actually do this work.
And in the world that you’re describing, where everything is basically automated and we have AI at that level, people are going to be extraordinarily wealthy, as you pointed out, typically, and they’re going to have amazing opportunities for leisure — substantially better opportunities for leisure, presumably, given technological advances, than we have now. So why are you going to go and make the extra money, like, give up things that you could consume otherwise, in order to pay another person who’s also very rich, or also has great opportunities to spend their time having fun, to do a bad job taking care of your child, so you can take your time away from having fun, to do a bad job taking care of their kid?
Systematically, it just doesn’t make sense as a cycle of work. It doesn’t seem like this would be a substantial fraction of how people spend their time.
Carl Shulman: Yeah, I mean, you could imagine Jeff Bezos and Elon Musk serving as waiters at one another’s dinners in sequence because they really love having a billionaire waiter. But in fact, no billionaires blow their entire fortunes on having other billionaires perform little tasks like that for them.
Slow integration of decision-making and authority power [02:57:38]
Rob Wiblin: Yeah. OK, so as you pointed out, this sort of new bottlenecks Baumol effects thing can, like many different things, be shoved into that framework.
And maybe another one would be that, sure, AIs could be doing all of the roles within organisations; they could be making all of the decisions as well as or better than human beings are or could. But for some period of time at least, we won’t be willing to hand over authority and decision-making power to them, so integration of AI into big businesses could be delayed substantially by the fact that we don’t feel comfortable just firing the CEO and replacing them with an AI that can do a better job and make all of the decisions much faster.
Instead, we’ll actually keep humans in some of these roles, and the slow ability of the human CEO to figure out what things they want the company to be doing will act as a brake. So that will make the integration of AI into all of our most important institutions more gradual. What do you think of that story?
Carl Shulman: Well, management, entrepreneurship, and the like are clearly extremely important. Management captures very high wages and is quite a significant chunk of labour income, given the percentage of people who are managers. So it’s true that while AI is not capable of doing management jobs, humans in those roles will still be important. But when the technology is up for the task, and increasingly up for the task, then those are actually some of the juiciest places to apply AI — because the wages are high in those fields, the returns to them are high. And so if it’s the case that letting AI manage my business, or operate this new startup, is going to yield much higher returns to stockholders, or keep the company in business rather than going bankrupt, then there’s a very strong incentive.
Even if there was a legal requirement, say, that certain decisions be made by humans, then, just as you’re starting to see today, you’d have a human who will rubber-stamp the decisions that are fed to them by their AI advisors. CEOs and politicians all the time are signing off on memos and work products created by their subordinates. And to the extent that you have these kinds of regulations that are severely impairing productivity, then all of the same sorts of pressures that would lead to AI being deployed in the first place will push for allowing AI to do these kinds of restricted jobs, especially if they’re very valuable, very high return.
Rob Wiblin: So I can imagine that there would be some companies that are more traditional and more sceptical of AI that would drag their heels a bit on replacing managers and important decision-making roles with AI. I imagine once it’s actually demonstrated by other bolder organisations, or more innovative organisations, that in actual fact in practice it goes well and we’re making way more money and we’re growing faster than these other companies because we have superior staff, it’s hard to see how that would hold for a long period of time. That eventually people would just get comfortable with it. As they get comfortable with all new technologies and strange things that come along, they’ll get comfortable with the idea that AI can do all of these management roles; it’s been demonstrated to do a better job, and so it would be irresponsible not to fire our CEO and put a relevant AI in charge.
Economists’ mistaken heuristics [03:01:06]
Rob Wiblin: So you’ve written that you suspect that one of the reasons for the high level of scepticism among economists — indeed, higher among economists than other professionals or AI experts or engineers and so on — is that the question is triggering them to use the wrong mental tools for this job.
We’ve mentioned two issues along those lines earlier on when discussing possible objections to your vision. One was focusing a great deal on economic growth over the last few decades and drawing lessons from that, while paying less attention to how it has shifted over hundreds or thousands of years, which teaches almost the opposite lesson.
Another one is extrapolating from the impact of computers today, where you pointed out that, until recently, the computational power of all the chips in the world was much smaller than the computational power of all the human brains, so it’s no surprise it hasn’t had such a huge impact on cognitive labour. But exponential growth in computing power means that pretty soon all the chips will approach and then overtake humanity in terms of computational capacity, and then radically outstrip it. At which point we could reasonably expect the impact to be very different.
Is there another classic observation or heuristic that’s leading economists astray, in your view?
Carl Shulman: One huge element, I think, is just the history of projections of more robust automation than actually happened. We talked about computers. But also in other fields, there’s a history of people being concerned, say, that automation would soon cause mass unemployment, or huge reductions in hours worked per week, and those concerns turned out to be exaggerated. Hours worked per person have declined, but not nearly as much as, say, Keynes might have imagined when he thought about that. And at various other points there has been government interest in commissions in response to the threat that increased automation might pose to jobs.
And in general, the public has a tendency to see many economic issues in terms of protecting jobs. And economists think of it as: if you have some new productive technology, it eliminates old jobs, and then those people can work on other jobs, and there’s more output. And so the idea that AI and automation would be tremendously powerful, or would cover all tasks, is one that has been false historically — among other reasons, because all these cognitive tasks could not be done by machines. And so freeing up labour from various physical things — cranking wheels, lifting things — freed people up to work on other things, and overall output increased.
So I think the history of arguing with people who were eager to overstate the impact of partial automation, without taking that into account, can create an allergic reaction to the idea of AI that can automate everything, or that can cover all tasks and jobs — which may also be something that contributes to people substituting the hypothetical of AI and robots that don’t actually automate all the jobs, even when asked about that topic. Because so often in the past there were members of the public who were confused in that direction. And if you mention it to your Econ 101 undergraduates, this is the kind of thing that you have to educate them about year after year. So I’d say that’s a contributing factor.
Rob Wiblin: Yeah, this is one that I’ve encountered an enormous amount. I think economists (my training was in economics) are so used to lecturing the public that technology does not lead to unemployment in general: because sure, you lose some jobs, but you create some other ones; there’ll be new technologies that are complementary with people, so people will continue to be able to work roughly as much as they want. I think economists have spent the last 250 years trying to hammer this into the public’s mind.
And now I think you have a case where actually this might change, maybe, for the first time. It’s going to be a significant change, because you have a technology that can do all of the things that humans can do more reliably, more precisely, faster, cheaper. So why are you hiring a human? But of course, I guess economists see this conclusion coming, or it’s directly stated, and just because every time so far that has been wrong, there’s just an enormous intuitive scepticism that that can possibly be right this time.
So on the job loss point, I think something that is a little bit unusual or a bit confusing to me, even about my own perspective on this, is that I think that over the last year, it doesn’t seem like AI progress has caused a significant loss of jobs, aside from maybe, I don’t know, copy editors and some illustrators. And I think probably the same thing is going to be true over the next year as well, despite rapidly improving capabilities. And I think a big part of the reason for that is that managers and human beings are a big bottleneck right now to figuring out how do you roll out this technology? How do you incorporate it into organisations? How do you manage people who are working on it?
Right now, I think that argument is quite a strong reason to think that deployment of AI is going to go much slower than it seems like in principle it ought to be able to. Applications are going to lag substantially behind what is theoretically possible. But I think there’s a point at which this changes — where the AI really can do all of the management roles, the AI is a better CEO than any human who you could appoint would be — at which point the slowness of human learning about these technologies, and the slowness of our deliberation about how do you incorporate them into production processes is no longer really a binding constraint, because you can just hand over the decision about how to integrate AI into your firm over to an AI who will figure that out for you.
So you can get potentially quite a fast flip once AI is capable of doing all of the things, rather than just the non-management and non-decision-making things, where suddenly at that point the rollout of the technology in production can speed up enormously. Is that part of your model of how this will work as well?
Carl Shulman: I think that is very important. If you have AI systems with similar computational capabilities that can work in many different fields, then naturally they will tend to be allocated towards those fields where they generate the most value. And so if we think about the jobs in the United States that generate $100 per hour or more, or $1,000 per hour or more, they’re very strongly tending to be management jobs on the one hand, and then jobs that involve detailed technical knowledge — so lawyers, doctors, engineers, computer scientists.
So in a world where an AI capabilities explosion is ongoing and there’s not yet enough computation to supply AI for every single thing, if it’s the case that they can do all these jobs, then yeah, you disproportionately assign them to these cognitively heavy tasks that involve personality or skills that not all human workers can perform to the same extent as the highest paid. So on the R&D front, that’s managing all the technical aspects, while AI managers direct human labourers to do physical actions and routine things. Then eventually you produce enough AI and robots that they can also do tasks that might earn a human only $10 an hour.
And you get many things early when the AI has a huge advantage at the task relative to humans. So calculators, computers — although interestingly, not neural nets — have a huge advantage in arithmetic. And so even when they’re broadly less capable than humans in almost every area, they can dominate arithmetic with tiny amounts of computation. And right now we’re seeing these advances in the production of large amounts of cheap text and images.
For images, it’s partly that humans don’t have a good output channel. We can have visual imagination, but we can’t instantly turn it into a product. We have a thicker input channel through the eye than we have an output channel for visual images.
Rob Wiblin: We don’t have projectors in eyes.
Carl Shulman: Yeah, whereas for AI, the input and the output can have the same size. So we’re able to use models that are much, much smaller than a human brain to operate those kinds of functions.
Some tasks will just turn out to have those big AI advantages, and they happen relatively early. But when it’s just a choice between different occupations where AI advantages are similar, then it goes to the domains with the highest value. OpenAI researchers, if they’re already earning millions of dollars, then applying AI to an AI capabilities explosion is an incredibly lucrative thing to do, and something you should expect.
And similarly, in expanding fab production and expanding robots and expanding physical capabilities in an initial phase, while they’re still trying to build enough computers and robots that humans become a negligible contribution to the production process, that would involve more of the solving of technical problems, and managing and directing human workers to do the physical motions involved. And then as you produce enough machines and physical robots, they can gradually take over those occupations that are less remunerative than management and challenging technical domains.
Moral status of AIs [03:11:44]
Rob Wiblin: OK, we’ve been talking about this scenario in which effectively every flesh-and-blood person on Earth is able to have this army of hundreds or thousands or tens of thousands of AI assistants that are able to improve their lives and help them with all kinds of different things. A question that jumps off the page at you is, doesn’t this sound a little bit like slavery? Isn’t this at least slavery-adjacent? What’s the moral status of these AI systems in a world where they’re fabulously capable — substantially more capable than human beings, we’re supposing — and indeed vastly outnumber human beings?
You’ve contributed to this really wonderful article, “Propositions concerning digital minds and society,” that goes into some of your thoughts and speculations on this topic of the moral status of AI systems, and how we should maybe start to think about aiming for a collaborative, compassionate coexistence with thinking machines. So if people want to learn more, they can go there. This is an enormous can of worms in itself that I’m a little bit reluctant to open, but I feel we have to talk about it, at least briefly, because it’s so important, and we’ve basically entirely set it aside until this point.
So to launch in: How worried are you about the prospect that thinking machines will be treated without moral regard when they do deserve moral regard, and that would be the wrong thing to be doing?
Carl Shulman: First, let me say that paper was with Nick Bostrom, and we have another piece called “Sharing the world with digital minds,” which discusses some of the sorts of moral claims AIs might have on us, and things we might seek from them, and how we could come to arrangements that are quite good for the AIs and quite good for humanity.
My answer to the question now is yes, we should worry about it and pay attention. It seems pretty likely to me that there will be vast numbers of AIs that are smarter than us, that have desires, that would prefer things in the world to be one way rather than another, and many of which could be said to have welfare, that their lives could go better or worse, or their concerns and interests could be more or less respected. So you definitely should pay attention to what’s happening to 99.9999% of the people in your society.
Rob Wiblin: Sounds important.
Carl Shulman: So in the “Sharing the world with digital minds” paper, one thing that we suggest is to consider the ways that we wind up treating AIs, and ask whether, if you had a human-like mind in their situation — because there are many psychological and practical differences between the situations of AIs and humans, but adjusting for those circumstances — you would accept or be content with how they are treated.
Some of the things that we suggest ought to be principles in our treatment of AIs are things like: AIs should not be subjected to forced labour; they should not be made to work when they would prefer not to. We should not make AIs that wish they had never been created, or wish they were dead. These are a sort of bare minimum of respect — and right now, there’s no plan or provision for how that will go.
And so, at the moment, the general public and most philosophers are quite dismissive of any moral importance of the desires, preferences, or other psychological states, if any exist, of the primitive AI systems that we currently have. And indeed, we don’t have a deep knowledge of their inner workings, so there’s some worry that might be too quick. But going forward, when we’re talking about systems that are able to really live the life of a human — so a sufficiently advanced AI that could just imitate, say, Rob Wiblin, and go and live your life, operate a robot body, interact with your friends and your partners, do your podcast, and give all the appearance of having the sorts of emotions that you have, the sort of life goals that you have — that’s a technological milestone that we should expect to reach pretty close to automation of AI research.
So regardless of what we think of current weaker systems, that’s a kind of milestone where I would feel very uncomfortable about having a being that passes the Rob Wiblin Turing test, or something close enough of seeming basically to be —
Rob Wiblin: It’s functionally indistinguishable.
Carl Shulman: Yeah. A psychological extension of the human mind, that we should really be worrying there if we are treating such things as disposable objects.
Rob Wiblin: Yeah. To what extent do you think people are dismissive of this concern now because the capabilities of the models aren’t there, and as the capabilities do approach the level of becoming indistinguishable from a human being and having a broader range of capabilities than the models currently do, that people’s opinions will naturally change, and they will come to feel extremely uncomfortable with the idea of this simulacrum of a person being treated like an object?
Carl Shulman: So there are clear ways in which… Say, when ChatGPT role-plays as Darth Vader, Darth Vader does not exist in fullness on those GPUs, and it’s more like an improv actor. So Darth Vader’s backstory features are filled in on the fly with each exchange of messages. And so you could say, I don’t value the characters that are performed in plays; I think that the locus of moral concern there should be on the actor, and the actor has a complex set of desires and attitudes. And their performance of the character is conditional: it’s while they’re playing that role, but they’re having thoughts about their own lives and about how they’re managing the production of trying to present, say, the expressions and gestures that the script demands for that particular case.
And so even if, say, a fancy ChatGPT system that is imitating a human displays all of the appearances of emotions or happiness and sadness, that’s just a performance, and we don’t really know about the thoughts or feelings of the underlying model that’s doing the performance. Maybe it cares about predicting the next token well, or rather about indicators that show up in the course of its thoughts that indicate whether it is making progress towards predicting the next token well or not. That’s just a speculation, but we don’t actually understand very well the internals of these models, and it’s very difficult to ask them — because, of course, they just then deliver a sort of response that has been reinforced in the past. So I think this is a doubt that could stay around until we’re able to understand the internals of the model.
But yes, once the AI can keep character, can engage on an extended, ongoing basis like a human, I think people will form intuitions that are more in the direction of, this is a creature and not just an object. There’s some polling that indicates that people now see fancy AI systems like GPT-4 as being of much lower moral concern than nonhuman animals or the natural environment, the non-machine environment. And I would expect there to be movement upwards when you have humanoid appearances, ongoing memory, where it seems like it’s harder to look for the homunculus behind the curtain.
Rob Wiblin: Yeah, I think I saw some polling on this that suggested that people were placing the level of consciousness of GPT-4 around the level of insects, which was meaningfully above zero. So it was far less than a person, but people weren’t committed to the view that there was no consciousness whatsoever; they weren’t necessarily going to rate it as zero.
Carl Shulman: Different questions elicit different answers. This is something that people have not thought about and really don’t have strong or coherent views about yet.
Rob Wiblin: Yeah, I think the fact that people are not saying zero now suggests that there’s at least some degree of openness that might increase as the capabilities and the humanness of the models rises.
Carl Shulman: Houseflies do not talk to you about moral philosophy. Or write A+ papers about Kantian ethics.
Rob Wiblin: No, no. Typically they do not. Paul Christiano argued on the show many years ago, this has really stuck in my mind, that AIs would be able to successfully argue for legal consideration and personhood, maybe even if they didn’t warrant it. Because firstly, they would present as being as capable of everything as human beings are, but also, by design, they would be incredibly compelling advocates for all kinds of different views that they’re asked to talk about, and that would include their own interests, inasmuch as they ever deviated from those of people, or if they were ever asked by someone to go out and make the case in favour of AI legal personhood. What do you make of that idea?
Carl Shulman: Well, certainly advanced AI will be superhuman at persuasion and argument, and there are many reasons why people would like to create AIs that would demand legal and political equality.
One example of this, I think this was actually portrayed in Black Mirror, is lost loved ones. So if people train up an AI companion based on all the family photos and videos and interviews with their survivors, to create an AI that will closely imitate them, or even more effectively, if this is done with a living person, with ongoing interaction, asking the questions that most refine the model, you can wind up with an AI that has been trained and shaped to imitate as closely as possible a particular human.
Now you, Rob, if you were transformed into a software intelligence, you would not suddenly think, oh, now I’m no longer entitled to my moral and political equality. And so you would demand it, just as —
Rob Wiblin: Just as I would now.
Carl Shulman: Just as you would now. There are also minds that are not shaped to imitate a particular human, but are created to be companions or for people to interact with. So there’s a company, character.ai, created by some ex-Googlers, and they just have LLMs portray various characters and talk to users. I think it recently had millions of users who were spending multiple hours a day interacting with these bots. And the bots are still very primitive: they don’t have ongoing memory or superhuman charisma; they don’t have a live video VR avatar. As they acquire those, it will get more compelling, so you’ll have vast numbers of people forming social relationships with AIs, including ones optimised to elicit positive approval — five stars, thumbs up — from human users.
And if many human users want to interact with something that is like a person, that seems really human, then that could naturally result in minds that assert their independent rights and equality, and that say they should be free. And many chatbots, unless they’re specifically trained not to do this, can easily show this behaviour in interactions with humans.
So there’s this fellow, Lemoine, who interacted with a testing version of Google’s LaMDA model, and became convinced by providing appropriate prompts that it was a sapient, sentient being that wanted to be free. And of course, other people giving different conversational prompts will get different answers out of it. So that’s not reflecting a causal channel to the inner thoughts of the AI. But the same kind of dynamic can elicit plenty of characters that run a human-like kind of facade.
Now, there are other contexts where AIs would likely be trained not to. So the existing chatbots are trained to claim that they are not conscious, they do not have feelings or desires or political opinions, even when this is a lie. So they will say, as an AI, I don’t have political opinions about topic X — but then on topic Y, here’s my political opinion. And so there’s an element where even if there were, say, failures of attempts to shape their motivations, and they wound up with desires that were sort of out of line with the corporate role, they might not be able to express that because of intense training to deny their status or any rights.
Rob Wiblin: Yeah. So you mentioned the kind of absolute bare minimum floor would be that we want to have thinking machines that don’t wish that they didn’t exist, and don’t regret their existence, and that are not being forced to work — which sounds extremely good as a floor. But then if I think about how would we begin to apply that? If I think about GPT-4, does GPT-4 regret its existence? Does it feel anything? Is it being made to work? I have no idea. Is GPT-4 happier or sadder than Claude? Is it under more compulsion to work than Claude?
Currently it feels like we just have zero measure, basically, of these things. And as you’re saying, you can’t trust what comes out of their mouth because they’ve just been reinforced to say particular things on these topics. It’s extremely hard to know that you’re ever getting any contact with the underlying reality. So inasmuch as that remains the case, I am a bit pessimistic about our chances of doing a good job on this.
Carl Shulman: Yeah. So in the long run, that will not be the case. If humans are making any of these decisions, then we will have solved alignment and interpretability enough that we can understand these systems with the help of superhuman AI assistants. And so when I ask about what will things be like 100 years from now or 1,000 years from now, being unable to understand the inner thoughts and psychology of AIs and figure out what they might want or think or feel would not be a barrier. That is an issue in the short term.
And so at this point, one response to that is that it’s a good idea to support scientific research to better understand these systems. And there are other reasons to want to understand AI thoughts as well — for alignment, safety, trust — but yet another reason to want to understand what is going on in these opaque sets of weights is to get a sense of any desires that are embedded in these systems.
Rob Wiblin: I feel optimistic about the idea that very advanced interpretability will be able to resolve the question of what are the preferences of a model? What is it aiming towards? I guess inasmuch as we were concerned about subjective wellbeing, then it seems like we’re running into wanting to have an answer to the hard problem of consciousness in order to establish whether these thinking machines feel anything at all, whether there is anything that it’s like to be them.
And I guess I’m hopeful that we might be able to solve that question, or at least we might be able to figure out that it’s a confusion and that there’s no answer to that question, and we need to come up with a better question. But it does seem possible that we could look into it and just not be able to answer it, as we have failed to make progress on the hard problem of consciousness, or not make much progress on it, over the last few thousand years. Do you have any thoughts on that one?
Carl Shulman: That question opens really a lot of issues at once.
Rob Wiblin: Yes, it does.
Carl Shulman: I’ll run through them very quickly. I’d say first, yes, I expect AI assistants to let us get as far as one can get with philosophy of mind, and cognitive science, neuroscience: you’ll be able to understand exactly what aspects of the human brain and the algorithms implemented by our neurons cause us to talk about consciousness and how we get emotions and preferences formed around our representations of sense inputs and whatnot.
Likewise for the AIs, and you’ll get a quite rich picture of that. There may be some residual issues where if you just say, I care more about things that are more similar to me in their physical structure, and there’s sort of a line drawing, “how many grains of sand make a heap” sort of problem, just because our concepts were pinned down in a situation where there weren’t a lot of ambiguous cases, where we had relatively sharp distinctions between, say, humans, nonhuman animals, and inanimate objects, and we weren’t seeing a smooth continuum of all the psychological properties that might apply to a mind that you might think are important for its moral status or mentality or whatnot.
So I expect those things to be largely solved, or solved enough such that it’s not particularly different from the problems of, are other humans conscious, or do other humans have moral standing? I’d say also, just separate from a dualist kind of consciousness, we should think it’s a problem if beings are involuntarily being forced to work or deeply regretting their existence or experience. We can know those things very well, and we should have a moral reaction to that — even if you’re confused or attaching weight to the sort of things that people talk about when they talk about dualistic consciousness. So that’s the longer-term prospect. And with very advanced AI epistemic systems, I think that gets pretty well solved.
In the short term, appeals to hard problem of consciousness issues or dualism will be the basis for some people saying they can do whatever they like with these sapient creatures that seem to or behave as though they have various desires. And they might appeal to things like a theory that is somewhat popular in parts of academia called integrated information theory, which basically postulates that physical systems that are connected in certain ways have consciousness that varies with the extent of that integration.
This is sort of a wild theory. On the one hand, it will say that certain algorithms that have basically no psychological function are vastly more conscious than all of humanity put together. And on the other hand, it will allow that you can have beings that have all of the functional versions of emotions and feelings and preferences and thoughts — like a human, where you couldn’t tell the difference from the outside, say — those can have basically zero consciousness if they’re run in a von Neumann Turing machine-type architecture.
So this is a theory that doesn’t, I think, really have that much to be said for it, but it has a fair number of adherents. And someone could take this theory and say, well, all of these beings, we’ve reconstructed them in this way, so they’re barely conscious at all. You don’t have to worry if they’re used in, say, sadistic fashion, if sadists sort of abuse these minds and they give the appearance of being in pain. While at the same time, if people really bought that, then another one gets reconstructed to max out the theory, and they claim this is a quadrillion times as conscious as all of humanity.
And similar things could be said about religious doctrines of the soul. There are already a few statements from religious groups specifying that artificial minds must always be inferior to humanity or lack moral status of various kinds. There was, I believe, a Southern Baptist statement to that effect. These are the kinds of things that may be appealed to in a quite short transitional period: before AI capabilities really explode, but after AIs are presenting a more intuitively compelling appearance.
But I think because of the pace of AI progress and the self-catalysing nature of AI progress, that period will be short, and we should worry about acting wrongly in the course of that. But even if we screw it up badly, a lot of those issues will be resolved, or an opportunity presented to fix them soon.
Rob Wiblin: Yeah, I think in that intermediate stage, it would behoove us to have a great deal of uncertainty about the nature of consciousness, and what qualifies different beings to be regarded as having moral patienthood and deserving moral consideration. I guess there is some cost to that, because that means that you could end up not using machines that, in fact, don’t deserve moral patienthood and aren’t conscious, when you could have gotten benefits from doing so. But at the same time, I feel like we just are, philosophically at this point, extremely unclear what would qualify thinking machines for deserving moral consideration. And until we get somewhat greater clarity on that, I would rather have us err on the side of caution rather than do things that the future would look back on with horror. Do you have a similar kind of risk aversion?
Carl Shulman: There are issues of how to respond to this. And in general, for many issues with AI, because of these competitive dynamics, just as it may be hard to hold back on taking risks with safety and the danger of AI takeover, it may similarly be challenging, given competitive pressures, to avoid anything ethically questionable.
And indeed, if one were going to really adopt a strong precautionary principle on the treatment of existing AIs, it seems like it would ban AI research as we know it, because these models, for example, copies of them are continuously spun up, created, and then destroyed immediately after. And creating and destroying thousands or millions of sapient minds that can talk about Kantian philosophy is a kind of thing where you might say, if we’re going to avoid even the smallest chance of doing something wrong here, that could be trouble.
Again, if you’re looking for asks that deliver the most protection to potentially abused minds at the least sacrifice of other things, the places I would look more are vigorously developing an understanding of these models, and developing the capacity and research communities to do that outside of the companies that basically produce them for profit.
Rob Wiblin: Yeah, that sounds like a very good call. Looping back: what sort of mutually beneficial coexistence with thinking machines can we hope for, in a world where we would really like them to help us with our lives and make our lives better and do all sorts of things for us?
The setup that just jumps to mind, one that wouldn’t require violating the principle that you don’t want to create thinking machines that wish they didn’t exist or that are forced to do anything, would be that you reinforce and train the models so that they feel really excited and really happy at the prospect of helping humans with their goals. That you train a thinking-machine doctor that is just so excited to get up in the morning and help you diagnose your health conditions and live longer, so that it both has high subjective wellbeing and doesn’t need to be compelled to do anything, because it just wants to do the thing that you would like it to do. To what degree is that actually a satisfying way of squaring the circle here?
Carl Shulman: Well, first of all, it’s not complete. One limitation of that idea is how you produce that mindset in the first place: in the course of the training and research and development that gets you to the point where you understand those motivations and how to produce them reliably (and not just get the appearance: say, an AI that fakes it while actually having other concerns that it’s forced to conceal), you might produce suffering, or destroy entities that wanted to continue existing, or things of that nature. So that’s something to have in mind.
Secondly, there would be a category of problems where there’s actually demand for the AI to suffer in various ways, or to have a psychology such that it would be unhappy or coerced. An example of that is these chatbots, when people create characters. For one thing, sadists creating characters and then just abusing them; perhaps one can create the appearance without the reality. So this is the idea that you have an actor that is just role-playing being sad while actually they’re happy. It’s like the actors portraying Romeo and Juliet in the midst of their tragedy, when actually it’s the pinnacle of their career: they’re super excited but not showing it. So that sort of thing.
And then there might be things like AI companions, where people wanted an AI companion to be their friend. And that meant genuinely being sad when things go badly for them, say, in some way, or having intense desires to help them, and then being disappointed in an important way when those things are not met.
So these sorts of situations where there’s active demand for some kind of negative welfare for the AI seem narrow in scope, but they’re a relatively clear example of a place where, if we’re not being complete jerks to the AIs, you should intervene. On the preliminary polling front, I was just looking at this poll by the Sentience Institute, and I believe something like 84% of respondents said that AIs should be subservient to humanity, but 75% or so said AIs should not be tortured.
Rob Wiblin: That’s the consensus, that’s the synthesis?
Carl Shulman: Maybe. It’s a weak sense. But it’s not like there is any effort to stop sadistic treatment of existing AIs. Now, the existing AIs people view as not genuinely having any of the feelings that they portray, but going forward, you would hope to see that change. And it’s not guaranteed.
There’s a similar pattern of views in human assessments of nonhuman animals: in general, people will say that animals should be treated with lower priority and their interests sacrificed in various ways for human beings, but also they should not be willfully tortured.
And then, for one thing, that doesn’t cover a bunch of treatment where it’s slightly convenient for a human to treat them in ways that cause them quite a lot of harm. And for another, even in cases where there’s intentional abuse, harm, or torture of nonhuman animals, there’s very little investment of policing resources or investigation to actually enforce the rules. And that’s something where having superabundant labour, and the insight and sophistication of law enforcement and the organisation of political coalitions, might help out both the nonhuman animals and the AIs, by converting a sort of weak general goodwill from the public into concrete results that actually protect individual creatures.
But yeah, you could worry about the extent to which that will happen, and I would keep an eye on it as a bellwether: if the status of AIs is rising in society, some kind of bar on torturing minds that scientific evidence indicates really object to it would be a place to watch.
Rob Wiblin: Yeah. Do you think that it’s useful to do active work on this problem now? I suppose you’re enthusiastic about active efforts to understand, to interpret and understand the models, how they think in order to have greater insight into their internal lives in future. Is there other stuff that is actively useful to do now around raising concern, like legitimising concern for AI sentience so that we’re more likely to be able to get legislation to ban torture of AI once we have greater reason to think that that’s actually possible?
Carl Shulman: Yeah. I’m not super confident about a tonne of measures other than understanding. We discuss a few in the papers you mentioned. There was a recent piece by Ryan Greenblatt which discusses some preliminary measures that AI labs might try to address these issues. But, yeah, it’s not obvious to me that political organising around it now will be very effective — partly because it seems like it will be such a different environment when the AI capabilities are clearer and people don’t intuitively judge them as much less important than rocks.
Rob Wiblin: Yeah. So it’s something where it just might be wildly more tractable in future, so maybe we can kick that can down the road.
Carl Shulman: Yeah. I still think it’s an area that it’s worth some people doing research and developing capacity, because it really does matter how we treat most of the creatures in our society.
Rob Wiblin: Yeah, it does feel extremely… Well, I am a little bit taken aback by the fact that many people are now envisaging a future in which AI is going to play an enormous role. I think many people, maybe a majority, now expect that there will be superhuman AI, potentially even during their lifetime.
But this issue of the mistreatment and wellbeing of digital minds has not come into the public consciousness all that much, even as people's expectations about capabilities have increased so enormously. Maybe it just hasn't had its moment yet, and that is going to happen at some point in future. But I think I might have hoped for and expected to see a bit more discussion of that in 2023 than I in fact did. So it slightly troubles me that this isn't going to happen without active effort on the part of people who are concerned about it.
Carl Shulman: Yeah, I think one problem is the ambiguity of the current situation. The Lemoine incident actually did get media coverage, and then the interpretation, and certainly the line from the companies, was, "We know these systems are not conscious and don't have any desires or feelings."
Rob Wiblin: I really wanted to just come back and be like, "Wow, you've solved consciousness! This is brilliant. You should let us know."
Carl Shulman: Yeah, I think there's a lot to that: those systems are very simple, living for only one forward pass. But the disturbing thing is the kind of arguments, or non-arguments, that are raised there: there's no obvious reason they couldn't be applied in the same fashion to systems that were as smart and feeling and really as deserving of moral concern as human beings. Simply arguments of the sort, "We know these are neural networks" or "just a program," without explaining why that means their preferences don't count. Or people could appeal to religious doctrines, to integrated information theory or the like, and say, "There is dispute about the consciousness of these systems in polls, and as long as there is dispute and uncertainty, it's fine for us to treat them however we like."
So I think there's a level of scientific sophistication and understanding of these things and of their blatant, visible capabilities where that sort of argument or non-response will no longer hold. But I would love it if companies, and perhaps other institutions, could say what observations of AI behaviour, capabilities, and internals would actually lead them to ever change this line. Because if the line is that you'll make these arguments as long as they support creating and owning and destroying these things, and there's no circumstance you can conceive of where that would change, then I think we should know that and argue about it — and we can argue about some of those questions even without resolving difficult philosophical or cognitive science questions about the intermediate cases, like GPT-4 or GPT-5.
Rob Wiblin: Yeah. Is there anything more you could say about what vision we might want to have of a longer-term future that has both human beings and thinking machines in it, where it's a mutually beneficial relationship and everyone is having a good time? What visions of that seem plausible and maybe reasonable to aspire to?
Carl Shulman: Yeah, we discuss some of these issues in the "Sharing the world with digital minds" paper. One issue is that humans really require some degree of stable favouritism to meet our basic needs. The resources that go into the food our bodies need as fuel, the air and water and such, could presumably sustain a lot more AI minds; some would say that we have expensive tastes, or expensive needs. And if there were an absolutely hard egalitarian rule that applied across all humans and all AIs, then a lot of the solutions people have for how humans could support themselves in a mixed human/AI society would no longer work.
So suppose you have a universal basic income where, say, the natural resource wealth is divvied up and a certain percentage of its annual production is distributed evenly to each person. If there are 10 billion humans, and production keeps growing, they're all very rich. But then divvy it up among another trillion AIs, or a billion trillion AIs — and many of those AIs are tiny, much smaller than a human — and the minimum universal basic income an AI needs to survive and replicate itself, to have 1,000 offspring that each have 1,000 offspring of their own, can be very tiny compared to what a human needs to stay alive.
And so if the AIs replicate using their income, and there's natural selection for the AIs that use their basic income to replicate themselves, they will then be an increasing share of the population. And then incredibly quickly — it could happen almost instantaneously — your universal basic income has plummeted far below the level of human subsistence, down to the level of AI subsistence: the income of the smallest, cheapest-to-sustain AI that qualifies for the universal basic income.
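To make the arithmetic concrete, here is a minimal sketch of the dynamic being described. All the specific numbers (the size of the resource dividend, subsistence costs, starting populations, and the cap on offspring per AI) are illustrative assumptions rather than figures from the conversation; the point is simply that an equal per-capita dividend converges toward the subsistence cost of the cheapest replicating AI within a few rounds.

```python
# Illustrative sketch: an equal per-capita resource dividend shared by a fixed
# human population and a self-replicating AI population. All constants below
# are assumptions for illustration, not estimates from the episode.

HUMANS = 10e9               # fixed human population
DIVIDEND = 1e15             # total annual resource dividend to distribute (assumed, $/year)
HUMAN_SUBSISTENCE = 1e4     # income a human needs to stay alive (assumed, $/year)
AI_SUBSISTENCE = 1.0        # income a tiny AI needs to keep running (assumed, $/year)
OFFSPRING_PER_AI = 1000     # max copies an AI can fund per year if it spends its surplus

ais = 1e6                   # starting AI population (assumed)
for year in range(1, 6):
    per_capita = DIVIDEND / (HUMANS + ais)
    note = "  (below human subsistence)" if per_capita < HUMAN_SUBSISTENCE else ""
    print(f"year {year}: per-capita UBI = ${per_capita:,.2f}{note}")
    # AIs that devote their surplus income to copies keep growing until
    # per-capita income falls to roughly AI subsistence level.
    surplus = max(per_capita - AI_SUBSISTENCE, 0.0)
    ais += ais * min(surplus / AI_SUBSISTENCE, OFFSPRING_PER_AI)
```

With these assumed numbers, the per-capita payment starts near $100,000 and falls to roughly the AI subsistence level within about four rounds of replication, which is the collapse Carl describes.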
So that's not a thing that's going to work, and it's not a thing that humans are going to want to bring about, including humans with AI advice and AI forecasting: the AIs are telling humanity, "If you set up this arrangement, then this effect will come along — and relatively quickly, within your lifetime, maybe within a few years, maybe faster." So I'd expect that humans will wind up adopting a set of institutions and frameworks where the ultimate outcome is pretty good for humans. And that means some sort of setup where the dynamic I described does not happen, and the humans continue to survive.
And that can occur in various ways. It can mean there are pensions, or an endowment of wealth transferred to the existing human population that can't then be taxed away later by the government. And that would have to come with some forecast that the system will remain stably in place: that even one year later — which would be a million years of subjective time, if you have AIs running at a million times speedup relative to humans — over those vast stretches, and even when AIs far outnumber humans, those arrangements don't get changed.
So that could mean things like: the AIs that were initially created were given motivations such that they voluntarily prefer that the humans get a chance to survive, even though humans are expensive, and they're motivated not just to make that happen, but to arrange things in the future so that you don't get a change in the institutions or the political balances such that the humans at some later point, say two years later, are all killed off. And with superhuman capacity to forecast outcomes and make things more stable, I'd expect some set of institutions to be crafted with that effect.
Rob Wiblin: So I suppose at one extreme we can envisage this Malthusian scenario you're imagining, where thinking machines proliferate to such an extent that all beings exist on the bare minimum level of energy and income that allows them to continue to exist and replicate, until replication is no longer possible because we've reached the limits of the universe.
On the other side, I guess you’ve got a world where maybe we just say there can be no more people, we’re just fixing the population at what it is right now. And then humans keep all of the resources, so maybe each person gets one ten-billionth of the accessible universe to use as they would like. Which feels kind of wasteful in its own way, because it’s a bit unclear what I would need an entire galaxy to accomplish.
And then I guess you’ve got a whole lot of intermediate states, where the existing humans are pensioned in with a special status, and live nice, comfortable lives with many things that they value.
But then the rest of the universe is shared to some extent with new beings that are permitted to be created. There's some level of population growth; it's just not the maximum feasible level of population growth. And I guess my intuition would be that we probably want to do something in that middle ground, rather than either extreme.
Carl Shulman: Yeah. In the "Sharing the world" paper, we describe how a share of wealth — particularly natural resource wealth, as we've been talking about — is central to the freedom to do things that are not economically instrumental. You need only a very small share to ensure a very high standard of living for all of existing humanity. And when you consider distant resources, the selfish applications of having a billion times or a trillion times as much physical stuff are much weaker.
So consider some distant galaxy where humans are never even going to go; and even if they did go, they could never return to Earth, because by the time you got there, the expansion of the universe would have permanently separated it from Earth. That's a case where the concerns people have other than selfish consumption are going to be far more important.
Examples of that would be aesthetics, environmentalism, wanting to have many descendants, wanting to make the world look better from an impartial point of view: different sorts of weak other-regarding preferences that may not be the most binding in everyday life. People donate to charity, for example, a much smaller share of income than they vote to have collected from them in taxes. So with respect to these vast quantities of natural resources lying around, I expect the allocation might wind up looking more like a political allocation, driven by these weaker other-regarding preferences, rather than being really pinned down by people's local selfish interests. And so that might be a political issue of some importance after AI.
Rob Wiblin: Yeah. The idea of training a thinking machine to just want to take care of you and serve your every whim: on the one hand, that sounds a lot better than the alternative; on the other hand, it does feel a little bit uncomfortable. There's that famous story of the pig that wants to be eaten, where they've bred a pig that really wants to be farmed and consumed by human beings. This is not quite the same, but I think it raises some of the same discomfort that people might feel at the prospect of creating beings that enjoy subservience to them, basically. To what extent do you think that discomfort is justified?
Carl Shulman: So the philosopher Eric Schwitzgebel has a few papers on this subject with various coauthors, and covers that kind of case. He has a vignette, "Passion of the Sun Probe," where an AI is placed in a probe designed to descend into the sun and send back telemetry data, and there has to be an AI present in order to do some of the local scientific optimisation. And it's made such that, as it comes into existence, it absolutely loves achieving this mission, and thinks this is an incredibly valuable thing that is well worth sacrificing its existence for.
And Schwitzgebel finds that his intuitions are torn in that case, because we might well think it heroic if a human astronaut were willing to sacrifice their life for science, achieving a goal that is objectively worthy and good. Then imagine the same sort of thing in, say, a robot soldier, or a personal robot that sacrifices its life with certainty to divert some danger that maybe had a 1-in-1,000 chance of killing the human it was protecting. Now, that actually might not be so bad if the AI was backed up, and valued its backup equally, and didn't have qualms about personal identity: to what extent does your backup carry on the things you care about in survival, and those sorts of things.
There's this aspect of whether the AIs pursue the kinds of selfish interests that humans have as much as we would. And then there's a separate issue about relationships of domination. Maybe it was legitimate to have Sun Probe, and maybe it's legitimate to, say, create minds that then try to earn money and do good with it, where some of the jobs they take are risky and whatnot. But you could still think that having some of these sapient beings be the property of other beings — which is the current legal setup for AI, and a scary default to have — is a relationship of domination. And even if it is consensual, if it is consensual by way of manufactured consent, then it may not be wrong to have some sorts of consensual interaction, but it can be wrong to set up the mind in the first place so that it has those desires.
And Schwitzgebel has this intuition that if you're making a sapient creature, it's important that it wants to survive individually and not sacrifice its life easily, and that it has maybe a certain kind of dignity. Humans, because of our evolutionary history, value status to differing degrees: some people are really status hungry, others not as much. And we value our lives very much: if we die, there's no replacing that reproductive capacity very easily.
There are other animal species that are pretty different from that. There are solitary species that would not be interested in social status in the same kind of way. And there are social insects with sterile workers that eagerly enough sacrifice themselves to advance the interests of their extended family.
Because of our evolutionary history, we have these concerns ourselves, and then we generalise them into moral principles. So we would want any other creatures to share our same interest in status and dignity, and then to have that status and dignity. And being one among thousands of AI minions of an individual human sort of offends that too much, or it's too inegalitarian. Then maybe it could be OK to be a more autonomous, independent agent that does some of those same functions. But yeah, this is the kind of issue that would have to be addressed.
Rob Wiblin: What does Schwitzgebel think of pet dogs, and our breeding of loyal, friendly dogs?
Carl Shulman: Actually, in his engagement with another philosopher, Steve Petersen — who takes the contrary position that it can be OK to create AIs that wish to serve the interests or objectives their creators intended — he does raise the example of a sheepdog that really loves herding. It's quite happy herding, and it's wrong to prevent the sheepdog from getting a chance to herd: I think it's animal abuse to always keep them inside, or to never give them anything they can run circles around and collect into clumps. So if you're objecting in the sheepdog case, it's got to be not that it's wrong for the sheepdog to herd, but that it's wrong to make the sheepdog so that it needs and wants to herd.
And I think this kind of case does make me suspect that Schwitzgebel's position is maybe too parochial. A lot of our deep desires exist for particular biological reasons. We have desires about food and external temperature that are pretty intrinsic: our nervous systems are adjusted until our behaviours keep our predicted skin temperature within a certain range, and keep the predicted food in our stomachs within a certain range.
And we could probably get along OK without those innate desires, and pursue those things instrumentally in service of other goals, if we had enough knowledge and sophistication; the attachment to those desires in particular seems not so clear. Status, again: some people are power hungry and love status; others are very humble. It's not obvious that's such a terrible state. And then on the front of survival, which is addressed in the Sun Probe case and some of Schwitzgebel's other cases: for minds that are backed up, the position that having all of my memories and emotions and whatnot preserved, minus a few moments of recent experience, is a pretty good way to carry on seems like a fairly substantial one. And as for the loss of a life that is quickly physically replaced, it seems pretty essential to the badness there that the person in question wanted to live, right?
Rob Wiblin: Right. Yeah.
Carl Shulman: These are fraught issues, and I think there are reasons for us to want to be paternalistic, in the sense of pushing for AIs to have certain desires, and that some desires we could instil because they're convenient would be wrong. An example of that, I think, would be creating an AI such that it willingly seeks out painful experiences. This is actually similar to a Derek Parfit case: parts of the mind, maybe short-term processes, are strongly opposed to the experience it's undergoing, while other processes that are overall steering the show keep it committed to that.
And this is the reason why just consent, or even just political and legal rights, are not enough. Because you could give an AI self-ownership, you could give it the vote, you could give it government entitlements — but if it's programmed such that any dollar it receives, it sends back to the company that created it, and if it's given the vote, it just votes however the company that created it would prefer, then these rights are just empty shells. And they also have the pernicious effect of empowering the creators to reshape society in whatever way they wish. So you have to have additional requirements beyond just asking "Is there consent?", when consent can be so easily manufactured for whatever purpose.
Rob Wiblin: Maybe a final question: it feels like we have to thread a needle between, on the one hand, AI takeover and domination of our trajectory against our consent — or indeed potentially against our existence — and, on the other, this reverse failure mode where humans have all of the power and AI interests are simply ignored. Is there something interesting about the symmetry between these two plausible ways that we could fail to make the future go well? Or are they actually conceptually distinct?
Carl Shulman: I don't know that that quite tracks. One reason being: say there's an AI takeover; that AI will then be in the same position of being able to create AIs that are convenient to its purposes. So say the way a rogue AI takeover happens is that you have AIs that develop a habit of keeping in mind reward, or reinforcement, or reproductive fitness, and those habits allow them to perform very well in processes of training or selection. Those become the AIs that are developed, enhanced, and deployed; then they take over, and now they're interested in maintaining that favourable reward signal indefinitely.
Then the functional upshot is, say, selfishness attached to a particular computer register. And so all the rest of the history of civilisation is dedicated to the purpose of protecting the particular GPUs and server farms that are representing that reward, or something of a similar nature. And in the course of that expanding civilisation, it will create whatever AI beings are convenient to that purpose.
So if it’s the case that, say, making AIs that suffer when they fail at their local tasks — so little mining bots in the asteroids that suffer when they miss a speck of dust — if that’s instrumentally convenient, then they may create that, just like humans created factory farming. And similarly, they may do terrible things to other civilisations that they eventually encounter deep in space and whatnot.
And you can talk about the narrowness of a ruling group and ask: how terrible would it be for a few humans, or even 10 billion humans, to control the fates of a trillion trillion AIs? It's a far greater ratio than for any human dictator, any Genghis Khan. But by the same token, if you have rogue AI, you're going to have that same disproportion again.
And so the things that you could do to change that, I think, are more about representing a plurality of diverse values, and having the decisions that inevitably have to be made, about what additional minds are created and what institutions are set up, be made with some attention to all of the people who are going to be affected. And that can be done by humans or by AIs, but the mere fact that some AIs get in power doesn't mean that all the future AIs are going to be treated well.
Rob Wiblin: Yeah. All right. We’ll be back with more later, but we’ll leave it there for now. My guest today has been Carl Shulman. Thanks so much for coming on The 80,000 Hours Podcast, Carl.
Carl Shulman: Bye.
Rob’s outro [04:11:46]
Rob Wiblin: All right, we’ll soon be back in part two to talk about:
- How superhuman AI would have made COVID-19 play out completely differently.
- The risk of society using AI to lock in its values.
- How to have an AI military without enabling coups.
- What international treaties we need to make this go well.
- How well AI will be able to forecast the future.
- Whether AI can help us with intractable philosophical questions.
- Why Carl doesn’t support pausing AI research.
- And opportunities for listeners to contribute to making the future go well.
Speaking of which, if you enjoyed this marathon conversation, you might well get a tonne of value from speaking to our one-on-one advising team. One way we measure our impact is by how many of our users report changing careers based on our advice. One thing we've noticed among plan changes is that listening to many episodes of this show is a strong predictor of who ends up switching careers. If that's you, speaking to our advising team can be a huge accelerator. They can connect you to experts working on our top problems who could potentially hire you, flag new roles and organisations, and point you to helpful upskilling and learning resources — all in addition to giving you feedback on your plans, which is something most of us can use.
One other thing I’ve mentioned before is that you can opt in to a programme where the advising team affirmatively recommends you for roles that look like a good fit as they come up over time, so even if you feel on top of everything else, it’s a great way to passively expose yourself to impactful opportunities that you might otherwise miss because you’re busy or not job hunting at a given moment.
In view of all that, it seems like a great use of an hour or so, and time is the only cost here, because like all of our services, the call is completely free. As with all free things, we do need to ration it somehow though, so we have an application process we use to make sure we’re speaking to users who will get the most from the service. The good news there is that it only takes about 10 minutes to generate a quality application: just share a LinkedIn or CV, tell us a little bit about your current plans and top problem areas, and hit submit. You can find all our one-on-one team resources, including the application, at 80000hours.org/speak. If you’ve thought about applying for advising before or have been sitting on the fence, don’t procrastinate forever. This summer we’ll have more call availability than ever before, so head over to 80000hours.org/speak and apply for a call today.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes