Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

Do you remember seeing those photographs of (mostly) women sitting in front of huge switchboard panels, connecting calls by plugging wires between different numbers? The automated version of that was invented in 1892.

However, the number of human manual operators peaked in 1920 — 30 years after this. At which point AT&T is the monopoly provider of this, and the largest single employer in America — 30 years after they’ve invented the complete automation of the very thing they’re employing all these people to do. And the last manual switchboard operator doesn’t lose their job, as it were: that job doesn’t stop existing until, I think, around 1980.

So it takes 90 years from the invention of full automation to its full adoption — in a single company that’s a monopoly provider and can do what it wants, basically. And so the question you might have is: why?

Michael Webb

In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people’s jobs and the labour market.

They cover:

  • The jobs most and least exposed to AI
  • Whether we’ll see mass unemployment in the short term
  • How long it took other technologies like electricity and computers to have economy-wide effects
  • Whether AI will increase or decrease inequality
  • Whether AI will lead to explosive economic growth
  • What we can learn from history, and reasons to think this time is different
  • Career advice for a world of LLMs
  • Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
  • Michael’s take as a musician on AI-generated music
  • And plenty more

If you’d like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he’s now hiring! Check out Quantum Leap’s website.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

The jobs most exposed to robots, software, and AI

Michael Webb: So I did a lot of work in my paper looking at — if you aggregate — how exposure varies overall, on average, as a function of how much education you have, or how much your job currently pays, whatever it is. And I found a really interesting pattern of results comparing AI to these previous technologies. Think about a graph where, on the x-axis, you have income or salary for a job — on the left-hand side it’s very low paid; on the right-hand side it’s very high paid — and on the y-axis, you have how exposed jobs at that level are.

So for robots, you have a line that basically starts high on the left and then goes down a lot: so it’s very low-skilled jobs, low-paid jobs that are exposed to robots, and high-skilled jobs are not at all exposed.

With software, you have a very different pattern, which is that actually the lower-skilled jobs are not exposed and the higher-skilled jobs are not exposed; it’s the middle-skilled jobs that are most exposed. And what’s cool is that this reflects a pattern that lots of other very careful research in economics has found about the impact of software in particular: it’s really impacted middle-skilled jobs.

Really careful studies specifically of software find that the middle-skilled jobs are most exposed. So it was cool that I kind of replicated that with this very different method.

But the really interesting thing is that for AI, it’s a completely different pattern again. For AI, it’s actually the upper-middle-skill jobs that are most exposed. So the line starts low on the bottom left, then goes up and up and up, peaking — I think — around the 88th percentile of jobs sorted by salary, so really upper-income, high-paid jobs — and then goes down at the very top. So the CEOs, who are paid the most, are not so exposed, but the lawyers and the accountants and so on actually are.

The really interesting thing is that the OpenAI paper — using a different methodology and focusing very much on GPT-4 and these new large language models, as opposed to the slightly earlier vintage of AI I was focusing on — they replicate this figure with their measure, and it’s basically exactly the same. So the same pattern.

Now, it turns out that many of those jobs are the most regulated jobs. The doctors and the lawyers and the accountants: they’re the ones who actually have the most power in the economy and society to put up barriers and stop the exposure that might otherwise cause them to be paid lower wages. They can pull up the drawbridge and stay happy as they are. But on the pure economics of this — before getting to the political economy, in a fancy pretend world where there are no actual humans and no politics — it’s those jobs that are most exposed.
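To visualise the three patterns Michael describes, here is a minimal matplotlib sketch. The curve shapes follow his description; all the numbers are invented for illustration and are not taken from his paper or the OpenAI paper.

```python
# Stylised sketch of the three exposure-by-salary patterns described above.
# All numbers are invented purely to reproduce the shapes -- they are not
# data from Webb's paper or the OpenAI paper.
import numpy as np
import matplotlib.pyplot as plt

pct = np.linspace(0, 100, 200)  # occupations sorted by salary percentile

robots = 0.9 - 0.007 * pct                         # high at the bottom, declining
software = 0.9 * np.exp(-((pct - 50) / 20) ** 2)   # hump at middle-skill jobs
ai = 0.9 * np.exp(-((pct - 88) / 25) ** 2)         # hump near the 88th percentile

plt.plot(pct, robots, label="Robots")
plt.plot(pct, software, label="Software")
plt.plot(pct, ai, label="AI")
plt.xlabel("Occupation salary percentile")
plt.ylabel("Relative exposure (stylised)")
plt.legend()
plt.show()
```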

How automation can actually *create* jobs

Michael Webb: Let’s just look at this one sector that’s getting automated, and think about whether it really is the case that when you have big automation in the sector, the number of humans goes down. That’s intuitive, right? Automation means fewer humans. Done. Turns out, it’s not that simple. So there’s a few examples I’ll start with, and we can talk about what the broader lesson is.

So here’s one example. I think this is due to Jim Bessen, who’s an economist who studied ATMs, cash machines, where you go to a bank branch and get cash out. So before ATMs, there were individual humans in the bank. You’d go up to them and show some ID and get your account details, and they would give you some cash. Bank tellers, I think they were called. And you would think, ATM comes along, that’s it for those people: no more bank tellers, huge declines in employment in the banking sector.

What in fact happened is something quite different. The ATM did indeed reduce the number of people doing that specific task of handing out money. But there are other things people do in bank branches as well. The big thing that happened is that because a given bank branch no longer needed all these very expensive humans handing out cash, it became much cheaper to open bank branches. So whereas before there were bank branches perhaps only in the larger towns, suddenly banks were competing to open branches everywhere — because the further you go into smaller and smaller towns and villages, the more customers you can reach and serve.

So what happened was the ATM meant there were fewer staff per bank branch, but enabled the opening of many more bank branches overall. And that actually offset the first impact. So fewer staff per bank branch, but so many more bank branches that the total number of people in bank branches actually went up.

What they were doing was quite different, though. The humans are now doing higher-value-add activities. They’re not handing out cash; they’re providing other kinds of services. But it’s similar people doing a similarish job, and there are actually more of them now.

The fancy economist way of putting this is: you have “demand elasticity in the presence of complementarity.” Those are silly words, but I’ll tell you what they mean. “Demand elasticity” means that when the price of something falls, you want more of it. Automation generally brings the cost of things down. And what normally happens is that people don’t say, “Great, I’ll have the same amount of stuff.” They say, “No, I want more of that stuff now. Give me more, more, more.”

Then “in the presence of complementarity”: if humans are complementary to the automation, the technology, whatever it is, in some way, there are still some humans involved — fewer than before, per unit of output, but still some. Then, because people now want more and more of this stuff, each unit of the thing is more automated, but there are still some humans involved in it. And therefore you can end up with ever more humans in total in demand — doing slightly different things, but still roughly in the same ballpark. Does that make sense?
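A toy calculation makes the ATM story concrete. The figures below are invented — the episode gives none — but they show how fewer staff per branch can coexist with more staff overall when demand for branches is elastic enough.

```python
# Toy numbers for the ATM story (invented for illustration; the episode
# gives no actual figures).
branches_before, staff_per_branch_before = 1_000, 10
branches_after, staff_per_branch_after = 3_000, 5  # branches are cheaper to run,
                                                   # so banks open many more

total_before = branches_before * staff_per_branch_before  # 10,000 staff
total_after = branches_after * staff_per_branch_after     # 15,000 staff

print(f"Staff per branch: {staff_per_branch_before} -> {staff_per_branch_after}")
print(f"Total branch staff: {total_before:,} -> {total_after:,}")
```

Total employment rises whenever the proportional increase in branches outweighs the proportional fall in staff per branch — which is exactly the “demand elasticity in the presence of complementarity” condition at work.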

How automation affects employment at the individual level

Michael Webb: So the final thing I think it’s really interesting to think about, and it’s often not intuitive, is the impact on individuals. We’ve accepted that there could definitely be some individuals whose jobs existed, and then they don’t — they disappear because they’ve been automated. Nothing I’ve said so far says that doesn’t happen; it certainly happens a tonne. And I’ve given you some examples of why perhaps we shouldn’t worry so much about it, because there’s more demand in other parts of the economy, whatever. But what does that look like for the actual person experiencing it? And is it good or bad? And when is it good or bad? There are a couple of really interesting facts about the way things work in the economy that I think are worth touching on briefly.

The first one is this not-very-nice term with a fairly benign meaning: “natural wastage.” Suppose you’re a company hiring people — let’s say you’re McDonald’s. People leave: the average tenure is something like six months, so they start working for you, and six months later they leave and go and get a better job. Roughly half of people leave within six months. That’s called natural wastage: people naturally leaving. You’d include people retiring as part of that too — that natural churn. It means there’s a very natural attrition happening in all companies all the time.

Let’s stick with McDonald’s as the example. Suppose McDonald’s somehow automated everything — the burger flipping and the cashiers. They’ve been trying for a long time, right? It’s slowly happening, but there are still some humans there right now. Suppose they did it. All they would have to do is stop hiring any new people, and within a year or so they would have hardly any employees, because everyone naturally leaves and goes and gets a better job anyway — the average tenure at McDonald’s is something like six months. So you just sit and wait and everyone goes off of their own accord — no firing required, no displacement required.
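As a rough sketch of how fast a hiring freeze alone shrinks headcount: if average tenure is about six months, roughly half the staff leave every six months. Modelling that as geometric decay (a simplifying assumption, not something from the episode) gives:

```python
# Headcount under a hiring freeze, assuming a six-month 'half-life' of staff
# (a simplifying assumption based on the ~six-month average tenure above).
headcount = 10_000.0
monthly_survival = 0.5 ** (1 / 6)  # monthly rate implying half leave in 6 months

for month in range(1, 13):
    headcount *= monthly_survival
    if month % 6 == 0:
        print(f"Month {month:2d}: ~{headcount:,.0f} staff remain")
# Month  6: ~5,000 staff remain
# Month 12: ~2,500 staff remain
```

Even this simple model leaves a quarter of the staff after a year — so “no employees within a year” is a slight exaggeration — but the direction is clear: most of the adjustment happens through people leaving of their own accord, with no firing required.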

And it makes a tonne of sense, right? Because if you are the mastermind organising the economy, and allocating people to different jobs — obviously, that’s not what’s happening — but if you are the mastermind, it would naturally be the right thing to say that people who have got all the human capital, and they’ve worked in the industry, and they’re going to find it really hard to move: let them keep the jobs. And then the young people, they shouldn’t get into it because that’s a bad bet for the long run; they should do something else. And people make those decisions for themselves, and that’s what happens. So you have these really interesting effects of that kind.

So the big macro thing is that: older people stay in, and younger people move into different things. And that’s by far the most important individual-level effect. Now, where does that go wrong? It generally goes wrong in a couple of circumstances, and it’s heavily shaped by geography. What we know, in terms of where you can go and see people who have really been hurt by an automation technology — or by something similar, like trade: China comes along and suddenly makes things cheaper — is this: if you are a young person in a big city and your job goes away, you’re generally fine; you go and find another one.

But if you are an older person who’s been at a particular firm for a very long time, in a town where that firm is the only large employer and there’s no other industry — and you’ve got this amazing union job, with wages that are really high because of decades of strong worker empowerment — and then that company leaves that town, that is not a good place to be. Because empirically, people turn out to be stuck in their towns: they just don’t like moving. And if you’re in your 40s or 50s with a family… Your house is now worth very little, because there are no jobs anymore, so you can’t sell up and move to a much more expensive house in a city somewhere else. Your kids are in school, et cetera. So what you see is that people get stuck, and there is no job of any comparable quality that they can do.

So on average, when you have these big plant closures, people do tend to go and get other jobs, but they often experience big wage declines — like a 25% enduring wage decline. That’s a really horrible thing to happen to someone, and it happens to large numbers of people at the same time, in these geographically concentrated ways. That’s where things get bad. If you’re young in a city, you’re kind of fine. If you’re mid-career or older in a small town with a single employer, and that’s the thing that gets automated: that’s when things look much less rosy.

How long it took other game-changing technologies to have economy-wide effects

Michael Webb: We can start by very quickly talking about what’s the baseline — like, how long do these things take for other technologies that were as big as AI seems like it will be — and then we can talk about why might AI be different and what will be the same.

So the two big examples that are everyone’s favourites are IT — computers in general — and electricity. These are probably the two biggest general-purpose technologies of certainly the last 150 years. So how long did they take? Well, there’s an astonishing regularity in how long these things took. You can date the arrival of electrification to 1894, which is the date economists who study this tend to use — I think it’s a couple of years after the first proper power station was built — and date IT to 1971. I’m not sure why economists use that date; maybe it was when some IBM mainframe properly came online or something. Anyway, those are the dates people seem to use in economics.

And if you plot the x-axis as years following the arrival of IT or electrification, and then the y-axis is percent of adoption that’s happened — so the 0% is no one has it; 100% is now everyone has it — it turns out those two lines sit exactly on top of each other. So IT diffused basically as fast as [electricity]. So surprising point number one is that these things that were 100 years apart almost took as long as each other, even though you might expect things to be moving faster later in history. And the second interesting fact is that it took a long time. So it took 30 years to get to 50% adoption.
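The regularity Michael describes — both technologies hitting 50% adoption roughly 30 years after arrival — is easy to picture as a logistic diffusion curve. The sketch below is stylised, not fitted to data; the steepness parameter is an assumption.

```python
# Stylised logistic diffusion curve with its midpoint (50% adoption) at
# year 30 after arrival -- electrification (from 1894) and IT (from 1971)
# would both sit roughly on this line. Not fitted to data; k is an assumption.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 60, 300)                   # years since the technology arrived
k = 0.15                                          # assumed curve steepness
adoption = 100 / (1 + np.exp(-k * (years - 30)))  # logistic, midpoint at year 30

plt.plot(years, adoption)
plt.axvline(30, linestyle="--", color="grey")
plt.axhline(50, linestyle="--", color="grey")
plt.xlabel("Years since arrival")
plt.ylabel("Adoption (%)")
plt.show()
```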

One final quick interesting fact: think about all the technology and capital in the economy — take the US, and think of every bit of factory equipment and every computer, everything you might broadly call technology or capital equipment. In 1970, close enough to 0% of the capital stock consisted of computer hardware and software. By 1990, it had only got to about 2%. And then by 2000, it had got to 8%. So the real inflection is about 1995, if you look at the graph. The point is there were two and a half decades of actually very slow [growth]. Everyone thought, “This is it. We’re here: IT era. Go!” And 25 years later, nothing to see. Only after 30 years do you see a real increase — and even then, even in 2000, only 8% of the capital stock consisted of computer software and equipment.

Luisa Rodriguez: Yeah. And was most of the thing happening in that early period the technology improving? Or was it just the technology being incorporated into the world, and the world catching up in various different ways took that long?

Michael Webb: Very much both. Think about the technology in the 1970s compared to the 1990s: IT was getting ever more user-friendly and ever cheaper. Moore’s law was happening all through this time, so you wait a few years and it gets twice as fast and half as expensive. So that’s happening, and people wait a long time to get to the point where it’s actually worth adopting. And it takes a long time for companies to adjust all their operations to make good use of this stuff. We’ll say more about that in a second when we think about LLMs.

Another example, actually, that’s interesting: the automation of the telephone system. Do you remember seeing those photographs of (mostly) women sitting in front of huge switchboard panels, connecting calls by plugging wires between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 — 30 years after this. At which point AT&T is the monopoly provider of this, and the largest single employer in America — 30 years after they’ve invented the complete automation of the very thing they’re employing all these people to do. And the last manual switchboard operator doesn’t lose their job, as it were: that job doesn’t stop existing until, I think, around 1980.

So it takes 90 years from the invention of full automation to its full adoption — in a single company that’s a monopoly provider and can do what it wants, basically. And so the question you might have is: why?

So the way it worked in the case of AT&T was this: there’s a fixed cost to automating any particular telephone exchange, and the exchanges are physically located in different places. The telephone exchange in a city will have thousands, even hundreds of thousands, of wires coming into it, so by switching that one over to automated equipment, you save loads of human labour. Whereas all these exchanges out in the middle of nowhere, in rural areas, might only have one human each: you don’t save much by switching, but the cost of changing over all the equipment is still really high. There’s a huge fixed cost, so you don’t bother doing it until you really, really have to. If you look at the history of AT&T, they started by automating the big cities, and the very last thing to be switched over from human to automated was, I think, on some island somewhere with a tiny population — just the last place where it was worth doing.
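Michael’s fixed-cost logic amounts to a simple payback rule: automate an exchange once the wage savings repay the conversion cost quickly enough, which puts big-city exchanges first and the one-operator island last. The figures below are invented for illustration — the episode gives none.

```python
# AT&T's automation ordering as a payback calculation. All figures are
# invented for illustration; the episode gives no actual numbers.
FIXED_COST = 500_000   # one-off cost of converting an exchange to automatic
ANNUAL_WAGE = 2_000    # yearly cost of one manual operator

exchanges = {"big city": 10_000, "market town": 40, "remote island": 1}  # operators

for place, operators in sorted(exchanges.items(), key=lambda kv: -kv[1]):
    payback_years = FIXED_COST / (operators * ANNUAL_WAGE)
    print(f"{place:13s}: {operators:6,} operators, payback in {payback_years:,.2f} years")

# The big city pays back in days; the island takes 250 years -- so it gets
# automated decades later, exactly the ordering Michael describes.
```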

Ways LLMs might be similar to previous technologies

Luisa Rodriguez: What will make AI similar to other technologies that have been kind of general-purpose, big game changers?

Michael Webb: So I think there’s two buckets: there’s a “humans are humans” bucket, and then there’s the government bucket. Let’s start with the government bucket. The government bucket is basically regulation — though I’d make it a broader bucket and just call it “collective action.” Government is one kind of society-wide collective action, but there are other things too, like unions and professional bodies and all this kind of stuff.

So here’s a question: Do you think that in 10 years’ time, you’ll be able to just talk to a language model and have it prescribe you a prescription-only medication, which you can then go and collect from a pharmacy? Because, by the way, that’s basically possible today: the technology is good enough, or very nearly so. But would it be legal?

Luisa Rodriguez: Yeah. As soon as I start thinking about it, I’m like, there are a whole bunch of interest groups that are going to want that not to happen. There are some interest groups that are going to feel worried that it’s going to make mistakes; there are interest groups that just want to be protecting the people in the jobs that are doing that now. So it seems at least plausible to me that people somewhere will decide that we shouldn’t make it legal. Though I don’t know. In 10 years, it also wouldn’t surprise me, to be honest.

Michael Webb: Right. You’re absolutely right that there are these very powerful interest groups. Some of the areas that I think we all agree seem very likely to be most affected by AI are things like what the doctors do and what the lawyers do. Doctors and lawyers, separately, have the most powerful lobby groups you can possibly imagine: the American Medical Association, the British Medical Association, and then for lawyers, the Bar Council and the various solicitors’ bodies. Here’s one thing that happens: they set all the professional standards for the profession. They decide who gets to be a doctor, and how many doctors — or lawyers, whatever — get accredited every year. If you open a newspaper basically any day of the week, you will see how powerful doctors are.

And so regulation has always been a mix of regulation by the government and by collective interest groups. Unions — whether they’re blue-collar unions or professional bodies for white-collar workers, which don’t have the word “union” in the title but really are unions — are very, very powerful. And these groups really slow down all kinds of applications — possibly for good reasons a lot of the time. For any given application, it’s an open question whether we should or shouldn’t slow it down, given the harms involved. But they are always going to argue for “You need the human completely in the loop, and we shouldn’t change a thing, and we should keep our salaries the same,” and so on and so forth. So I have no idea what’s going to happen in any particular case. But I think we can be extremely sure that there’s a tonne of interest groups that are going to be pretty successful, for a pretty long time, at stopping things from changing faster than it’s in their interests for them to change.

Then there’s the other bucket: “humans are humans” in terms of the way they make decisions. I talked about how LLMs could make it easier to retrain — but you still have to want to retrain, or to do things differently in some way.

Think about teaching as an example. LLMs could completely change the way classrooms are run: the teacher could spend much less of their time marking, and maybe lecturing, and more time doing one-to-one support, whatever it is. Maybe teachers want that, maybe they don’t — I imagine most of them would, actually. But one thing I’m quite sure of is that there is no way the government will be able to force teachers to start adopting this software and using it in certain ways. The teacher is master of their classroom, right? There have been many examples of governments wanting to make teachers do things differently, and generally it’s very hard. Occasionally it works — I know things did change with phonics in the UK, in certain places — but in general, teachers’ unions have a lot of power, and the government cannot control what happens in classrooms. That applies in lots of different places, and the stronger the union, the more it applies. But in general, humans don’t like change for the most part. They like things the way they are.

Whether AI will be rolled out faster than government can regulate it

Luisa Rodriguez: AI seems like it moves incredibly quickly. If in the next year we get improvements to GPT-4 that are basically double the ones we got last year, will there already be really extreme impacts? And not just impacts, but adoption — such that some of these regulatory effects just don’t keep up, and so don’t slow things down the way you might expect they would, or the way they have in other cases?

Michael Webb: I think that the things we were talking about before — in terms of all the reasons that interest groups and lobby groups can slow things down — as I said, I think those very much apply here. And so even though the technology is moving really quickly, they will “keep up” in terms of stopping it being used, right? However fast it’s moving, you can always pass a bill to say no, right?

So the thing that I’d be more worried about is the sharp end of capabilities — the things that you’ve had many guests on this podcast talk about — as well as misuse and those kinds of things. That’s where I’d be more concerned about regulation keeping pace. Because there, it’s not like you have to persuade lots of people in the world economy to adopt your thing and change their systems. All you need is just one bad person to have a very clever thing and to do bad stuff with it, right?

It’s those kinds of things where you have to worry more about regulation moving fast enough. But even there — I’m not an expert on the history of nuclear regulation, but I believe something like the following is true. At some point, someone convinced the US government, the US president, that nuclear was a really big deal, and possibly very dangerous. And with a single stroke of the pen — I don’t know whether it was a presidential executive order or congressional legislation — almost overnight, all research on anything nuclear was classified. So you’re a researcher, just doing your PhD, sitting at home doing some physics or whatever, and suddenly, from tomorrow, doing any more work on that is illegal. The government can just do that, right? The US government can do that.

And you can imagine that if people do enough to convince governments that this stuff is really, really scary — in terms of the existential risk level of this — the government can be like, “OK, you convinced me. As of now, we are classifying all research on AI.” That could just happen tomorrow, and then all these companies would just shut down overnight. And that would be the law, and they couldn’t do anything about it, end of story. That’s a completely possible scenario, in terms of the powers governments have.

Luisa Rodriguez: So it’s not that fast government action is impossible; it’s that it doesn’t happen that often. And sometimes when it does happen, it happens suboptimally — it’s too slow.

Michael Webb: It always happens suboptimally, right? It’s obviously slow. Or it’s too fast and too blunt. As I say, I’m not an expert, but I imagine there’s stuff that was classified under the nuclear rules that it was completely reasonable not to classify — stuff people should still have been able to work on, but couldn’t. Maybe we’d have much better nuclear energy today if that hadn’t happened.

So there’s all kinds of ways in which any regulation is going to be very much not first best: second best at best, or maybe only third best. And I think we’re in a really scary place right now, because regulation, if it happens, could do a lot of good. It could also do a lot of harm. So we’re going to have to tread very, very carefully.

Whether AI will cause mass unemployment in the long term

Luisa Rodriguez: Yeah. It feels both low and high to me. But it could be really high. It could be 50%; it could be 90%. At some point, we’ll probably get to superhuman AI, and it can do all the tasks we can and more. But even 50% feels pretty different to what’s happening now. And I’m wondering if, at that point, any of these models will even apply? At that point, is the world just too different for this kind of conversation to be applicable?

Michael Webb: Yeah. So I think I’m going to stand up for economists here and say yes: the models do apply, and all these considerations do apply. So let’s think about the question: Wouldn’t it be different if we’re talking about 90% of jobs being automated? Let’s go back to a place we started earlier in the conversation: agriculture in the US. In 1790, it was a true statement to say, “In the coming years, 90% of jobs will be fully automated.” That’s in fact what happened.

That happened over a 100-, 150-, 200-year timeframe, and so the speed of this change is really important. But then don’t forget — back to our talk about unions and the American Medical Association and politics and so on, not to mention all the rational decisions of company CEOs and so on — there’s all kinds of forces that mean these things take a long time, even if in theory one could do lots of stuff quickly. There’s also just these capital availability constraints and all kinds of things as well.

There’s just not enough spare cash flowing around in the world for everyone to do that at the same time. Or there’s not enough resources, because adopting technology requires all kinds of work to be done, and you can’t just stop the entire economy whilst you retool everything.

People still want to eat food, and they still want to fly in planes, and whatever it is. You can’t just down tools and say, “No, all we’re doing for the next five years is switching everything over to LLMs.” You can only take so many planks out of your boat and replace them while you’re sailing along in the water.

And so all these kinds of constraints I think are not obvious until you think about them. So that’s point one: Even in a world with 90% of tasks automated, we have been there before. It happened. It happened lots of times. And we’re still here, and things are fine, right? Things look quite different from 1790, but many things are still the same. In that sense, things can get weird, but there’s still some sort of upper limit in how fast I think they will naturally get weird from an economic perspective.

That said, let’s think about what happens when it is 90%, whether that comes in 100 years’ time or whether it comes in 10 years’ time. I think there’s a few really important things here. So we generally are going around saying, “Gosh, what if it automated 90% of cognitive tasks?” Big emphasis around the word “cognitive.” Many, many tasks in the economy are not cognitive tasks. And back to the old thing we’ve been discussing all the way through: when you automate something, suddenly all the incentives go towards how do you make more value out of the stuff that is left that is not automated, or that humans can now do because they’ve been freed up and they can do something else now. And I think there are many, many things that are not cognitive, that there’ll be huge amounts of demand for humans to do.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe here, or anywhere you get podcasts:

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.