Frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety.

I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it. Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff.

Max Tegmark

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Max’s primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity’s future including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a podcast and website called ‘Improve The News’ to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of mind for him.

You can now give an AI system like GPT-3 the text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

His favourite MIT project so far involved taking a bunch of data from the 100 most complicated or famous physics equations, creating an Excel spreadsheet with each of the variables and the results, and saying to the computer, “OK, here’s the data. Can you figure out what the formula is?”

For general formulas, this is really hard. About 400 years ago, Johannes Kepler managed to get hold of the data that Tycho Brahe had gathered on how the planets move around the Sun. Kepler spent four years staring at that data until he figured out what it meant: that the planets move in elliptical orbits.

Max’s team’s code was able to discover that in just an hour.
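
To make that concrete, here’s a minimal, hypothetical sketch in plain Python (not Max’s group’s actual code) of the same kind of task at toy scale: given a small table of variables and results, brute-force a deliberately tiny space of candidate formulas and keep the one that fits best. It uses Kepler’s third law (a simple power law) rather than the elliptical orbits described above.

```python
import numpy as np

# Toy "spreadsheet": one column per variable, one for the result.
# Here: semi-major axis a (in AU) and orbital period T (in years) for six planets.
# Kepler's third law says T^2 is proportional to a^3, i.e. T = a**1.5 in these units.
a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.58])
T = a ** 1.5 * (1 + np.random.normal(0, 0.001, size=a.shape))  # add a little "measurement" noise

# Brute-force search over candidate formulas of the form T = c * a**p.
best_err, best_c, best_p = np.inf, None, None
for c in np.linspace(0.5, 2.0, 31):        # candidate constants
    for p in np.linspace(0.0, 3.0, 61):    # candidate exponents
        err = np.mean((c * a ** p - T) ** 2)
        if err < best_err:
            best_err, best_c, best_p = err, c, p

print(f"Best-fitting formula: T = {best_c:.2f} * a**{best_p:.2f}")
# Expect roughly T = 1.00 * a**1.50, i.e. Kepler's third law recovered from the data.
```

Real symbolic regression systems search a vastly larger space of formulas (compositions of arithmetic, powers, trig functions, and so on) and prune it with cleverer tricks, but the spirit is the same: data in, candidate equations scored, best one out.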

Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:

  • Whether we would be able to understand what superintelligent systems were doing
  • The value of encouraging people to think about the positive future they want
  • How to give machines goals
  • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
  • Whether we’re sleepwalking into disaster
  • Whether people actually just want their biases confirmed
  • Why Max is worried about government-backed fact-checking
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Highlights

What's actually going to happen with AI?

Max Tegmark: I think a very common misconception, especially among nonscientists, is that intelligence is something mysterious that can only exist inside of biological organisms like human beings. And if we’ve learned anything from physics, it’s that no, intelligence is about information processing. It really doesn’t matter whether the information is processed by carbon atoms in neurons, in brains, in people, or by silicon atoms in some GPU somewhere. It’s the information processing itself that matters. It’s this substrate-independent nature of information processing that matters: it doesn’t matter whether it’s a Mac or a PC you’re running it on, or a Linux box, or for that matter what the CPU manufacturer is — or even whether it’s biological or silicon-based.

Max Tegmark: It’s just the information processing that matters. That’s really been the number one core idea, I would say, that’s caused the revolution in AI: that you can keep swapping out your hardware and using the same algorithms. Once you accept that — that you and I are blobs of quarks and electrons that happen to be arranged in a way such that they can process information well — it’s pretty obvious that, unless you have way more hubris than I do, we are not the most optimized quark blobs possible for information processing, of course not.

Max Tegmark: And then of course it’s possible, but the question then is how long will it take for us to figure out how to do it? A second fallacy that makes people underestimate the future progress is they think that before we can build machines that are smarter than us, we have to figure out how our intelligence works. And that’s just wrong. Just think about airplanes. When was the last time you visited the US?

Rob Wiblin: Oh, it’s actually been a while. It’s been a couple of years now.

Max Tegmark: So when you came over to the US, do you remember, did you cross the Atlantic in the mechanical flying bird, or in some other kind of machine?

Rob Wiblin: No, I think I came across in a plane rather than on an ornithopter.

Max Tegmark: There’s an awesome TED Talk that anyone listening to this should Google about how they actually built the flying bird. But it took 100 years longer to figure out how birds fly than to build some other kind of machine that could fly even faster than birds. It turned out that the reason bird flight was so complicated was that evolution optimized birds not just to fly, but under all these other weird constraints. It had to be a flying machine that could self-assemble. Boeing and Airbus don’t care about that constraint. It had to be able to self-repair, and you had to be able to build the flying machine out of only a very small subset of atoms that happen to be very abundant in nature, like carbon, oxygen, nitrogen, and so on. And it also had very, very tight constraints on its energy budget, because a lot of animals starve to death.

Max Tegmark: Your brain can do all this great stuff on 25 watts. It is obviously much more optimized for that than your laptop is. Once you let go of all these evolutionary constraints, which we don’t have as engineers, it turns out there are much easier ways of building flying machines. And I’m quite confident that there are also much easier ways of building machines with human-level intelligence than the one we have in our head.

Max Tegmark: It’s cool to do some neuroscience, and I’ve written some neuroscience papers — steal some cool ideas from how the brain does stuff for inspiration. Even the whole idea of an artificial neural network, of course, came from looking at brains and seeing that they have neural networks inside. But no: odds are the first time we really figure out how our brain works will be after we first build artificial general intelligence, and then it helps us figure out how the brain works.

Slaughterbots

Max Tegmark: It used to be a very honored tradition in the military that humans should take responsibility for things. You can’t just be in the British army and decide to go shoot a bunch of people because you felt like it. They will ask, “Who ordered you to do this? And who is responsible?” But there was a United Nations report that came out showing that last year, for the first time, we had these slaughterbots in Libya — that had been sold to one of the warring parties there by a Turkish company — that actually hunted down humans and killed them, because the machines decided that they were bad guys. This is very different from the drone warfare that’s mostly in the news now with Ukraine, for example, where there’s a human looking at cameras and deciding what to do. It’s where you actually delegate it to the machine: “Just go figure out who’s a bad guy and then kill them.”

Rob Wiblin: Do you know on what kind of basis the drones were making those decisions?

Max Tegmark: That was ultimately proprietary information from the company that they chose not to release.

Rob Wiblin: Wow. OK. I didn’t know that was already a thing.

Max Tegmark: And so far, as usual, the relatively few people who have been killed by these tend to be the more vulnerable people in developing countries, who get screwed first. But it’s not hard to imagine that this is something that could escalate enormously. We don’t generally like to have weapons of mass destruction where very few can kill very many, because it’s very destabilizing. And these slaughterbots — if you can mass produce them for the cost of an iPhone each, and you can buy a million of them for a few hundred million dollars — would mean that one person, in principle, could then go off and kill a million people.

Max Tegmark: And you might think it’s fine because we can program these to only be ethical and only kill the bad guys or whatever, if you don’t have any other moral qualms. But who’s to say what ethics you put into it? Well, the owner says that, right? So if the owner of them decides that the ethical thing to do is to kill everybody of a certain ethnic group, for example, then that’s what these machines will go off and do. And I think this kind of weapon of mass destruction will be much more harmful to the future of humanity than any of the ones we’ve had before, precisely because it gives such outsized power to a very tiny group of people. And in contrast to other conflicts where we’ve had a lot of people do bad things, there were often officers or some soldiers who refused to follow orders or assassinated the dictator or whatever. These machines are the ultimate Adolf Eichmann on steroids, who have been programmed to be just completely loyal.

Max Tegmark: So when we started warning about this, we worked with Stuart Russell, for example, to make this video called Slaughterbots a few years ago, which actually has racked up over a million views now. Some people accused us of being completely unrealistic, and now they’ve stopped saying that, because they’ve been reading in the newspaper that it’s already happened.

Making sure AI benefits us all

Max Tegmark: Even before we get to the point where we have artificial general intelligence, which can just do all our jobs, some pretty spectacular changes are going to happen in society.

Max Tegmark: It could be great, in that we might just produce this abundance of services and goods that can be shared so that everybody gets better off. Or it could kind of go to hell in a handbasket by causing an incredible power concentration, which is ultimately harmful for humanity as a whole. If you’re not worried about this, then just take a moment and think about your least favorite political leader on the planet. Don’t tell me who it is, but just close your eyes and imagine the face of that person. And then just imagine that they will be in charge of whatever company or organization has the best AI going forward as it gets ever better, and gradually become in charge of the entire planet through that. How does that make you feel? Great, or less so?

Rob Wiblin: Less so.

Max Tegmark: We’re not talking about the AI itself, the machine taking over. It’s still this person in charge.

Rob Wiblin: It seems suboptimal.

Max Tegmark: But you don’t look too excited.

Rob Wiblin: No. I would not be psyched by that.

Max Tegmark: Yeah. So that’s the challenge then. We can already see slow trends in that direction. Just look at the stock market: what were the largest companies in the US, for example, 10 years ago? They were oil companies and this and that and the other thing. Now, all the largest companies on the S&P 500 are tech companies, and that’s never going to be undone. Tech companies are gradually going to continue consolidating, growing, and eating up more and more of the lunch of the other companies, and become ever more dominant. And those who control them, therefore, get ever more power.

Max Tegmark: I personally am a big democracy fan. I love Winston Churchill’s quip there, that democracy is a terrible system of government, except for all the other ways. If we believe in the democratic ideal, the solution is obviously to figure out a way of making this ever-growing power that comes from having this tech be in the hands of people of Earth, so that everybody gets better off. It’s very easy in principle to take an ever-growing pie and divide it up in such a way that everyone gets better off and nobody gets seriously screwed over. But that’s not what happens by default, right? That’s not what’s been happening in recent decades. The poorest Americans have been getting actually poorer rather than richer. It’s an open question, I think, of how to deal with this.

Max Tegmark: This is not the question we should go blame my AI research friends for not having solved by themselves. It’s a question economists, political scientists, and everybody else has to get in on and think about: how do we structure our society to make sure that this great abundance ultimately gets controlled in a way that benefits us all?

Max Tegmark: The kind of tools that have already caused the problems that I mentioned — for example, weapons that can give an outsized power to very few, or machine learning tools that, through media and social media, let very few control very many — those obviously have to be part of the conversation we have. How do we make sure that those tools don’t get deployed in harmful ways, so that we get this democratically prosperous future that I’m hoping for?

Imagining a wide range of possible futures

Max Tegmark: This approach of just encouraging people to think about the positive future they want is very inspired by the rest of my life. I spend so much time giving career advice to students who walk into my office — and through 80,000 Hours, you have a lot of experience with this. And the first thing I ask is always, “What is the future that you are excited about?” And if all she can say is, “Oh, maybe I’ll get cancer. Maybe I’ll get run over by a bus. Maybe I’ll get murdered” — terrible strategy for career planning, right? If all you do is make lists of everything that can go wrong, you’re just going to end up a paranoid hypochondriac, and it’s not even going to improve your odds. Instead, I want to see fire in her eyes. I want her to be like, “This is where I want to be in the future.” And then we can talk about the obstacles that have to be circumvented to get there.

Max Tegmark: This is what we need to do as a society also. And then you go to the movies and watch some film about the future, and it’s dystopian. Almost every time it’s dystopian. Or you read something in the news about the future, and it’s one crisis or disaster after another. So I think we, as a species, are making exactly the same mistake that we would find ridiculous if young people made it when we were giving them career advice. That’s why I put this in the book.

Max Tegmark: And I also think it’s important that this job of articulating and inspiring positive vision is not something we can just delegate to tech nerds, like me. People who know how to train a neural network in PyTorch, that doesn’t give them any particular qualifications in human psychology to figure out what makes people truly happy. We want everybody in on this one, and talking about the destination that we’re aiming for. That’s also a key reason I wrote the book: I wanted people to take seriously that there are all these different possibilities, and start having conversations with their friends about what they would like today and their future life to be like — rather than just wait to see some commercial that told them how it was supposed to be. That’s the way we become masters of our own destiny. We figure out where we want to go and then we steer in that direction.

Recent advances in capabilities and alignment

Max Tegmark: We were definitely right seven years ago when we took this seriously as something that wasn’t science fiction — because a whole bunch of the things that some of the real skeptics then thought would maybe never happen have already happened.

Max Tegmark: And also, we’ve learned a very interesting thing about how it’s happening. Because even as recently as seven years ago, you could definitely have argued that in order to get this performance that we have now — where you can just, for example, ask for a picture of an armchair that looks like an avocado, and then get something as cool as what DALL·E made, or have those logical reasoning things from PaLM… Maybe for the listeners who haven’t read through the nerd paper, we can just mention an example. So there’s this text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer.

Max Tegmark: So back even as recently as seven years ago, I think a lot of AI researchers would’ve said that that’s impossible to do, unless you have developed some fundamental new breakthroughs in logic-based systems, having some really clever sort of internal knowledge representation. You would really need to build a lot of new tools. And instead, what we’ve seen is it wasn’t actually necessary.

Max Tegmark: People have built these gigantic black boxes. You basically take a bunch of simulated neurons like we have in our brain — basically, you can think of them as wires sending voltages to each other in a certain way. And then you have a bunch of knobs, which are called “parameters,” which you can tweak, affecting how the different neurons affect one another. Then you have some definition of what good performance means. Like maybe answering a lot of questions correctly. And then it just becomes a problem of tweaking all these knobs to get the best performance. This we call “training.” And you can tell the computer what it means to be good, and then it can keep tweaking these knobs, which in computer science is called an “optimization problem.”

Max Tegmark: And basically, that very simple thing with some fairly simple architectures has gotten us all the way here. There have been a few technical innovations. There’s an architecture called “transformers,” which is a particular way of connecting the neurons together, whatever, but it’s actually pretty simple. It’s just turned out that when you just kept adding more and more data and more and more compute, it became able to do all of these sorts of things.
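
As a rough, hypothetical illustration of that “tweak the knobs until performance is good” loop, here’s a minimal sketch in PyTorch (the library mentioned earlier in the conversation, though this is not any lab’s actual training code). The parameters of the tiny network are the knobs, the loss function is the definition of good performance, and training is the optimization that nudges every knob to reduce the loss.

```python
import torch
from torch import nn

# Toy data: learn y = sin(x) from 256 samples.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)

# A small network of simulated neurons; its weights are the tweakable "knobs" (parameters).
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

loss_fn = nn.MSELoss()                                   # the definition of "good performance"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                                 # "training" as an optimization problem
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)                          # how badly are we doing right now?
    loss.backward()                                      # which way should each knob move?
    optimizer.step()                                     # nudge all the knobs a little
    if step % 500 == 0:
        print(step, loss.item())
```

The systems Max is describing run essentially the same loop, just with a transformer wired up inside the model, billions of parameters instead of a few thousand, and a performance measure defined over enormous amounts of text.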

Max Tegmark: And frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety. I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it.

Max Tegmark: Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff. We have a very poor understanding of how it works. And it turned out to be easy enough to do that every company and everyone and their uncle is doing their own, and there’s a lot of money to be made. It’s hard to envision a situation where we as a species decide to stop for a little bit and figure out how to make them safe.

Regulatory capture

Max Tegmark: For example, let’s look at some past failures. So in the 1950s, the first article came out in the New England Journal of Medicine saying smoking causes lung cancer. Twenty years later, that whole idea was still largely silenced and marginalized, and it took decades until there was much policy, and warning labels on cigarettes, and restrictions on marketing cigarettes to minors. Why was that? Because of a failure of alignment. Big Tobacco was so rich and so powerful that they successfully pulled off a regulatory capture, where they actually hacked the system that was supposed to align them and bought it.

Max Tegmark: Big Oil did the same thing. They’ve of course known for a very long time that there was a little conflict between their personal profits and maybe what was best for society. So they did a regulatory capture, invested a lot of money in manufacturing doubt about whether what they were doing was actually bad. They hired really, really good lawyers. So even though in the social contract the idea had been that the governments would be so powerful that they could give the right incentives to the companies, that failed.

Rob Wiblin: I guess the companies became too close in power to the government, so they could no longer be properly constrained anymore.

Max Tegmark: Exactly. And whenever the regulator becomes smaller or has less money or power than the one that they’re supposed to regulate, you have a potential problem like this. That’s exactly why we have to be careful with an AI that’s smarter than the humans that are supposed to regulate it. What I’m saying is it’s trivial to envision exactly the same failure mode happening now. If whatever company that first builds AGI realizes that they can take over the world and do whatever the CEO wants with the world — but that’s illegal in the country they’re in — well, they can just follow the playbook of Big Tobacco and Big Oil and take over the government.

Max Tegmark: I would actually go as far as saying that’s already started to happen. One of the most depressing papers I’ve read in many years was written by two brothers, Abdalla and Abdalla, where they made a comparison between Big Tobacco and Big Tech.

Max Tegmark: Even though the paper is full of statistics and charts that I’ll spare you — people can find it on arXiv — they open with this just spectacular hypothetical: suppose you go to this public health conference. Huge conference, thousands of top researchers there. And you realize that the person on the stage, giving this keynote about public health and smoking and lung cancer and so on, is actually funded by a tobacco company. But nobody told you about that: it doesn’t say in the bio, and they didn’t say it when they introduced the speaker. Then you go out into the expo area and you see all these nice booths there by Philip Morris and Marlboro, and you realize that they are the main sponsors of the whole conference. That would be anathema at a public health conference. You would never tolerate that.

Max Tegmark: Now you go to NeurIPS — tomorrow is the deadline for my group to submit two papers; this is the biggest AI conference of the year — and you have all these people talking in some session about AI in society or AI ethics. And they forget to mention that they got all these grants from Big Tech. And then you go out to the expo area and there’s the Facebook booth and there’s the Google booth and so on and so forth. And for some reason, this kind of capture of academia that would be considered completely unacceptable at a public health conference, or for that matter a climate change conference, is considered completely OK in the AI community.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.