#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages.

He’s also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, John has also written 22 books and produced five online university courses; he hosts one and a half podcasts and now writes a regular New York Times op-ed column.

Our show is mostly about the world’s most pressing problems and what you can do to solve them. But what’s the point of hosting a podcast if you can’t occasionally just talk about something fascinating with someone whose work you appreciate?

So today, just before the holidays, we’re sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:

  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter, or not?
  • Can a language decay and get worse at communicating what people want to get across?
  • If children aren’t taught any language at all, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does the language we speak really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can’t understand, but which better serves the tradeoffs AI models need to make?

We then put some of these questions to the large language model ChatGPT, asking it to play the role of a linguistics professor at Columbia University.

We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
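
If you want to see what ‘predict the next word’ means in practice, here’s a toy sketch in Python. It uses a simple bigram counter rather than a neural network — vastly cruder than GPT-3, but the generation loop is the same idea:

    # A toy next-word predictor: count which word follows which, then
    # generate text by repeatedly sampling the next word. Real language
    # models replace the counting with a neural network, but the loop --
    # predict a word, append it, repeat -- is the same.
    from collections import Counter, defaultdict
    import random

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat saw the dog .").split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    word, output = "the", ["the"]
    for _ in range(8):
        successors = counts[word]
        word = random.choices(list(successors), weights=successors.values())[0]
        output.append(word)
    print(" ".join(output))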

But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?

Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and to develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed, and are ultimately given power in society.

One way to think about ‘understanding’ is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.

However, as Richard explains, another way to think about ‘understanding’ is as a functional matter. If you really understand an idea, you’re able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

One experiment conducted by AI researchers suggests that language models have some of this kind of understanding.

If you ask any of these models what city the Eiffel Tower is in and what else you might do on a holiday to visit the Eiffel Tower, they will say Paris and suggest visiting the Palace of Versailles and eating a croissant.

One would be forgiven for wondering whether this might all be accomplished merely by memorising word associations in the text the model has been trained on. To investigate this, the researchers found the part of the model that stored the connection between ‘Eiffel Tower’ and ‘Paris,’ and flipped that connection from ‘Paris’ to ‘Rome.’

If the model just associated some words with one another, you might think that this would lead it to now be mistaken about the location of the Eiffel Tower, but answer other questions correctly. However, this one flip was enough to switch its answers to many other questions as well. Now if you ask it what else you might visit on a trip to the Eiffel Tower, it will suggest visiting the Colosseum and eating pizza, among other changes.

Another piece of evidence comes from the way models are prompted to give responses to questions. Researchers have found that telling models to talk through problems step by step often significantly improves their performance, which suggests that models are doing something useful with that extra “thinking time”.
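
To make the prompting trick concrete, here’s a minimal sketch in Python. The query_model function is a placeholder for whatever language model API you have access to — it’s not a real library call:

    # Sketch of 'chain-of-thought' prompting. query_model stands in for a
    # real language model API.
    def query_model(prompt: str) -> str:
        raise NotImplementedError("plug in a language model here")

    question = ("A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    # Asked directly, models often blurt out the tempting wrong answer
    # (10 cents).
    direct_prompt = f"Q: {question}\nA: The answer is"

    # Nudged to reason step by step, they more often work out the right
    # answer (5 cents), spending extra tokens on intermediate reasoning.
    cot_prompt = f"Q: {question}\nA: Let's think step by step."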

Richard argues, based on this and other experiments, that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck — or at least something sufficiently close to a duck that it doesn’t matter.

In today’s conversation, host Rob Wiblin and Richard discuss the above, as well as:

  • Could speeding up AI development be a bad thing?
  • The balance between excitement and fear when it comes to AI advances
  • Why OpenAI focuses its efforts where it does
  • Common misconceptions about machine learning
  • How many computer chips it might take for AI systems to be able to do most of the things humans do
  • How Richard understands the ‘alignment problem’ differently from other people
  • Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
  • What work to positively shape the development of AI Richard is and isn’t excited about
  • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#140 – Bear Braumoeller on the case that war isn't in decline

Is war in long-term decline? Steven Pinker’s The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.

But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today’s guest, professor of political science Bear Braumoeller, is one of the scholars who believe we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.

If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we’re as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone.
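
One of the simplest versions of such a test is a permutation test: shuffle the years between two eras many times and see how often a difference as big as the observed one arises by chance. Here’s a minimal sketch in Python with made-up yearly counts — illustrative numbers only, not the Correlates of War data:

    # Permutation test for a shift in warlikeness between two eras.
    # The yearly counts of war onsets below are invented for illustration.
    import random

    era_a = [3, 1, 2, 4, 2, 3, 1, 2, 3, 2]  # hypothetical earlier era
    era_b = [1, 0, 2, 1, 1, 0, 1, 2, 0, 1]  # hypothetical later era

    observed = sum(era_a) / len(era_a) - sum(era_b) / len(era_b)

    pooled = era_a + era_b
    extreme = 0
    trials = 10_000
    for _ in range(trials):
        random.shuffle(pooled)  # pretend the era labels are arbitrary
        diff = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
        if abs(diff) >= abs(observed):
            extreme += 1
    print(f"p = {extreme / trials:.4f}")  # small p: shift unlikely to be chance

Bear’s actual analyses are far more sophisticated — war deaths are heavy-tailed and clumped in time — but the logic is the same: only call something a trend if it beats what random shuffling can produce.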

Among other metrics, Bear looks at:

  • Battlefield deaths alone, as a percentage of combatants’ populations, and as a percentage of world population.
  • The total number of wars starting in a given year.
  • Rates of war initiation as a fraction of all country pairs capable of fighting wars.
  • How likely it was during different periods that a given war would double in size.

In a nutshell, and taking in the full picture painted by these different measures, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, “only the dead have seen the end of war”.

That’s not to say things are the same in all periods. Depending on which indicator of warlikeness you weight most heavily, you can point to some periods that seem violent or pacific beyond what might be explained by random variation.

For instance, Bear points out that war initiation really did go down a lot at the end of the Cold War, with peace probably fostered by a period of unipolar US dominance, and the end of great power funding for proxy wars.

But that drop came after a period of somewhat above-average warlikeness during the Cold War. And surprisingly, the most peaceful period in Europe turns out not to be 1990–2015, but rather 1815–1855, during which the monarchical ‘Concert of Europe,’ scarred by the Napoleonic Wars, worked together to prevent revolution and interstate aggression.

Why haven’t modern ideas about the immorality of violence led to the decline of war, when it’s such a natural thing to expect? Bear is no Enlightenment scholar, but his book notes (among other reasons) that while modernity threw up new reasons to embrace pacifism, it also gave us new reasons to embrace violence: as a means to overthrow monarchy, distribute the means of production more equally, or protect people a continent away from ethnic cleansing — all motives that would have been foreign in the 15th century.

In today’s conversation, Bear and Rob discuss all of the above in even more detail than usual for an 80,000 Hours podcast episode, as well as:

  • What would Bear’s critics say in response to all this?
  • What do the optimists get right?
  • What are the biggest problems with the Correlates of War dataset?
  • How does one do proper statistical tests for events that are clumped together, like war deaths?
  • Why are deaths in war so concentrated in a handful of the most extreme events?
  • Did the ideas of the Enlightenment promote nonviolence, on balance?
  • Were early states more or less violent than groups of hunter-gatherers?
  • If Bear is right, what can be done?
  • How did the ‘Concert of Europe’ or ‘Bismarckian system’ maintain peace in the 19th century?
  • Which wars are remarkable but largely unknown?
  • What’s the connection between individual attitudes and group behaviour?
  • Is it a problem that this dataset looks at just the ‘state system’ and ‘battlefield deaths’?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#139 – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game: a coin will be flipped until it comes up heads. If heads appears on the first flip, you win $2. If it first appears on the second flip, you win $4; on the third, $8; on the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for ‘0.5 * $2 = $1’ in expected earnings. A 25% chance of winning $4, for ‘0.25 * $4 = $1’ in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that’s despite the fact that you know with certainty you can only ever win a finite amount!
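
Here’s a minimal sketch in Python of both halves of that claim — the diverging expected-value sum, and a simulation showing how modest your winnings tend to be in practice:

    import random

    # Each term of the expected-value sum is worth $1:
    # P(first heads on flip k) * payout = 0.5**k * 2**k = 1.
    print(sum(0.5 ** k * 2 ** k for k in range(1, 31)))  # 30 terms -> $30

    def play_once():
        """Flip until the first heads; the payout doubles with each tails."""
        payout = 2
        while random.random() < 0.5:  # tails -- keep flipping
            payout *= 2
        return payout

    plays = [play_once() for _ in range(100_000)]
    print(sum(plays) / len(plays))  # usually just a few tens of dollars

The sum grows without limit as you add terms, yet the average over even 100,000 simulated plays stays small — which is exactly the tension the paradox trades on.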

Today’s guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore a matter of mere intellectual curiosity. But Alan doesn’t find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours’ efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or — after 17 more iterations — 3,486,784,401 lives with a 0.000095% chance? Expected value says this final offer is better than the others — more than 3,000 times better, in fact.
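
The arithmetic is easy to verify — each iteration multiplies the lives saved by 3 and the probability by half, so the expected number of lives saved grows as 1.5^n:

    # Expected lives saved after n iterations of 'triple the lives,
    # halve the probability'.
    for n in [0, 1, 2, 3, 20]:
        lives, prob = 3 ** n, 0.5 ** n
        print(f"{lives:>13,} lives at {prob:.7%} -> expected lives: {lives * prob:,.1f}")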

Insisting that people give up a sure thing in favour of a vanishingly low chance of a very large impact strikes some people as peculiar or even fanatical. But one of Alan’s PhD students, Hayden Wilkinson, discovered that rejecting expected value on this basis requires you to swallow even more bitter pills, like giving up on the idea that if A is better than B, and B is better than C, then A is also better than C.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we’re better off looking for ways our probability estimates might be wrong.

In today’s conversation, Alan and Rob explore these issues and many others:

  • Simple rules of thumb for having philosophical insights
  • A key flaw that hid in Pascal’s wager from the very beginning
  • Whether we have to simply ignore infinities because they mess everything up
  • What fundamentally is ‘probability’?
  • Some of the many reasons ‘frequentism’ doesn’t work as an account of probability
  • Why the standard account of counterfactuals in philosophy is deeply flawed
  • And why counterfactuals present a fatal problem for one sort of consequentialism

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations.

Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics.

And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”

So what convinces Sharon that philosophical hedonism deserves another go?

Stepping back for a moment, any answer to the question “What has intrinsic value?” faces a serious challenge: “How do we know?” It’s far from clear how something having intrinsic value can cause us to believe that it has intrinsic value. And if there’s no causal or rational connection between something being valuable and our believing that it has value, we could only get the right answer by some extraordinary coincidence. You may feel it’s intrinsically valuable to treat people fairly, but maybe there’s just no reason to trust that intuition.

Since the 1700s, many philosophers working on so-called ‘metaethics’ — that is, the study of what ethical claims are and how we could know if they’re true — have despaired of us ever making sense of or identifying the location of ‘objective’ or ‘intrinsic’ value. They conclude that when we say things are ‘good,’ we aren’t really saying anything about their nature, but rather just expressing our own attitudes, or intentions, or something else.

Sharon disagrees. She says the answer to all this has been right under our nose all along.

We have a concept of value because of our experiences of positive sensations — sensations that immediately indicate to us that they are valuable and that if someone could create more of them, they ought to do so. Similarly, we have a concept of badness because of our experience of suffering — sensations that scream to us that if suffering were all there were, it would be a bad thing.

How do we know that pleasure is valuable, and that suffering is the opposite of valuable? Directly!

While I might be mistaken that a painting I’m looking at is in real life as it appears to me, I can’t be mistaken about the nature of my perception of it. If it looks red to me, it may or may not be red, but it’s definitely the case that I am perceiving redness. Similarly, while I might be mistaken that a painting is intrinsically valuable, I can’t be mistaken about the pleasurable sensations I’m feeling when I look at it, and the fact that among other qualities those sensations have the property of goodness.

While intuitive on some level, this arguably implies some very strange things. Most famously, the philosopher Robert Nozick challenged it with the idea of an ‘experience machine’: if you could enter into a simulated world and enjoy a life far more pleasurable than the one you experience now, should you do so, even if it would mean none of your accomplishments or relationships would be ‘real’? Nozick and many of his colleagues thought not.

The idea has also been challenged for failing to value human freedom and autonomy for its own sake. Would it really be OK to kill one person to use their organs to save the lives of five others, if doing so would generate more pleasure and less suffering? Few believe so.

In today’s interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.

Host Rob Wiblin and Sharon cover all that, as well as:

  • The essential need to disentangle intrinsic, instrumental, and other sorts of value
  • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings)
  • How do people react to the ‘experience machine’ thought experiment when surveyed?
  • Why hedonism recommends often thinking and acting as though it were false
  • Whether it’s crazy to think that relationships are only useful because of their effects on our subjective experiences
  • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable
  • Whether, if we didn’t have positive or negative experiences, we would simply never talk about goodness and badness
  • Whether the plausibility of hedonism is affected by our theory of mind
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#137 – Andreas Mogensen on whether effective altruism is just for consequentialists

Effective altruism, in a slogan, aims to ‘do the most good.’ Utilitarianism, in a slogan, says we should act to ‘produce the greatest good for the greatest number.’ It’s clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism?

Today’s guest, Andreas Mogensen — senior research fellow at Oxford University’s Global Priorities Institute — does reject utilitarianism, but as he explains, this does little to dampen his enthusiasm for effective altruism.

Andreas leans towards ‘deontological’ or rule-based theories of ethics, rather than ‘consequentialist’ theories like utilitarianism which look exclusively at the effects of a person’s actions.

Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially.

However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger’s wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do.

In a world as full of preventable suffering as our own, this simple ‘principle of beneficence’ is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance.

As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one’s income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they’ll get the biggest ‘bang for buck’.

In a world as unequal as our own, this pledge at a very minimum gives an upper-middle-class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves.

What arguments could a non-utilitarian moral theory mount against such giving?

Perhaps it could interfere with the achievement of other important moral goals? In response to this Andreas notes that alleviating the suffering of people in severe poverty is an important goal that should compete with alternatives. And furthermore, giving 10% is not so much that it likely disrupts one’s ability to, for instance, care for oneself or one’s family, or participate in domestic politics.

Perhaps it involves the violation of important moral prohibitions, such as those on stealing or lying? In response Andreas points out that the activities advocated by effective altruism researchers almost never violate such prohibitions — and if a few do, one can simply rule out those options and choose among the rest.

Many approaches to morality will say it’s permissible not to give away 10% of your income to help others as effectively as possible. But if almost all of them regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless.

In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover:

  • Should we treat philosophical thought experiments that feature very large numbers with great suspicion?
  • If we had to allow someone to die to avoid preventing the football World Cup final from being broadcast to the world, is that permissible or not? If not, what might that imply?
  • What might a virtue ethicist regard as ‘doing the most good’?
  • If a deontological theory of morality parted ways with common effective altruist practices, where would that most likely happen?
  • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Continue reading →

#136 – Will MacAskill on what we owe the future

  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for ‘longtermism’ put forward in What We Owe The Future, the latest book from today’s guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he’s willing to contemplate timescales far beyond what’s typically discussed:

If we last as long as a typical mammal species, that’s another 700,000 years. If we last until the Earth is no longer habitable, that’s hundreds of millions of years. If we manage one day to take to the stars and build a civilisation there, we could live for hundreds of trillions of years. […] Future people [could] outnumber us a thousand or a million or a trillion to one.

A natural objection to thinking millions of years ahead is that it’s hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn’t matter how important something might be if you can’t predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

He highlights two effects that could be very enduring: “…reducing risks of extinction of human beings or of the collapse of civilisation, and ensuring that the values and ideas that guide future society are better ones rather than worse.”

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren’t coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

For thousands of years, almost everyone — from philosophers to slaves themselves — regarded slavery as acceptable in principle. At the time the British Empire ended its participation in the slave trade, the industry was booming and earning enormous profits. It’s estimated that abolition cost Britain 2% of its GDP for 50 years.

So why did it happen? The global abolition movement seems to have originated within the peculiar culture of the Quakers, who were the first to argue slavery was unacceptable in all cases and campaign for its elimination, gradually convincing those around them with both Enlightenment and Christian arguments. If a few such moral pioneers had fallen off their horses at the wrong time, maybe the abolition movement never would have gotten off the ground and slavery would remain widespread today.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don’t eliminate a bad practice now, it may be with us forever. In today’s in-depth conversation, we discuss the possibility of a harmful moral ‘lock-in’ as well as:

  • How Will was eventually won over to longtermism
  • The three best lines of argument against longtermism
  • How to avoid moral fanaticism
  • Which technologies or events are most likely to have permanent effects
  • What ‘longtermists’ do today in practice
  • How to predict the long-term effect of our actions
  • Whether the future is likely to be good or bad
  • Concrete ideas to make the future better
  • What Will donates his money to personally
  • Potatoes and megafauna
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#135 – Samuel Charap on key lessons from five months of war in Ukraine

After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we’re in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

As Sam lays out, Russia controls much of Ukraine’s east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

In today’s brisk conversation, Rob and Sam cover the following topics:

  • Current territorial control and the level of attrition within Russia’s and Ukraine’s military forces.
  • Russia’s current goals.
  • Whether Sam’s views have changed since March on topics like: Putin’s motivations, the wisdom of Ukraine’s strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
  • Why so many people incorrectly expected Russia to fully mobilise for war, or to persist with its original approach to the invasion.
  • Whether there’s anything to learn from many of our worst fears — such as the use of bioweapons on civilians — not coming to pass.
  • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
  • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it’s still a long shot.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#134 – Ian Morris on what big-picture history teaches us

Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

Why such big systematic changes — and why these changes specifically?

That’s the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the ‘right’ way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

In Foragers, Farmers, and Fossil Fuels, Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

On this theory, it’s technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career.

In Why the West Rules—For Now, he set out to understand why the Industrial Revolution happened in England, and why Europe went on to dominate much of the rest of the world — rather than industrialisation kicking off somewhere else, like China, with China going on to establish colonies in Europe. (In a word: geography.)

In War! What is it Good For?, he tried to explain why it is that violent conflicts often lead to longer lives and higher incomes (i.e. wars build empires which suppress interpersonal violence internally), while other times they have the exact opposite effect (i.e. advances in military technology allow nomads to raid and pull apart these empires).

In today’s episode, we discuss all of Ian’s major books, taking on topics such as:

  • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions
  • Whether or not wars can still lead to less violence today
  • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration
  • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces”
  • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa
  • In what sense Ian thinks Brexit was “10,000 years in the making”
  • The most common misconceptions about macrohistory

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Max’s primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity’s future including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a podcast and website called ‘Improve The News’ to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind.

You can now give an AI system like GPT-3 the text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

His favourite MIT project so far involved taking a bunch of data from the 100 most complicated or famous physics equations, creating an Excel spreadsheet with each of the variables and the results, and saying to the computer, “OK, here’s the data. Can you figure out what the formula is?”

For general formulas, this is really hard. About 400 years ago, Johannes Kepler managed to get hold of the data that Tycho Brahe had gathered on how the planets move around the Sun. Kepler spent four years staring at the data until he figured out what it meant: that planets orbit in ellipses.

Max’s team’s code was able to discover that in just an hour.
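
To give a flavour of the idea — this is a toy sketch, not the MIT group’s actual code — you can ‘rediscover’ Kepler’s third law by searching a small space of candidate power laws for the one that best fits real planetary data:

    # Toy symbolic regression: find the exponent p for which T = a**p
    # best fits the planets' orbits (a in AU, T in years).
    planets = {
        "Mercury": (0.387, 0.241),
        "Venus": (0.723, 0.615),
        "Earth": (1.000, 1.000),
        "Mars": (1.524, 1.881),
        "Jupiter": (5.203, 11.862),
    }

    def fit_error(p):
        return sum((a ** p - t) ** 2 for a, t in planets.values())

    candidates = [k / 4 for k in range(-12, 13)]  # exponents from -3 to 3
    best = min(candidates, key=fit_error)
    print(best)  # 1.5 -- the period scales as the 3/2 power of the orbit

Real systems like the one Max’s group built search vastly richer spaces of formulas, with tricks to exploit things like symmetry and separability — but the core move is the same: generate candidate expressions and keep whatever fits the data.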

Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:

  • Whether we would be able to understand what superintelligent systems were doing
  • The value of encouraging people to think about the positive future they want
  • How to give machines goals
  • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
  • Whether we’re sleepwalking into disaster
  • Whether people actually just want their biases confirmed
  • Why Max is worried about government-backed fact-checking
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security at the AI company Anthropic, as part of its security team. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store them on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

In today’s conversation, Rob and Nova cover:

  • How good or bad information security is today
  • The most secure computer systems that exist today
  • How to design an AI training compute centre for maximum efficiency
  • Whether ‘formal verification’ can help us design trustworthy systems
  • How wide the practical gap is between AI capabilities and AI safety
  • How to disincentivise hackers
  • What listeners should do to strengthen their own security practices
  • Jobs at Anthropic
  • And a few more things as well

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Continue reading →

#131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”

If you were a contestant on such a TV show, you’d love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.

Today’s guest Lewis Dartnell has gone as far in compiling this information as anyone, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.

But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around.

As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it’s just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”

So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology as the microchips in a computer — so actually making solar panels would be incredibly difficult.

Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.

A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.

The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.

This is clearly not a trivial task. Lewis’s own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.

And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?

If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.

As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.

They cover:

  • The biggest impediments to bouncing back
  • The reality of humans trying to actually do this
  • The most valuable pro-resilience adjustments we can make today
  • How to recover without much coal or oil
  • How to feed the Earth in disasters
  • And the most exciting recent findings in astrobiology

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you’re all bunched up on a few tables in a basement office.

But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You’re the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

You suddenly have the opportunity to make more progress than ever before — but alongside the excitement come worries about the effects that large amounts of funding can have.

This is roughly the situation faced by today’s guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

While surely a huge success, it brings with it risks that he’s never had to consider before:

  • Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
  • Being seen as profligate could strike onlookers as selfish and disreputable.
  • Folks might start pretending to agree with their agenda just to get grants.
  • People working on nearby issues that are less flush with funding may end up resentful.
  • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
  • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

But all these ‘risks of commission’ have to be weighed against ‘risk of omission’: the failure to achieve all you could have if you’d been truly ambitious.

People looking askance at you for paying high salaries to attract the staff you want is unpleasant.

But failing to prevent the next pandemic because you didn’t have the necessary medical experts on your grantmaking team is worse than unpleasant — it’s a true disaster. Yet few will complain, because they’ll never know what might have been if you’d only set frugality aside.

Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today’s episode, Rob and Will discuss the above as well as:

  • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
  • Why are so many nonfiction books full of factual errors?
  • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
  • What does Will disagree with his colleagues on?
  • Should we focus on existential risks more or less the same way, whether we care about future generations or not?
  • Are potatoes one of the most important technologies ever developed?
  • And plenty more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#129 – Dr James Tibenderana on the state of the art in malaria control and elimination

The good news is deaths from malaria have been cut by a third since 2005. The bad news is it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa.

We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth’s surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019?

That’s one of many questions I put to today’s guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad.

In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far.

And as James explains, while COVID-19 may have an ‘R’ (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens of people, allowing cases to explode quickly.
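To see how you end up with a number in the thousands, here’s a minimal back-of-the-envelope sketch. The specific figures are illustrative assumptions chosen to match the qualitative description above, not data from the episode:

```python
# Rough reproduction number for malaria, using assumed illustrative figures.
mosquitoes_infected_per_case = 200  # "hundreds of mosquitoes"
people_bitten_per_mosquito = 24     # "dozens of people"
infection_chance_per_bite = 0.25    # assumed fraction of bites that transmit

r = (mosquitoes_infected_per_case
     * people_bitten_per_mosquito
     * infection_chance_per_bite)
print(r)  # 1200.0 -- a reproduction number "in the 1,000s"
```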

The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding.

Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there’s enthusiasm to find new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria.

The RTS,S vaccine is the first-ever vaccine that targets a protozoan, as opposed to a virus or bacterium. Under development for decades, it’s a great scientific achievement. But James points out that even after three doses, it’s still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half.

On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. Of course these genes would normally be eliminated by natural selection, but by using a ‘gene drive,’ you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread and ultimately eliminate the mosquitoes that carry malaria at low cost, thereby largely ridding the world of the disease.
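As a rough illustration of why near-100% inheritance is so powerful, here’s a toy sketch of how the introduced gene’s frequency changes generation by generation. It assumes random mating, ignores the gene’s fitness cost, and starts from an arbitrary 1% release; it’s a simplification for intuition, not a model from the episode:

```python
def next_freq(p: float, drive: bool = True) -> float:
    """Frequency of the introduced gene in the next generation's gene pool.

    Heterozygous mosquitoes normally pass a gene to 50% of offspring;
    a gene drive pushes that to (nearly) 100%.
    """
    het_transmission = 1.0 if drive else 0.5
    return p**2 + 2 * p * (1 - p) * het_transmission

p = 0.01  # assumed: release the construct into 1% of gene copies
for _ in range(12):
    p = next_freq(p)
print(round(p, 4))  # ~1.0: the drive sweeps to near fixation

# With drive=False the frequency simply stays at 1%, and would actually
# decline once you account for the fitness cost selection acts against.
```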

Because a single country embracing this method would have global effects, James cautions that it’s important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we’ve made a mistake. Groups like Target Malaria are working on exactly these two issues.

James also emphasises that there are thousands of similar mosquito species, most of which don’t carry malaria, so for better or worse gene drives may make little difference to the total number of mosquitoes.

In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including:

  • How malaria spreads and the symptoms it causes
  • The use of insecticides and poison baits
  • How big a problem insecticide resistance is
  • How malaria was eliminated in North America and Europe
  • Whether funding is a key bottleneck right now
  • The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#128 – Chris Blattman on the five reasons wars happen

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.

Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.

The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today’s episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what social scientists think they’ve learned.

Chris’s first point is that while organised violence may feel like it’s all around us, it’s actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.

In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn’t — so they can see what a healthy society looks like and what’s missing in the places where war does take hold.

Chris argues that social scientists have generated five cogent models of when war can be ‘rational’ for both sides of a conflict:

  1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
  2. Intangible incentives — such as an intrinsic desire for revenge.
  3. Uncertainty — such as both sides underestimating each other’s resolve to fight.
  4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
  5. Misperceptions — such as our inability to see the world through other people’s eyes.

In today’s interview, we walk through how each of the five explanations works and what specific wars or actions it might explain.

In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).

The interview also covers:

  • What Chris and Rob got wrong about the war in Ukraine
  • What causes might not fit into these five categories
  • The role of people’s choice to escalate or deescalate a conflict
  • How great power wars or nuclear wars are different, and what can be done to prevent them
  • How much representative government helps to prevent war
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good

This podcast highlighted Sam Bankman-Fried as a positive example of someone ambitiously pursuing a high-impact career. To say the least, we no longer endorse that. See our statement for why.

The show’s host, Rob Wiblin, has also released some personal comments on this episode and the FTX bankruptcy on The 80,000 Hours Podcast feed, which you can listen to here.

If you were offered a 100% chance of $1 million to keep for yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.

But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome — and so swing for the fences.

This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.

Added 30 November 2022: What I meant to refer to as totally rational in the above paragraph is thinking about the ‘expected value’ of one’s actions, not maximizing expected dollar returns as if you were entirely ‘risk-neutral’. See clarifications on what I (Rob Wiblin) think about risk-aversion here.
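To make the maths behind those two framings concrete, here’s a minimal sketch. The logarithmic utility curve and the $50,000 baseline wealth are standard but arbitrary modelling assumptions for illustration, not anything from the episode:

```python
import math

# A sure $1m versus a 10% shot at $15m.
baseline, safe, risky, p = 50_000, 1_000_000, 15_000_000, 0.10

# Expected dollar value: the gamble wins on paper ($1.5m vs $1.0m).
ev_safe, ev_risky = safe, p * risky

# For personal wealth, extra dollars matter less the richer you are.
# Modelling that with log utility (an assumption), playing it safe wins:
eu_safe = math.log(baseline + safe)  # ~13.86
eu_risky = (1 - p) * math.log(baseline) + p * math.log(baseline + risky)  # ~11.39

# A large-scale donor's impact is roughly linear in dollars given away,
# so for donations, expected dollar value is (roughly) the right yardstick.
print(ev_safe, ev_risky, eu_safe, eu_risky)
```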

Despite that, Sam still drives a Corolla and sleeps on a beanbag, because the only reason he started FTX was to make money to give it away. In 2020, when he was 5% as rich as he is now, he was nonetheless the second biggest individual donor to Joe Biden’s general election campaign.

In today’s conversation, Sam outlines how at every stage in FTX’s development, he and his team were able to choose the high-risk path to maximise expected value — precisely because they weren’t out to earn money for themselves.

This year his philanthropy has kicked into high gear with the launch of the FTX Future Fund, which has the initial ambition of giving away hundreds of millions a year and hopes to soon escalate to over a billion a year.

The Fund is run by previous guest of the show Nick Beckstead, and embodies the same risk-loving attitude Sam has learned from entrepreneurship and trading on financial markets. Unlike most foundations, the Future Fund:

  • Is open to supporting young people trying to get their first big break
  • Makes applying for a grant surprisingly straightforward
  • Is willing to make bets on projects it fully expects to fail, just because they have positive expected value.

Their website lists both areas of interest and more concrete project ideas they are looking to support: some pretty natural, some that might raise an eyebrow, and others quirkier still. The hope is that these will inspire entrepreneurs to come forward, seize the mantle, and be the champions who actually make these things happen.

While these ideas may seem pretty random, they all stem from a particular underlying moral and empirical vision that the Future Fund has laid out.

In this conversation, we speak with Sam about the hopes he and the Fund have for how the long-term future of humanity might go incredibly well, the fears they hold about how it could go incredibly badly, and what levers they might be able to pull to slightly nudge us towards the former.

Listeners who want to launch an ambitious project to improve humanity’s future should not only listen to the episode, but also look at the full list of the kind of things Sam and his colleagues are hoping to fund, see if they’re inspired, and if so, apply to get the ball rolling.

On top of that we also cover:

  • How Sam feels now about giving $5 million to Biden’s general election campaign
  • His fears and hopes for artificial intelligence
  • Whether or not blockchain technology actually has useful real-world applications
  • What lessons Sam learned from some serious early setbacks
  • Why he fears the effective altruism community is too conservative
  • Why Sam is as authentic now as he was before he was a celebrity
  • And much more.

Note: Sam has donated to 80,000 Hours in the past

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore


Continue reading →

#126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs

Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don’t, because it doesn’t.

Incredible though it might seem, according to today’s guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children’s lives play out once they’re adults.

Of course, kids do resemble their parents. But just as we probably can’t say it was attentive parenting that gave me my mother’s nose, perhaps we can’t say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can’t really distinguish the impact of one from the other.

But nature does offer us a natural experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins share only half. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material rather than the usual 50%. Double that difference, and you’ve got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise.
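For readers who want the back-of-the-envelope formula behind that logic (often called Falconer’s formula), here it is with made-up correlations for illustration; they aren’t results from any particular twin study:

```python
# Falconer's formula: decompose outcome variance from twin correlations.
r_mz = 0.70  # assumed correlation between identical twins on some outcome
r_dz = 0.45  # assumed correlation between fraternal twins

h2 = 2 * (r_mz - r_dz)  # heritability: double the extra similarity
c2 = r_mz - h2          # shared (family) environment, including parenting
e2 = 1 - r_mz           # everything else: unshared environment and noise

print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.5 0.2 0.3
```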

The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it’s differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment,” such as the friends one makes at school, take second place. Parenting style does matter for something, but it comes in a clear third.

You might think that these studies are accidentally recruiting parents who are all unusually competent, by including only the kind of people who respond to letters asking them to participate in a university study of twin behaviour. But in fact that effect is small, because many countries and hospitals have enrolled twins in this research almost by default, and academics can check on some kinds of outcomes using tax, death, and court records, which include almost everyone.

Bryan lays out all the above in his book Selfish Reasons To Have More Kids: Why Being a Great Parent Is Less Work And More Fun Than You Think.

He is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting.

First, for some adult outcomes parenting was a big deal (the quality of the parent/child relationship), and for others at least a moderate deal (drug use, criminality, and religious/political identity).

Second, these are adult outcomes — parents can and do influence you quite a lot, so long as you’re young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot.

Third, this research only studies variation in parenting behaviour that was common among the families studied. The studies are just mute on anything that wasn’t actually done by many parents in their sample.

And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes. So a large enough change in one’s entire environment, say from Haiti to the United States, does matter, even if moving between families within the United States has modest effects.

Despite all that, the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview host Rob Wiblin interrogates whether Bryan can really be right, or whether the research he’s drawing on has taken a wrong turn somewhere.

And that’s just one topic we cover, some of the others being:

  • People’s biggest misconceptions about the labour market
  • Arguments against high levels of immigration
  • Whether most people actually vote based on self-interest
  • Whether philosophy should stick to common sense or depart from it radically
  • How to weigh personal autonomy against the possible benefits of government regulation
  • Bryan’s track record of winning 23 out of 23 bets about how the future would play out
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders

Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era.

Russia’s invasion of Ukraine has changed all that.

According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a ‘global catastrophic nuclear event’ never fell as low as people like to think, and for some time has been on its way back up.

At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent.

If new funding sources are not identified to replace donors that are withdrawing (like the MacArthur Foundation), the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable.

While global poverty is on the decline and life expectancy increasing, the chance of a catastrophic nuclear event is probably trending in the wrong direction.

Joan points out that the New START treaty, which dramatically limits the number of warheads the US and Russia can deploy at one time, narrowly survived in 2021 due to the election of Joe Biden. But it will again require renewal in 2026, which may or may not happen, depending on whether the relationship between the two great powers can be repaired over the next four years.

Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises.

China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s and their relationship continues to oscillate from hostile to civil and back.

At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked.

In the interview, Joan discusses several steps that can be taken in the immediate term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of what she sees as vulnerable delivery systems, such as land-based silos.

In the bigger picture, NTI seeks to keep hope alive that a better system than deterrence through mutually assured destruction remains possible. The threat of retaliation does indeed make nuclear wars unlikely, but it necessarily means the system fails in an incredibly destructive way: with the death of hundreds of millions if not billions.

In the long run, even a tiny 1 in 500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century.
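That 18% is just the standard compounding-of-small-risks calculation, which you can check directly:

```python
# A 1-in-500 annual chance of nuclear war, compounded over 100 years.
annual_risk = 1 / 500
chance_this_century = 1 - (1 - annual_risk) ** 100
print(round(chance_this_century, 3))  # 0.181 -- roughly the 18% quoted
```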

Joan concedes that MAD was probably the best available system for preventing the use of nuclear weapons in 1950. But we’ve had 70 years of advances in technology since then that have opened up new possibilities, such as far more reliable surveillance than could have been dreamed up by Truman and Stalin. But MAD has been the conventional wisdom for so long that almost nobody is working on alternative paradigms.

In this conversation we cover all that, as well as:

  • How arms control treaties have evolved over the last few decades
  • Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy
  • Places listeners could work at or donate to
  • The Biden Nuclear Posture Review
  • How easily humanity might recover from a nuclear exchange
  • Implications for the use of nuclear energy

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#124 – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you’d have to guess that they were saying something positive. But according to today’s guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they’re at risk of being seriously overrated and applied where they don’t belong.

Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.

First, what do people mean by ‘sustainability’? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.

I buy my groceries from a supermarket, and I’m not under the illusion that one day I’ll be able to stop paying and still get everything I need for free. And there’s nothing wrong with the fact that this way of getting life’s necessities is ‘unsustainable’ — so long as I want groceries, I’ll keep paying for them.

Given that someone needs to keep paying, Karen tells us that in practice, ‘sustainability’ is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. While the concept of ‘sustainability’ sounds great, to say “We’re going to pass the cost of this programme on to a government funded by very poor people’s taxes” sounds at best ambiguous.

‘Participatory’ also sounds nice, and inasmuch as it means leaders are accountable to the people they’re trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves.

While that might be suitable in some situations, it’s hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.

Finally, making a programme ‘holistic’ could be smart, but as Karen lays out, it also has some major downsides. For one, it means you’re doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it’s hard to tell whether you’re making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful ‘holistic school health’ programme that, if continued, was going to cost 3.5 times the entire school’s budget.

Smallpox eradication was one of humanity’s greatest health achievements, and its focus on one thing to the exclusion of all else made it the complete opposite of a holistic programme.

In today’s in-depth conversation, Karen Levy and I chat about the above, as well as:

  • Why it pays to figure out how you’ll interpret the results of an experiment ahead of time
  • The trouble with misaligned incentives within the development industry
  • Projects that don’t deliver value for money and should be scaled down
  • Whether governments typically pay for a project once outside funding is withdrawn
  • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
  • Logistical challenges in reaching huge numbers of people with essential services
  • How Karen has enjoyed living in Kenya for several decades
  • Lessons from Karen’s many-decades career
  • The goals of Karen’s new project: Fit for Purpose

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#123 – Samuel Charap on why Putin invaded Ukraine, the risk of escalation, and how to prevent disaster

Russia’s invasion of Ukraine is devastating the lives of Ukrainians, and so long as it continues there’s a risk that the conflict could escalate to include other countries or the use of nuclear weapons. It’s essential that NATO, the US, and the EU play their cards right to ideally end the violence, maintain Ukrainian sovereignty, and discourage any similar invasions in the future.

But how? To pull together the most valuable information on how to react to this crisis, we spoke with Samuel Charap — a senior political scientist at the RAND Corporation, one of the US’s foremost experts on Russia’s relationship with former Soviet states, and co-author of Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

Samuel believes that Putin views the alignment of Ukraine with NATO as an existential threat to Russia — a perhaps unreasonable view, but a sincere one nevertheless. Ukraine has been drifting further into Western Europe’s orbit and improving its defensive military capabilities, so Putin has concluded that if Russia wants to put a stop to that, there will never be a better time to act in the future.

Despite early successes holding off the Russian military, Samuel is sceptical that time is on the Ukrainian side. Though it won’t be able to create a puppet government Ukrainians view as legitimate, if committed to the task, Russia will likely gradually grind down Ukrainian resistance and take formal control of the country. If the war is to end before much of Ukraine is reduced to rubble, it will likely have to be through negotiation, rather than Russian defeat.

Many hope for Putin to be ousted from office, but Samuel cautions that he has enormous control of the Russian government and the media Russians consume, making that very unlikely in the near term. Furthermore, anyone who successfully booted Putin from office would be just as likely to be an even more intransigent hardliner as a dove. In the meantime, loose talk of assassinating Putin could drive him to further reckless aggression.

The US policy response has so far been largely good, successfully balancing the need to punish Russia to dissuade large nations from bullying small ones in the future, while preventing NATO from being drawn into the war directly — which would pose a horrifying risk of escalation to a full nuclear exchange. The pressure from the general public to ‘do something’ might eventually cause national leaders to confront Russia more directly, but so far they are sensibly showing no interest in doing so.

However, use of nuclear weapons remains a low but worrying possibility. That could happen in various ways, such as:

  1. NATO shoots down Russian planes to enforce a no-fly zone — a problematic idea in Samuel’s opinion.
  2. An unintentional cycle of mutual escalation between Russia and NATO, perhaps starting with cyber attacks, or Russian bombs accidentally landing in NATO countries that neighbour Ukraine.
  3. Putin ends up with his back against the wall and believes he can no longer win the war or defend Russia without using tactical nuclear weapons.
  4. Putin decides to invade a country other than Ukraine.

Samuel is also worried that Russia may deploy chemical or biological weapons and blame the attack on the Ukrainians.

In Samuel’s opinion, the recent focus on the delivery of fighter jets to Ukraine is risky and not the key defence priority in any case. Instead, Ukraine could use more ground-to-air missiles to shoot Russian planes out of the sky.

Before war broke out, it’s possible Russia could have been satisfied if Ukraine followed through on the Minsk agreements and committed not to join NATO. Or it might not have, if Putin was committed to war, come what may. In any case, most Ukrainians found those terms intolerable.

At this point, the situation is even worse, and it’s hard to see how an enduring ceasefire could be agreed upon. On top of the above, Russia is also demanding recognition that Crimea is part of Russia, and acceptance of the independence of the so-called Donetsk and Luhansk People’s Republics. These conditions — especially the second — are entirely unacceptable to the Ukrainians. Hence the war continues, and could grind on for months until one side is sufficiently beaten down to compromise on their core demands.

Rob and Samuel discuss all of the above and also:

  • What are the implications if Sweden and/or Finland decide to join NATO?
  • What should NATO do now, and did it make any mistakes in the past?
  • What’s the most likely situation for us to be looking at in three months’ time?
  • Can Ukraine effectively win the war?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →