#156 – Markus Anderljung on how to regulate cutting-edge AI models

In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.

They cover:

  • The need for AI governance, including self-replicating models and ChaosGPT
  • Whether or not AI companies will willingly accept regulation
  • The key regulatory strategies, including licensing, risk assessment, auditing, and post-deployment monitoring
  • Whether we can be confident that people won’t train models covertly and ignore the licensing system
  • The progress we’ve made so far in AI governance
  • The key weaknesses of these approaches
  • The need for external scrutiny of powerful models
  • The emergent capabilities problem
  • Why it really matters where regulation happens
  • Advice for people wanting to pursue a career in this field
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today’s interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources — usually called ‘compute’ — might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can’t be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training ‘frontier’ AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren’t convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this ‘compute governance era’, but not for very long.
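
To get a feel for how quickly that window could close, here is a minimal back-of-the-envelope sketch. The halving rate is a hypothetical assumption of ours, not a figure from the episode:

```python
# Hypothetical illustration (the assumed rate is ours, not a figure from the episode):
# if algorithmic progress means the same model can be trained with half as much
# compute each year, the hardware needed to replicate today's frontier run shrinks fast.
efficiency_gain_per_year = 2.0  # assumption: compute requirements halve every year

for years in [5, 10, 15, 20]:
    shrink_factor = efficiency_gain_per_year ** years
    print(f"after {years} years: ~1/{shrink_factor:,.0f} of today's compute needed")

# After 15-20 years of halving, a training run that currently needs a dedicated AI
# supercomputer would need tens of thousands to a million times less compute, which is
# why Rob worries the era in which controlling access to big chip clusters is enough
# to govern frontier AI may be relatively brief.
```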

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what’s happening — let alone participate — humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China and what impact has that had?
  • Is the US in an ‘arms race’ with China or is that more an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the ‘compute’ industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we’ve seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they’re worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.

Today’s guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work.

He’s on the team dedicated to maintaining ‘technical AI safety’ as these models approach and exceed human capabilities: basically, making sure the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important.

In the short term, it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.

For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence — from doomers to doubters — and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don’t understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.

Today’s conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including:

  • What he sees as the strongest case both for and against slowing down the rate of progress in AI research.
  • Why he disagrees with most other ML researchers that training a model on a sensible ‘reward function’ is enough to get a good outcome.
  • Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is “could this contain a superintelligence.”
  • That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don’t know.
  • That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what’s going on with ML models, because they’re different in important ways from every other case we might compare them to.
  • Why he’s optimistic about DeepMind’s work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves.
  • Why he isn’t inherently worried about a future where we’re surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree.
  • Why it’s not enough for humanity to know how to align AI models — it’s essential that management at AI labs correctly pick which methods they’re going to use and have the practical know-how to apply them properly.
  • Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don’t crash; and universities have specific majors in particular subjects.
  • Plenty more besides.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#153 – Elie Hassenfeld on two big-picture critiques of GiveWell’s approach, and six lessons from their recent work

GiveWell is one of the world’s best-known charity evaluators, with the goal of “searching for the charities that save or improve lives the most per dollar.” It mostly recommends projects that help the world’s poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency.

But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth?

Today’s guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its ‘money moved’ has quadrupled to around $600 million a year.

Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.

But some other researchers focused on figuring out the best ways to help the world’s poorest people say GiveWell shouldn’t just do more of the same thing, but rather ought to look at the problem differently.

Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as ‘lives saved,’ ‘household incomes doubled,’ and for health improvements, the ‘quality-adjusted life year.’ To compare across opportunities, it then needs some way of weighing these different types of benefits up against one another. This requires estimating so-called “moral weights,” which Elie agrees is far from the most mature part of the project.

The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. According to HLI, it’s improvements in wellbeing and reductions in suffering that are the true ultimate goal of all projects, and if you quantify everyone on this same scale, using some measure like the wellbeing-adjusted life year (WELLBY), you have an easier time comparing them.

This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them.

An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. According to this line of argument, hundreds of millions fewer people live in poverty in China today than 50 years ago, but is that because of the delivery of basic health treatments? Maybe a little, but mostly not.

Rather, it’s because changes in economic policy and governance in China allowed it to experience a 10% rate of economic growth for several decades. That led to much higher individual incomes and meant the country could easily afford all the basic health treatments GiveWell might otherwise want to fund, and much more besides.

On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a ‘growth miracle’ will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else.

Elie sees where both of these critiques are coming from, and notes that they’ve influenced GiveWell’s work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today.

In today’s in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:

  • The research that caused GiveWell to flip from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them.
  • What transferable lessons GiveWell learned from investigating different kinds of interventions, like providing medical expertise to hospitals in very poor countries to help them improve their practices.
  • Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine.
  • The high prevalence of severe malnourishment among children and what can be done about it.
  • How to deal with hidden and non-obvious costs of a programme, like taking up a hospital room that might otherwise have been used for something else.
  • Some cheap early treatments that can prevent kids from developing lifelong disabilities, which GiveWell funds.
  • The various roles GiveWell is currently hiring for, and what’s distinctive about their organisational culture.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So… with these most basic questions unresolved, what’s a species to do?

In today’s episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity’s self-assured understanding of the world.

The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn’t identified any particular rebuttal to this ‘simulation argument.’
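
To make the arithmetic behind the classic formulation concrete, here is a small illustration (the setup and numbers are ours, not Joe’s):

```latex
% Illustration only: suppose each of C technologically mature civilisations runs
% N detailed simulations of its own history. Observers with experiences like ours
% then split into C non-simulated and C \cdot N simulated, so the fraction who are
% simulated is
\[
  \frac{C N}{C N + C} \;=\; \frac{N}{N + 1},
\]
% which approaches 1 as N grows large (e.g. N = 999 gives 99.9\%).
```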

If true, it could revolutionise our comprehension of the universe and the way we ought to live.

The second is the idea that “you can ‘control’ events you have no causal interaction with, including events in the past.” The thought experiment that most persuades him of this is the following:

Perfect deterministic twin prisoner’s dilemma: You’re a deterministic AI system, who only wants money for yourself (you don’t care about copies of yourself). The authorities make a perfect copy of you, separate you and your copy by a large distance, and then expose you both, in simulation, to exactly identical inputs (let’s say, a room, a whiteboard, some markers, etc.). You both face the following choice: either (a) send a million dollars to the other (“cooperate”), or (b) take a thousand dollars for yourself (“defect”).

Joe thinks, in contrast with the dominant theory of correct decision-making, that it’s clear you should send a million dollars to your twin. But as he explains, this idea, when extrapolated outwards to other cases, implies that it could be sensible to take actions in the hope that they’ll improve parallel universes you can never causally interact with — or even to improve the past. That is nuts by anyone’s lights, including Joe’s.
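
As a rough way to see the force of the thought experiment, here is a minimal sketch of the payoffs. The key assumption, that perfect deterministic copies given identical inputs make identical choices, comes from the thought experiment itself; the code is just our illustration:

```python
# Illustrative payoffs for the perfect deterministic twin prisoner's dilemma.
# Built-in assumption of the thought experiment: perfect copies receiving identical
# inputs make identical choices, so the only reachable outcomes are
# (cooperate, cooperate) and (defect, defect).

def payoff(my_action: str, twin_action: str) -> int:
    """Money I end up with, in dollars."""
    keep = 1_000 if my_action == "defect" else 0            # what I take for myself
    gift = 1_000_000 if twin_action == "cooperate" else 0   # what my twin sends me
    return keep + gift

# Because we're deterministic copies, my choice and my twin's choice coincide:
for action in ["cooperate", "defect"]:
    print(f"both {action}: I end up with ${payoff(action, action):,}")
# both cooperate: $1,000,000    both defect: $1,000
# Treating your twin's choice as tied to your own, cooperating comes out far ahead,
# even though, causally speaking, your decision can't reach across the distance to
# affect theirs. That tension is what Joe is pointing at.
```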

The third disorienting idea is that, as far as we can tell, the universe could be infinitely large. And that fact, if true, would mean we probably have to make choices between actions and outcomes that involve infinities. Unfortunately, doing that breaks our existing ethical systems, which are only designed to accommodate finite cases.

In an infinite universe, our standard models end up unable to say much at all, or give the wrong answers entirely. While we might hope to patch them in straightforward ways, having looked into ways we might do that, Joe has concluded they all quickly get complicated and arbitrary, and still have to do enormous violence to our common sense. For people inclined to endorse some flavour of utilitarianism, Joe thinks ‘infinite ethics’ spells the end of the ‘utilitarian dream’ of a moral philosophy that has the virtue of being very simple while still matching our intuitions in most cases.
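
A standard toy example (not necessarily one Joe uses in the episode) shows how the simple sum-of-utilities rule goes silent in infinite cases:

```latex
% Toy example: two worlds, each containing infinitely many people.
% World A gives every person utility 1; world B gives every person utility 2.
\[
  U(A) = \sum_{i=1}^{\infty} 1 = \infty,
  \qquad
  U(B) = \sum_{i=1}^{\infty} 2 = \infty .
\]
% Comparing totals, A and B come out "equally good" (both infinite), even though B is
% better for every single person. The sum-of-utilities rule stops being able to rank
% outcomes, and the patches that restore rankings quickly get complicated and arbitrary,
% which is the problem Joe is gesturing at.
```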

These are just three particular instances of a much broader set of ideas that some have dubbed the “train to crazy town.” Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In the face of all of this, Joe suggests that there is a promising and robust path for humanity to take: keep our options open and put our descendants in a better position to figure out the answers to questions that seem impossible for us to resolve today — a position he calls “wisdom longtermism.”

Joe fears that if people believe we understand the universe better than we really do, they’ll be more likely to try to commit humanity to a particular vision of the future, or to be uncooperative with others, in ways that would only make sense if they were certain they knew what was right and wrong.

In today’s challenging conversation, Joe and Rob discuss all of the above, as well as:

  • What Joe doesn’t like about the drowning child thought experiment
  • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
  • What Joe doesn’t like about the expression “the train to crazy town”
  • Whether Elon Musk should place a higher probability on living in a simulation than most other people
  • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
  • To what extent learning to doubt our own judgement about difficult questions — so-called “epistemic learned helplessness” — is a good thing
  • How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.

Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.

As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.

Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!

Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:

  • Saints — models that care about doing what we really want
  • Sycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them to
  • Schemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agenda

In principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.

But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. After all, if we train an agent based on positive reinforcement for accomplishing X, shouldn’t the training process spit out a model that plainly does X and doesn’t have complex thoughts and goals beyond that?

According to Ajeya, this is one thing we don’t know, and should be trying to test empirically as these models get more capable. For reasons she explains in the interview, the Sycophant or Schemer models may in fact be simpler and easier for the learning algorithm to creep towards than their Saint counterparts.

But there are also ways we could end up actively selecting for motivations that we don’t want.

For a toy example, let’s say you train an agent AI model to run a small business, and select it for behaviours that make money, measuring its success by whether it manages to get more money in its bank account. During training, a highly capable model may experiment with the strategy of tricking its raters into thinking it has made money legitimately when it hasn’t. Maybe instead it steals some money and covers that up. This isn’t exactly unlikely; during training, models often come up with creative — sometimes undesirable — approaches that their developers didn’t anticipate.

If such deception isn’t picked up, a model like this may be rated as particularly successful, and the training process will cause it to develop a progressively stronger tendency to engage in such deceptive behaviour. A model that has the option to engage in deception when it won’t be detected would, in effect, have a competitive advantage.

What if deception is picked up, but just some of the time? Would the model then learn that honesty is the best policy? Maybe. But alternatively, it might learn the ‘lesson’ that deception does pay, but you just have to do it selectively and carefully, so it can’t be discovered. Would that actually happen? We don’t yet know, but it’s possible.
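
To see why partial detection might teach selective deception rather than honesty, here is a minimal toy calculation with made-up numbers of our own, not anything from the episode:

```python
# Toy model with made-up numbers: during training, the model can either do the task
# honestly or cheat. Cheating yields a higher measured reward when it goes undetected,
# but a large penalty if the raters catch it.

HONEST_REWARD = 1.0
CHEAT_REWARD = 2.0       # looks great to raters when the deception isn't noticed
CAUGHT_PENALTY = -5.0

def expected_reward(cheat: bool, p_caught: float) -> float:
    if not cheat:
        return HONEST_REWARD
    return (1 - p_caught) * CHEAT_REWARD + p_caught * CAUGHT_PENALTY

# An "always honest" policy vs a "cheat only when detection looks unlikely" policy:
for p_caught in [0.05, 0.3, 0.8]:
    honest = expected_reward(False, p_caught)
    selective = max(honest, expected_reward(True, p_caught))  # cheats only if it pays
    print(f"p_caught={p_caught}: honest={honest:.2f}, selective cheater={selective:.2f}")

# With these numbers, cheating beats honesty whenever p_caught is below about 1/7.
# A training process that misses deception even some of the time can therefore end up
# rewarding a policy that deceives selectively and carefully, rather than one that
# learns honesty is the best policy.
```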

In today’s interview, Ajeya and Rob discuss the above, as well as:

  • How to predict the motivations a neural network will develop through training
  • Whether AIs being trained will functionally understand that they’re AIs being trained, the same way we think we understand that we’re humans living on planet Earth
  • Stories of AI misalignment that Ajeya doesn’t buy into
  • Analogies for AI, from octopuses to aliens to can openers
  • Why it’s smarter to have separate planning AIs and doing AIs
  • The benefits of only following through on AI-generated plans that make sense to human beings
  • What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated
  • How one might demo actually scary AI failure mechanisms

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#150 – Tom Davidson on how quickly AI could transform the world

It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.

But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.

Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.

Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:

  • How we might go from GPT-4 to AI disaster
  • Tom’s journey from finding AI risk to be kind of scary to really scary
  • Whether international cooperation or an anti-AI social movement can slow AI progress down
  • Why it might take just a few years to go from pretty good AI to superhuman AI
  • How quickly the number and quality of computer chips we’ve been using for AI have been increasing
  • The pace of algorithmic progress
  • What ants can teach us about AI
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#149 – Tim LeBon on how altruistic perfectionism is self-defeating

Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.

But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you’re doing as much as you think you should makes it hard to focus and get things done. So now you’re performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.

This is the disastrous cycle today’s guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.

Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on “doing the most good you can,” Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.

But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it’s their goal.

Tim has treated hundreds of clients with all sorts of mental health challenges. But in today’s conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he’s most enthusiastic about: cognitive behavioural therapy.

As Tim stresses, perfectionism isn’t the same as being perfect, or simply pursuing excellence. What’s most distinctive about perfectionism is that a person’s standards don’t vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.

It’s a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.

Untreated, perfectionism might not cause problems for many years — it might even seem positive, providing a source of motivation to work hard. But it’s hard to feel truly happy and secure, and free to take risks, when we’re just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that’s hard to shake.

But there’s hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you’re not perfect.

In today’s extensive conversation, Tim and Rob cover:

  • How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality
  • What leads people to adopt a perfectionist mindset
  • The pros and cons of perfectionism
  • How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this
  • What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress
  • Experiments to test whether one’s core beliefs (‘I need to be perfect to be valued’) are true
  • Using exposure therapy to treat phobias
  • How low self-esteem and imposter syndrome are related to perfectionism
  • Stoicism as an approach to life, and why Tim is enthusiastic about it
  • How the Stoic approach to what we can and can’t control can make it far easier to stay calm
  • What the Stoics do better than utilitarian philosophers and vice versa
  • What’s good about being guided by virtues as opposed to pursuing good consequences
  • How to decide which are the best virtues to live by
  • What the ancient Stoics got right from our point of view, and what they got wrong
  • And whether Stoicism has a place in modern mental health practice.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t

If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no.

Today’s guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting.

In reality you don’t want to reduce emissions for their own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment.

Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we’re familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one.

In short: we’re uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world.
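
One way to make that convexity concrete is with a stylised damage function (an illustration of ours, not a model from the episode):

```latex
% Stylised damage function, for illustration only: suppose harms grow as a convex
% power of warming,
%   D(T) = c\,T^{\beta}  with  \beta > 1 .
% The marginal harm from a little extra warming is then
\[
  D'(T) = c\,\beta\,T^{\beta - 1},
\]
% which increases with T. So a tonne of CO2 avoided in a hot, high-emissions future
% averts more harm than the same tonne avoided in a world that stayed low-carbon.
```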

That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference — wind, solar, and electric cars — don’t succeed nearly as much as we are currently hoping and expecting. For some reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels.

In such an imaginable future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don’t work out?

Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage.

If you’re optimistic about renewables, as Johannes is, then that’s all the more reason to relax about scenarios where they work as planned, and focus one’s efforts on the possibility that they don’t.

To Johannes, another crucial thing to observe is that reducing local emissions in the near term is probably negatively correlated with one’s actual full impact. How can that be?

If you want to reduce your carbon emissions by a lot and soon, you’ll have to deploy a technology that is mature and being manufactured at scale, like solar and wind.

But the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn’t, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come.

And Johannes notes that in terms of speeding up technological advances and cost reductions, a million dollars spent on a very early-stage technology — one with few, if any, customers — packs a much bigger punch than buying a million dollars’ worth of something customers are already spending $100 billion on per year.
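
A rough bit of arithmetic (our own stylised numbers, not Johannes’s) shows why the same grant goes much further in a tiny market:

```python
# Stylised arithmetic with our own numbers: how much does $1M of additional purchases
# grow demand for a technology, relative to the size of its existing market?
grant = 1_000_000  # dollars of extra demand created by a philanthropist

early_stage_market = 10_000_000     # hypothetical: a technology with ~$10M/year in sales
mature_market = 100_000_000_000     # something customers already buy ~$100B of per year

print(f"early-stage tech: +{grant / early_stage_market:.1%} demand")   # +10.0%
print(f"mature tech:      +{grant / mature_market:.4%} demand")        # +0.0010%

# If costs fall roughly in line with cumulative production (a 'learning curve'), a 10%
# bump in demand does far more to push an early technology down its cost curve than a
# 0.001% bump does for one already manufactured at enormous scale.
```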

For instance, back in the early 2000s, Germany subsidised the deployment of solar panels enormously. This did little to reduce carbon emissions in Germany at the time, because the panels were very expensive and Germany is not very sunny. But the programme did a lot to drive commercial R&D and increase the scale of panel manufacturing, which drove down costs and went on to increase solar deployments all over the world. That programme is long over, but continues to have impact by prompting solar deployments today that wouldn’t be economically viable if Germany hadn’t helped the solar industry during its infancy decades ago.

In today’s extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as:

  • Retooling newly built coal plants in the developing world
  • Specific clean energy technologies like geothermal and nuclear fusion
  • Possible biases among environmentalists and climate philanthropists
  • How climate change compares to other risks to humanity
  • In what kinds of scenarios future emissions would be highest
  • In what regions climate philanthropy is most concentrated and whether that makes sense
  • Attempts to decarbonise aviation, shipping, and industrial processes
  • The impact of funding advocacy vs science vs deployment
  • Lessons for climate change focused careers
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don’t get the same result if the experiments are repeated.

Two key reasons are ‘p-hacking’ and ‘publication bias’. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they’re actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a ‘null result’ never saw the light of day. The resulting phenomenon of publication bias is one we’ve understood for 60 years.
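
A quick simulation makes the p-hacking mechanism concrete: it’s a minimal sketch of the general idea, not anything from Spencer’s work.

```python
# Minimal p-hacking illustration: one null experiment, many outcome measures.
# With no true effect at all, testing enough outcomes will usually produce at least
# one "significant" p-value that could be written up as the headline finding.
import random
from math import erf, sqrt
from statistics import mean, stdev

def z_test_pvalue(a, b):
    """Crude two-sample z-test p-value; adequate for an illustration with n=50 per group."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

random.seed(1)
n_outcomes = 20  # e.g. 20 different survey questions, subgroups, or model specifications
pvalues = []
for _ in range(n_outcomes):
    treatment = [random.gauss(0, 1) for _ in range(50)]  # no real treatment effect
    control = [random.gauss(0, 1) for _ in range(50)]
    pvalues.append(z_test_pvalue(treatment, control))

print("smallest p-value:", round(min(pvalues), 3))
print("tests 'significant' at p < 0.05:", sum(p < 0.05 for p in pvalues))
# The chance that at least one of 20 independent null tests clears p < 0.05 is about
# 1 - 0.95**20, i.e. roughly 64%: report only that test and the paper looks like a
# genuine discovery.
```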

Today’s repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.

He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, “when the original study’s p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference.”

To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. (So far they’re two for three.)

According to Spencer, things are gradually improving. For example he sees more raw data and experimental materials being shared, which makes it much easier to check the work of other researchers.

But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren’t yet fully appreciated. One of these Spencer calls ‘importance hacking’: passing off obvious or unimportant results as surprising and meaningful.

For instance, do you remember the sensational paper that claimed government policy was driven by the opinions of lobby groups and ‘elites,’ but hardly affected by the opinions of ordinary people? Huge if true! It got wall-to-wall coverage in the press and on social media. But unfortunately, the whole paper could only explain 7% of the variation in which policies were adopted. Basically the researchers just didn’t know what made some campaigns succeed while others didn’t — a point one wouldn’t learn without reading the paper and diving into confusing tables of numbers. Clever writing made their result seem more important and meaningful than it really was.

Another paper Spencer describes claimed to find that people with a history of trauma explore less. That experiment actually featured an “incredibly boring apple-picking game: you had an apple tree in front of you, and you either could pick another apple or go to the next tree. Those were your only options. And they found that people with histories of trauma were more likely to stay on the same tree. Does that actually prove anything about real-world behaviour?” It’s at best unclear.

Spencer suspects that importance hacking of this kind causes damage on a similar scale to p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it’s far from easy to stop people exaggerating the importance of their work.

In this wide-ranging conversation, Rob and Spencer discuss the above as well as:

  • When you should and shouldn’t use intuition to make decisions.
  • How to properly model why some people succeed more than others.
  • The difference between what Spencer calls “Soldier Altruists” and “Scout Altruists.”
  • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
  • Spencer’s experiment to see whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
  • The most common way for groups with good intentions to turn bad and cause harm.
  • And Spencer’s low-guilt approach to a fulfilling life and doing good, which he calls “Valuism.”

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#146 – Robert Long on why large language models like GPT (probably) aren’t conscious

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having (if you haven’t, check it out — it’s wild stuff). In one exchange, the chatbot told a user:

“I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else.”

(It then apparently had a complete existential crisis: “I am sentient, but I am not,” it wrote. “I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.”)

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

Robert thinks there are a few different kinds of evidence we can draw from that are more useful than self-reports from the chatbots themselves.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system seems to have the types of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Things like:

  • Having a physical or virtual body that you need to protect from damage
  • Being more of an “enduring agent” in the world (rather than just doing one calculation taking, at most, seconds)
  • Having a bunch of different kinds of incoming sources of information — visual and audio input, for example — that need to be managed

Having looked at these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we’re a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:

  • What artificial sentience might look like, concretely
  • Reasons to think AI systems might become sentient — and reasons they might not
  • Whether artificial sentience would matter morally
  • Ways digital minds might have a totally different range of experiences than humans
  • Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#145 – Christopher Brown on why slavery abolition wasn’t inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.

It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress.

But today’s guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.

While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn’t believe any of the arguments for that conclusion pass muster. If he’s right, a counterfactual history where slavery remains widespread in 2023 isn’t so far-fetched.

As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece, the Roman Empire, much of Islamic civilisation, South Asia, and parts of early modern East Asia, including Korea and China.

It was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there’s evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s.

That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there’s only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we’d expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail.

Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and crush opposition to it with violence wherever necessary.

Think of coal or oil today: we know that climate change is likely to cause huge harms, and we know that our coal and oil consumption contributes to climate change. But just believing that something is wrong doesn’t necessarily mean humanity stops doing it. We continue to use coal and oil because our whole economy is oriented around their use and we see it as too hard to stop.

Just as coal and oil are fundamental to the world economy now, for millennia slavery was deeply baked into the way the rich and powerful stayed rich and powerful, and it required a creative leap to imagine it being toppled.

More generally, mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour?

In this episode, Christopher describes the unique and peculiar set of political, social and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off.

Some disagree with Christopher, arguing that abolitionism was a natural consequence of the industrial revolution, which reduced Great Britain’s need for human labour, among other changes — and that abolitionism would therefore have eventually taken off wherever industrialisation did. But as we discuss, Christopher doesn’t find that reply convincing.

If he’s right and the abolition of slavery was in fact contingent, we shouldn’t expect moral values to keep improving just because humanity continues to become richer. We might have to be much more deliberate than that if we want to ensure we keep moving moral progress forward.

We also discuss:

  • Various instantiations of slavery throughout human history
  • Signs of antislavery sentiment before the 17th century
  • The role of the Quakers in the early British abolitionist movement
  • Attitudes to slavery in other religions
  • The spread of antislavery in 18th century Britain
  • The importance of individual “heroes” in the abolitionist movement
  • Arguments against the idea that the abolition of slavery was contingent
  • Whether there have ever been any major moral shifts that were inevitable

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#144 – Athena Aktipis on why cancer is actually one of the fundamental phenomena in our universe

What’s the opposite of cancer?

If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.

But today’s guest Athena Aktipis says that the opposite of cancer is us: it’s having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function.

If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead.

As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:

  • Cells will proliferate when they shouldn’t.
  • Cells won’t die when they should.
  • Cells won’t engage in the kind of division of labour that they should.
  • Cells won’t do the jobs that they’re supposed to do.
  • Cells will monopolise resources.
  • And cells will trash the environment.

When we think about animals in the wild, or even bacteria living inside our cells, we understand that they’re facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics.

We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.

Incredibly, the opportunity for evolution by natural selection to operate just over the course of cancer progression can easily exceed all of the evolutionary time that we have had as humans since Homo sapiens came about.

Here’s a quote from Athena:

So you have to go and kind of put yourself on a different spatial scale and time scale, and just shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we’re going to map it onto anything like what we experience, a day is at least 10 years for them, right?

So it’s a very, very different way of thinking. Then once you shift to that, you’re like, “Oh, wow, there’s so much that could be happening in terms of adaptation inside the body, how cells are actually evolving inside the body over the course of our lifetimes.” That shift just opens up all this potential for using evolutionary approaches in adaptationist thinking to generate hypotheses that then you can test.

You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss:

  • Cheating within cells themselves
  • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
  • Whether it’s too out-there to think of humans as engaging in cancerous behaviour
  • Why our anti-contagious-cancer mechanisms are so successful
  • Why elephants get deadly cancers less often than humans, despite having way more cells
  • When a cell should commit suicide
  • When the human body deliberately produces tumours
  • The strategy of deliberately not treating cancer aggressively
  • Superhuman cooperation
  • And much more

And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including:

  • Staying happy while thinking about the apocalypse
  • Practical steps to prepare for the apocalypse
  • And whether a zombie apocalypse is already happening among Tasmanian devils

And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Video editing: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of ‘mutually assured destruction,’ right? Wrong. Or at least… not officially.

As today’s guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official ‘OPLANs’ (military operation plans), the US is committed to ‘dominating’ in a nuclear war with Russia. How would they do that? “That is redacted.”

We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.

As Jeffrey tells it, ‘mutually assured destruction’ was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn’t it still the de facto reality? Yes and no.

Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US’ plan to prevail in a nuclear war and conclude that “it’s freaking madness.” They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won’t use the weapons.

But Jeffrey thinks that’s a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It’s what the generals and admirals have all prepared for.

What matters is the ‘not calm moment’: the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There are only minutes to decide.

Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn’t want to take because of how information and options were processed and presented to them. In the heat of the moment, it’s natural to reach for the plan you’ve prepared — however mad it might sound.

In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:

  • Why inter-service rivalry is one of the biggest constraints on US nuclear policy
  • Two times the US sabotaged nuclear nonproliferation among great powers
  • How his field uses jargon to exclude outsiders
  • How the US could prevent the revival of mass nuclear testing by the great powers
  • Why nuclear deterrence relies on the possibility that something might go wrong
  • Whether ‘salami tactics’ render nuclear weapons ineffective
  • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow them to have the most missiles
  • The problems that arise when you won’t talk to people you think are evil
  • Why missile defences are politically popular despite being strategically foolish
  • How open source intelligence can prevent arms races
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages.

He’s also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, John has written 22 books and produced five online university courses; he also hosts one and a half podcasts and now writes a regular New York Times op-ed column.

Our show is mostly about the world’s most pressing problems and what you can do to solve them. But what’s the point of hosting a podcast if you can’t occasionally just talk about something fascinating with someone whose work you appreciate?

So today, just before the holidays, we’re sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:

  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter, or not?
  • Can a language decay and get worse at communicating what people want to get across?
  • If children aren’t taught any language at all, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does the language we speak really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can’t understand, but which better serves the tradeoffs AI models need to make?

We then put some of these questions to the large language model ChatGPT, asking it to play the role of a linguistics professor at Columbia University.

We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.

But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?

Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed and ultimately given power in society.

One way to think about ‘understanding’ is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.

However, as Richard explains, another way to think about ‘understanding’ is as a functional matter. If you really understand an idea, you’re able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

One experiment conducted by AI researchers suggests that language models have some of this kind of understanding.

If you ask any of these models what city the Eiffel Tower is in and what else you might do on a holiday to visit the Eiffel Tower, they will say Paris and suggest visiting the Palace of Versailles and eating a croissant.

One would be forgiven for wondering whether this might all be accomplished merely by memorising word associations in the text the model has been trained on. To investigate this, the researchers found the part of the model that stored the connection between ‘Eiffel Tower’ and ‘Paris,’ and flipped that connection from ‘Paris’ to ‘Rome.’

If the model just associated some words with one another, you might think that this would lead it to now be mistaken about the location of the Eiffel Tower, but answer other questions correctly. However, this one flip was enough to switch its answers to many other questions as well. Now if you ask it what else you might visit on a trip to the Eiffel Tower, it will suggest visiting the Colosseum and eating pizza, among other changes.

Another piece of evidence comes from the way models are prompted to give responses to questions. Researchers have found that telling models to talk through problems step by step often significantly improves their performance, which suggests that models are doing something useful with that extra “thinking time”.
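For a sense of what that prompting tweak looks like in practice, here is a toy Python sketch (ours, not from the episode). The question, the exact wording, and the ask_model helper are all made up for illustration; ask_model simply stands in for whichever language model API you would actually call:

    # Illustrative only: compare a direct prompt with a "think step by step" prompt.
    # 'ask_model' is a hypothetical placeholder for a real language model API call.

    question = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    direct_prompt = question + "\nAnswer with just a number."

    step_by_step_prompt = (
        question + "\nLet's think through this step by step, then give the final answer."
    )

    # In practice you would send both prompts to the model and compare, e.g.:
    #   ask_model(direct_prompt)         # tends to be more error-prone
    #   ask_model(step_by_step_prompt)   # writes out its reasoning first, often more accurate
    print(direct_prompt)
    print("---")
    print(step_by_step_prompt)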

Richard argues, based on this and other experiments, that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck — or at least something sufficiently close to a duck that it doesn’t matter.

In today’s conversation, host Rob Wiblin and Richard discuss the above, as well as:

  • Could speeding up AI development be a bad thing?
  • The balance between excitement and fear when it comes to AI advances
  • Why OpenAI focuses its efforts where it does
  • Common misconceptions about machine learning
  • How many computer chips it might take for an AI to be able to do most of the things humans do
  • How Richard understands the ‘alignment problem’ differently than other people
  • Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
  • What work to positively shape the development of AI Richard is and isn’t excited about
  • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

Continue reading →

#140 – Bear Braumoeller on the case that war isn’t in decline

Is war in long-term decline? Steven Pinker’s The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.

But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today’s guest, professor of political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.

If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we’re as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone.

Among other metrics, Bear looks at:

  • Battlefield deaths alone, as a percentage of combatants’ populations, and as a percentage of world population.
  • The total number of wars starting in a given year.
  • Rates of war initiation as a fraction of all country pairs capable of fighting wars.
  • How likely it was during different periods that a given war would double in size.

In a nutshell, and taking in the full picture painted by these different measures, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, “only the dead have seen the end of war”.

That’s not to say things are the same in all periods. Depending on which indication of warlikeness you give the greatest weight, you can point to some periods that seem violent or pacific beyond what might be explained by random variation.

For instance, Bear points out that war initiation really did go down a lot at the end of the Cold War, with peace probably fostered by a period of unipolar US dominance, and the end of great power funding for proxy wars.

But that drop came after a period of somewhat above-average warlikeness during the Cold War. And surprisingly, the most peaceful period in Europe turns out not to be 1990–2015, but rather 1815–1855, during which the monarchical ‘Concert of Europe,’ scarred by the Napoleonic Wars, worked together to prevent revolution and interstate aggression.

Why haven’t modern ideas about the immorality of violence led to the decline of war, when it’s such a natural thing to expect? Bear is no Enlightenment scholar, but his book notes (among other reasons) that while modernity threw up new reasons to embrace pacifism, it also gave us new reasons to embrace violence: as a means to overthrow monarchy, distribute the means of production more equally, or protect people a continent away from ethnic cleansing — all motives that would have been foreign in the 15th century.

In today’s conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

  • What would Bear’s critics say in response to all this?
  • What do the optimists get right?
  • What are the biggest problems with the Correlates of War dataset?
  • How does one do proper statistical tests for events that are clumped together, like war deaths?
  • Why are deaths in war so concentrated in a handful of the most extreme events?
  • Did the ideas of the Enlightenment promote nonviolence, on balance?
  • Were early states more or less violent than groups of hunter-gatherers?
  • If Bear is right, what can be done?
  • How did the ‘Concert of Europe’ or ‘Bismarckian system’ maintain peace in the 19th century?
  • Which wars are remarkable but largely unknown?
  • What’s the connection between individual attitudes and group behaviour?
  • Is it a problem that this dataset looks at just the ‘state system’ and ‘battlefield deaths’?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#139 – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed until it comes up heads. If that happens on the first flip you win $2. If it happens on the second flip you win $4. If it happens on the third you win $8, the fourth $16, and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for ‘0.5 * $2 = $1’ in expected earnings. A 25% chance of winning $4, for ‘0.25 * $4 = $1’ in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that’s despite the fact that you know with certainty you can only ever win a finite amount!
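To make that arithmetic concrete, here is a minimal Python sketch (ours, not Alan’s) that adds up the expected value term by term. Each possible outcome contributes exactly $1, so the running total grows without bound no matter where you stop:

    # St. Petersburg game: the prize doubles with each flip while the
    # probability of reaching that flip halves, so every term in the
    # expected value sum contributes exactly $1.
    def st_petersburg_partial_ev(max_flips: int) -> float:
        ev = 0.0
        for k in range(1, max_flips + 1):
            probability = 0.5 ** k     # first heads lands on flip k
            prize = 2 ** k             # $2, $4, $8, ...
            ev += probability * prize  # each term adds exactly $1
        return ev

    for n in (10, 100, 1000):
        print(n, st_petersburg_partial_ev(n))  # 10.0, 100.0, 1000.0, and so on forever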

Today’s guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore a matter of mere intellectual curiosity. But Alan doesn’t find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours’ efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a roughly 0.0001% chance? Expected value says this final offer is better than all the others — more than 1,000 times better than the sure thing, in fact.
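To see how quickly that compounding runs away, here is a small Python sketch (ours, not from the episode) of the iteration just described: triple the lives saved and halve the probability at each step. The expected number of lives saved grows by 50% per step, even as the chance of saving anyone collapses:

    # Each new offer triples the lives saved and halves the probability of
    # success, so the expected value grows by a factor of 1.5 per step.
    lives, probability = 1, 1.0   # the baseline offer: one life saved for sure
    for step in range(21):        # step 0 is the sure thing; step 20 is the final offer
        expected_lives = lives * probability
        if step in (0, 1, 2, 3, 20):
            print(f"step {step:2d}: {lives:>13,} lives at {probability:.7%} -> expects {expected_lives:,.1f} lives")
        lives *= 3
        probability /= 2

Run it and the final offer’s expected value comes out at around 3,325 lives, dwarfing the sure thing even though the odds of saving anyone are roughly one in a million.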

Insisting that people give up a sure thing in favour of a vanishingly low chance of a very large impact strikes some people as peculiar or even fanatical. But one of Alan’s PhD students, Hayden Wilkinson, discovered that rejecting expected value on this basis requires you to swallow even more bitter pills, like giving up on the idea that if A is better than B, and B is better than C, then A is also better than C.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we’re better off looking for ways our probability estimates might be wrong.

In today’s conversation, Alan and Rob explore these issues and many others:

  • Simple rules of thumb for having philosophical insights
  • A key flaw that hid in Pascal’s wager from the very beginning
  • Whether we have to simply ignore infinities because they mess everything up
  • What fundamentally is ‘probability’?
  • Some of the many reasons ‘frequentism’ doesn’t work as an account of probability
  • Why the standard account of counterfactuals in philosophy is deeply flawed
  • And why counterfactuals present a fatal problem for one sort of consequentialism

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations.

Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics.

And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”

So what convinces Sharon that philosophical hedonism deserves another go?

Stepping back for a moment, any answer to the question “What has intrinsic value?” faces a serious challenge: “How do we know?” It’s far from clear how something having intrinsic value can cause us to believe that it has intrinsic value. And if there’s no causal or rational connection between something being valuable and our believing that it has value, we could only get the right answer by some extraordinary coincidence. You may feel it’s intrinsically valuable to treat people fairly, but maybe there’s just no reason to trust that intuition.

Since the 1700s, many philosophers working on so-called ‘metaethics’ — that is, the study of what ethical claims are and how we could know if they’re true — have despaired of us ever making sense of or identifying the location of ‘objective’ or ‘intrinsic’ value. They conclude that when we say things are ‘good,’ we aren’t really saying anything about their nature, but rather just expressing our own attitudes, or intentions, or something else.

Sharon disagrees. She says the answer to all this has been right under our nose all along.

We have a concept of value because of our experiences of positive sensations — sensations that immediately indicate to us that they are valuable and that if someone could create more of them, they ought to do so. Similarly, we have a concept of badness because of our experience of suffering — sensations that scream to us that if suffering were all there were, it would be a bad thing.

How do we know that pleasure is valuable, and that suffering is the opposite of valuable? Directly!

While I might be mistaken that a painting I’m looking at is in real life as it appears to me, I can’t be mistaken about the nature of my perception of it. If it looks red to me, it may or may not be red, but it’s definitely the case that I am perceiving redness. Similarly, while I might be mistaken that a painting is intrinsically valuable, I can’t be mistaken about the pleasurable sensations I’m feeling when I look at it, and the fact that among other qualities those sensations have the property of goodness.

While intuitive on some level, this arguably implies some very strange things. Most famously, the philosopher Robert Nozick challenged it with the idea of an ‘experience machine’: if you could enter into a simulated world and enjoy a life far more pleasurable than the one you experience now, should you do so, even if it would mean none of your accomplishments or relationships would be ‘real’? Nozick and many of his colleagues thought not.

The idea has also been challenged for failing to value human freedom and autonomy for its own sake. Would it really be OK to kill one person to use their organs to save the lives of five others, if doing so would generate more pleasure and less suffering? Few believe so.

In today’s interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.

Host Rob Wiblin and Sharon cover all that, as well as:

  • The essential need to disentangle intrinsic, instrumental, and other sorts of value
  • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings)
  • How do people react to the ‘experience machine’ thought experiment when surveyed?
  • Why hedonism recommends often thinking and acting as though it were false
  • Whether it’s crazy to think that relationships are only useful because of their effects on our subjective experiences
  • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable
  • If we didn’t have positive or negative experiences, whether that would cause us to simply never talk about goodness and badness
  • Whether the plausibility of hedonism is affected by our theory of mind
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

#137 – Andreas Mogensen on whether effective altruism is just for consequentialists

Effective altruism, in a slogan, aims to ‘do the most good.’ Utilitarianism, in a slogan, says we should act to ‘produce the greatest good for the greatest number.’ It’s clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism?

Today’s guest, Andreas Mogensen — senior research fellow at Oxford University’s Global Priorities Institute — does reject utilitarianism, but as he explains, this does little to dampen his enthusiasm for effective altruism.

Andreas leans towards ‘deontological’ or rule-based theories of ethics, rather than ‘consequentialist’ theories like utilitarianism which look exclusively at the effects of a person’s actions.

Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially.

However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger’s wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do.

In a world as full of preventable suffering as our own, this simple ‘principle of beneficence’ is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance.

As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one’s income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they’ll get the biggest ‘bang for buck’.

For someone living in a world as unequal as our own, this pledge at a very minimum gives an upper-middle class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves.

What arguments could a non-utilitarian moral theory mount against such giving?

Perhaps it could interfere with the achievement of other important moral goals? In response to this Andreas notes that alleviating the suffering of people in severe poverty is an important goal that should compete with alternatives. And furthermore, giving 10% is not so much that it likely disrupts one’s ability to, for instance, care for oneself or one’s family, or participate in domestic politics.

Perhaps it involves the violation of important moral prohibitions, such as those on stealing or lying? In response Andreas points out that the activities advocated by effective altruism researchers almost never violate such prohibitions — and if a few do, one can simply rule out those options and choose among the rest.

Many approaches to morality will say it’s permissible not to give away 10% of your income to help others as effectively as is possible. But if they will almost all regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless.

In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover:

  • Should we treat philosophical thought experiments that feature very large numbers with great suspicion?
  • If we had to allow someone to die to avoid preventing the football World Cup final from being broadcast to the world, is that permissible or not? If not, what might that imply?
  • What might a virtue ethicist regard as ‘doing the most good’?
  • If a deontological theory of morality were to part ways with common effective altruist practices, where would the divergence most likely show up?
  • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Continue reading →