What could an AI-caused existential catastrophe actually look like?

At 5:29 AM on July 16, 1945, deep in the Jornada del Muerto desert in New Mexico, the Manhattan Project carried out the world’s first successful test of a nuclear weapon.

From that moment, we’ve had the technological capacity to wipe out humanity.

But if you asked someone in 1945 to predict exactly how this risk would play out, they would almost certainly have got it wrong. They might have expected more widespread use of nuclear weapons in World War II. They certainly would not have predicted the fall of the USSR 45 years later. Current experts are concerned about India–Pakistan nuclear conflict and North Korean state action, but 1945 was before even the partition of India or the Korean War.

That is to say, you’d have real difficulty predicting anything about how nuclear weapons would be used. It would have been even harder to make these predictions in 1933, when Leo Szilard first realised that a nuclear chain reaction of immense power could be possible, without any concrete idea of what these weapons would look like.

Despite this difficulty, you wouldn’t be wrong to be concerned.

In our problem profile on AI, we describe a very general way in which advancing AI could go wrong. But there are lots of specifics we can’t know much about at this point.

Continue reading →

#136 – Will MacAskill on what we owe the future

  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for ‘longtermism’ put forward in What We Owe The Future, the latest book from today’s guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he’s willing to contemplate timescales far beyond what’s typically discussed:

If we last as long as a typical mammal species, that’s another 700,000 years. If we last until the Earth is no longer habitable, that’s hundreds of millions of years. If we manage one day to take to the stars and build a civilisation there, we could live for hundreds of trillions of years. […] Future people [could] outnumber us a thousand or a million or a trillion to one.

A natural objection to thinking millions of years ahead is that it’s hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn’t matter how important something might be if you can’t predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

He highlights two effects that could be very enduring: “…reducing risks of extinction of human beings or of the collapse of civilisation, and ensuring that the values and ideas that guide future society are better ones rather than worse.”

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren’t coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

For thousands of years, almost everyone — from philosophers to slaves themselves — regarded slavery as acceptable in principle. At the time the British Empire ended its participation in the slave trade, the industry was booming and earning enormous profits. It’s estimated that abolition cost Britain 2% of its GDP for 50 years.

So why did it happen? The global abolition movement seems to have originated within the peculiar culture of the Quakers, who were the first to argue slavery was unacceptable in all cases and campaign for its elimination, gradually convincing those around them with both Enlightenment and Christian arguments. If a few such moral pioneers had fallen off their horses at the wrong time, maybe the abolition movement never would have gotten off the ground and slavery would remain widespread today.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don’t eliminate a bad practice now, it may be with us forever. In today’s in-depth conversation, we discuss the possibility of a harmful moral ‘lock-in’ as well as:

  • How Will was eventually won over to longtermism
  • The three best lines of argument against longtermism
  • How to avoid moral fanaticism
  • Which technologies or events are most likely to have permanent effects
  • What ‘longtermists’ do today in practice
  • How to predict the long-term effect of our actions
  • Whether the future is likely to be good or bad
  • Concrete ideas to make the future better
  • What Will donates his money to personally
  • Potatoes and megafauna
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

Do recent breakthroughs mean transformative AI is coming sooner than we thought?

Is transformative AI coming sooner than we thought?

It seems like it probably is, which would mean that work to ensure this transformation goes well (rather than disastrously) is even more urgent than we thought.

In the last six months, there have been some shocking AI advances.

These advances caused the live forecast on Metaculus for when “artificial general intelligence” will arrive to plunge — the median declined by 15 years, from 2055 to 2040.

You might think this was due to random people on the internet over-updating on salient evidence, but if you put greater weight on the forecasters who have made the most accurate forecasts in the past, the decline was still 11 years.
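To make the idea of weighting by track record concrete, here is a minimal sketch of how a performance-weighted aggregate of forecasts could be computed. The forecaster numbers and the weighting scheme are invented for illustration; this is not Metaculus’s actual data or algorithm.

```python
# Toy example: aggregating forecasts of the year AGI arrives, with and
# without weighting by each forecaster's past accuracy. All numbers and the
# weighting scheme are made up for illustration.

def weighted_median(values, weights):
    """Return the value at which cumulative weight first reaches half the total."""
    pairs = sorted(zip(values, weights))
    half_total = sum(weights) / 2
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half_total:
            return value
    return pairs[-1][0]

# Hypothetical forecasts for the year AGI arrives, plus a track-record score
# for each forecaster (higher = more accurate in the past).
forecast_years = [2035, 2040, 2045, 2055, 2070, 2090]
track_record = [3.0, 2.5, 2.0, 1.0, 0.8, 0.5]

print(weighted_median(forecast_years, [1.0] * len(forecast_years)))  # treats everyone equally
print(weighted_median(forecast_years, track_record))  # leans toward forecasters with better records
```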

Last year, Jacob Steinhardt commissioned professional forecasters to make a five-year forecast on three AI capabilities benchmarks. His initial impression was that the forecasts were aggressive, but one year in, actual progress was ahead of predictions on all three benchmarks.

Particularly shocking were the results on a benchmark of difficult high school maths problems. The state-of-the-art model leapt from a score of 7% to 50% in just one year — more than five years of predicted progress. (And these questions were hard — e.g.

Continue reading →

#135 – Samuel Charap on key lessons from five months of war in Ukraine

After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we’re in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

As Sam lays out, Russia controls much of Ukraine’s east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

Each day the war continues, it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

In today’s brisk conversation, Rob and Sam cover the following topics:

  • Current territorial control and the level of attrition within Russia’s and Ukraine’s military forces.
  • Russia’s current goals.
  • Whether Sam’s views have changed since March on topics like: Putin’s motivations, the wisdom of Ukraine’s strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
  • Why so many people incorrectly expected Russia to fully mobilise for war or persist with its original approach to the invasion.
  • Whether there’s anything to learn from many of our worst fears — such as the use of bioweapons on civilians — not coming to pass.
  • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
  • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it’s still a long shot.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

Risks from atomically precise manufacturing

Both the risks and benefits of advances in atomically precise manufacturing seem like they might be significant, and there is currently little effort to shape the trajectory of this technology. However, there is also relatively little investment going into developing atomically precise manufacturing, which reduces the urgency of the issue.

Continue reading →

Expression of interest: Head of Operations

80,000 Hours

80,000 Hours provides research and support to help people switch into careers that effectively tackle the world’s most pressing problems.

We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

The internal systems team

This role is on the internal systems team, which is here to build the organisation and systems that support 80,000 Hours to achieve its mission.

We oversee 80,000 Hours’ office, tech systems, organisation-wide metrics and impact evaluation, as well as HR, recruiting, finances, and much of our fundraising.

Currently, we have four full-time staff and some part-time staff, and we receive support from the Centre for Effective Altruism (our fiscal sponsor).

The role

As 80,000 Hours’ Head of Operations, you would:

  • Oversee a wide range of our internal operations, including team-wide processes, much of our fundraising, our office, finances, tech systems, data practices, and external relations.
  • Manage a team of two operations specialists, including investing in their professional development and identifying opportunities for advancement where appropriate.
  • Grow your team to build capacity in the areas you oversee, including identifying 80,000 Hours’ operational needs and designing roles that will address these.
  • Develop our internal operations strategy — in particular,

Continue reading →

    Open position: Marketer

    Applications for this position are now closed.

    We’re looking for a new marketer to help us expand our readership and scale up our marketing channels.

    We’d like to support the person in this role to take on more responsibility over time as we expand our marketing team.

    80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

    We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

    Even so, about 90% of US college graduates have never heard of effective altruism, and we estimate that just 0.5% of students at top colleges are highly engaged in EA. As a marketer with 80,000 Hours, you would help us achieve our goal of reaching all students and recent graduates who might be interested in our work. We anticipate that the right person in this role could help us grow our readership to 5–10 times its current size, and lead to hundreds or thousands of additional people pursuing high-impact careers.

    We’re looking for a marketing generalist who will:

    • Start managing (and eventually owning) our two largest existing marketing channels:
      • Sponsorships with people who have large audiences,

    Continue reading →

      #134 – Ian Morris on what big-picture history teaches us

      Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

      Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

      Why such big systematic changes — and why these changes specifically?

      That’s the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

      There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the ‘right’ way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

      In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

      On this theory, it’s technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

      There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

      Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career.

      In Why the West Rules—For Now, he set out to understand why the Industrial Revolution happened in England and why Europe went on to dominate much of the rest of the world, rather than industrialisation kicking off somewhere else like China, with China going on to establish colonies in Europe. (In a word: geography.)

      In War! What is it Good For?, he tried to explain why it is that violent conflicts often lead to longer lives and higher incomes (i.e. wars build empires which suppress interpersonal violence internally), while other times they have the exact opposite effect (i.e. advances in military technology allow nomads to raid and pull apart these empires).

      In today’s episode, we discuss all of Ian’s major books, taking on topics such as:

      • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions
      • Whether or not wars can still lead to less violence today
      • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration
      • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces”
      • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa
      • In what sense Ian thinks Brexit was “10,000 years in the making”
      • The most common misconceptions about macrohistory

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      #133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

      On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

      That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

      Max’s primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity’s future including nuclear war, synthetic biology, and AI.

      Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a podcast and website called ‘Improve The News’ to help readers separate facts from spin.

      But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind.

      You can now give an AI system like GPT-3 the text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

      So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

      He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

      His favourite MIT project so far involved taking a bunch of data from the 100 most complicated or famous physics equations, creating an Excel spreadsheet with each of the variables and the results, and saying to the computer, “OK, here’s the data. Can you figure out what the formula is?”

      For general formulas, this is really hard. About 400 years ago, Johannes Kepler managed to get hold of the data that Tycho Brahe had gathered on how the planets move around the solar system. Kepler spent four years staring at it until he figured out what it meant: that planets orbit the Sun in ellipses.

      Max’s team’s code was able to discover that in just an hour.
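To give a feel for what this kind of “here’s the data, now find the formula” search involves, below is a minimal toy sketch that recovers Kepler’s third law from planetary data by brute-force search over power laws. It is only an illustration of the general idea; the actual system Max describes searches over far richer families of symbolic expressions.

```python
# Toy "find the formula from the data" example: recover Kepler's third law,
# T = a**1.5 (orbital period T in years, semi-major axis a in AU), by
# brute-force search over power laws. Real symbolic regression searches over
# whole expression trees, not a single exponent.

# (planet, semi-major axis in AU, orbital period in years) -- approximate values
planets = [
    ("Mercury", 0.387, 0.241),
    ("Venus", 0.723, 0.615),
    ("Earth", 1.000, 1.000),
    ("Mars", 1.524, 1.881),
    ("Jupiter", 5.203, 11.86),
]

def fit_error(exponent):
    """Mean squared error of the candidate formula T = a**exponent."""
    return sum((t - a ** exponent) ** 2 for _, a, t in planets) / len(planets)

candidate_exponents = [i / 100 for i in range(50, 301)]  # 0.50, 0.51, ..., 3.00
best = min(candidate_exponents, key=fit_error)
print(f"Best-fitting power law: T = a**{best:.2f}")  # roughly 1.50
```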

      Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare?

      Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

      They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.

      They also cover:

      • Whether we would be able to understand what superintelligent systems were doing
      • The value of encouraging people to think about the positive future they want
      • How to give machines goals
      • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
      • Whether we’re sleepwalking into disaster
      • Whether people actually just want their biases confirmed
      • Why Max is worried about government-backed fact-checking
      • And much more

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Know what you’re optimising for

      There is (sometimes) such a thing as a free lunch

      You live in a world where most people, most of the time, think of things as categorical, rather than continuous. People either agree with you or they don’t. Food is healthy or unhealthy. Your career is ‘good for the world,’ or it’s neutral, or maybe even it’s bad — but it’s only the category that matters, not the size of the benefit or harm. Ideas are wrong, or they are right. Predictions end up confirmed or falsified.

      In my view, one of the central ideas of effective altruism is the realisation that ‘doing good’ is not such a binary. That as well as it mattering that we help others at all, it matters how much we help. That helping more is better than helping less, and helping a lot more is a lot better.

      For me, this is also a useful framing for thinking rationally. Here, rather than ‘goodness,’ the continuous quantity is truth. The central realisation is that ideas are not simply true or false; they are all flawed attempts to model reality, and just how flawed is up for grabs. If we’re wrong, our response should not be to give up, but to try to be less wrong.

      When you realise something is continuous that most people are treating as binary,

      Continue reading →

      #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

      If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

      This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

      Today’s guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic, handling computer and information security. One of her jobs is to stop hackers from exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

      The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

      If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

      If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

      As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

      If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

      We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

      In today’s conversation, Rob and Nova cover:

      • How good or bad information security is today
      • The most secure computer systems that exist today
      • How to design an AI training compute centre for maximum efficiency
      • Whether ‘formal verification’ can help us design trustworthy systems
      • How wide the practical gap is between AI capabilities and AI safety
      • How to disincentivise hackers
      • What should listeners do to strengthen their own security practices
      • Jobs at Anthropic
      • And a few more things as well

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell and Beppe Rådvik
      Transcriptions: Katy Moore

      Continue reading →

      #131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

      “We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”

      If you were a contestant on such a TV show, you’d love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.

      Today’s guest, Lewis Dartnell, has gone as far in compiling this information as anyone, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.

      But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collecting coal from the surface of the Earth, will be impossible the second time around.

      As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it’s just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”

      So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology used to make computer microchips — so actually making solar panels would be incredibly difficult.

      Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.

      A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.

      The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.

      This is clearly not a trivial task. Lewis’s own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.

      And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?

      If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.

      As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.

      They cover:

      • The biggest impediments to bouncing back
      • The reality of humans trying to actually do this
      • The most valuable pro-resilience adjustments we can make today
      • How to recover without much coal or oil
      • How to feed the Earth in disasters
      • And the most exciting recent findings in astrobiology

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Let’s get serious about preventing the next pandemic

      I recently argued that it’s time for our community to be more ambitious.

      And when it comes to preventing pandemics, it’s starting to happen.

      Andrew Snyder-Beattie is a programme officer at Open Philanthropy, a foundation which has more than $1 billion available to fund big pandemic-prevention projects. He and Ethan Alley (co-CEO at Alvea) recently wrote an exciting list of projects they’d like to see founded, including:

      • An international early detection centre
      • Actually good PPE
      • Rapid and broad-spectrum antivirals and vaccines
      • A bioweapons watchdog
      • Self-sterilising buildings
      • Refuges

      One of these ideas is already happening. Alvea aims to produce a cheap, flexible vaccine platform using a new type of vaccine (DNA vaccines), starting with an Omicron-specific shot. In two months, they hired 35 people and started preclinical trials.

      The above are technical solutions, which make it possible for a relatively small number of people to make a significant difference to the problem. But policy change is also an important angle.

      Guarding Against Pandemics was an effort to lobby the US government for $30 billion in funding for pandemic prevention. Unfortunately the relevant bill didn’t pass, but the sum at stake made it clearly worth trying.

      Efforts in the UK might be more successful —

      Continue reading →

      Clay Graubard and Robert de Neufville on forecasting the war in Ukraine

      In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.

      They cover:

      • Their early predictions for the war
      • The performance of the Russian military
      • The risk of use of nuclear weapons
      • The most interesting remaining topics on Russia and Ukraine
      • General lessons we can take from the war
      • The evolution of the forecasting space
      • What Robert and Clay were reading back in February
      • Forecasters vs. subject matter experts
      • Ways to get involved with the forecasting community
      • Impressive past predictions
      • And more

      Who this episode is for:

      • People interested in forecasting
      • People interested in the war in Ukraine
      • People who prefer to know how likely they are to die in a nuclear war

      Who this episode isn’t for:

      • People who’d hate it if a friend said they were 65% likely to come out for drinks
      • People who’d prefer if their death from nuclear war was a total surprise

      Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

      Continue reading →

      #130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

      Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you’re all bunched up on a few tables in a basement office.

      But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You’re the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

      You suddenly have the opportunity to make more progress than ever before, but alongside the excitement, you have worries about the effects that large amounts of funding can have.

      This is roughly the situation faced by today’s guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

      Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

      While surely a huge success, it brings with it risks that he’s never had to consider before:

      • Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
      • Being seen as profligate could strike onlookers as selfish and disreputable.
      • Folks might start pretending to agree with their agenda just to get grants.
      • People working on nearby issues that are less flush with funding may end up resentful.
      • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
      • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

      But all these ‘risks of commission’ have to be weighed against ‘risk of omission’: the failure to achieve all you could have if you’d been truly ambitious.

      People looking askance at you for paying high salaries to attract the staff you want is unpleasant.

      But failing to prevent the next pandemic because you didn’t have the necessary medical experts on your grantmaking team is worse than unpleasant — it’s a true disaster. Yet few will complain, because they’ll never know what might have been if you’d only set frugality aside.

      Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today’s episode, Rob and Will discuss the above as well as:

      • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
      • Why are so many nonfiction books full of factual errors?
      • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
      • What does Will disagree with his colleagues on?
      • Should we focus on existential risks more or less the same way, whether we care about future generations or not?
      • Are potatoes one of the most important technologies ever developed?
      • And plenty more.

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Climate change

      Climate change is going to significantly and negatively impact the world. Its impacts on the poorest people in our society and our planet’s biodiversity are cause for particular concern. Looking at the worst possible scenarios, it could be an important factor that increases existential threats from other sources, like great power conflicts, nuclear war, or pandemics. But because the worst potential consequences seem to run through those other sources, and these other risks seem larger and more neglected, we think most readers can have a greater impact in expectation working directly on one of these other risks.

      We think your personal carbon footprint is much less important than what you do for work, and that some ways of making a difference on climate change are likely to be much more effective than others. In particular, you could use your career to help develop technology or advocate for policy that would reduce our current emissions, or research technology that could remove carbon from the atmosphere in the future.

      Continue reading →

      Effective altruism and the current funding situation

      This post gives an overview of how I’m thinking about the “funding in EA” issue, building on many conversations. Although I’m involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk which has a related theme, though with different emphases. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.

      Main points

      • EA is in a very different funding situation than it was when it was founded. This is both an enormous responsibility and an incredible opportunity.
      • It means the norms and culture that made sense at EA’s founding will have to adapt. It’s good that there’s now a serious conversation about this.
      • There are two ways we could fail to respond correctly:
        • By commission: we damage, unnecessarily, the aspects of EA culture that make it valuable; we support harmful projects; or we just spend most of our money in a way that’s below the bar.
        • By omission: we aren’t ambitious enough, and fail to make full use of the opportunities we now have available to us. Failure by omission is much less salient than failure by commission, but it’s no less real, and may be more likely.
      • Though it’s hard, we need to inhabit both modes of mind at once.

      Continue reading →

      Data collection for AI alignment

      Why might becoming an expert in data collection for AI alignment be high impact?

      We think it’s crucial that we work to positively shape the development of AI, including through technical research on how to ensure that any potentially transformative AI we develop does what we want it to do (known as the alignment problem). If we don’t find ways to align AI with our values and goals — or worse, don’t find ways to prevent AI from actively harming us or otherwise working against our values — the development of AI could pose an existential threat to humanity.

      There are lots of different proposals for building aligned AI, and it’s unclear which (if any) of these approaches will work. A sizeable subset of these approaches requires humans to give data to machine learning models, including AI safety via debate, microscope AI, and iterated amplification.

      These proposals involve collecting human data on tasks like the following (a rough sketch of one such record follows the list):

      • Evaluating whether a critique of an argument was good
      • Breaking a difficult question into easier subquestions
      • Examining the outputs of tools that interpret deep neural networks
      • Using one model as a tool to make a judgement on how good or bad the outputs of another model are
      • Finding ways to make models behave badly (e.g. generating adversarial examples by hand)
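Here is a hypothetical sketch of what a single record of this kind of human data might look like. The schema and field names are invented for illustration; they aren’t taken from any of the proposals above.

```python
# A hypothetical sketch of one record of human data for alignment research.
# The schema is invented for illustration, not taken from any real project.

from dataclasses import dataclass
from typing import Optional


@dataclass
class HumanJudgement:
    task_type: str                  # e.g. "critique_evaluation" or "output_comparison"
    prompt: str                     # the question or input shown to the model(s)
    model_output_a: str             # first model output the labeller saw
    model_output_b: Optional[str]   # second output, if the task is a comparison
    label: str                      # the human's judgement, e.g. "a_better" or "critique_valid"
    rationale: str                  # free-text explanation, useful for quality control
    labeller_id: str                # anonymised ID, so inter-rater agreement can be checked


example = HumanJudgement(
    task_type="output_comparison",
    prompt="Summarise the argument in the passage above.",
    model_output_a="Summary A ...",
    model_output_b="Summary B ...",
    label="a_better",
    rationale="Summary A covers the key claim; Summary B omits it.",
    labeller_id="labeller_017",
)
```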

      Collecting this data —

      Continue reading →

      #129 – Dr James Tibenderana on the state of the art in malaria control and elimination

      The good news is deaths from malaria have been cut by a third since 2005. The bad news is it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa.

      We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth’s surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019?

      That’s one of many questions I put to today’s guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad.

      In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far.

      And as James explains, while COVID-19 may have an ‘R’ (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens of people, allowing cases to quickly explode.
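As a rough back-of-the-envelope illustration of how those numbers multiply out (the per-case figures below are invented for illustration, not figures from the episode):

```python
# Back-of-the-envelope illustration of how malaria's reproduction number can
# reach the thousands. The per-case numbers are invented for illustration.

mosquitoes_infected_per_case = 200   # mosquitoes that pick up the parasite from one infected person
people_infected_per_mosquito = 20    # people each of those mosquitoes goes on to infect

reproduction_number = mosquitoes_infected_per_case * people_infected_per_mosquito
print(reproduction_number)  # 4000 -- in the thousands, versus roughly 5 for COVID-19
```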

      The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding.

      Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there’s enthusiasm for finding new tools to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria.

      The RTS,S vaccine is the first-ever vaccine that attacks a protozoan rather than a virus or bacterium. Under development for decades, it’s a great scientific achievement. But James points out that even after three doses, it’s still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half.

      On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. Of course, these genes would normally be eliminated by natural selection, but by using a ‘gene drive,’ you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread and, at low cost, ultimately eliminate the mosquitoes that carry malaria, thereby largely ridding the world of the disease.
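To see why that inheritance pattern matters so much, here is a minimal toy model comparing how a construct spreads when carriers pass it to half their offspring (ordinary Mendelian inheritance) versus essentially all of them (a gene drive). The population model and numbers are simplifying assumptions for illustration, not figures from the episode.

```python
# Toy model of why a gene drive spreads where an ordinary gene would not:
# drive carriers pass the construct to (nearly) all offspring instead of half.
# Random mating, no fitness costs -- simplifying assumptions for illustration.

def carrier_fraction_next(p, transmission):
    """Fraction of offspring carrying the construct, given carrier fraction p
    among parents and the chance a carrier parent passes it on
    (0.5 = ordinary Mendelian inheritance, 1.0 = an ideal gene drive)."""
    # An offspring carries the construct unless neither parent passes it on.
    return 1 - (1 - p * transmission) ** 2

for label, transmission in [("Mendelian (50%)", 0.5), ("Gene drive (100%)", 1.0)]:
    p = 0.01  # construct released into 1% of the population
    for _ in range(8):
        p = carrier_fraction_next(p, transmission)
    print(f"{label}: carrier fraction after 8 generations = {p:.2f}")
```

Under the Mendelian setting the construct stays at roughly 1% of the population, while the gene drive version reaches the overwhelming majority within a handful of generations.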

      Because a single country embracing this method would have global effects, James cautions that it’s important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we’ve made a mistake. Groups like Target Malaria are working on exactly these two issues.

      James also emphasises that with thousands of similar mosquito species out there, most of which don’t carry malaria, for better or worse gene drives may not make any difference to the total number of mosquitoes.

      In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including:

      • How malaria spreads and the symptoms it causes
      • The use of insecticides and poison baits
      • How big a problem insecticide resistance is
      • How malaria was eliminated in North America and Europe
      • Whether funding is a key bottleneck right now
      • The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world
      • And much more

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ryan Kessler
      Transcriptions: Katy Moore

      Continue reading →