#81 – Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book Superintelligence. But Ben reminds us that, apart from Bostrom's book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents. Some more recent AI risk arguments do seem plausible to Ben, but they're fragile and difficult to evaluate, since they haven't yet been expounded at length.

There have also been very few skeptical experts who have actually sat down and fully engaged with these arguments, writing down point by point where they disagree or where they think the mistakes are. As a result, Ben has probably scrutinised the classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents tend to rely on fuzzy, abstract concepts (like 'optimisation power', 'general intelligence', or 'goals') and on toy thought experiments. And he doesn't think it's clear we should treat these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it’s really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them.

But Ben points out that it’s also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can’t specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#80 – Stuart Russell on the flaws that make today’s AI architecture unsafe, and a new approach that could fix them

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is.

Unfortunately there’s a big problem with this approach: it’s incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we’ve asked it to. That’s true even if the goal isn’t what we really want, or the methods it’s choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube’s recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn’t something we wanted, but it helped achieve the algorithm’s objective: maximise viewing time.

Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we’ve asked for.

This ‘alignment’ problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we’re ever to hand over much of the economy to thinking machines, we can’t count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn't just dissatisfied with the current model, though; he has a specific solution. According to him, we need to redesign AI around three principles:

  1. The AI system’s objective is to achieve what humans want.
  2. But the system isn’t sure what we want.
  3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.

For instance, a machine built on these principles would be happy to be turned off if that’s what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, “you can’t fetch the coffee if you’re dead.”

These principles lend themselves to machines that are modest and cautious, and that check in with us when they aren't confident they're truly achieving what we want.

We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to distinguish between options we've rejected because we considered them and decided they were bad ideas, and options we simply haven't thought about at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? How considerate of other people’s interests do we expect AIs to be? How do we avoid them being used in malicious or anti-social ways?

And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Despite all these problems, the rewards of success could be enormous. If cheap thinking machines can one day do most of the work people do now, it could dramatically raise everyone’s standard of living, like a second industrial revolution.

Without having to work just to survive, people might flourish in ways they never have before.

In today’s conversation we cover, among many other things:

  • What are the arguments against being concerned about AI?
  • Should we develop AIs to have their own ethical agenda?
  • What are the most urgent research questions in this area?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, “You know what, she’s not so bad”.

Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place?

This is the first time I've hosted the podcast, and I'm hoping to convince people to listen with this attempt at a clever blog post that changes styles each paragraph to reference different A.J. experiments. I don't actually think it's that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I suspect I find myself more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:

  • The Bayer aspirin company invented heroin as a cough suppressant
  • Coriander is just the British way of saying cilantro
  • Dogs have a third eyelid to protect the eyeball from irritants
  • A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.)

I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; Rob and the rest of the 80,000 Hours team for their help; the thousands of people who’ll listen to this; my fiancée who let me talk about her to those thousands of people; the construction worker who told me how to get to my subway platform on the morning of the interview; Queen Jadwiga for making bagels popular in the 14th century, which kept me going during the recording; and the folks at the New York reservoir whose work allows A.J.’s coffee to be made, without which he’d never have had the energy to talk to me for more than five minutes. (Thanks a Thousand.)

We also discuss:

  • The most extreme ideas A.J.’s ever considered
  • Respecting your older self
  • Blackmailing yourself
  • The experience of having his book made into a CBS sitcom
  • Talking to friends and family about effective altruism
  • Utilitarian movie reviews
  • The value of fiction focused on the long-term future
  • Doing good as a journalist
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#78 – Danny Hernandez on forecasting and the drivers of AI progress

Companies use about 300,000 times more computation to train the best AI systems today than they did in 2012, and algorithmic innovations have also made training 25 times more efficient at the same tasks.

These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today’s episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.

Danny and I talk about how to understand his team’s results and what they mean (and don’t mean) for how we should think about progress in AI going forward.

Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.

If this research sounds appealing, you might be interested in applying to join OpenAI’s Foresight team — they’re currently hiring research engineers.

In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including:

  • The question of which experts to believe
  • Danny’s journey to working at OpenAI
  • The usefulness of “decision boundaries”
  • The importance of Moore’s law for people who care about the long-term future
  • What OpenAI’s Foresight Team’s findings might imply for policy
  • The question of whether progress in the performance of AI systems is linear
  • The safety teams at OpenAI and who they’re looking to hire
  • One idea for finding someone to guide your learning
  • The importance of hardware expertise for making a positive impact

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#77 – Marc Lipsitch on whether we’re winning or losing against COVID-19

In March Professor Marc Lipsitch — director of Harvard’s Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust.

Here he lays out where the fight against COVID-19 stands today, why he’s open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time.

As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling.

Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for 2 months.

This doesn’t bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas.

But sadly, there’s no easy way out.

The original estimates of COVID-19’s infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of herd immunity. To get there, even these worst affected countries would need to endure something like ten times the number of deaths they have so far.
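The 'ten times' figure is just a ratio of infection rates, on the assumption that deaths scale with infections. A rough sketch with illustrative numbers (the 60–70% herd immunity threshold is a standard epidemiological ballpark, not a figure from this article):

```python
# Back-of-the-envelope: how much further would infections (and hence deaths,
# assuming a constant infection fatality rate) need to rise to reach herd immunity?
infected_so_far = 0.07   # ~5-10% of the population infected so far (serology surveys)
herd_threshold = 0.65    # ~60-70% immune assumed necessary for herd immunity

multiplier = herd_threshold / infected_so_far
print(f"Reaching herd immunity implies roughly {multiplier:.0f}x the deaths so far")
```

With 7% infected and a 65% threshold, the multiplier comes out at roughly nine to ten, matching the article's "something like ten times".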

Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least.

To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we’ve so far left on the shelf.

Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is a part of are shrinking. We look at a new article he's written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars.

We also cover:

  • How listeners might contribute as future contagious disease experts, or donors to current projects
  • How we can learn from cross-country comparisons
  • Modelling that has gone wrong in an instructive way
  • What governments should stop doing
  • How people can figure out who to trust, and who has been most on the mark this time
  • Why Marc supports deliberately infecting people with COVID-19 to speed up the development of a vaccine
  • How we can ensure there’s population-level surveillance early during the next pandemic
  • Whether people from other fields trying to help with COVID-19 have done more good than harm
  • Whether it’s experts in diseases, or experts in forecasting, who produce better disease forecasts

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#76 – Tara Kirk Sell on COVID-19 misinformation, who’s done well and badly, and what we should reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we’re especially thankful for is the Johns Hopkins Center for Health Security (CHS).

CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a ‘new coronavirus’ scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19.

Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects.

Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, just as COVID-19 was starting to spread around the world — but new funding arrived in time for it to respond quickly to the emerging disease.

She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end.

It’s not possible to reach zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic.

Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara (who happens to also be an Olympic silver medalist swimmer) suggests outdoor non-contact sports could resume soon at little risk.

Her latest work deals with the challenge of misinformation during disease outbreaks.

Analysing the Ebola communication crisis of 2014, she and her colleagues found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information about a virus and how it's transmitted.

The challenge for governments is not simple. If they acknowledge how much they don’t know, people may look elsewhere for guidance. But if they pretend to know things they don’t, or actively mislead the public, the result can be a huge loss of trust.

Despite their intense focus on COVID-19, researchers at the Center for Health Security know that this is not a one-time event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their mind to next time.

You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands.

Tara and Rob also discuss:

  • Who has overperformed and underperformed expectations during COVID-19?
  • When are people right to mistrust authorities?
  • The media’s responsibility to be right
  • What policies should be prioritised for next time
  • Should we prepare for future pandemics while COVID-19 is still going?
  • The importance of keeping non-COVID health problems in mind
  • The psychological difference between staying home voluntarily and being forced to
  • Mistakes that we in the general public might be making
  • Emerging technologies with the potential to reduce global catastrophic biological risks

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on the most plausible paths for them, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.

I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford’s Global Priorities Institute, and these days I’m 80,000 Hours’ Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers.

We all thought it would be useful to discuss some of those on the show for others to hear. Among other topics we cover:

  • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you’re already accomplishing.
  • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in.
  • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it’s wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations.
  • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties.
  • Why many listeners aren’t spending enough time finding out about what the day-to-day work is like in paths they’re considering, or reaching out to people for advice or opportunities.

I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it.

If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people:

  1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address.
  2. Who don’t yet have close connections with people working at effective altruist organisations.
  3. Who aren’t strongly locationally constrained.

If you’re unsure, it doesn’t take long to apply and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds.

Want to talk to one of our advisors?

We speak to hundreds of people each year and can offer introductions and answer specific questions you might have. You can join the waitlist here:

Request a career advising session

Also in this episode:

  • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with.
  • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path.
  • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob.

It continues to be awe-inspiring to me how many people I talk to are donating to save lives, making dietary changes to avoid contributing to intolerable suffering, and carefully planning their lives to improve the future trajectory of the world. I hope we can continue to support each other in doing those things, and to appreciate how important all this work is.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#74 – Dr Greg Lewis on COVID-19 and reducing global catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise.

The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours.

Today’s guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University’s Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future.

COVID-19 is a vivid reminder that we are vulnerable to biological threats and underprepared to deal with them. We have been unable to suppress the spread of COVID-19 around the world and, tragically, global deaths will at least be in the hundreds of thousands.

How would we cope with a virus that was even more contagious and even more deadly? Greg’s work focuses on these risks — of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity.

If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature.

There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar.

This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion.

COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources.

But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves.

Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel.

80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible.

In this episode, Howie and Greg cover:

  • Reflections on the first few months of the pandemic
  • Common confusions around COVID-19
  • How COVID-19 compares to other diseases
  • What types of interventions have been available to policymakers
  • Arguments for and against working on global catastrophic biological risks (GCBRs)
  • Why state actors would even use or develop biological weapons
  • How to know if you’re a good fit to work on GCBRs
  • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type “80,000 Hours” into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might be able to do to help

Hours ago, from home isolation, Rob and Howie recorded an episode on:

  1. How many could die in the coronavirus crisis, and the risk to your health personally.
  2. What individuals might be able to do.
  3. What we suspect governments should do.
  4. The properties of the SARS-CoV-2 virus, the importance of not contributing to its spread, and how you can reduce your chance of catching it.
  5. The ways some societies have screwed up, which countries have been doing better than others, how we can avoid this happening again, and why we’re optimistic.

We’ve rushed this episode out, accepting a higher risk of errors, in order to share information as quickly as possible about a very fast-moving situation.

We’ve compiled 70 links below to projects you could get involved with, as well as academic papers and other resources to understand the situation and what’s needed to fix it.

A rough transcript is also available.

Please also see our ‘problem profile’ on global catastrophic biological risks for information on these grave risks and how you can contribute to preventing them.

For more see the COVID-19 landing page on our site. You can also keep up to date by following Rob and 80,000 Hours’ Twitter feeds.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris.


#73 – Phil Trammell on how becoming a ‘patient philanthropist’ might allow you to do far more good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that’s all wrong.

If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have about $131,000 to give away instead. And in 200 years you'd have over $17 million.
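These figures are ordinary compound interest. A minimal sketch, assuming the article's constant 5% average annual return:

```python
# Future value of a $1,000 donation compounding at 5% per year.
def future_value(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

for years in (100, 200):
    print(f"After {years} years: ${future_value(1_000, 0.05, years):,.0f}")
```

At exactly 5%, $1,000 grows to roughly $131,000 after a century and over $17 million after two; real returns would of course fluctuate around any long-run average.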

This astonishing fact has driven today’s guest, economics researcher Philip Trammell at Oxford’s Global Priorities Institute, to investigate the case for and against so-called ‘patient philanthropy’ in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.

He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they’ll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn’t have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn’t even know about germs, and almost nothing in medicine was justified by science.

What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?

And there’s a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It’s possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.

Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?

Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?

Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarship’s initial charter, which limited it to ‘white Christian men’.

Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.

Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today’s conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss:

  • Real attempts at patient philanthropy in history and how they worked out
  • Should we have a mixed strategy, where some altruists are patient and others impatient?
  • Which causes are most likely to need money now, and which later?
  • What is the research frontier in this issue of global prioritisation?
  • What does this all mean for what listeners should do differently?

COVID-19

Finally, note that we recorded this podcast before the appearance of COVID-19. And as we discuss, Phil makes the case that patient philanthropists should wait for moments in history when patient philanthropic resources can do the most good. Could the coronavirus crisis be one of those important historical episodes during which Phil would argue that even patient philanthropists should ramp up their spending?

We’ve spoken with him more recently, and he says that this strikes him as unlikely. The virus is certainly doing widespread damage, but most of this damage is expected to accrue in the next few years at most. As a result, this is the sort of crisis that governments and impatient philanthropists are happy to spend on (to the extent that spending can help at all).

On Phil’s view, therefore, patient philanthropists are still best advised to wait: i) until they’re rich enough to better address, or fund more substantial preparation for, similar future crises; or ii) until we face crises with unusually long-lasting impacts, not just unusually severe ones.

If this is right, COVID-19 just serves as an example of the many temptations to spend in the present that patient philanthropists will have to resist, in order to reap the benefits that can come from waiting to do good.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#72 – Toby Ord on the precipice and humanity’s potential futures

This week Oxford academic and advisor to 80,000 Hours Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It’s about how our long-term future could be better than almost anyone believes, but also how humanity’s recklessness is putting that future at grave risk: in Toby’s reckoning, a 1 in 6 chance of being extinguished this century.

I loved the book and learned a great deal from it.

While preparing for this interview I copied out 87 facts that were surprising to me or seemed important. Here’s a sample of 16:

  1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.
  2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.
  3. In 2008 a ‘gamma ray burst’ reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren’t sure what generates gamma ray bursts but one cause may be two neutron stars colliding.
  4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped pursuing the Bomb.
  5. If we eventually burn all the fossil fuels we’re confident we can access, the leading Earth-system models suggest we’d experience 9–13°C of warming by 2300, an absolutely catastrophic increase.
  6. In 1939, the renowned nuclear scientist Enrico Fermi told colleagues that a nuclear chain reaction was but a ‘remote possibility’. Four years later Fermi himself was personally overseeing the world’s first nuclear reactor. Wilbur Wright predicted heavier-than-air flight was at least fifty years away — just two years before he himself invented it.
  7. The Japanese bioweapons programme in the Second World War — which included using bubonic plague against China — was directly inspired by an anti-bioweapons treaty. The reasoning ran that if Western powers felt the need to outlaw these weapons, they must be especially good to have.
  8. In the early 20th century the Spanish Flu killed 3-6% of the world’s population. In the 14th century the Black Death killed 25-50% of Europeans. But that’s not the worst pandemic to date: that’s the passage of European diseases to the Americas, which may have killed as many as 90% of the local population.
  9. A recent paper estimated that even if honeybees were completely lost — and all other pollinators too — this would only create a 3 to 8 percent reduction in global crop production.
  10. In 2007, foot-and-mouth disease, a high-risk pathogen that can only be studied in labs following the top level of biosecurity, escaped from a research facility leading to an outbreak in the UK. An investigation found that the virus had escaped from a badly-maintained pipe. After repairs, the lab’s licence was renewed — only for another leak to occur two weeks later.
  11. Toby estimates that ‘great power wars effectively pose more than a percentage point of existential risk over the next century’. This makes it a much larger contributor to total existential risk than all the natural risks like asteroids and volcanos combined.
  12. During the Cuban Missile Crisis, Kennedy and Khrushchev found it so hard to communicate, and the long delays so dangerous, that they established the ‘red telephone’ system so they could write to one another directly, and better avoid future crises coming so close to the brink.
  13. A US Airman claims that during a nuclear false alarm in 1962, which he himself witnessed, two airmen from one launch site were ordered to run through the underground tunnel to the launch site of another missile, with orders to shoot a lieutenant if he continued to refuse to abort the launch of his missile.
  14. In 2014 GlaxoSmithKline accidentally released 45 litres of concentrated polio virus into a river in Belgium. In 2004, SARS escaped from the National Institute of Virology in Beijing. In 2005 at the University of Medicine and Dentistry in New Jersey, three mice infected with bubonic plague went missing from the lab and were never found.
  15. The Soviet Union covered 22 million square kilometres, 16% of the world’s land area. At its height, during the reign of Genghis Khan’s grandson, Kublai Khan, the Mongol Empire had a population of 100 million, around 25% of the world’s population at the time.
  16. None of the methods we’ve come up with for deflecting asteroids would work on one big enough to cause human extinction.

Here are also fifty-one ideas from the book for reducing existential risk.

While I’ve been studying this topic for a long time, and have known Toby for eight years, a remarkable amount of what’s in the book was new to me.

Of course the book isn’t a series of isolated amusing facts, but rather a systematic review of the many ways humanity’s future could go better or worse, how we might know about them, and what might be done to improve the odds.

And that’s how we approach this conversation, first talking about each of the main risks, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which my colleague Arden Koehler and I barely even had to work for.

For those wondering about the pandemic just now, this extract about diseases like COVID-19 was the most read article in The Guardian USA on the day the book was launched.

Some topics Arden and I bring up:

  • What Toby changed his mind about while writing the book
  • Asteroids, comets, supervolcanoes, and threats from space
  • Why natural and anthropogenic risks should be treated so differently
  • Are people exaggerating when they say that climate change could actually end civilisation?
  • What can we learn from historical pandemics?
  • How to estimate likelihood of nuclear war
  • Toby’s estimate of unaligned AI causing human extinction in the next century
  • Is this century the most important time in human history, or is that a narcissistic delusion?
  • Competing visions for humanity’s ideal future
  • And more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#71 – Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible.

Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift.

All of us added something to it, but the single biggest contributor was our CEO and today’s guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012.

This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we’ve discovered since we started investigating high impact careers.

But it’s perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words.

Fortunately it’s designed to be highly modular and it’s easy to work through it over multiple sessions, scanning over the articles it links to on each topic.

Perhaps though, you’d prefer to absorb our most essential ideas in conversation form, in which case this episode is for you.

If you want to have a big impact with your career, and you’re only going to read one article from us, we recommend you read our key ideas page.

And likewise, if you’re only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through:

  • The most common misunderstandings of our advice
  • A high level overview of what 80,000 Hours generally recommends
  • Our key moral positions
  • What are the most pressing problems to work on and why?
  • Which careers effectively contribute to solving these problems?
  • Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration
  • As well as plenty more.

One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we’re least sure about, or didn’t yet cover within the article.

Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we’re aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page!

Update: As of Sept 2021, you can now see this more recent introduction to the key ideas of 80,000 Hours and our story on the Superdatascience podcast, which is especially good for people with STEM backgrounds. You can also see another introduction on Clearer Thinking, which is a bit more in-depth.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Bonus episode: Arden & Rob on demandingness, work-life balance and injustice

Today’s bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include:

  • If you’re not going to be completely moral, should you try being a bit more moral or give up?
  • Should you feel angry if you see an injustice, and if so, why?
  • How much should we ask people to live frugally?

So far the feedback on the post-episode chats that we’ve done has been positive, so we thought we’d go ahead and try out this freestanding one. But fair warning: it’s among the more difficult episodes to follow, and probably not the best one to listen to first, as you’ll benefit from having more context!

If you’d like to listen to more of Arden, you can find her in episode 67 — David Chalmers on the nature and ethics of consciousness, or episode 66 – Peter Singer on being provocative, effective altruism & how his moral views have changed.

Here’s more information on some of the issues we touch on:

And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I’ve read it and very much enjoyed it. Find out where you can pre-order it here. We’ll have an interview with him coming up soon.

Continue reading →

#70 – Dr Cassidy Nelson on the twelve best ways to stop the next pandemic (and limit COVID-19)

COVID-19 (previously known as nCoV) is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places.

But bad though it is, it’s much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both.

Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can’t do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year after a new disease emerges, if they ever arrive at all.

This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like.

In today’s episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University’s Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we’re to keep the risk at acceptable levels. The ideas are:

Science

1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria.

Response

4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They’re more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.

Oversight

9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of ‘dual-use’ research projects that can generate global risks.

12. And finally, to maintain momentum, it’s necessary to clearly assign responsibility for the above to particular individuals and organisations.

Very simply, there are multiple cutting edge technologies and policies that offer the promise of detecting new diseases right away, and delivering us effective treatments in weeks rather than years. All of them can use additional funding and talent.

At the same time, health systems around the world also need to develop pandemic response plans — something few have done — so they don’t have to figure everything out on the fly.

For example, if we don’t have good treatments for a disease, at what point do we stop telling people to come into hospital, where there’s a particularly high risk of them infecting the most medically vulnerable people? And if borders are shut down, how will we get enough antibiotics or facemasks, when they’re almost all imported?

Separately, we need some way to stop bad actors from accessing the tools necessary to weaponise a viral disease, before they cost less than $1,000 and fit on a desk.

These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem.

In the episode Rob and Cassidy also talk about:

  • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential
  • The pros, and significant cons, of travel restrictions
  • Whether the same policies work for natural and anthropogenic pandemics
  • Where we stand with nCoV as of today.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Transcriptions: Zakee Ulhaq.

Continue reading →

#69 – Jeff Ding on China, its AI dream, and what we get wrong about both

The State Council of China’s 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard?

In his paper ‘Deciphering China’s AI Dream’ today’s guest, PhD student Jeff Ding, outlines why he believes none of these claims are true.

He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development?

Jeff emphasises that China’s AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, it was just another step forward in a long trajectory of increasing focus on science and technology. It’s connected with a plan to develop an ‘Internet of Things’, and linked to a history of strategic planning for technology in areas like aerospace and biotechnology.

And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse.

What are the different levers that China is pulling to try to spur AI development?

Here, Jeff wanted to challenge the myth that China’s AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government.

Are China’s AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China’s progress in AI.

Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China’s AI capabilities have surpassed those of the US, or that it is already the world’s leading AI power.

Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we’d need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016.

Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.

He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.

In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover:

  • The best analogies for thinking about the growing influence of AI
  • How do prominent Chinese figures think about AI?
  • Cultural cliches in the West and China
  • Coordination with China on AI
  • Private companies vs. government research
  • How are things going to play out with ‘compute’?
  • China’s social credit system
  • The relationship between China and other countries beyond AI
  • Suggestions for people who want to become professional China specialists
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Bonus episode: What we do and don’t know about the 2019-nCoV coronavirus

UPDATE: Please also see our COVID-19 landing page for many more up-to-date articles about the pandemic.


Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, just recorded a discussion about the 2019-nCoV virus.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

In the 1h15m conversation we cover:

  • What is it?
  • How many people have it?
  • How contagious is it?
  • What fraction of people who contract it die?
  • How likely is it to spread out of control?
  • What’s the range of plausible fatalities worldwide?
  • How does it compare to other epidemics?
  • What don’t we know and why?
  • What actions should listeners take, if any?
  • How should the complexities of the above be communicated by public health professionals?

Below are some links we discuss in the episode, or otherwise think are informative:

Advice on how to avoid catching contagious diseases

Forecasts

General summaries of what’s going on

Our previous episodes about pandemic control

Thoughts on how to communicate risk to the public

Official updates

Published papers

General advice on disaster preparedness

Tweets mentioned

Continue reading →

#68 – Will MacAskill on the moral case against ever leaving the house, whether now is the hinge of history, and the culture of effective altruism

You’re given a box with a set of dice in it. If you roll an even number, a person’s life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, “Sure! Free money!” But most people will think it obvious that you should say no: the benefit is tiny, and in exchange you take on moral responsibility for whether other people live or die.

And yet, according to today’s return guest, philosopher Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you’ve probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you’ve changed the identity of a future person.
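The back-of-the-envelope arithmetic behind that ‘statistically speaking’ step can be sketched in a few lines of Python. This is a model imposed on the argument for illustration — conceptions spread uniformly over a life, counted as a Poisson process — not something spelled out in the episode itself:

```python
import math

DAYS_PER_LIFE = 30_000   # average lifespan in days, per the argument
CHILDREN_PER_LIFE = 2    # average number of conceptions per person
days_affected = 7_500    # person-days whose exact schedule you perturbed

# If conceptions are spread uniformly over a life, this is the
# rate of conception events per person-day.
rate = CHILDREN_PER_LIFE / DAYS_PER_LIFE

# Expected number of conception events falling in the affected days...
expected_events = rate * days_affected           # 0.5

# ...and, treating them as a Poisson process, the chance that at
# least one conception's timing was affected.
p_at_least_one = 1 - math.exp(-expected_events)  # roughly 0.39

print(f"Expected conception events affected: {expected_events}")
print(f"Chance of affecting at least one:    {p_at_least_one:.2f}")
```

So affecting 7,500 person-days yields an expected half a conception event; affecting tens of thousands of schedules, as the traffic example supposes, makes at least one all but certain.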

That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further conception events, and so on. Thanks to these ripple effects, after 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the ‘new’ people will cause car crashes that wouldn’t have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie (worth $10). Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

To see how it implies inaction as an ideal, recall the distinction between consequentialism and non-consequentialism. For consequentialists, who just add up the net consequences of everything, there’s no problem here. The benefits and costs perfectly cancel out, and you get to see a free movie.

But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.

So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.

It’s because of this that the non-consequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, then rather than leave the house they’re obligated to do whatever would count as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths.

Will’s best idea for resolving this strange implication? In this episode we discuss a few options:

  • give up on the benefit/harm asymmetry
  • find a definition of ‘action’ under which leaving the house counts as an inaction
  • accept a ‘Pareto principle’, where actions can’t be wrong so long as everyone affected would approve or be indifferent to them before the fact.

Will is most optimistic about the last, but as we discuss, this would bring people a lot closer to full consequentialism than is immediately apparent.

Finally, a different escape — conveniently for Will, given his work — is to dedicate your life to improving the long-term future, and thereby do enough good to offset the apparent harms you’ll do every time you go for a drive. In this episode Rob and Will also cover:

  • Are we, or are we not, living at the most influential time in history?
  • The culture of the effective altruism community
  • Will’s new lower estimate of the risk of human extinction over the next hundred years
  • Why does AI stand out a bit less for Will now as a particularly pivotal technology?
  • How he’s getting feedback while writing his book
  • The differences between Americans and Brits
  • Does the act/omission distinction make sense?
  • The case for strong longtermism, and longtermism for risk-averse altruists
  • Caring about making a difference yourself vs. caring about good things happening
  • Why feeling guilty about characteristics you were born with is crazy
  • And plenty more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#67 – David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.

Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.

The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:

Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

He asks us to consider the ‘Vulcans’ of Star Trek. These beings experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.

Does such a being lack moral status?

To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious, our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.

They discuss:

  • Why is there so little consensus among philosophers about so many key questions?
  • Can free will exist, even in a deterministic universe?
  • Might we be living in a simulation? Why is this worth talking about?
  • The hard problem of consciousness
  • Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
  • The story of ‘integrated information theory’
  • What philosophers think of eating meat
  • Should we worry about AI becoming conscious, and therefore worthy of moral concern?
  • Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
  • Could minds uploaded to a computer be conscious?
  • If you uploaded your mind, would that mind be ‘you’?
  • Why did Dave start thinking about the ‘singularity’?
  • Careers in academia
  • And whether a sense of humour is useful for research.


Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.


#66 – Peter Singer on being provocative, EA, how his moral views have changed, & rescuing children drowning in ponds

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation, ten years on, for protests to kick off.

According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention.

But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.

Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one?

Singer’s book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free ebook and audiobook, read by a range of celebrities. Get it here.

Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences. But as Singer tells it, he gives public relations considerations plenty of thought.

One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.

Another is the focus of the effective altruism (EA) community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement.

He suspects there’s a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.

Rob is joined by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview itself and a post-episode discussion. They only had an hour with Peter, but also cover:

  • What does he think are the most plausible alternatives to consequentialism?
  • Is it more humane to eat wild caught animals than farmed animals?
  • The re-release of The Life You Can Save
  • Whether it’s good to polarise people in favour and against your views
  • His active opposition to the Vietnam war and conscription
  • Should we make it easier for people to express unpopular opinions?
  • His most and least strategic career decisions
  • What does he think are the effective altruism community’s biggest mistakes?
  • Population ethics and arguments for and against prioritising the long-term future
  • What led to his changing his mind on significant questions in moral philosophy?
  • What is at the heart of making moral mistakes?
  • What should we do when we are morally uncertain?
  • And more.

In the post-episode discussion, Rob and Arden continue talking about:

  • The pros and cons of keeping EA as one big movement
  • Singer’s thoughts on immigration
  • And consequentialism with side constraints


Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
Illustration of Singer: Matthias Seifarth.


#65 – Ambassador Bonnie Jenkins on 8 years of combating WMD terrorism

Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security.

Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. And in 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS).

But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation.

In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, as well as by other countries.

What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next.

Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she’d visit the Food and Agriculture Organization to talk about why it needs to think more about biosecurity issues. No day was like the last.

Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda (GHSA) discussed at length in episode 27.

Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the National Commission on Terrorist Attacks Upon the United States (9/11 Commission). Bonnie was the lead staff member conducting research, interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11.

She’s also a retired Naval Reserve officer and received several awards for her service. Bonnie remembers the military fondly. She didn’t want that life 24 hours a day, which is why she never went full time. But she liked the rules, loved the camaraderie and remembers it as a time filled with laughter.

And as if that all weren’t curious enough, four years ago Bonnie decided to go vegan. We talk about her work so far as well as:

  • How listeners can start a career like hers
  • The history of Cooperative Threat Reduction work
  • Mistakes made by Mr Obama and Mr Trump
  • Biggest uncontrolled nuclear material threats today
  • Biggest security issues in the world today
  • The Biological Weapons Convention
  • Where does Bonnie disagree with her colleagues working on peace and security?
  • The implications for countries who give up WMDs
  • The fallout from a change in government
  • Networking, the value of attention, and being a vegan in DC
  • And the best 2020 Presidential candidates.


The 80,000 Hours Podcast is produced by Keiran Harris.
