We could feed all 8 billion people through a nuclear winter. Dr David Denkenberger is working to make it practical.

I was reading this paper called Fungi & Sustainability – the premise was that after an asteroid impact, humans would go extinct and the world would be ruled by mushrooms, which would grow just fine in the dark. I thought… why don’t we just eat the mushrooms and not go extinct?

Dr David Denkenberger

If a nuclear winter or asteroid impact blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he’s to be believed, nobody need starve at all.

Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by a sudden upwelling of ocean nutrients – and many more besides.

Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he’s out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, he says, the only thing that would prevent us from feeding the world is insufficient preparation.

Not content to just write a book pointing this out, David has gone on to found a growing non-profit – the Alliance to Feed the Earth in Disasters – to brace the world to feed everyone come what may. He expects that today only 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we’re likely to invest, David hopes a plan to inform people ahead of time would save 30% of the population, and a decent research and development scheme 80%.

According to David’s published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment.

These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an ‘everyone for themselves’ mentality, which then causes trade and civilization to unravel.

But some worry that David’s cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his non-profit’s work would actually matter in a post-apocalyptic world. In our extensive conversation, we cover:

  • How could the sun end up getting blocked, or agriculture otherwise be decimated?
  • What are all the ways we could eat nonetheless? What kind of life would this be?
  • Can these methods be scaled up fast?
  • What is his organisation, ALLFED, actually working on?
  • How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach?
  • How would more food affect the post-apocalyptic world? Won’t people figure it out at that point anyway?
  • Why not just leave guidebooks with this information in every city?
  • Would these preparations make nuclear war more likely?
  • What kind of people is ALLFED trying to hire?
  • What would ALLFED do with more money? What have been their biggest mistakes?
  • How he ended up doing this work, and his other engineering proposals for improving the world, including how to prevent a supervolcano explosion.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

What’s the best charity to donate to?

First published Nov 2016. Last updated Dec 2018.

If you want to make a difference, and are not already wedded to a cause, what’s the best charity to donate to? This is a brief summary of the most useful information we’ve been able to find.

First, we’ll sketch a process to use to compare options, then we’ll give our recommendations.

If you don’t have much time for research, our top recommendation is to give to the Effective Altruism Funds.

How to choose an effective charity
First, plan your research

  1. Do you trust someone else? If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations. You can skip ahead to see some recommendations from experts in charity evaluation. If you still want to do your own research, go to the next step.
  2. If you have under $10,000 to give, consider entering a donor lottery. It’s now possible to put $5,000 into a fund with other small donors, in exchange for a 5% chance of being able to choose where $100,000 from that fund gets donated. Why might you want to do this? In the case where you win, you can do a great deal of research into where’s best to give, to allocate that $100,000 as well as possible. Otherwise, you don’t have to do any research,
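As an aside, the arithmetic behind the lottery is easy to check. Here’s a quick sketch using the figures above (our own back-of-the-envelope, not taken from the article):

```python
# Donor lottery arithmetic, using the figures quoted above.
stake = 5_000        # what you put into the fund
pot = 100_000        # what the winner gets to allocate
p_win = stake / pot  # 0.05, i.e. the 5% chance mentioned

# Your expected dollars directed are unchanged by entering...
print(p_win * pot)   # 5000.0, the same as donating directly

# ...but the research effort is concentrated: you only do the hard
# work of choosing a charity in the 5% of worlds where you win.
```

In other words, the lottery doesn’t change your expected donation, only who ends up doing the research.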

Continue reading →

Find your highest impact role: 77 new vacancies in our December job board updates

Thanks to the sterling work of Maria Gutierrez, our job board continues to get big updates every two weeks, and now lists 169 vacancies, with 77 additional opportunities in the last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your time to prepare.

The job board is a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

A year’s worth of education for under a dollar and other ‘best buys’ in development, from the UK aid agency’s Chief Economist

…more teachers, more books, more inputs, like smaller class sizes – at least in the developing world – seem to have no impact, and that’s where most government money gets spent….

Dr Rachel Glennerster

If I told you it’s possible to deliver an extra year of ideal primary-level education for 30 cents, would you believe me? Hopefully not – the claim is absurd on its face.

But it may be true nonetheless. The very best education interventions are phenomenally cost-effective, but they’re not the kinds of things you’d expect, says this week’s guest, Dr Rachel Glennerster.

She’s Chief Economist at the UK’s foreign aid agency DFID, and used to run J-PAL, the world-famous anti-poverty research centre based at MIT’s Economics Department, where she studied the impact of a wide range of approaches to improving education, health, and political institutions. According to Glennerster:

“…when we looked at the cost effectiveness of education programs, there were a ton of zeros, and there were a ton of zeros on the things that we spend most of our money on. So more teachers, more books, more inputs, like smaller class sizes – at least in the developing world – seem to have no impact, and that’s where most government money gets spent.”

“But measurements for the top ones – the most cost effective programs – say they deliver 460 LAYS per £100 spent (US$130). LAYS are Learning-Adjusted Years of Schooling. Each one is the equivalent of the best possible year of education you can have – Singapore-level.”

“…the two programs that come out as spectacularly effective… well, the first is just rearranging kids in a class.”

“You have to test the kids, so that you can put the kids who are performing at grade two level in the grade two class, and the kids who are performing at grade four level in the grade four class, even if they’re different ages – and they learn so much better. So that’s why it’s so phenomenally cost effective because it really doesn’t cost anything.”1

“The other one is providing information. So sending information over the phone [for example about how much more people earn if they do well in school and graduate]. So these really small nudges. Now none of those nudges will individually transform any kid’s life, but they are so cheap that you get these fantastic returns on investment – and we do very little of that kind of thing.”

(See the links section below to learn more about these kinds of results.)
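To make the headline figure concrete, here’s the arithmetic implied by the quote (our own back-of-the-envelope check, not Glennerster’s):

```python
# Implied unit cost from the quote: 460 LAYS per 100 GBP (~130 USD).
lays = 460
cost_per_lay_gbp = 100 / lays  # ~0.22 GBP
cost_per_lay_usd = 130 / lays  # ~0.28 USD

print(f"{cost_per_lay_gbp:.2f} GBP / {cost_per_lay_usd:.2f} USD per LAY")
# Roughly 30 cents per extra Singapore-quality year of schooling,
# matching the 'absurd on its face' claim at the top of this post.
```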

In this episode, Dr Glennerster shares her decades of accumulated wisdom on which anti-poverty programs are overrated, which are neglected opportunities, and how we can know the difference, across a range of fields including health, empowering women and macroeconomic policy.

Regular listeners will be wondering – have we forgotten all about the lessons from episode 30 of the show with Dr Eva Vivalt? She threw several buckets of cold water on the hope that we could accurately measure the effectiveness of social programs at all.

According to Eva, her dataset of hundreds of randomised controlled trials indicates that social science findings don’t generalize well at all. The results of a trial at a school in Namibia tell us remarkably little about how a similar program will perform if delivered at another school in Namibia – let alone if it’s attempted in India instead.

Rachel offers a different and more optimistic interpretation of Eva’s findings.

Firstly, Rachel thinks it will often be possible to anticipate where studies will generalise and where they won’t. Studies are being lumped together that vary a great deal in i) how serious the problem is to start with, ii) how well the program is delivered, and iii) the details of the intervention itself. It’s no surprise that they have very variable results.

Rachel also points out that even if randomised trials can never accurately measure the effectiveness of every individual program, they can help us discover regularities of human behaviour that can inform everything we do. For instance, dozens of studies have shown that charging for preventative health measures like vaccinations will greatly reduce the number of people who take them up.

To learn more and figure out who you sympathise with, you’ll just have to listen to the episode.

Regardless, Vivalt and Glennerster agree that we should continue to run these kinds of studies, and today’s episode delves into the latest ideas in global health and development. We discuss:

  • The development of aid work over the past 3 decades
  • What’s the right balance of RCT work?
  • Do RCTs distract from broad economic growth and progress in these societies?
  • Overrated/underrated: charter cities, getting along with colleagues, cash transfers, cracking down on tax havens, micronutrient supplementation, pre-registration
  • The importance of using your judgement, experience, and priors
  • Things that reoccur in every culture
  • Do we produce too many programs where the quality of implementation matters?
  • Has the “empirical revolution” gone too far?
  • The increasing usage of Bayesian statistics
  • High impact gender equality interventions
  • Should we mostly focus on reforming macroeconomic policy in developing countries?
  • How important are markets for carbon?
  • What should we think about the impact the US and UK had in eastern Europe after the Cold War?

Footnote 1: You may understandably want to read the research on this! Unfortunately it is from an as-yet unpublished analysis by Noam Angrist at the World Bank. He worked on constructing a measure of Learning-Adjusted School Years for impact evaluations, which builds on Rachel’s 2013 paper in Science that attempted to determine the cost-effectiveness of a range of different health interventions. He says it should be available “sometime in the early new year” – we’ll link to it when it comes out.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

A simple checklist for overcoming life and career setbacks

At 80,000 Hours we focus a lot on developing ambitious plans to dramatically improve the world.

Something we haven’t written so much about is how to overcome the challenges – heartbreak, rejection, failure, illness, grief, conflict and more – that are sure to arise as we attempt to follow through on those plans, and which risk throwing us off course.

We don’t have particular expertise on this topic, but I wanted to share an approach that some friends and I have found useful, and which might help you as well.

When bad things happen in life, the thoughts we then have about them have a big impact on how much they harm us. Even where we can’t avoid the direct suffering inflicted by a problem, we can at least avoid hurting ourselves further, by ruminating about it and getting trapped in a cycle of negative thoughts.

In the case of the minor annoyances we face every day, maintaining our equanimity can almost entirely eliminate the harm they cause us. And even when we face serious adversity, ensuring we think about it the right way can limit the damage, and save us from falling into depression or another negative spiral. (Though this list isn’t really suitable for seriously traumatic events.)

To help myself with this, I’ve made a checklist of questions I try to work through when something unpleasant happens, in order to reframe the situation and get over it as quickly as possible.

Continue reading →

Computer science algorithms tackle fundamental and universal problems. Can they help us live better, or is that a false hope?

We tend to think of deciding whether to commit to a partner, or where to go out for dinner, as uniquely and innately human problems. The message of the book is simply: they are not. In fact they correspond – really precisely in some cases – to some of the fundamental problems of computer science.

Brian Christian

Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn’t get much done? Computer scientists have a term for this – thrashing – and it’s a common reason our computers freeze up. The solution, for people as well as laptops, is to ‘work dumber’: pick something at random and finish it, without wasting time thinking about the bigger picture.

Ever wonder why people reply more if you ask them for a meeting at 2pm on Tuesday, than if you offer to talk at whatever happens to be the most convenient time in the next month? The first requires a two-second check of the calendar; the latter implicitly asks them to solve a vexing optimisation problem.

What about estimating the probability of something you can’t model, and which has never happened before? Math has got your back: the likelihood is no higher than 1 in the number of times it hasn’t happened, plus one. So if 5 people have tried a new drug and survived, the chance of the next one dying is at most 1 in 6.
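The reasoning here echoes Laplace’s rule of succession. A minimal sketch of the standard rule (our own illustration, not code from the book) shows the estimate sits within the stated bound:

```python
# Laplace's rule of succession: after s occurrences in n trials,
# estimate the probability of occurrence next time as (s + 1) / (n + 2).
def laplace(occurrences: int, trials: int) -> float:
    return (occurrences + 1) / (trials + 2)

n = 5                      # 5 people tried the drug and survived
estimate = laplace(0, n)   # estimated chance the next person dies
bound = 1 / (n + 1)        # the 'at most 1 in 6' bound from the text

print(f"{estimate:.3f} <= {bound:.3f}")  # 0.143 <= 0.167, bound holds
```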

Bestselling author Brian Christian studied computer science, and in the book Algorithms to Live By he’s out to find the lessons it can offer for a better life. In addition to the above he looks into when to quit your job, when to marry, the best way to sell your house, how long to spend on a difficult decision, and how much randomness to inject into your life.

In each case computer science gives us a theoretically optimal solution. In this episode we think hard about whether its models match our reality.

One genre of problems Brian explores in his book is ‘optimal stopping problems’, the canonical example of which is ‘the secretary problem’. Imagine you’re hiring a secretary: you receive n applicants, they show up in a random order, and you interview them one after another. You either have to hire that person on the spot and dismiss everybody else, or send them away and lose the option to hire them in future.

It turns out most of life can be viewed this way – a series of unique opportunities you pass by that will never be available in exactly the same way again.

So how do you attempt to hire the very best candidate in the pool? There’s a risk that you stop before you see the best, and a risk that you set your standards too high and let the best candidate pass you by.

Mathematicians of the mid-twentieth century produced the elegant solution: spend exactly one over e, or approximately 37%, of your search just establishing a baseline without hiring anyone, no matter how promising they seem. Then immediately hire the next person who’s better than anyone you’ve seen so far.

It turns out that your odds of success in this scenario are also 37%. And the optimal strategy and the odds of success are identical regardless of the size of the pool. So as n goes to infinity you still want to follow this 37% rule, and you still have a 37% chance of success. Even if you interview a million people.
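Both 37% figures are easy to verify by simulation. Here’s a minimal Monte Carlo sketch (our own, not from the book):

```python
import math, random

def secretary_trial(n: int, explore_frac: float = 1 / math.e) -> bool:
    """One run of the 37% rule: True if it hires the best of n."""
    scores = random.sample(range(n * 10), n)  # distinct scores, random order
    cutoff = int(n * explore_frac)            # look phase: hire nobody
    baseline = max(scores[:cutoff], default=float("-inf"))
    for s in scores[cutoff:]:                 # leap phase
        if s > baseline:                      # first to beat the baseline...
            return s == max(scores)           # ...but is it the best overall?
    return False                              # the best was in the look phase

n, trials = 100, 20_000
wins = sum(secretary_trial(n) for _ in range(trials))
print(wins / trials)  # ~0.37, and roughly the same for any large n
```

(The same harness, with a recall rule added, can be used to check the 61% variant described below.)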

But if you have the option to go back, say by apologising to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%.

Today’s episode focuses on Brian’s book-length exploration of how insights from computer algorithms can and can’t be applied to our everyday lives. We cover:

  • Is it really important that people know these different models and try to apply them?
  • What’s it like being a human confederate in the Turing test competition? What can you do to seem incredibly human?
  • Is trying to detect fake social media accounts a losing battle?
  • The canonical explore/exploit problem in computer science: the multi-armed bandit
  • How can we characterize a computational model of what people are actually doing, and is there a rigorous way to analyse just how good their instincts actually are?
  • What’s the value of cardinal information above and beyond ordinal information?
  • What’s the optimal way to buy or sell a house?
  • Why is information economics so important?
  • The martyrdom of being a music critic
  • ‘Simulated annealing’, and the best practices in optimisation
  • What kind of decisions should people randomize more in life?
  • Is the world more static than it used to be?
  • How much time should we spend on prioritisation? When does the best solution require less precision?
  • How do you predict the duration of something when you don’t even know the scale of how long it’s going to last?
  • How many heists should you go on if you have a certain fixed probability of getting arrested and having all of your assets seized?
  • Are pro and con lists valuable?
  • Computational kindness, and the best way to schedule meetings
  • How should we approach a world of immense political polarisation?
  • How would this conversation have changed if there wasn’t an audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions

After pushing the idea of ‘talent gaps’ in 2015, we’ve noticed increasing confusion about the term.

This is partly our fault. So, here’s a quick list of common misconceptions about talent gaps and how they can be fixed. This is all pretty rough and we’re still refining our own views, but we hope this might start to clarify this issue, while we work on better explaining the idea in our key content.

1. Problem areas are constrained by specific skills, not ‘talent’

Problem areas are rarely generically ‘talent constrained’. They’re instead constrained by specific skills and abilities. It’s nearly always clearer to talk about the specific needs of the field, ideally down to the level of specific profiles of people, rather than talent and funding in general.

For instance, work to positively shape the development of AI is highly constrained by the following:

  • ML researchers, especially those able to do field-defining work, who are interested in and understand AI safety, the alignment problem, and other issues relevant to the long-term development of AI.
  • People skilled in operations, especially those able to run non-profits with under 50 people or academic institutes, and who are interested in and understand issues related to the long-term development of AI.
  • Strategy and policy researchers able to do disentanglement research in pre-paradigmatic fields.
  • People with the policy expertise and career capital to work in influential government positions who are also knowledgeable about and dedicated to the issue.

Continue reading →

PhD or programming? Fast paths into aligning AI as a machine learning engineer, according to ML engineers Catherine Olsson & Daniel Ziegler

If you are a talented software engineer, the state of the questions right now is that some of them are just ready to throw engineers on. And so if you haven’t just tried applying to the position that you want, just try. Just see. You might actually be ready for it.

Catherine Olsson

After dropping out of his ML PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to help shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.

He decided to apply to OpenAI, spent 6 weeks preparing for the interview, and actually landed the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.

On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and left her computational neuroscience PhD to become a research engineer at Google Brain. They share this piece of advice for those interested in this career path: just dive in. If you’re trying to get good at something, just start doing that thing, and figure out that way what’s necessary to be able to do it well.

To go with this episode, Catherine has even written a simple step-by-step guide to help others copy her and Daniel’s success.

Please let us know how we’ve helped you – take 5 minutes to fill out our 2018 annual impact survey. This is one of the best quick things you can do to support our work.

Daniel thinks the key for him was nailing the job interview.

OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he’d be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working.
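To give a flavour of the kind of algorithm those papers build on, here’s a toy policy-gradient loop on a two-armed bandit (our own illustrative sketch in plain NumPy, vastly simpler than the papers Daniel actually reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])  # arm 1 pays more on average
logits = np.zeros(2)               # policy parameters
lr, baseline = 0.1, 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(5_000):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)               # sample from the policy
    reward = rng.normal(true_means[action], 0.1)  # noisy payoff
    baseline += 0.01 * (reward - baseline)        # running-average baseline
    grad = -probs                                 # d log pi(action) / d logits
    grad[action] += 1.0                           # ... = onehot(action) - probs
    logits += lr * (reward - baseline) * grad     # REINFORCE update

print(softmax(logits))  # should heavily favour the better arm
```

The real papers layer hundreds of such details on top of this basic idea, and getting them all working is exactly the debugging practice Daniel describes.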

Daniel emphasizes that the most important thing was to practice exactly those things that he knew he needed to be able to do. He also received an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.

Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they’re right, it could greatly increase our ability to quickly get new people into ML roles in which they can make a difference.

Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.

Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:

  • What is the field of AI safety? How could your projects contribute?
  • What are OpenAI and Google Brain doing?
  • Why would one decide to work on AI?
  • The pros and cons of ML PhDs
  • Do you learn more on the job, or while doing a PhD?
  • Why did Daniel think OpenAI had the best approach? What did that mean?
  • Controversial issues within ML
  • What are some of the problems that are ready for software engineers?
  • What’s required to be a good ML engineer? Is replicating papers a good way of determining suitability?
  • What fraction of software developers could make similar transitions?
  • How in-demand are research engineers?
  • The development of Dota 2 bots
  • What’s the organisational structure of ML groups? Are there similarities to an academic lab?
  • The fluidity of roles in ML
  • Do research scientists have more influence on the vision of an org?
  • What’s the value of working in orgs not specifically focused on safety?
  • Has learning more made you more or less worried about the future?
  • The value of AI policy work
  • Advice for people considering 23andMe

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

ML engineering for AI safety & robustness: a Google Brain engineer’s guide to entering the field

Note that this guide was written in November 2018 to complement an in-depth conversation on the 80,000 Hours Podcast with Catherine Olsson and Daniel Ziegler on how to transition from computer science and software engineering in general into ML engineering, with a focus on alignment and safety. If you like this guide, we’d strongly encourage you to check out the podcast episode where we discuss some of the instructions here, and other relevant advice.

Technical AI safety is a multifaceted area of research, with many sub-questions in areas such as reward learning, robustness, and interpretability. These will all need to be answered in order to make sure AI development will go well for humanity as systems become more and more powerful.

Not all of these questions are best tackled with abstract mathematics research; some can be approached with concrete coding experiments and machine learning (ML) prototypes. As a result, some AI safety research teams are looking to hire a growing number of Software Engineers and ML Research Engineers.

Additionally, some research teams that may not think of themselves as focussed on ‘AI Safety’ per se nonetheless work on related problems like verification of neural nets or learning from human feedback, and are often hiring engineers.

What are the necessary qualifications for these positions?

Software Engineering: Some engineering roles on AI safety teams do not require ML experience. You might already be prepared to apply to these positions if you have the following qualifications:

  • BSc/BEng degree in computer science or another technical field (or comparable experience)
  • Strong knowledge of software engineering (as a benchmark: could pass a Google software engineering interview)
  • Interest in working on AI safety
  • (usually) Willingness to move to London or the San Francisco Bay Area

If you’re a software engineer with an interest in these roles,

Continue reading →

Second October job board update

Our job board now lists 142 vacancies, with 38 additional opportunities since the last update 3 weeks ago.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your preparation time.

The job board remains a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

New article: Have a particular strength? Already an expert in a field? Here are the socially impactful careers 80,000 Hours suggests you consider first.

We’ve published a new article that summarises our advice based on your strengths and links you to the most relevant articles for you to read:

This list is preliminary. We wanted to publish our existing thoughts on what to do with each skill, but can easily see ourselves changing our minds over the coming years.

You can read about our general process and what career paths we recommend in our full article.

Sometimes, however, it’s possible to give more specific advice about what options to consider to people who already have pre-existing experience or qualifications, or are unusually good at a certain type of work.

In this article, we provide a list of skills, and for each one give a list of socially impactful options that people who are unusually good in that area should most often consider.

We start with three “strengths” (quantitative, verbal & social, and visual). Then we go on to give advice for people with existing experience in fifteen specific fields.

Bear in mind it’s often possible to completely change field: we’ve seen people switch from philosophy to software engineering, and from architecture to economics. Nonetheless, these are good starting points.

The skill types also overlap, and you probably have several of them. The aim is just to give you some tips on narrowing down your options more quickly.

Continue reading →

Philosophy Prof Hilary Greaves on moral cluelessness, population ethics, probability within a multiverse, & harnessing the brainpower of academia to tackle the most important research questions

You might think, OK, I know that the immediate effects of funding anti-malarial bed nets are positive – I know that I’m going to save lives. But I also know that there are going to be further downstream effects and side-effects of my intervention. For example, effects on the size of future populations. It’s notoriously unclear how to think about the value of future population size, whether it’ll be a good thing to increase population in the short term, or whether that would in the end be a bad thing. There are lots of uncertainties here.

Hilary Greaves

The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush past the people now waiting, or just accept this as a dollar you’re never getting back? According to philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute, which is hiring now – this simple decision will completely change the long-term future by altering the identities of almost all future generations.

How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day — including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child.

By causing a child to be conceived a few fractions of a second earlier or later, you change the sperm that fertilizes their egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child – who didn’t exist because of what you did – would have done if you had decided not to worry about it.

As that child’s actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both!

Some find this concerning. The actual long-term effects of your decisions are so unpredictable that it looks like you’re totally clueless about what’s going to lead to the best outcomes. It might lead to decision paralysis — you won’t be able to take any action at all.

Prof Greaves doesn’t share this concern for most real life decisions. If there’s no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less cancelled out by the equally plausible possibility that you might make things worse in equally unpredictable ways.

But, if instead we’re talking about a decision that involves highly structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse — for example if we increase economic growth — Prof Greaves says that we don’t get to just ignore the unforeseeable effects.

When there are complex arguments on both sides, it’s unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers.

So, what do we do?

Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:

  • What’s the long-term vision of the Global Priorities Institute?
  • How controversial is the multiverse interpretation of quantum physics?
  • What’s the best argument against academics just doing whatever they’re interested in?
  • How strong is the case for long-termism? What are the best opposing arguments?
  • Are economists getting convinced by philosophers on discount rates?
  • Given moral uncertainty, how should population ethics affect our real life decisions?
  • How should we think about archetypal decision theory problems?
  • The value of exploratory vs. basic research
  • Person-affecting views of population ethics, fragile identities of future generations, and the non-identity problem
  • Is Derek Parfit’s repugnant conclusion really repugnant? What’s the best vision of a life barely worth living?
  • What are the consequences of cluelessness for those who base their donation advice on GiveWell-style recommendations?
  • How could reducing global catastrophic risk be a good cause for risk-averse people?
  • What’s the core difficulty in forming proper credences?
  • The value of subjecting EA ideas to academic scrutiny
  • The influence of academia in society
  • The merits of interdisciplinary work
  • The case for why operations is so important in academia
  • The trade-off between working on important problems and advancing your career

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

New article: Ways people trying to do good accidentally make things worse, and how to avoid them

We’ve published a new article about how to avoid accidentally causing harm through your career:

“We encourage people to work on problems that are neglected by others and large in scale. Unfortunately those are precisely the problems where people can do the most damage if their approach isn’t carefully thought through.

If a problem is very important, then setting back the cause is very bad. If a problem is so neglected that you’re among the first focused on it, then you’ll have a disproportionate influence on the field’s reputation, how likely others are to enter it, and many early decisions that could have path-dependent effects on the field’s long-term success.

We don’t particularly enjoy writing about this admittedly demotivating topic. Ironically, we expect that cautious people – the folks who least need this advice – will be the ones most likely to take it to heart.

Nonetheless we think cataloguing these risks is important if we’re going to be serious about having an impact in important but ‘fragile’ fields like reducing extinction risk.

In this article, we’ll list six ways people can unintentionally set back their cause. You may already be aware of most of these risks, but we often see people neglect one or two of them when new to a high stakes area – including us when we were starting 80,000 Hours.”

Read the full article…

Continue reading →

Economics Prof Tyler Cowen says our overwhelming priorities should be maximising economic growth and making civilization more stable. Is he right?

We can already see that three key questions should be elevated in their political and philosophical importance. Namely: number one, what can we do to boost the rate of economic growth? Number two, what can we do to make civilization more stable? And number three, how should we deal with environmental problems?

Tyler Cowen

I’ve probably spent more time reading Tyler Cowen – Professor of Economics at George Mason University – than any other author. Indeed it’s his incredibly popular blog Marginal Revolution that prompted me to study economics in the first place. Having spent thousands of hours absorbing Tyler’s work, it was a pleasure to be able to question him about his latest book and personal manifesto: Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.

Tyler makes the case that, despite what you may have heard, we can make rational judgments about what is best for society as a whole. He argues:

  1. Our top moral priority should be preserving and improving humanity’s long-term future
  2. The way to do that is to maximise the rate of sustainable economic growth
  3. We should respect human rights and follow general principles while doing so.

We discuss why Tyler believes all these things, and I push back where I disagree. In particular: is higher economic growth actually an effective way to safeguard humanity’s future, or should our focus really be elsewhere?

In the process we touch on many of moral philosophy’s most pressing questions: Should we discount the future? How should we aggregate welfare across people? Should we follow rules or evaluate every situation individually? How should we deal with the massive uncertainty about the effects of our actions? And should we trust common sense morality or follow structured theories?

After covering the book, the conversation ranges far and wide. Will we leave the galaxy, and is it a tragedy if we don’t? Is a multi-polar world less stable? Will humanity ever help wild animals? Why do we both agree that Kant and Rawls are overrated?

Today’s interview is released on both the 80,000 Hours Podcast and Tyler’s own show: Conversations with Tyler.

Tyler may have had more influence on me than any other writer, but this conversation is richer for our remaining disagreements. If the above isn’t enough to tempt you to listen, we also look at:

  • Why couldn’t future technology make human life a hundred or a thousand times better than it is for people today?
  • Why focus on increasing the rate of economic growth rather than making sure that it doesn’t go to zero?
  • Why shouldn’t we dedicate substantial time to the successful introduction of genetic engineering?
  • Why should we completely abstain from alcohol and make it a social norm?
  • Why is Tyler so pessimistic about space? Is it likely that humans will go extinct before we manage to escape the galaxy?
  • Is improving coordination and international cooperation a major priority?
  • Why does Tyler think institutions are keeping up with technology?
  • Given that our actions seem to have very large and morally significant effects in the long run, are our moral obligations very onerous?
  • Can art be intrinsically valuable?
  • What does Tyler think Derek Parfit was most wrong about, and what was he most right about that’s unappreciated today?
  • How should we think about animal suffering?
  • Do self-aware entities have to be biological in some sense?
  • What’s the most likely way that the worldview presented in Stubborn Attachments could be fundamentally wrong?
  • During ‘underrated vs overrated’, should guests say ‘appropriately rated’ more often?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

What skills or experience are most needed within professional effective altruism in 2018? And which problems are most effective to work on? New survey of organisational leaders.

Update April 2019: We think that our use of the term ‘talent gaps’ in this post (and elsewhere) has caused some confusion. We’ve written a post clarifying what we meant by the term and addressing some misconceptions that our use of it may have caused. Most importantly, we now think it’s much more useful to talk about specific skills and abilities that are important constraints on particular problems rather than talking about ‘talent constraints’ in general terms. This page may be misleading if it’s not read in conjunction with our clarifications.

What are the most pressing needs in the effective altruism community right now? What problems are most effective to work on? Who should earn to give and who should do direct work? We surveyed managers at organisations in the community to find out their views. These results help to inform our recommendations about the highest impact career paths available.

Our key finding is that for the questions that we asked 12 months ago, the results have not changed very much. This gives us more confidence in our survey results from 2017.

We also asked some new questions, including about the monetary value placed on our priority paths, discount rates on talent and how current leaders first discovered and got involved in effective altruism.

Below is a summary of the key figures, some caveats about the data’s limitations,

Continue reading →

New career review on becoming an academic researcher: Highlights on your chances of success, which fields have the highest GRE scores, & having impact outside research

We recently published a new career review on becoming an academic researcher by Jess Whittlestone. It covers issues such as:

  • Entry requirements and what it takes to excel.
  • What are your chances of success?
  • How to maximise your impact within academia.
  • How to assess your personal fit at each stage of your career.
  • Which fields are best to enter?
  • How to establish your career early on, and trade off impact against career advancement.
  • Review of the pros and cons of the path.

Check out our career review of academic research →

Here are some extracts from the full profile.

Research isn’t the only way academics can have a large impact

When we think of academic careers, research is what first comes to mind, but academics have many other pathways to impact which are less often considered. Academics can also influence public opinion, advise policy-makers, or manage teams of other researchers to help them be more productive.

If any of these routes might turn out to be a good fit for you, that makes the path even more attractive. We’ll sketch out some of these other paths:

1. Public outreach

Peter Singer’s career began in an ordinary enough way for a promising young academic, studying philosophy at Oxford University. But he soon started moving in a different direction from his peers,

Continue reading →

October job board update

Our job board now lists 128 vacancies, with 45 additional opportunities since last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your preparation time.

The job board remains a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

Dr Paul Christiano on how OpenAI is developing real solutions to the ‘AI alignment problem’, and his vision of how humanity will progressively hand over decision-making to AI systems

Paul Christiano is one of the smartest people I know, and this episode has one of the best explanations for why AI alignment matters and how we might solve it. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening – Paul works on AI himself and has an unusually well-thought-through view of how it will change the world. Even though I’m familiar with Paul’s writing, I felt I was learning a great deal and am now in a better position to make a difference to the world.

A few of the topics we cover are:

  • Why Paul expects AI to transform the world gradually rather than explosively, and what that would look like
  • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
  • Why AI systems will probably be granted legal and property rights
  • How an advanced AI that doesn’t share human goals could still have moral value
  • Why machine learning might take over science research from humans before it can do most other tasks
  • Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.

If an AI says, “I would like to design the particle accelerator this way because,” and then makes an inscrutable argument about physics, you’re faced with this tough choice. You can either sign off on that decision and see if it has good consequences, or you [say] “no, don’t do that ’cause I don’t understand it”. But then you’re going to be permanently foreclosing some large space of possible things your AI could do.

Paul Christiano

Here’s a situation we all regularly confront: you want to answer a difficult question, but aren’t quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who are smart enough to figure it out. The bad news is that they disagree.

If given plenty of time – and enough arguments, counterarguments and counter-counter-arguments between all the experts – should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?

In other words: does ‘debate’, in principle, lead to truth?

According to Paul Christiano – researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities – this question is of more than mere philosophical interest. That’s because ‘debate’ is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.

It’s a method OpenAI is actively trying to develop, because in the long term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.

If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful.

But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don’t know that’s the case.

According to Paul, it’s clear that if the judge is weak enough, there’s no reason that an honest debater would be at an advantage. But the hope is that there is some threshold of competence above which debates tend to converge on more accurate claims the longer they continue.

Most real world debates are set up under highly suboptimal conditions; judges usually don’t have a lot of time to think about how best to get to the truth, and often have bad incentives themselves. But for AI safety via debate, researchers are free to set things up in the way that gives them the best shot. And if we could understand how to construct systems that converge to truth, we would have a plausible way of training powerful AI systems to stay aligned with our goals.
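To make the hoped-for dynamic concrete, here’s a toy debate game (our own minimal sketch, far simpler than OpenAI’s actual experiments, with all rules invented for illustration). Two debaters argue over whether a hidden bit-string is mostly ones; the weak judge never sees the whole string, only the bits the debaters choose to reveal:

```python
import random

def debate(bits, rounds=6):
    """Toy debate. The honest debater claims the true majority bit; the
    liar claims the opposite. Each round, both must reveal a fresh bit
    position supporting their claim, and the judge rules against whoever
    first fails to produce one. Returns True if the honest debater wins."""
    majority = int(sum(bits) * 2 > len(bits))
    evidence = {"honest": [i for i, b in enumerate(bits) if b == majority],
                "liar":   [i for i, b in enumerate(bits) if b != majority]}
    for _ in range(rounds):
        for player in ("honest", "liar"):
            if not evidence[player]:
                return player == "liar"  # whoever runs out of evidence loses
            evidence[player].pop()       # reveal one supporting position
    return False                         # judge stopped too early: no verdict

random.seed(0)
bits = [random.random() < 0.7 for _ in range(11)]  # mostly ones, odd length
print(debate(bits, rounds=6))  # True: given enough rounds, honesty wins
print(debate(bits, rounds=1))  # False: an impatient 'weak' judge fails
```

The threshold of judge competence described above is the analogue of giving this judge enough rounds: past it, letting the debate run longer reliably favours the side telling the truth.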

This is our longest interview so far for good reason — we cover a fascinating range of topics:

  • What could people do to shield themselves financially from potentially losing their jobs to AI?
  • How important is it that the best AI safety team ends up in the company with the best ML team?
  • What might the world look like if several states or actors developed AI at the same time (aligned or otherwise)?
  • Would artificial general intelligence grow in capability quickly or slowly?
  • How likely is it that transformative AI is an issue worth worrying about?
  • What are the best arguments against being concerned?
  • What would cause people to take AI alignment more seriously?
  • Concrete ideas for making machine learning safer, such as iterated amplification.
  • What does it mean to say that a crow-like intelligence could be much better at science than humans?
  • What is ‘prosaic AI’?
  • How do Paul’s views differ from those of the Machine Intelligence Research Institute?
  • The importance of honesty for people and organisations
  • What are the most important ways that people in the effective altruism community are approaching AI issues incorrectly?
  • When would an ‘unaligned’ AI nonetheless be morally valuable?
  • What’s wrong with current sci-fi?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

List of 80,000 Hours content from the last 4 months, summary of what was most popular, and plans for future releases.

Cross-posted from the Effective Altruism Forum.

Here’s your regular reminder of everything 80,000 Hours has released over the last four months, since our last roundup. If you’d like to get these updates more regularly, you can join our newsletter.

1. High impact job board

We’ve done a major redesign of our job board, increasing the number of vacancies listed there from ~20 to over 100. It has doubled its traffic since the first half of the year, and is now one of the five most popular pages on the whole site.

We’ve released two in-depth articles that should be of special interest to the community:

  1. These are the world’s highest impact career paths according to our research. This is an update of our top recommended careers.
  2. Should you play to your comparative advantage when choosing your career? New theoretical content on the relevance of comparative advantage, and thoughts on how to practically evaluate it.

We released 9 podcast episodes totalling 19.5 hours, covering lots of key topics in EA in significant depth (in chronological order):

  1. How the audacity to fix things without asking permission can change the world, demonstrated by Tara Mac Aulay
  2. Tanya Singh on ending the operations management bottleneck in effective altruism
  3. Finding the best charity requires estimating the unknowable.

Continue reading →

Recent research we’ve published: Our top 10 careers for social impact; Congressional staffing; Comparative advantage; And can you guess which psychology experiments will replicate?

We recently published a number of new articles that you might have missed if you don’t follow us on social media (Facebook and Twitter) or our research newsletter.

Probably our most important release of this year is this article summarising many of our key findings since we started in 2011:

It outlines our new suggested process anyone can use to generate a short-list of high-impact career options given their personal situation.

It then describes the top five key categories of career we most often recommend, which should produce at least one good option for almost all graduates, and why we’re enthusiastic about them.

It goes on to list and explain the top 10 “priority paths” we want to draw attention to, because we think they can enable the right person to do a particularly large amount of good for the world.

Second, if you’re trying to figure out which job is the best fit for you, or how to coordinate with other people – for example the effective altruism community – you will want to read:

Third, if you’d like to influence government or work in politics, you should check out our comprehensive review of the pros and cons of being a Congressional Staffer and how to become one:

Continue reading →