Research process and principles

Our articles are based on over 10 years of speaking to experts, reviewing the best research we can find (including work by academics and nonprofits), learning from a growing network of professionals and advisors, and speaking to advisees about individual career decisions.

Approach to research

Use of experts and academic literature

When we first encounter a question, we usually aim to work out: (i) who are the relevant experts, and (ii) what would they say about this question? We call what they would say ‘expert common sense’, and we think it often forms a good position to work from.

People who are experts on a topic can sometimes be systematically biased about some questions. For example, if the question is “how important is this domain?”, we should expect experts as a group to be biased toward thinking it’s more important, since usually only people who think a domain is important will choose to become experts in it. We try to correct for this by filtering expert views through our own judgement in cases like this, as well as by talking to accomplished generalists with some distance from the field.

Who counts as an expert on a question?

This is a hard question. We try to look at how much past experience someone has with the topic, how well their past work seems to hold up, and how closely they focus on the particular issues we are trying to understand. We also use our own judgement to assess how well their claims of expertise hold up, and we often rely on the judgement of others we trust who have their own related expertise.

We say more below about who we often seek advice from. Our podcast guests are also a rich source of expertise that informs our work.

What if there are no experts on a question?

We are interested in many issues for which there are not yet established fields and few, if any, experts. For example, the question “what is the chance that civilisation would recover from a worldwide catastrophe?” has received very little attention.

In these cases we do the best we can: we look at what research there is, while trying not to place too much confidence in the conclusions of preliminary work by people without much experience. We also reason about the questions ourselves and try to keep up to date with expert views as they develop.

Use of academic literature

We place relatively high weight on what the scientific and academic literature says about a question, when applicable. We often start an inquiry by reaching out to academic advisors in disciplines like economics, philosophy, psychology, machine learning, and biology, and asking what relevant literature there is (more on who we often reach out to below).

We don’t rely solely on academic literature, however. We often draw on less formal research, such as preprints and reports by nonprofit research organisations, especially when there isn’t much academic literature on a question.

How we think about quantification

Which careers make the most difference can be unintuitive, because it’s difficult to grasp the scale of different problems and the effectiveness of different interventions, both of which we think can vary by orders of magnitude. This makes it important to quantify and model key factors when possible, so that we can keep the true magnitude of these differences in mind.

The process of quantification is also often valuable in itself: it helps us learn more about an issue, reason clearly about it, and make our reasoning transparent to others.

However, for most questions we care about, quantified models contain huge (often unknown) uncertainties, and so should not be followed blindly. We weigh the results of quantified models according to how robust they are, and check them against qualitative analysis and common sense.
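To make this concrete, here is a minimal sketch of the kind of back-of-the-envelope model we have in mind. The interventions, numbers, and function below are all hypothetical, invented purely for illustration; they are not our actual estimates.

```python
# A toy back-of-the-envelope comparison of two hypothetical interventions.
# Every number here is made up for illustration only.

def expected_impact(people_affected, benefit_per_person, chance_of_success):
    """Expected impact = scale x per-person benefit x probability of success."""
    return people_affected * benefit_per_person * chance_of_success

# Hypothetical intervention A: helps many people a little, fairly reliably.
impact_a = expected_impact(people_affected=1_000_000,
                           benefit_per_person=0.001,
                           chance_of_success=0.5)   # = 500

# Hypothetical intervention B: helps fewer people far more, less reliably.
impact_b = expected_impact(people_affected=10_000,
                           benefit_per_person=50.0,
                           chance_of_success=0.2)   # = 100,000

print(f"A: {impact_a:,.0f}   B: {impact_b:,.0f}   ratio: {impact_b / impact_a:,.0f}x")
# ratio: 200x -- a gap that is easy to miss without writing the model down
```

Even a crude model like this forces the key assumptions into the open, where they can be checked and challenged.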

We strive to be Bayesian

We try to form prior guesses on an issue, and then update from there as evidence comes in. This is called ‘Bayesian reasoning’. Although it’s not uncontroversial (or always used), this style of reasoning is, as far as we know, regarded as best practice for coming to views under high uncertainty.
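For readers who want the mechanics, here is the textbook update rule written out as a short sketch; the claim and probabilities are invented for the example.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    p_evidence = (prior * p_evidence_if_true
                  + (1 - prior) * p_evidence_if_false)
    return prior * p_evidence_if_true / p_evidence

# Made-up example: we start 30% confident in a claim, then see evidence
# that is twice as likely if the claim is true (0.8) as if it is false (0.4).
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(f"{posterior:.2f}")  # 0.46 -- more confident than before, but far from certain
```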

Reducing bias

We’re very aware of the potential for bias in our work, which often relies on difficult judgement calls, and we have surveyed the literature on biases in career decisions.

To reduce bias and its effects, we aim to make our research highly transparent, so that bias is easier to spot. We also aim to state our initial position, so that readers can see the direction in which we’re most likely to be biased, and to write about why we might be wrong. See, for example, our FAQ on our problem prioritisation, where we explain how we think our list of the most pressing problems is most likely to be wrong.

We also try to approach questions from multiple angles rather than putting too much weight on any single one, weighting each perspective according to its robustness and the importance of the consequences. Though this way of reasoning is messier (how do you combine different verdicts on a question from different perspectives into a single decision?), we still think it is better for decision making under high uncertainty. This style of thinking has been supported by various groups and goes by several names, including ‘cluster thinking’, ‘model combination and adjustment’, ‘many weak arguments’, and ‘fox style’ thinking.
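In the simplest case, combining perspectives can look like a robustness-weighted average of verdicts. The perspectives, probabilities, and weights below are hypothetical, and in practice no fixed formula replaces the judgement call; the sketch just shows the basic idea that fragile perspectives get less weight than robust ones.

```python
# Hypothetical verdicts on the same yes/no question from three perspectives,
# each a probability plus a rough weight reflecting how robust we judge it to be.
perspectives = [
    ("explicit quantitative model", 0.90, 0.2),  # precise, but fragile assumptions
    ("expert common sense",         0.60, 0.5),
    ("historical base rates",       0.50, 0.3),
]

total_weight = sum(weight for _, _, weight in perspectives)
combined = sum(prob * weight for _, prob, weight in perspectives) / total_weight
print(f"combined estimate: {combined:.2f}")  # 0.63 here
```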

Seeking feedback

We see all of our work as being in progress, and strive to improve it by continually seeking feedback. For example:

  • Though most of our articles have a primary author, they are always reviewed by other members of the team before publication.
  • For major research, we send drafts to several external researchers and people with experience in the area for feedback.
  • We proactively gather feedback on our most central positions (in particular, our views on the most pressing global problems and the career paths with the highest potential for impact) by regularly surveying domain experts and generalist advisors who share our values.
  • We regularly solicit feedback on our calls with advisees (with their permission), both internally and from external subject-area experts.
  • We aim to articulate our ideas in a way that will make it easy for readers to know if they disagree, by:
    • Clearly explaining our reasoning and evidence. If you see a claim that isn’t backed up by a link or citation, you can assume there’s no further justification.
    • Flagging judgement calls and being clear about our confidence in different claims (e.g. when something is simply a best guess).
    • Stating our key uncertainties.

The experience of our advising team and the people we talk to

Our careers advisors speak to over a thousand people every year about their career decisions. To do this well, our advisors seek to stay up to date on developments in our top problem areas and the best careers advice, including from outside 80,000 Hours. Over time, talking to so many people also helps us learn what advice is most helpful for real career decisions. When writing articles, we often draw on the experience and knowledge of our advising team.

Many 80,000 Hours advisees are also impressively experienced in areas we care about. We find that they often have valuable thoughts about the pros and cons of different career paths and jobs, what they’re like from the inside, and what kinds of work and skills are most needed.

Whose views are reflected on the site?

Individual authors, editors, and other members of the team

Most articles have individual authors from our writing and research team, though they are reviewed and shaped by others from within and outside 80,000 Hours, especially editors and people with subject matter expertise.

There’s a diversity of opinion among our team about many of the issues we tackle, such as which global problems are most pressing. We think this makes sense, and is really useful: engaging with a variety of perspectives ensures claims get challenged and we don’t become intellectually lazy.

But it does mean that we don’t always come to consensus. In those cases, we either say nothing about the topic, present multiple views, or just go with the views of the author.

Generally, you should think of our articles as reflecting the views of individual authors, with lots of input from other members of the 80,000 Hours team.

For some important questions, we assign a point person to gather input from inside and outside 80,000 Hours and determine our institutional position. For example, we do this with our list of the world’s most pressing problems, our page on the most promising career paths, and some controversial topics, like whether to work at an AI lab. Ultimately, there is no formula for how to combine this input, so we make judgement calls. Even where we have an institutional position, views among our staff members continue to differ, and we actively encourage that disagreement. Final editorial calls on what goes on the website lie with our website director.

Finally, many of our articles are authored by outside experts. We still always review the articles ourselves to try to spot errors and ensure we buy the arguments being made by the author, but we defer to the author on the research (though we may update the article substantively later to keep it current).

Who are our external advisors, and whose research do we tend to draw on?

We have a large network of external advisors working in industry, nonprofits, government, and academia, across the issues we prioritise. Many of them volunteer their time to review our research because they support our mission.

Like us, the majority of these advisors consider themselves part of the effective altruism community, because they share a commitment to using evidence and reason to find the best ways to help others. Most also share our emphasis on helping future generations and reducing existential risks, because they also think that’s where we can have the most positive impact given the current state of the world.

We also tend to closely follow the research of people and groups who share this mindset. For instance, we often draw on the research of Open Philanthropy, a philanthropic and research foundation that is 80,000 Hours’ biggest funder.

This means most of our external advisors tend to share our basic worldview, which allows us to go deep on the implications of this perspective for career choice. That said, it could also be a source of bias. Although we do try to consider other perspectives in our work, and we do seek critical feedback, if it turns out that our worldview is deeply mistaken, it’d likely be hard for us to find out and course correct.

How sure are we about our ideas and arguments?

Ultimately, our research aims are very ambitious: we aim to have working answers to questions like “what are the most pressing problems in the world?” and “what are the very best career opportunities for addressing them?” These are huge and complex issues, and there is no doubt our current answers are wrong in many ways.

That said, we think that sharing the conclusions we come to, even if they are only best guesses, is useful, because it gives others a place to start from. We have been thinking hard about these questions for many years, and we hope that passing on the results of that thinking gives our readers a better jumping-off point for crafting their own views and making their own decisions.