We want to be transparent about how we go about our research into career choice, so in the latest site update, we added a page listing the principles we use to guide our research. The full page is here. I’ve copied the text below.

What principles do you think we’ve missed? Which parts don’t you agree with?


What evidence do we consider?

Use of scientific literature

We place relatively high weight on what scientific literature says about a question, when applicable. If there is relevant scientific literature, we start our inquiry by doing a literature search.

Expert common sense

When we first encounter a question, our initial aim is normally to work out: (i) who are the relevant experts? (ii) what would they say about this question? We call what they would say ‘expert common sense’, and we think it often forms a good starting position (more). We try not to deviate from expert common sense unless we have an account of why it’s wrong.

Quantification

Which careers make the most difference can be unintuitive, since it’s difficult to grasp the scale and scope of different problems, which often differ by orders of magnitude. This makes it important to attempt to quantify and model key factors when possible. The process of quantification is also often valuable for learning more about an issue and for making our reasoning transparent to others. However, we recognise that for most questions we care about, quantified models contain huge (often unknown) uncertainties, and so should not be followed blindly. We weigh the results of quantified models according to their robustness, and check them against qualitative analysis and common sense.
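To make this concrete, here is a minimal sketch of the kind of back-of-the-envelope model this implies. The scenario, inputs, and ranges are all invented for illustration; they are not estimates from our research. The point is only that inputs which are uncertain across orders of magnitude are better treated as ranges than as point estimates:

```python
import math
import random

# A hypothetical back-of-the-envelope model of the impact of one career path.
# Every number below is made up for illustration; none is an actual estimate
# from our research. Each input spans orders of magnitude, so we sample
# ranges and report an interval rather than a single figure.

def sample_log_uniform(low, high):
    """Sample uniformly on a log scale, suited to order-of-magnitude ranges."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

def simulate_once():
    donations_per_year = sample_log_uniform(1e4, 1e5)  # dollars donated annually
    cost_per_outcome = sample_log_uniform(1e3, 1e5)    # dollars per unit of good done
    years_in_career = random.uniform(10, 40)
    return donations_per_year * years_in_career / cost_per_outcome

results = sorted(simulate_once() for _ in range(10_000))
print("median outcome:", results[len(results) // 2])
print("90% interval:", results[int(0.05 * len(results))],
      "to", results[int(0.95 * len(results))])
```

The width of the resulting interval is itself informative: it shows how far the model’s conclusion could move, which is why we don’t follow such models blindly.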

The experience of the people we coach

We’ve coached hundreds of people on career decisions and have a wider network of people we gather information from who are aligned with our mission. We place weight on their thoughts about the pros and cons of different areas.

How do we combine evidence?

We strive to be Bayesian

We attempt to explicitly state our prior view on an issue, and then update towards or away from it according to the strength of the evidence we find. See an example here. This is called ‘Bayesian reasoning’, and, although not always adopted, it seems to be regarded as best practice for decision making under high uncertainty by those who write about good decision-making processes.1
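As an illustration, here is a minimal sketch of a single Bayesian update. The prior and likelihoods are invented numbers rather than figures from our research; the sketch only shows the mechanics of shifting a prior in proportion to how strongly the evidence discriminates between the claim being true and false:

```python
# A minimal sketch of a Bayesian update on a yes/no question, with invented
# numbers purely for illustration. We start from a prior (e.g. expert common
# sense) and shift it according to how much more likely the evidence is if
# the claim is true than if it is false.

def bayesian_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Example: we start 30% confident in a claim, and the evidence we found is
# three times as likely if the claim is true as if it is false.
posterior = bayesian_update(prior=0.3, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(f"posterior: {posterior:.2f}")  # roughly 0.56
```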

We use ‘cluster thinking’

As opposed to relying on one or two strong considerations, we seek to evaluate the question from many angles, weighting each perspective according to its robustness and the importance of the consequences. We think this process provides more robust answers in the context of decision making under high uncertainty than alternatives (such as making a simple quantified model and going with the answer). This style of thinking has been supported by various groups and has several names, including ‘cluster thinking’, ‘model combination and adjustment’, ‘many weak arguments’, and ‘fox style’ thinking.
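As a toy illustration of how this differs from following a single model, here is a sketch that combines hypothetical estimates from several perspectives, weighted by how robust we judge each one to be. The perspectives, numbers, and weights are all made up for illustration:

```python
# A toy sketch of 'cluster thinking': several perspectives each give an
# estimate of the value of some option, and we combine them weighted by how
# robust we judge each perspective to be. All names, numbers, and weights
# below are hypothetical.

perspectives = [
    # (perspective, estimate, robustness weight)
    ("quantified cost-effectiveness model", 100.0, 0.2),
    ("expert common sense",                  20.0, 0.5),
    ("track record of similar projects",     10.0, 0.3),
]

weighted_sum = sum(estimate * weight for _, estimate, weight in perspectives)
total_weight = sum(weight for _, _, weight in perspectives)
combined = weighted_sum / total_weight

print(f"combined estimate: {combined:.1f}")
# The single strong consideration (the model's 100) is pulled toward the
# other, more robust perspectives rather than dominating the answer.
```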

We seek to make this process transparent by listing the main perspectives we’ve considered on a question. We also make regular use of structured qualitative evaluations, such as our framework.

We seek robustly good paths

Our aim is to make good decisions. Since the future is unpredictable and full of unknown unknowns, and we’re uncertain about many things, we seek actions that will turn out to be good under many future scenarios.

Avoiding bias

We’re very aware of the potential for bias in our work, which often relies on difficult judgement calls, and have surveyed the literature on biases in career decisions. To avoid bias, we aim to make our research highly transparent, so that bias is easier to spot. We also aim to state our initial position, so that readers can see the direction in which we’re most likely to be biased, and write about why we might be wrong.

Seeking feedback

We see all of our work as in progress, and we aim to improve it by continually seeking feedback.
We gather feedback through several channels:

  • All research is vetted within the team.
  • We send major pieces of research to external researchers and people with experience in the area for comment.
  • We aim to publish all of our substantial research publicly on our blog.
  • Blog posts are rated by a group of external raters.

In the future, we intend to carry out internal and external research evaluations.

We aim to make our substantial pieces of research easy to critique by:

  • Clearly explaining our reasoning and evidence. If a claim isn’t backed up by a link or citation, you can assume we don’t have further justification for it beyond what’s written.
  • Flagging judgement calls.
  • Giving an overview of our research process.
  • Stating our key uncertainties.

  1. For instance, Nate Silver writes in Chapter 8 of The Signal and the Noise:

    We may be undergoing a paradigm shift in the statistical methods that scientists are using. The critique I have made here about the flaws of Fisher’s statistical approach is neither novel nor radical: prominent scholars in fields ranging from clinical psychology to political science to ecology have made similar arguments for years. But so far there has been little fundamental change. Recently, however, some well-respected statisticians have begun to argue that frequentist statistics should no longer be taught to undergraduates. And some professions have considered banning Fisher’s hypothesis test from their journals. In fact, if you read what’s been written in the past ten years, it’s hard to find anything that doesn’t advocate a Bayesian approach.

    He cites: Jacob Cohen, “The Earth Is Round (p < .05),” American Psychologist 49, no. 12 (December 1994); Jeff Gill, “The Insignificance of Null Hypothesis Significance Testing,” Political Research Quarterly 52, no. 3 (September 1999); Anderson, Burnham & Thompson, “Null Hypothesis Testing: Problems, Prevalence, and an Alternative,” Journal of Wildlife Management 64, no. 4 (2000); William Briggs, “It’s Time to Stop Teaching Frequentism to Non-Statisticians,” arXiv.org (January 13, 2012); David Krantz, “The Null Hypothesis Testing Controversy in Psychology,” Journal of the American Statistical Association 94, no. 448 (December 1999).