If someone offered you a free beer, but told you there’s a 1% chance it contains poison, you wouldn’t drink it. That’s because the badness of drinking poison far outweighs the goodness of getting a free beer, so even though you’re very unlikely to end up with poison, it’s not worth drinking.
We all make decisions about risk and uncertainty like this in our daily lives, but when trying to do good we face even greater uncertainty about the ultimate effects of our actions, especially if we consider all their long-term effects.
In practice, we don’t know what the effects of our actions will be. The best we can do is to consider all of the good and bad things that could result from our actions, and weigh them by how likely we think they are to actually happen. So the possibility of dying in a car crash is regarded as twice as bad if it’s twice as likely.
The technical term for adding up all the good and bad potential consequences of an action, weighted by their probability, is the ‘expected value’ of the action.
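As a toy illustration (with made-up numbers, not anyone's real estimates), the calculation behind expected value is just a probability-weighted sum over possible outcomes — here applied to the free-beer example from above:

```python
def expected_value(outcomes):
    """Probability-weighted sum of outcome values.

    outcomes: list of (probability, value) pairs whose
    probabilities sum to 1.
    """
    return sum(p * v for p, v in outcomes)

# Hypothetical numbers: a 99% chance of a mildly good outcome (+1,
# enjoying a free beer) and a 1% chance of a catastrophic one (-1000).
beer = [(0.99, 1.0), (0.01, -1000.0)]
print(expected_value(beer))  # -9.01 — negative, so don't drink
```

Even though the bad outcome is unlikely, its value is so negative that it dominates the sum, which is why the expected value of drinking comes out negative.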
As a theoretical ideal, insofar as our aim is to make a difference, we seek the actions with the highest expected value, according to the values listed in our definition of social impact. This is why our definition says what matters is promoting ‘expected’ wellbeing, rather than just ‘wellbeing’.
This doesn’t mean that in practice we should attempt to make explicit estimates of the probabilities and values of different outcomes. This is sometimes helpful, but it’s often better to look for useful heuristics and robust arguments, or even use gut intuitions and snap decisions to save time.
Most of our work at 80,000 Hours is not about making quantitative estimates of the impact of different careers, but rather finding good proxies for expected value, such as the [‘importance, neglectedness & tractability’ framework] for comparing problems, or the concepts of leverage or career capital. Finding these proxies is a large part of the subject of the rest of the key ideas series.
Practical decision making should use whatever methods work. Expected value theory describes the ideal we’re trying to approximate.
Expected value is also only about the good or bad consequences of our actions. There may be other morally relevant factors besides consequences. See our article on the definition of social impact for an introduction.
Whether the expected value approach is always the ideal way to assess uncertain outcomes is also debated. These debates mainly focus on unusual circumstances, such as when dealing with tiny probabilities of extreme amounts of value, as in Pascal’s Wager. But some argue that decisions with this structure come up when trying to make a difference.
In moral philosophy, this is called the problem of ‘fanaticism’. If you’d like to learn more about this debate, listen to our podcast with Christian Tarsney and explore the reading here.