What is expected value?
If someone offered you a free beer, but told you there’s a 1% chance it contains poison, you wouldn’t drink it. That’s because the badness of drinking poison far outweighs the goodness of getting a free beer, so even though you’re very unlikely to end up with poison, it’s not worth drinking.
We all make decisions about risk and uncertainty like this in our daily lives. And when trying to do good, we often face even greater uncertainty about the ultimate effects of our actions, especially when we consider all their long-term effects.
In practice, we don’t — and can’t — know for sure what the effects of our actions will be. The best we can do is to consider all of the good and bad things that could result from an action, and weigh them by how likely we think they are to actually happen. So you should think of the possibility of dying in a car crash as twice as concerning if it’s twice as likely.
We call this the ‘expected value’ of our actions, which is the technical term for the sum of all the good and bad potential consequences of an action, weighted by their probability. (You can read more about the technical definition here.)
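The weighted sum described above can be sketched in a few lines of code. The numbers below are purely illustrative (a made-up payoff for the beer and for being poisoned, plus the 1% probability from the example) — they are assumptions for the sketch, not figures from the article:

```python
# A minimal sketch of an expected value calculation.
# The payoffs (+1 for a free beer, -10,000 for being poisoned)
# are made-up illustrative numbers; only the 1% probability
# comes from the example above.

def expected_value(outcomes):
    """Sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# (probability, value) pairs for each action
drink = [(0.99, 1), (0.01, -10_000)]   # enjoy the beer, or be poisoned
decline = [(1.0, 0)]                   # nothing happens

print(expected_value(drink))    # about -99: the tiny chance of poison dominates
print(expected_value(decline))  # 0.0
```

Even though the bad outcome is 99 times less likely than the good one, its value is so negative that declining the beer has the higher expected value — which matches the intuition in the example.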
Expected value is a theoretical ideal, not a practical methodology
There’s a difference between the theoretical ideal we’re aiming for, and the methods we use to get to that ideal.
As a theoretical ideal, if our aim is to have a positive impact and we’d prefer to have more impact rather than less, we should seek the actions with the highest expected value. This guarantees that, as we act again and again, the effects of our efforts will be as good as they can be, even if sometimes we get unlucky. (This is why in our definition of social impact, we say what matters is promoting ‘expected wellbeing,’ rather than just ‘wellbeing.’)
This doesn’t mean that when making real decisions we should make explicit estimates of the probabilities and values of different outcomes. Making explicit estimates of expected value is sometimes useful as a method, but it’s often better to look for useful rules of thumb and robust arguments, or even use gut intuitions and snap decisions to save time. What’s important is that you don’t neglect either the value of potential outcomes of your action, or how likely they are to happen.
Most of our work at 80,000 Hours is not about making explicit or quantitative estimates of the impact of different careers, but rather finding good proxies for expected value, such as the ‘importance, neglectedness, and tractability’ framework for comparing problems, or the concepts of leverage and career capital. Finding these practical proxies is a large part of our key ideas series.
In short, practical decision making should use whatever methods work. Expected value theory describes the ideal we’re trying to approximate.
Objections to using expected value
Even as a theoretical ideal, whether the expected value approach is always the right way to assess uncertain outcomes is debated. These debates mainly focus on unusual circumstances, such as when dealing with tiny probabilities of extreme amounts of value, as in Pascal’s wager.
But some argue that decisions with this structure do sometimes come up when trying to make a difference. In moral philosophy, this is called the problem of ‘fanaticism’ — named because small probabilities of extreme amounts of value could seem to justify apparently crazy actions. To learn more about the debate around fanaticism, listen to our podcast with Christian Tarsney and explore the reading list here.
Expected value is also only about the consequences of our actions. There may be other morally relevant factors besides consequences. See our article on the definition of social impact for an introduction.
Doing the action with the highest expected value also implies being risk-neutral rather than risk-averse. Whether risk neutrality about impact makes sense is itself debated — we think it does.
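The difference between risk neutrality and risk aversion can be made concrete with a small sketch. The numbers and the square-root utility function below are standard illustrative assumptions, not anything specific to our view: a risk-neutral agent ranks options purely by expected value, while a risk-averse agent applies a concave utility function to each outcome before weighting by probability.

```python
# Illustrative sketch (assumed numbers): risk neutrality vs risk aversion.
import math

def expected_value(outcomes):
    """Probability-weighted sum of raw values."""
    return sum(p * v for p, v in outcomes)

def expected_utility(outcomes, u):
    """Probability-weighted sum of utility-transformed values."""
    return sum(p * u(v) for p, v in outcomes)

safe = [(1.0, 50)]               # a guaranteed 50
gamble = [(0.5, 100), (0.5, 0)]  # coin flip: 100 or nothing

# A risk-neutral agent is indifferent: both have expected value 50.
print(expected_value(safe), expected_value(gamble))   # 50.0 50.0

# A risk-averse agent (concave utility, here square root) prefers the
# safe option, because losses in value hurt more than gains help.
print(expected_utility(safe, math.sqrt))    # about 7.07
print(expected_utility(gamble, math.sqrt))  # 5.0
```

Maximising expected value means treating the two options above as equally good; a risk-averse decision rule would not.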
Further reading on expected value
- Expected utility in the Encyclopedia Britannica
- The Effective Altruism Forum’s Reading list on expected value
- Criteria of rightness vs. decision procedures, by Amanda Askell, examines the difference between finding what is right, and making decisions based on an answer to that question
- Expected value, by Holden Karnofsky, explains in theory why it’s important to think in terms of expected value
- Bayesian mindset, by Holden Karnofsky, looks at thinking in terms of probabilities and expected value for real-world decisions
- On expected utility, a series of blog posts by Joseph Carlsmith giving a very thorough and rigorous account of why to act according to expected value
- Other-centered ethics and Harsanyi’s Aggregation Theorem, by Holden Karnofsky, gives a more rigorous argument for using expected value calculations to make ethical decisions
This is a supporting article in our key ideas series.