What is expected value?

If someone offered you a free beer, but told you there’s a 1% chance it contains poison, you wouldn’t drink it. That’s because the badness of drinking poison far outweighs the goodness of getting a free beer, so even though you’re very unlikely to end up with poison, it’s not worth drinking.

We all make decisions about risk and uncertainty like this in our daily lives. And when trying to do good, we often face even greater uncertainty about the ultimate effects of our actions, especially when we consider all their long-term effects.

In practice, we don’t — and can’t — know for sure what the effects of our actions will be. The best we can do is to consider all of the good and bad things that could result from an action, and weigh them by how likely we think they are to actually happen. For example, you should treat the possibility of dying in a car crash as twice as concerning if it’s twice as likely.

We call this the ‘expected value’ of our actions, which is the technical term for the sum of all the good and bad potential consequences of an action, weighted by their probability. (You can read more about the technical definition here; it’s why in our definition of social impact, we say what matters is promoting ‘expected wellbeing,’ rather than just ‘wellbeing.’)

For example, if a disaster rescue effort has a 10% chance of saving 100 people, then its expected value is saving 10 lives.

If another effort has a 20% chance of saving 50 lives, then it would also save 10 lives in expectation, so we could say the two efforts have the same expected value.
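As a minimal sketch of that calculation (using the numbers above, and assuming for illustration that an effort which fails saves no one), here is how it looks in code:

```python
# Expected value = sum over outcomes of (probability × value)
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs covering all possibilities."""
    return sum(p * v for p, v in outcomes)

# Rescue effort A: 10% chance of saving 100 people (otherwise no one is saved)
effort_a = [(0.10, 100), (0.90, 0)]

# Rescue effort B: 20% chance of saving 50 people (otherwise no one is saved)
effort_b = [(0.20, 50), (0.80, 0)]

print(expected_value(effort_a))  # 10.0 lives saved in expectation
print(expected_value(effort_b))  # 10.0 lives saved in expectation
```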

Of course, it’s rare to be able to have that much precision in your estimate of either the total potential benefits or the likelihood that they occur. But you can sometimes make rough, informed guesses, and it can be better to work with a rough guess than with no idea at all.

Why use expected value?

If you repeatedly make decisions with the highest expected value, then, statistically, you should end up with more value over time than you would with any other approach.
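As a rough illustration (with made-up numbers), here is a simulation comparing someone who always takes a risky option worth 50 in expectation with someone who always takes a safe option worth 10. Over many repeated decisions, the higher-expected-value strategy reliably comes out ahead:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def total_value(strategy, n_decisions=10_000):
    """Total value accumulated by repeatedly taking the same kind of option."""
    total = 0
    for _ in range(n_decisions):
        if strategy == "high_ev":
            # Risky option: 50% chance of 100, 50% chance of 0 (expected value 50)
            total += 100 if random.random() < 0.5 else 0
        else:
            # Safe option: a guaranteed 10 (expected value 10)
            total += 10
    return total

print(total_value("high_ev"))  # roughly 500,000 (about 50 per decision)
print(total_value("safe"))     # exactly 100,000 (10 per decision)
```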

More broadly, all actions have uncertain outcomes. We need some way of weighing these outcomes, or else we wouldn’t be able to say which actions have good or bad consequences. Expected value is the simplest and most defensible way to do this. Deviating from expected value can lead to decisions that seem irrational.

The arguments for this get complex – we’d recommend this review by Joe Carlsmith for more detail.

That said, there are many limitations to how expected value should be applied, which we cover in the next four sections:

  • Expected value is only a theoretical ideal. In most situations, it’s not helpful to explicitly make quantitative estimates of value and probability.
  • Expected value is usually applied only to what’s impartially valuable, but many decisions involve other ethically relevant factors as well.
  • Expected value classically applies to values that you’re risk-neutral about. But if you’re maximising a resource — like money — that has diminishing returns, you should be risk-averse.
  • Expected value theory has some counterintuitive implications in extreme scenarios.

Expected value is a theoretical ideal, not a practical methodology

Even insofar as you want to maximise impact, there’s a difference between the theoretical ideal we’re aiming for and the methods we use to approach that ideal.

As a theoretical ideal, if our aim is to have a positive impact, and we’d prefer to have more impact rather than less, we should generally seek the actions with the highest expected value (holding other factors equal).

But this doesn’t mean that when making real-life decisions we should always make explicit estimates of the probabilities and values of different outcomes. In fact, there are good reasons to expect doing so to go wrong. For example, model error and missing factors can lead to estimates that are overoptimistic, if not wildly askew (e.g. see Goodhart’s law).

Making explicit estimates of expected value is sometimes useful as a method — such as when you’re comparing two global health interventions — but it’s often better to look for useful rules of thumb and robust arguments, or even use gut intuitions and snap decisions to save time.

For example, little of our work at 80,000 Hours involves explicit or quantitative estimates of the impact of different careers. Rather, we focus on finding good proxies for expected value, such as the ‘importance, neglectedness, and tractability’ framework for comparing problems, or the concepts of leverage and career capital.

What’s important is to find proxies that don’t neglect either the value of your action’s potential outcomes or how likely those outcomes are to happen.

In short, practical decision making should use whatever methods work. Expected value is a theoretical tool that can help us think more clearly about the consequences of our actions, whether applied directly, or by helping us to find better rules of thumb.

Is expected value all that matters?

Expected value is helpful for comparing the impact of different actions (i.e. how much ‘value’ they lead to), but we don’t think impact (narrowly construed) is the only thing you should consider when making decisions.

For instance, we don’t generally endorse doing harm in your career, even if you think the harmful action on balance has a positive expected value. You should also be careful to avoid actions that could unintentionally cause serious harm or set back your field.

More broadly, there are other kinds of moral considerations besides positive impact, such as character and rights, and you might have other important personal values, such as loyalty to your friends and family. Trying to maximise expected value to the exclusion of all else generally seems like a mistake.

Expected value and risk

Expected value in its classic form can only be applied when you are ‘risk neutral’ about the outcomes. By contrast, if you were risk averse, you might prefer getting a ‘sure thing’ rather than an uncertain bet even if its expected value were higher. While there are many cases in which it makes sense to be approximately risk neutral, there are many instances in which you shouldn’t.

For instance, we would argue that it’s better to take a 50% chance of saving 100 lives than a 100% chance of saving 10 lives, even though the first option involves more risk, because the expected value of the first bet is so much higher (50 lives saved in expectation, versus 10).

On the other hand, you shouldn’t be risk neutral about resources with diminishing returns, such as money, especially if you are taking large gambles.

To see why, consider that it’s much harder to donate $10 billion effectively than $1 million, because you’ll start to run out of opportunities. This means that if someone already has $10 billion, doubling their money is much less useful, because it’ll be so hard to spend the extra.

More generally, you should only be close to risk-neutral about resources when you’re a small actor relative to others working on your cause, and the outcome isn’t correlated with what happens to those other actors. We don’t endorse 100% risk neutrality, because of the many reasons for moderation (such as moral uncertainty) that we summarise in a separate post. (Technical aside: we’d generally guess that for a cause as a whole, money has logarithmic returns, which implies a significant degree of risk aversion at that level.)
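As a rough sketch of that technical aside (the dollar amounts here are hypothetical), logarithmic returns mean a gamble can be no better than a sure thing with far fewer expected dollars:

```python
import math

# Simplifying assumption: the value a cause gets from its total budget grows
# logarithmically with money, i.e. each doubling of the budget adds the same value.
def log_value(dollars):
    return math.log10(dollars)

# Option 1: a guaranteed $1 billion for the cause
sure_thing = log_value(1e9)

# Option 2: a 50/50 gamble between $100 million and $10 billion
# (expected dollars: $5.05 billion, about five times more than option 1)
gamble = 0.5 * log_value(1e8) + 0.5 * log_value(1e10)

print(sure_thing)  # 9.0
print(gamble)      # 9.0 -- no better than the sure thing, despite far more expected money
```

With diminishing returns, the downside of ending up with much less outweighs the upside of ending up with much more, which is exactly what it means to be risk averse about the resource.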

This said, for individuals planning their career, we think there are good arguments to take on more risk than normal, provided you are careful about downsides and have a personal safety net. We discuss this in a separate article.

Objections to using expected value even in principle

It’s also debated whether, even in theory, the expected value approach is always the ideal way to assess impartial outcomes for values without diminishing returns.

These debates often focus on unusual circumstances, such as when dealing with tiny probabilities of extreme amounts of value, as in Pascal’s wager, or the St Petersburg paradox.

Or imagine you’re offered a bet with a 51% chance of doubling the future value of the world and a 49% chance of ending the world. This bet arguably has positive expected value, which suggests you should take it.

However, most people (including us) would never take this bet, even in a theoretical case where we knew these probabilities were right.

Indeed, if you take the bet repeatedly, the probability of ending the world tends to 100%.
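To spell out the arithmetic (treating the world’s current future value as 1 unit): a single bet has an expected value of 0.51 × 2 + 0.49 × 0 = 1.02, slightly better than the 1.0 you keep by declining, but the probability that the world survives n repeated bets is 0.51^n, which rapidly approaches zero:

```python
# Expected value of one bet, with the world's current future value set to 1
single_bet_ev = 0.51 * 2 + 0.49 * 0
print(single_bet_ev)  # 1.02 -- slightly higher than the 1.0 from declining the bet

# Probability that the world survives n repeated bets
for n in [1, 10, 50, 100]:
    print(n, 0.51 ** n)
# 1    0.51
# 10   ~0.0012
# 50   ~2.4e-15
# 100  ~5.7e-30
```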

So, naively maximising expected value doesn’t always work, even as a theoretical ideal.

Unfortunately, alternatives to expected value have paradoxes of their own. So, once we’re dealing with cases like these, we’re facing unsolved problems in ethics and decision theory, and we certainly don’t have the answers.

These limits to our knowledge are one reason why it can be important not to commit too heavily to a single theoretical framework.

One saving grace is that these kinds of problematic cases rarely seem to arise in ordinary decisions, and for those more ordinary cases, our take is that expected value is still generally the best theoretical answer for how to weigh the value of different uncertain outcomes.

Unfortunately, however, some have argued that cases with this structure do sometimes come up when trying to make a difference. One example is the so-called problem of ‘fanaticism’ — named because there can be cases where small probabilities of large amounts of value seem to justify apparently fanatical actions.

We certainly don’t advocate that one should “bite the bullet” and behave fanatically. One reason for this is that we always try to consider multiple perspectives, including other moral perspectives, in our actions, which usually rules out fanatical actions, especially those that harm others. (Read more in our article on the definition of social impact.)

If you want to start learning more about the philosophical puzzle of fanaticism, we have a podcast episode focusing on it, and see this list of further reading.

