We can be uncertain about matters of fact, like whether it’ll rain tomorrow, but we can also be uncertain about moral claims, like how much to value the interests of future generations.

Over the last decade, there has been a growing body of work on how to act when you're uncertain about what's of value, and our co-founder, Will MacAskill, has written a book on the topic.

A common approach is to pick the view that seems most plausible to you and go with it. This has been called the ‘my favourite theory’ approach.

But this approach seems bad. Consider a situation like this:

You’re at a restaurant and can order either foie gras or vegetarian risotto. You think there’s a 55% chance that animal welfare has no moral significance, and a 45% chance that it does, which would mean it’s deeply wrong to eat the foie gras. Personally, you’d find either meal equally delicious.

The ‘my favourite theory’ approach would say you should act as if animal welfare doesn’t matter, and so it’s equally good to order either the foie gras or the risotto. But it seems clearly better to pick the risotto.
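To see why, here’s a minimal sketch of one simple way to weigh the options: scoring each option under each moral view and weighting by your credence in that view. This is just one possible approach, and the ‘choiceworthiness’ scores below are made-up, purely illustrative numbers. The point is only that the risotto is no worse on either view and much better on one of them.

```python
# Toy illustration of choosing under moral uncertainty (illustrative numbers only).
# Each moral view assigns a "choiceworthiness" score to each option; we weight
# those scores by our credence in each view.

credences = {
    "animal welfare doesn't matter": 0.55,
    "animal welfare matters": 0.45,
}

# Hypothetical scores: both meals are equally tasty, so the only difference
# is the moral penalty the second view assigns to ordering the foie gras.
choiceworthiness = {
    "foie gras": {"animal welfare doesn't matter": 0, "animal welfare matters": -100},
    "risotto":   {"animal welfare doesn't matter": 0, "animal welfare matters": 0},
}

for option, scores in choiceworthiness.items():
    expected = sum(credences[view] * score for view, score in scores.items())
    print(f"{option}: expected choiceworthiness = {expected}")

# Output:
# foie gras: expected choiceworthiness = -45.0
# risotto: expected choiceworthiness = 0.0
```

However you set the exact numbers, the risotto is at least as good on every view and strictly better on one, so it comes out ahead under any weighting of the two views.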

Rather than go with your favourite theory, we think a more plausible approach is to consider a range of perspectives, and take the actions that seem best on balance.

Exactly how to determine which actions seem best ‘on balance’ is highly debated, but we think one likely consequence is a type of moral caution – if one plausible perspective says an action is extremely wrong, we should probably not take the action, even if other perspectives say it’s permissible or even good.

Some effects this has on our advice:

  • Although we think wellbeing is most likely what matters, moral uncertainty gives us a reason to put some weight on other values having intrinsic importance.
  • It’s an additional reason to be very cautious about taking any actions that seem clearly wrong from a common sense perspective (e.g. harming others for the greater good).
  • It makes us more confident in longtermism. For example, even if there’s a reasonable chance the total view of population ethics isn’t correct, the stakes it assigns to the long-term future are so large that putting even some weight on it means you should often act as if longtermism is correct.
  • Even if you think consequentialism is the most plausible theory, you should still avoid actions that count as significant harms according to non-consequentialist perspectives (e.g. serious rights violations).
  • Conversely, non-consequentialists should put more weight on doing enormous amounts of good, as judged from a consequentialist perspective.

You can learn more about moral uncertainty by listening to this podcast episode with Will MacAskill.
