Moral uncertainty: how to act when you’re uncertain about what’s good
We can be uncertain about matters of fact, like whether it’ll rain tomorrow, but we can also be uncertain about moral claims, like how much to value the interests of future generations.
Over the last decade, philosophers have increasingly studied how to act when you're uncertain about what's of value, and our cofounder, Will MacAskill, has written a book on the topic.
A common approach is to pick the view that seems most plausible to you and simply act on it. This has been called the ‘my favourite theory’ approach.
But this approach seems bad. Consider a situation like this:
You’re at a restaurant and can order either foie gras or vegetarian risotto. You think there’s a 55% chance that animal welfare has no moral significance, and a 45% chance that it does, which would mean it’s deeply wrong to eat the foie gras. Personally, you’d find either meal equally delicious.
The favourite theory approach says you should act as if animal welfare doesn’t matter, and so it’s equally good to order either the foie gras or the risotto. But it seems clearly better to pick the risotto: it costs you nothing, and it avoids a 45% risk of doing something deeply wrong.
Rather than go with your favourite theory, we think a more plausible approach is to consider a range of perspectives, and take the actions that seem best on balance.
Exactly how to determine which actions seem best ‘on balance’ is highly debated, but we think one likely consequence is a kind of moral caution: if one plausible perspective says an action is extremely wrong, we should probably not take the action, even if other perspectives say it’s permissible or even good.
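One prominent proposal for making ‘best on balance’ precise is to maximise expected choiceworthiness: weight what each moral view says about an option by your credence in that view, then pick the option with the highest credence-weighted value. Here is a minimal sketch applying this to the restaurant example. The credences come from the example above, but the cardinal values are illustrative assumptions, since how to compare value across moral theories is itself contested.

```python
# A minimal sketch of maximising expected choiceworthiness.
# Credences are from the restaurant example; the cardinal values
# are illustrative assumptions.

credences = {
    "animal_welfare_matters": 0.45,
    "animal_welfare_irrelevant": 0.55,
}

# Choiceworthiness of each option under each view. Both meals are
# equally delicious (+1), but if animal welfare matters, ordering
# foie gras is deeply wrong (-100, an assumed magnitude).
choiceworthiness = {
    "foie gras": {"animal_welfare_matters": -100, "animal_welfare_irrelevant": 1},
    "risotto":   {"animal_welfare_matters": 1,    "animal_welfare_irrelevant": 1},
}

def expected_choiceworthiness(option: str) -> float:
    """Credence-weighted value of an option across moral views."""
    return sum(credences[view] * value
               for view, value in choiceworthiness[option].items())

for option in choiceworthiness:
    print(f"{option}: {expected_choiceworthiness(option):+.2f}")
# foie gras: -44.45
# risotto:   +1.00
# Risotto wins, even though your single most plausible theory
# (55% credence) says animal welfare doesn't matter.
```

Note that this sketch assumes the two views’ values can be placed on a common scale; critics of expected choiceworthiness press exactly this ‘intertheoretic comparison’ problem, which is one reason the question remains debated.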
Some effects this has on our advice:
- Even if you think wellbeing is what’s most likely of moral value, you should act as if other values have some intrinsic importance.
- It’s an additional reason to be very cautious about taking any actions that seem clearly wrong from a common-sense perspective (such as harming others for the greater good).
- It makes us more confident in longtermism. For example, even if there’s a reasonable chance the total view of population ethics isn’t correct, merely putting some weight on that perspective means you should act as if longtermism is correct (see the sketch after this list).
- Even if you think consequentialism is the most plausible theory, you should still avoid doing significant harms according to non-consequentialist perspectives (e.g. serious rights violations).
- Conversely, even those who find non-consequentialist theories most plausible have reason to put more weight on doing enormous amounts of good, as judged from a consequentialist perspective.
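The population-ethics point above relies on the same expected-value logic as the restaurant example: a view that assigns vastly more value to an action can dominate the calculation even at modest credence. A toy version, with all numbers purely illustrative:

```python
# Toy version of the dominance argument. All numbers are
# illustrative assumptions, not estimates.
credence_total_view = 0.2        # modest credence in the total view
value_if_total_view = 1_000_000  # total view: safeguarding the future is hugely valuable
value_on_other_views = 1.0       # other views: about as good as an ordinary action

expected_value = (credence_total_view * value_if_total_view
                  + (1 - credence_total_view) * value_on_other_views)
print(expected_value)  # 200000.8: the total view dominates at just 20% credence
```

This style of reasoning is itself debated, since letting huge stakes dominate at low credence raises worries about ‘fanaticism’, a theme that comes up in the podcast episodes below.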
Learn more
- Our podcast with Will MacAskill — Our descendants will probably see us as moral monsters. What should we do about that? — summarises the debate around moral uncertainty and Will’s views of the implications.
- Practical ethics given moral uncertainty — a short blog post by Will introducing the issue
- Podcast: Toby Ord on the perils of maximising the good that you do
- Podcast: Andreas Mogensen on whether effective altruism is just for consequentialists
- An entry in the Stanford Encyclopedia of Philosophy: Moral Decision-Making Under Uncertainty