(Or, How to be a high impact philosopher, part III)
In 1900 the mathematician David Hilbert published a list of 23 of the most important unsolved problems in mathematics. This list heavily influenced mathematical research over the 20th century: if you worked on one of Hilbert’s problems, then you were doing respectable mathematics.
There is no such list within moral philosophy. That’s a shame: not all problems discussed in ethics are equally important. Early graduate students often have no idea what to write their theses on, and so pick something they’ve already written on for coursework, or whatever happens to be ‘hot’ at the time. I don’t know for sure, but I imagine the same is true of many other academic disciplines.
What would the equivalent list look like for moral philosophy? Of course, it’s difficult to define ‘important’, but let’s say here that the important problems are those potentially soluble problems which, if solved and the solutions taken seriously, would make the greatest difference to the way the world is currently run. I’ve briefly discussed this idea with Nick Beckstead, Carl Shulman, and Nick Bostrom, and here’s a select list of what we came up with. For more explanation of why, see my previous two posts on high impact philosophy, here and here.
The Practical List
- What’s the optimal career choice? Earning to give, advocacy, research and innovation, or something more common-sensically virtuous?
- What are the highest leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens; luxury taxes?
- What are the highest value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?
- Given our best ethical theories (or best credence distribution in ethical theories), what’s the biggest problem we currently face?
The Theoretical List
- What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?
- Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?
- How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?
- How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?
- How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence?
- How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? Or should we prefer simpler moral theories?
- Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much?
- What sorts of entities have moral value? Humans, presumably. But what about non-human animals? Insects? The natural environment? Artificial intelligence?
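To make the expected-utility approach to moral uncertainty concrete, here is a toy formulation (an illustrative sketch only, assuming intertheoretic value comparisons can be made at all — which is itself one of the open questions above):

```latex
\mathrm{EV}(a) \;=\; \sum_{i} C(T_i)\, V_{T_i}(a)
```

where \(C(T_i)\) is one’s credence in moral theory \(T_i\) and \(V_{T_i}(a)\) is the value that theory assigns to action \(a\). On this picture, a theory held with low credence but assigning enormous stakes to some action can dominate the sum — which is precisely why the questions of whether to maximise expected value, and how to compare value across theories, matter so much.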
What additional items should be on these lists?