Even if we can’t lower catastrophic risks now, we should do something now so we can do more later
Does that fit with your schedule, Mr President?
A line of argument I frequently encounter is that it's too early to do anything today about 'global catastrophic risks' (also sometimes called 'existential risks').
For context, see our page on assessing the biggest problems in the world, our evaluation of opportunities to lower catastrophic risks, and our review of becoming an AI safety researcher.
This line of argument doesn't apply so much to preventing the use of nuclear weapons, mitigating climate change, or containing disease pandemics: the potential to act on these today is roughly the same as it will be in the future.
But what about new technologies that don't exist yet: artificial intelligence, synthetic biology, atomically precise manufacturing, and others we haven't even conceived of? There's a case that we should wait until they are closer to actually being developed. At that point we will have a much better idea of:
- what form those technologies will take, if they are developed at all;
- what can be done to make them less risky;
- who we need to talk to in order to make that happen.
Superficially, this argument seems very reasonable. Each hour of work probably does become more valuable the closer you are to a 'critical juncture in history.' In the 1940s and '50s, for example, there were clearly things that could have been done directly to change nuclear weapons policy and the direction of the Cold War.