Even if we can’t lower catastrophic risks now, we should do something now so we can do more later

A line of argument I frequently encounter is that it is too early to do anything about ‘global catastrophic risks’ today (these are also sometimes called ‘existential risks’).

For context, see our page on assessing the biggest problems in the world, our evaluation of opportunities to lower catastrophic risks, and our review of becoming an AI safety researcher.

This line of argument doesn’t apply so much to preventing the use of nuclear weapons, mitigating climate change, or containing disease pandemics – our potential to act on these today is about the same as it will be in the future.

But what about new technologies that don’t yet exist: artificial intelligence, synthetic biology, atomically precise manufacturing, and others we haven’t even thought of? There’s a case that we should wait until they are closer to actually being developed – at that point we will have a much better idea of:

  • what form those technologies will take, if they materialise at all;
  • what can be done to make them less risky;
  • who we need to talk to in order to make that happen.

Superficially, this argument seems very reasonable. Each hour of work probably does get more valuable the closer you are to a ‘critical juncture in history.’ Things clearly could have been done directly to change nuclear weapons policy and the direction of the Cold War in the 1940s and 50s. But in the 1920s? In the 19th century? Not so much. Better to keep your powder dry.

I nonetheless think this widespread line of argument is misguided and dangerous.

There is something that could have been done in the 1920s or even the 19th century, which I will call ‘capacity and institution building’. Here are several forms of investment that take decades:

  • training to be an expert in a security policy area;
  • building up the credibility, contacts and concrete proposals to be able to rapidly implement policy change when necessary;
  • training to be a top researcher on a new technology so you can better shape its direction;
  • improving international relations and weapons proliferation controls to lower the risk that new technologies will be weaponised;
  • improving the forecasting and decision-making processes within governments, intelligence agencies and militaries so that they are less likely to make major mistakes;
  • building well-functioning organisations and teams to do all of the above, with people who understand and are willing to fund them.

Let’s say we get a memo in 2050 that smarter-than-human artificial intelligence will be invented within the next five years. Unlike an undergraduate paper, none of the above can be thrown together at the last minute. The accumulation of career capital, expertise and smoothly running organisations takes decades. This is why experts such as Prof Nick Bostrom think capacity building is one of the top interventions for lowering catastrophic risks (see Chapter 14 of Superintelligence).

What does this mean for you?

  • If you think any of these technological breakthroughs might come within the next 80 years and be a big deal for humanity, then some of us should start putting in place the infrastructure to steer their trajectory today.
  • If you are 30 today, then building up reasonably flexible career capital to prepare for security problems that will be most severe in 20–30 years may be entirely reasonable.

Learn more about the appropriate timing of work on catastrophic risks from the Global Priorities Project.

Learn more about relevant talent gaps.