Is now the time to do something about AI?


The Open Philanthropy Project recently released a review of research on when human-level artificial intelligence will be achieved. The main conclusion of the report was that we’re really uncertain. But the author (Luke Muehlhauser, an expert in the area) also gave his 70% confidence interval: 10–120 years.

That’s a lot of uncertainty.

And that’s really worrying. This confidence interval suggests the author puts significant probability on human-level artificial intelligence (HLAI) arriving within 20 years. A survey of the top 100 most-cited AI scientists also gave a 10% chance that HLAI would be created within ten years (this was the median estimate; the mean was a 10% probability within the next 20 years).

This is like being told there’s a 10% chance aliens will arrive on the earth within the next 20 years.

Making sure this transition goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom’s book, Superintelligence, and this popular introduction by Wait But Why).

We issued a note about AI risk just over a year ago when Bostrom’s book was released. Since then, the field has heated up dramatically.

In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment into companies focused on building human-level AI. A new AI company seems to arrive every week.

Continue reading →

Even if we can’t lower catastrophic risks now, we should do something now so we can do more later


Does that fit with your schedule Mr President?

A line of argument I frequently encounter is that it is too early to do anything about ‘global catastrophic risks’ today (these are also sometimes called ‘existential risks’).

For context, see our page on assessing the biggest problems in the world, evaluation of opportunities to lower catastrophic risks and our review of becoming an AI safety researcher.

This line of argument doesn’t apply so much to preventing the use of nuclear weapons, addressing climate change, or containing disease pandemics – our potential to act on these today is about the same as it will be in the future.

But what about new technologies that don’t exist yet: artificial intelligence, synthetic biology, atomically precise manufacturing, and others we haven’t thought about yet? There’s a case that we should wait until they are closer to actually being developed – at that point we will have a much better idea of:

  • what form those technologies will take, if any at all;
  • what can be done to make them less risky;
  • who we need to talk to to make that happen.

Superficially, this argument seems very reasonable. Each hour of work probably does become more valuable the closer you are to a ‘critical juncture in history’.

Continue reading →