The Open Philanthropy Project recently released a review of research on when human-level artificial intelligence will be achieved. The report's main conclusion was that we're really uncertain. But the author, Luke Muehlhauser (an expert in the area), also gave his 70% confidence interval: 10-120 years.
That’s a lot of uncertainty.
And that's really worrying. This confidence interval implies the author puts significant probability on human-level artificial intelligence (HLAI) being achieved within 20 years. A survey of the 100 most-cited AI scientists similarly gave a 10% chance that HLAI is created within ten years (this was the median estimate; the mean estimate was a 10% chance within the next 20 years).
This is like being told there’s a 10% chance aliens will arrive on the earth within the next 20 years.
Making sure the transition to HLAI goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom's book Superintelligence, and this popular introduction by Wait But Why.)
We issued a note about AI risk just over a year ago, when Bostrom's book was released. Since then, the field has heated up dramatically.
In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment in companies focused on building human-level AI. A new AI company seems to launch every week.