And that’s really worrying. This confidence interval suggests the author puts significant probability on human-level artificial intelligence (HLAI) arriving within 20 years. A survey of the 100 most-cited AI scientists likewise gave a 10% chance that HLAI is created within ten years (this was the median estimate; the mean put a 10% probability on the next 20 years).
This is like being told there’s a 10% chance aliens will arrive on Earth within the next 20 years.
Making sure this transition goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom’s book, Superintelligence, and this popular introduction by Wait But Why).
We issued a note about AI risk just over a year ago when Bostrom’s book was released. Since then, the field has heated up dramatically.
In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment in companies focused on building human-level AI. A new AI company seems to arrive every week.
Then last month came OpenAI: Y Combinator and Elon Musk announced a billion-dollar project to develop AI for the good of humanity and share it freely. Since the work will be open source, it’s likely to speed up overall progress in artificial intelligence, shortening the timeline to HLAI. At the same time, it should mean more investment in safety research, making the field even more talent-constrained. (How these positive and negative effects balance is unclear.) Either way, action is more urgent than before.
Working to reduce the risks of artificial intelligence looks like an increasingly pressing cause. If you’d like to make it a focus of your career, please let us know. You can also find some ideas about what to do with your career here. These notes are a bit out of date, but still seem roughly right. The main difference is that the field is now even more talent-constrained, relative to funding-constrained, than when we wrote them.