Is now the time to do something about AI?

80,000 Hours is a non-profit that gives you the information you need to find a fulfilling, high-impact career. Our advice is all free, tailored for talented graduates, and based on five years of research alongside academics at Oxford. Start with our career guide.


The Open Philanthropy Project recently released a review of research on when human-level artificial intelligence will be achieved. The main conclusion of the report was that we’re really uncertain. But the author (Luke Muehlhauser, an expert in the area) also gave his 70% confidence interval: 10-120 years.1

That’s a lot of uncertainty.

And that’s really worrying. This confidence interval suggests the author puts significant probability on human-level artificial intelligence (HLAI) occurring within 20 years. A survey of the top 100 most-cited AI scientists likewise found a median estimate of a 10% chance that HLAI will be created within ten years (the mean estimate was a 10% probability within the next 20 years).
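To make the inference concrete, here is a rough sanity check of our own (an illustration, not anything from the report): if we treat the 70% interval of 10-120 years as the 15th-85th percentiles of a lognormal distribution over years until HLAI, we can back out what that implies for the probability of arrival within 20 years. The lognormal assumption is ours; other reasonable distributions would give somewhat different numbers.

```python
from math import log, exp, erf, sqrt

# Our own illustrative assumption: years-until-HLAI is lognormally
# distributed, with the 70% interval of 10-120 years interpreted as
# the 15th and 85th percentiles.
lo, hi = 10.0, 120.0   # 70% confidence interval, in years
z85 = 1.0364           # 85th percentile of the standard normal

mu = (log(lo) + log(hi)) / 2             # mean of ln(years)
sigma = (log(hi) - log(lo)) / (2 * z85)  # std dev of ln(years)

def p_within(years):
    """P(arrival time < years) under the fitted lognormal."""
    z = (log(years) - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

print(f"Implied median arrival: {exp(mu):.0f} years")
print(f"Implied P(HLAI within 20 years): {p_within(20):.0%}")
```

Under this particular fit, the interval implies a median of around 35 years and roughly a one-in-three chance of HLAI within 20 years, which is broadly consistent with the survey figures above.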

This is like being told there’s a 10% chance aliens will arrive on Earth within the next 20 years.

Making sure the transition to HLAI goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom’s book, Superintelligence, and this popular introduction by Wait But Why.)

We issued a note about AI risk just over a year ago when Bostrom’s book was released. Since then, the field has heated up dramatically.

In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment into companies focused on building human-level AI. A new AI company seems to arrive every week.

This, along with Bostrom’s book and a landmark conference in Puerto Rico, helped trigger a major rise in investment in AI safety research. Several tens of millions of dollars have been raised in the last year or two. The field is now more talent-constrained than funding-constrained, prompting Luke Muehlhauser to call for those interested in AI safety to “delurk”. We wrote a profile on AI safety research that goes into more detail.

Then last month, OpenAI happened. Y Combinator and Elon Musk announced a billion-dollar project to develop AI for the good of humanity, and to share it freely. Since it’s open source, this is likely to further speed up the overall progress of artificial intelligence, shortening the timeline to HLAI. At the same time, it should mean more investment in safety research, making the field even more talent-constrained. (How these positive and negative effects balance out is unclear.) Either way, action is more urgent than before.

Working to reduce the risks of artificial intelligence looks like an increasingly pressing cause. If you’d like to make it a focus of your career, please let us know. You can also find some ideas about what to do with your career here. These notes are a bit out of date, but still seem roughly right. The main difference is that the field is now even more talent-constrained, relative to funding-constrained, than when we wrote them.

Notes and references

  1. Though note this is the author’s personal view, and does not reflect the views of the Open Philanthropy Project.