What our research has found about AI — and why it matters
Suddenly, everyone’s talking about artificial intelligence — and we have many helpful resources for getting up to speed.
With the release of GPT-4, Bing, DALL-E, Claude, and many other AI systems, it can be hard to keep track of all the latest developments in artificial intelligence. It can also be hard to keep sight of the big picture: what does this emerging technology actually mean for the world?
This is a huge topic — and a lot is still unknown. But at 80,000 Hours, we’ve been interested in and concerned about AI for many years, and we’ve researched the issue extensively. Now, even major media outlets are taking seriously the kinds of things we’ve been worried about. Given all the excitement in this area, we wanted to share a round-up of some of our top content and findings about AI from recent years.
This blog post was first released to our newsletter subscribers.
Join over 200,000 newsletter subscribers who get content like this in their inboxes every two weeks — and we’ll also mail you a free book!
Some of our top articles on AI:
- Preventing an AI-related catastrophe — Have you ever wondered why some people think advanced AI could pose an existential threat? This problem profile explains the case for AI risk — as well as some important objections.
- What could an AI-caused catastrophe actually look like? — This article tries to give a more concrete picture of worst-case scenarios.
- Anonymous advice on increasing AI capabilities — We asked knowledgeable people in the field for their views on whether people who want to reduce AI risk should work in roles that could further AI progress.
- Career reviews:
  - AI technical safety
  - China-related AI safety and governance paths
  - Shaping future governance of AI
- We also have relevant career reviews on software engineering and information security, which explain how these careers could contribute to the development of safe AI.
Some of our top podcast episodes on the topic:
- Ajeya Cotra discussed the report she wrote estimating how long it will take before AI can radically transform our economy.
- Richard Ngo, a researcher at OpenAI, discussed his work, misconceptions about machine learning, why he views the ‘alignment problem’ differently than others do, and balancing fear and excitement about the technology.
- Chris Olah discussed his work at Anthropic, specifically his research into ‘interpretability’ — that is, understanding why AI systems work the way they do.
- Nova Das Sarma explained why information security may be critical to the safe development of AI systems.
- Robert Long discussed why large language models like GPT-4 (probably) aren’t conscious — but future AI systems might be. And that could be a huge deal.
That’s it from us — here are some interesting pieces on AI from other sources:
- Scott Alexander explained why he doesn’t buy the most pessimistic estimates of AI risk — though he still has significant worries.
- Katja Grace of AI Impacts wrote a post last December laying out the case for slowing down the development of new AI systems.
- Ezra Klein interviewed Kelsey Piper about the rapid developments in the field of AI and what kinds of risks we should be on guard against.
- As reported in TechCrunch, 1,100+ notable signatories signed an open letter asking ‘all AI labs to immediately pause for at least 6 months’.