10 essential resources for understanding advanced AI and its risks

Our site has an overview of what’s happening with AGI, but here’s some more essential reading for understanding the field. We don’t agree with everything the authors say, but we think they’re well worth reading.

1. Preparing for the intelligence explosion by William MacAskill and Fin Moorhouse at Forethought Research (March 2025)

These authors argue that an “intelligence explosion” could compress a century of technological progress into a decade, creating numerous grand challenges that humanity must prepare for now. You can listen to Will MacAskill discuss this piece on our podcast.

2. AI 2027 by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean (April 2025)

An analysis of a concrete scenario in which AGI arrives soon via the automation of AI research. In accompanying research, the team also provides its own forecasts of several key outcomes. This has become one of the most widely discussed pieces in the field.

3. Situational Awareness: The Decade Ahead by Leopold Aschenbrenner (June 2024)

A former OpenAI employee makes a compelling case — across five in-depth chapters — that AGI is coming much sooner than many expect, and few realise just how much it will change the world. We think this piece might underplay the challenge of aligning AGI with human interests and the need for international coordination on AI risks. However, many of its predictions about the development of agentic reasoning models have proved remarkably accurate.

4. The case for multi-decade AI timelines by Ege Erdil from Epoch AI (April 2025)

This piece makes one of the most influential arguments against the idea that we'll have AGI by 2030. Erdil and Tamay Besiroglu discussed these ideas on the Dwarkesh Podcast.

5. The Most Important Century by Holden Karnofsky (2021)

A series arguing that transformative AI could make the coming decades the most important in history. Some of it is now out of date, but it contains several useful articles, including How we could stumble into AI catastrophe, AI could defeat all of us combined, Why AI alignment could be hard with modern deep learning, and Jobs for helping with the most important century.

6. Does AI Progress Have a Speed Limit? by Ajeya Cotra and Arvind Narayanan (April 2025)

Two experts discuss the factors behind the pace of AI development. They present contrasting views about the likely speed of progress in AI and its societal effects, offering useful insights into the state of the debate.

7. Is power-seeking AI an existential risk? by Joe Carlsmith (June 2022)

This is one of the central papers laying out the argument that extremely powerful AI systems could pose an existential threat to humanity.

8. Gradual disempowerment by Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud (January 2025)

Even if we avoid the risks of power-seeking and scheming AIs, there may be other ways AI systems could disempower humanity. Our political, economic, and cultural systems might slowly drift away from serving human interests in a world with advanced AI.

9. Taking AI welfare seriously by Robert Long, Jeff Sebo, et al. (November 2024)

This paper makes a thorough case that we shouldn't worry only about the risks AI poses to humanity — we may also need to consider the interests of future AI systems themselves.

10. Machines of loving grace — How AI could transform the world for the better by Anthropic CEO Dario Amodei (October 2024)

It’s important to understand why there’s enthusiasm for building powerful AI systems, despite the risks. This post from an AI company CEO attempts to paint a positive vision for powerful AI.

Other reading lists by topic

Here are additional lists of resources we’ve put together for specific AI-related topics.