Topic archive: Existential risk
So long as civilisation continues to exist, we’ll have the chance to solve all our other problems, and have a far better future. But if we go extinct, that’s it.
Those concerned about extinction have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
Risks such as climate change and nuclear war are worrying enough — but there might be still bigger threats. We think that reducing these risks could be the most important thing you do with your life, and we want to explain exactly what you can do to help.
- All (135)
- Podcasts (69)
- Blog posts (34)
- Key articles (13)
- Problem profiles (12)
- Podcasts (80k After Hours) (3)
- Career reviews (2)
- AI guide pages (1)
- Career guide pages (1)
Selected highlights
Working in US AI policy
Article by Niel Bowerman · Last updated July 2023 · First published January 2019
All articles tagged "Existential risk"
2025
- 11 essential readings on AI safety, risk, and alignment (First published) · Article
2023
- The 80,000 Hours Podcast on Artificial Intelligence and related topics (First published) · Article
2022
- The case for reducing existential risks (Updated) · Article
2018
- Ways people trying to do good accidentally make things worse, and how to avoid them (First published, October) · Article
- What are the most pressing world problems? (First published, August) · Article
2017
- The case for reducing existential risks (First published, October) · Article
- Is it fair to say that most social programmes don’t work? (First published, July) · Article

