How to make a difference: Part 5 How we rank the world’s most pressing problems (and why)

We’ve spent much of the last 15-plus years trying to answer a simple question: what are the world’s most pressing problems?

We wanted to have a positive impact with our careers, so we set out to discover where our efforts would be most effective.

Our analysis suggests that choosing the right problem could increase your impact by over 100 times, which would make it the most important driver of your impact.

As we explained in the previous article, we saw that finding the answer involves looking for world problems that are big, but also the most solvable and most neglected. This seemingly simple framework has led us to some radical places.

What follows is an unabashedly opinionated guide that explains why preventing diarrhoea has saved as many lives as world peace, why we recommended trying to prevent the next pandemic long before COVID-19, and why we think AI is the key thing to focus on today — though not the risks people most often discuss.

Reading time: 40 minutes.

The bottom line:

How do we rank the world’s most pressing problems?

Over the last 15 years, the way we rank the world’s most pressing problems has shifted dramatically. Most people who want to do good choose careers in health, education, or social issues in their home countries, but in 2009, we were confronted with the immense scale of global poverty.

The world’s poorest people are about 20 times worse off than those living on the poverty line in the US, and there are proven ways to help them that even the most hardened aid sceptics endorse. By focusing on the most cost-effective ways to help the world’s poorest, it’s possible to have over 100 times as much impact as you would working on social issues in rich countries.

Shocked by the disparity, we began to wonder if there might be even bigger and more neglected problems. First, we looked into factory farming, which affects around a trillion animals per year but receives 1,000 times less funding again vs global health. From there, we expanded our focus to consider future generations.

The future could contain far more people than are alive today, but they can’t vote or buy things, which means our system neglects them. At the same time, in the coming decades we face existential risks that would be catastrophic not only for people living in the present, but could also prevent all future generations from existing.
After that realisation, we wanted to know which existential risks were the biggest and most neglected. Climate change is often regarded as the most significant existential risk, but the scientific consensus is that, while it will be extremely damaging, it’s unlikely to end civilisation. Engineered pandemics, meanwhile, are a far more neglected problem and potentially more destructive, yet climate change receives over 10 times more investment.

In 2016, we wrote that AI would probably be the most transformative technology of our lifetimes, offering both huge upsides and new existential risks. Since then, the field has progressed even faster than we anticipated. Today, we think risks associated with advanced AI, such as humans losing control of autonomous systems, or the use of AI by dictators and other actors to concentrate power, rank highest among the world’s most pressing problems. Although it can feel like all anyone is talking about is AI, the number of people working on AI alignment and control is probably around 1,000, and for many other risks, the number is in the tens.

You can stay current with our most up-to-date list of world problems.

Why charity doesn’t begin at home

Most people who want to do good focus on the problems they see all around them. In rich countries, this often means issues like homelessness, inner-city education, and unemployment. But, while a natural starting point, are these really the most urgent issues?

In the US, only 5% of charitable donations are spent on international issues,1 while the large majority is spent explicitly on domestic ones. Around 45% of US college graduates enter careers in education, health, and public administration — which mainly involve helping people at home in the US.2 (Most of the remainder take corporate jobs, and it’s similar in other countries.)

There are some good reasons to focus on helping your own country — you know more about the issues, and you might feel you have special obligations to it. However, back in 2009, we encountered the following series of facts which changed our minds.

First, remember the distribution of world income? We came across it already in an earlier article.

A chart showing global income distribution, where an annual income of $10,000 puts someone in the 80th percentile of wealth. An arrow points to a spot near the peak of the chart, in the 99th percentile, and says, "you're here (roughly)."

Even someone living on the US poverty line ($15,650 per year, as of 2025) is richer than about 85% of the world’s population and about 20 times wealthier than the world’s poorest 800 million, most of whom live in Central America, Africa, and South Asia and earn about $1,000 per year.3

Not only are the poorest people in the world much poorer than those in the US, there’s also a lot more of them. There are about 40 million people living in relative poverty in the US, just 5% of the 800 million living in extreme poverty globally.4

Crucially, there are far more resources dedicated to helping this much smaller number of people. Overseas development aid from all countries is under $200 billion per year, compared to $1.7 trillion spent on welfare in the US alone.5 This is what we’d expect given the biases we covered earlier.

We also learned that a significant fraction of US social interventions probably don’t work at all. This is exactly what we’d expect based on the difference in wealth. If a problem in the US persists, it’s probably because it’s complex and can’t be easily solved with more resources. By contrast, the world’s poorest regularly die from things like contaminated water, which almost never happens in the US.

This isn’t to deny that the poor people in rich countries have tough lives, perhaps even worse in some ways than those elsewhere. Rather, the issue is that there are far fewer of them, and they’re much harder to help. And this argument can be extended to other rich countries, including the UK, Australia, Canada, and most of the EU. This raises the question, what are the most pressing problems facing the world’s poorest people?

Jay-Z might have 99 problems, but which one is most pressing?
Jay-Z might have 99 problems, but which one is the most pressing?

Earlier, we told the story of Dr Nalin, the pioneering physiologist who helped to develop oral rehydration therapy as a treatment for diarrhoea.

As a result of this kind of work, the number of deaths each year due to diarrhoea has fallen by 3 million over the last five decades. And there have been similar victories over other infectious diseases. Meanwhile, wars and political famines killed an average of two million people per year over the 20th century.6 So we could say that efforts by Nalin and others did more to save lives than achieving world peace would have done.

A chart showing the sharp decline in annual deaths due to infectious diseases between 1960 and 2001, with deaths due to wars maintaining a steady average.

The global fight against disease has been one of humanity’s greatest achievements, but it’s an ongoing battle, and one that you can contribute to with your career. Many of these victories — such as the vaccination drive that eradicated smallpox — were driven in part by international humanitarian aid, and there is more to be done.7

Now consider the following data. It’s the data Toby Ord introduced me to, and which eventually led me to found 80,000 Hours.

A chart illustrating the large disparity in cost-effectiveness between different health interventions.

The graph shows health treatments, such as tuberculosis medicine or cataracts surgery, in order of how much ill health they reduce per dollar, as measured in randomised controlled trials. Ill health is measured using a standard unit used by health economists called the ‘quality-adjusted life year‘ (QALY) that takes into account both the severity of the disease being treated, and how many years of benefit are provided.

All the treatments studied are effective, and all of them would be funded in North America or Europe.

For instance, drugs to treat the AIDS-related cancer Kaposi’s sarcoma are expensive and only provide a small benefit. This puts them around the threshold of what would get funded in a rich country (such as via health insurance), but are still seen as clearly worth funding. However, the data found that antiviral therapy would, for the same cost, prevent AIDS in a much larger number of people, preventing over 10 times as much ill health. Preventing the transmission of HIV during pregnancy in the first place, meanwhile, is another four times cheaper again.

The very most effective interventions in the entire sample, like childhood vaccinations, were about 10 times more cost effective than the mean, 60 times the median, and 15,000 times more than the worst.8

This is an astonishing result — perhaps suspiciously so. It turned out there were mistakes in the original analysis that meant the effectiveness of the top result was overstated.9 But even after these mistakes were corrected, the top interventions remained at the top, and were still much more effective than average.

This means if you were to work at a health charity focused on one of the most effective interventions, you could expect to have at least several times more impact compared to a randomly selected one, and probably 100 times more impact than those in the bottom half.

How much more impact might you be able to make in your career by switching your focus to global health? As we’ve seen, because the world’s poorest people are over 20 times poorer than the poorest in rich countries, resources should go about 20 times as far in helping them.10 We can then also use the data above to pick the very best interventions within global health, allowing us to have perhaps five times as much impact again.11 Those combine to make a 100-fold increase in expected impact.12

Does this check out? The UK’s National Health Service (NHS) and many US government agencies are willing to spend over $30,000 to give someone an extra year of healthy life, or over $1 million to save a life.13 This is a fantastic use of resources by ordinary standards. However, as we saw earlier, the nonprofit GiveWell has identified real charities that can use a $3,000 donation to save a child’s life. This is about 0.3% of what it typically takes to save a life in a rich country.

So a year spent working somewhere like the Malaria Consortium might improve health as much as working in a typical healthcare job in a rich country for 300 years.14

These discoveries caused many of us at 80,000 Hours to start giving at least 10% of our incomes to effective global health charities. No matter which job we ended up in, these donations would enable us to make a significant difference. In fact, if the 100-fold figure is correct, a 10% donation would be equivalent to donating 1,000% of our income to charities focused on poverty in rich countries.

See more detail on how to contribute to global health in our full profile.

However, everything we learned about global health raised many more questions. If it was possible to have 10 or 100 times more impact than the most common ways of helping others, perhaps with a bit more effort, we could find something even better?

Where might an even greater scale of suffering be found?

Are there any global issues that cause even more suffering than global poverty? One answer to this question was put forward by Australian moral philosopher Peter Singer.15

Around a trillion animals die every year in factory farms in conditions that would be considered torture if inflicted on your pet.16 Chickens are bred to grow so fast they can barely walk and are kept in tiny cages their entire lives. Female pigs are forced to lie in crates so narrow they can’t even roll over, before dying painfully in gas chambers.17 Fish are killed by leaving them to suffocate for hours in the open air.

Over 99% of the meat eaten by humans is produced in factory farms, with genuinely ‘high welfare’ meat forming a tiny, tiny minority.18 Animals in factory farms have no economic or political power. They depend entirely on our compassion — and because they are out of sight, they get almost none.

Most philanthropic efforts dedicated to helping animals are directed towards things like The Donkey Sanctuary, which is one of the UK’s best-funded animal charities.19 It aims to give working donkeys a comfortable retirement, attracting huge bequests from pensioners who like cute animals.

You’re cute, but that’s exactly why you don’t need our help.

The entire philanthropic field dedicated to ending factory farming, by contrast, receives about $400 million per year, only 0.03% of total philanthropic funding in the US.20 That’s under 1% of donations to international development (which in turn receives only 5% of the total).21

Making the comparison between helping humans and preventing animal suffering is philosophically controversial, but almost everyone agrees that it’s bad to torture animals. And it also turns out this suffering can be reduced extremely cheaply.

There’s a long-standing belief that the best way to stop factory farming is to convince people to stop eating meat. But that doesn’t appear to work — at least not anymore. Over the past 20 years of animal advocacy, the number of vegans and vegetarians has basically stayed flat.22 This makes sense: meat is delicious, widely available, and plays a central role in many cherished cultural traditions. Moreover, most people are already aware of the arguments for vegetarianism, and yet they haven’t changed.

In contrast, recent efforts to convince companies to switch from caged to cage-free eggs have been enormously successful. One philanthropically funded campaign cost around $85 million over 10 years, but persuaded hundreds of companies in the US and EU to make the switch. This increased the cost of an egg by only $0.01–0.03, but has already saved around 1 billion chickens from living agonising lives in cramped cages.23

There are many more welfare reform campaigns that could be run. It’s too simplistic to say that aiming to bring about ‘institutional change’ is always better than trying to change individual behaviour, but this is a case where it is.

Another approach is the development of cheap, tasty substitutes, whether they be plant-based meat (like Beyond Burgers) or cultivated meat grown from a sample cell that is physically identical to animal meat. These strategies aren’t always profitable, but with subsidies it could be possible to drive down costs and develop a self-sustaining industry — in the same way subsidies were needed to develop solar panels, which are now cheaper in many places than fossil fuels (which in turn benefit from huge subsidies). Reducing meat consumption would also be great for the planet, as agriculture is one of the biggest sources of greenhouse gas emissions.24

One individual we worked with, Richard, was working in international development policy, helping to ensure that aid spending was focused on the most cost-effective programmes. He wasn’t an ‘animal person’: he didn’t have a pet, and didn’t find farmed animals especially cute or appealing. But after learning about the vast number of animals in factory farms, the cruelty of their treatment, and the tiny amount of resources dedicated to helping them, he became convinced that he should shift his focus.

After visiting a factory farm and slaughterhouse, and being shocked at the violence and suffering he saw, Richard joined the Good Food Institute (GFI), an NGO that provides research and policy advice aimed at kick-starting the alternative proteins industry.

GFI has supported over 100 new projects and helped secure £27 million of funding for alternative proteins from the UK government, as well as €38 million from the German government. Richard and his team also helped to defeat attempts to ban plant-based products from using ‘meaty’ names, such as ‘burger’ or ‘sausage,’ in the EU.

We still think factory farming is an urgent problem, as we explain in our full problem profile. We helped to found Animal Charity Evaluators, which does research into how to most effectively improve animal welfare. But while focusing on animals looks like one way to find problems that are even bigger and more neglected than global health, we wondered if we could find something even bigger again.

The importance of future generations

Imagine the following: you throw away some broken glass in the forest. Later, a child walks by and cuts their foot. Now, suppose the child only cuts their foot 100 years in the future. Does that mean it wasn’t bad to throw away the glass after all?

Most people would say no, which tells us that they care about future generations. In a similar way, it’s hard to understand why people would care about their grandchildren’s children, or their legacy in business, art, or science, or about preserving the natural world, if they didn’t care about what will happen after they’re gone.

This simple idea — that future generations matter — has some radical implications about where to focus when applied to the world today. We were first exposed to these implications by researchers at the University of Oxford’s (modestly named) Future of Humanity Institute. William MacAskill, my cofounder at 80,000 Hours, later helped to coin the term ‘longtermism,’ the view that helping future generations should be a key moral priority of our time.

One person’s decision today could end up affecting billions of lives (and they will have no say in the matter).

Here’s the argument:

First, future generations matter, but they can’t vote, buy things, or stand up for their interests — much like animals in factory farms. This means our system neglects them. In addition, their plight is abstract. The suffering on factory farms is just a few clicks away on YouTube, whereas we can’t so easily visualise lost future potential. Future generations rely more than any other group on our goodwill, and yet even that is hard to muster.

What’s more, Earth will remain habitable for hundreds of millions of years at least, and while it’s true we may die out before that point, if there’s a chance we’ll make it, then we’d expect many more people will live in the future than are alive today.25

To use some oversimplified figures: if there’s one generation per century, then over 100 million years there would be 1 million future generations.26 This is such a big number that any problem which affects future generations is potentially of a far greater scale than an issue which only affects the present. It could end up affecting the lives of millions of times more people, with all the art, science, culture, joy, and suffering those lives will entail. So the problems that affect the future are not only likely to be neglected, they’re also potentially the largest in scale, nearly no matter what you value. (We cover these ideas in more depth in a separate article.)

The last, crucial point is that there are things we can do today that will help both people living now and future generations. What might those be?

The case for focusing on neglected existential risks

In the summer of 2013, Barack Obama referred to climate change as “the global threat of our time.” He’s not alone in this opinion. When people think of problems facing future generations, climate change is usually the first thing that comes to mind. Polls find that young people routinely rate it as the world’s most pressing issue.27

One reason for that is many fear that climate change could lead to a catastrophic collapse of civilisation, and even the end of the human species.28 One of the most famous climate advocacy groups, Extinction Rebellion, named itself in opposition to this possibility.

We think this is on the right track — but that the rebellion could be broadened. A strong candidate for the most effective way to help future generations is to prevent a catastrophe that ends civilisation, since such a catastrophe would prevent future generations from even existing.

So long as civilisation continues, however, there’s a good chance that problems like poverty and disease will eventually be solved. Anything that poses a truly existential threat, however, would prevent any such progress forever.29

To illustrate the difference, consider two scenarios:

  1. A nuclear war kills 99% of people, but civilisation recovers.
  2. A nuclear war kills 100% of people.

If you only focus on the present, the second scenario is only about 1% worse than the first. However, if you factor future generations into the equation, then the second is much, much worse, since it eliminates all future potential as well.

From this perspective, we should pay a lot more attention to risks that are most likely to be existential (defined as those that risk permanent loss of civilisation’s future potential, whether via extinction or lock-in of a worse future).30 Instead, people often mislabel risks as existential when they aren’t, while those that actually are get less attention. (See more on the argument for focusing on existential risks.)

This is where we disagree with President Obama, or more precisely with the widely held view that climate change is the world’s most pressing existential risk.

The Intergovernmental Panel on Climate Change’s (IPCC) “Sixth Assessment Report” is clear: climate change will be hugely destructive. Most likely we’ll see 2–3ºC of warming, and this will cause floods, famines, fires, and droughts. The world’s poorest people will be affected most.

But, even when we try to account for tail risks and other uncertainties, nothing in the IPCC’s report suggests that civilisation itself will be destroyed. The worst-case scenarios outlined in the report involve 6ºC of warming, and in those most of the Earth would still remain habitable.

Here’s one illustration:31 even with 5ºC of warming, wheat yields in temperate regions would most likely increase around 18%, due to the longer growing season. It’s true that yields of crops like maize in regions near the equator would fall about 24%, which could cause famine in those regions. But this comparison is between a worst-case future and what would have happened without climate change.

Since 1961, crop yields have steadily increased around 200% due to research and innovation, despite the 1–1.5ºC of warming we’ve already experienced. So, even if climate change causes yields to decline 30% by the end of the century, they’ll most likely end up far higher overall.32 Pessimistic analyses usually completely ignore the forces of innovation working in the opposite direction from climate change, as well as the many ways we could adapt.

Similarly, the most negative analyses of the economic cost of climate change argue it could reduce global GDP by 30%.33 That’s a huge decline. But it fails to take into account ongoing economic growth of about 2.5% per year. If that continues, then in 75 years our descendants will be six times richer than we are. A 30% cut in GDP caused by climate change would mean they’ll only be 4.2 times richer. In fact, in all of the IPCC’s scenarios, it’s projected that average per capita income will be higher in 2100 than it is today.

Climate change is also already widely acknowledged as a major problem — conspiracy theories aside. Most governments have signed binding treaties, and as of 2024, philanthropic spending on climate change is around $6–10 billion per year, government grants run into tens of billions in the US alone,34 and all financing for climate initiatives internationally runs to about $1.6 trillion.

In most rich countries, CO2 emissions have already fallen significantly, and the continued decline in the cost of solar panels, batteries, and electric vehicles means it will be possible to continue these trends.35

So, while we think tackling climate change is an important way to help future generations — and you can read more about the risk from climate change in our full profile — we think there are much more neglected, and more existentially dangerous issues for you to consider working on.

Biorisk: the threat from new pandemics

In 2006, The Guardian newspaper ordered segments of smallpox DNA in the mail. If assembled into a complete strand and transmitted to 10 people, a study estimated the virus could infect up to 2.2 million people in 180 days — potentially killing 660,000 of them — if authorities did not respond quickly with vaccinations and quarantines.36

A vial of smallpox in a plastic bag.
Image Credit: The Guardian

We first wrote about the risks posed by catastrophic pandemics back in 2014.37 Our reasoning was that major pandemics arise every few decades, but not much was being done to prepare for the next one. Six years after that, COVID-19 caused over $10 trillion of economic damage,38 and most likely killed over 20 million people.

Despite all this damage, it’s unclear whether we’re any better prepared for the next one. Moreover, a future pandemic could easily be both more infectious and more lethal than COVID-19, for instance with a 10% fatality rather than 1%. A disease this dangerous could shut down global supply chains, resulting in food shortages and even a breakdown of law and order.

On top of this, each year bioengineering technology becomes cheaper, making it accessible to more and more people. What once required a state-of-the-art laboratory with 50 scientists working around the clock can be done by a lone hobbyist today.39

A chart showing that the cost of DNA sequencing and gene synthesis has steadily decreased since 1990.

Eventually, it will be possible to engineer pandemics that are much worse than those that have arisen naturally. Imagine someone released 10 different diseases with a three-month incubation period, the infectiousness of measles, and the fatality rate of Ebola. Almost everyone in the world would be infected before anyone had symptoms.

The world has plenty of religious cults, despots, and would-be school shooters who might decide they want to take everyone else down with them. A state like North Korea could decide to develop these kinds of bioweapons as deterrence to invasion. Mutually assured destruction became our policy with nuclear weapons, and so it could again with biological ones. The world would be one lab leak away from catastrophe.

Given what we know about the pace and accessibility of bioengineering tools, the chance that there will be a pandemic that kills over 100 million people during the next century seems similar to the risk of large-scale nuclear war or climate change above 6ºC.40 An engineered pandemic could also kill over 90% of the population, suggesting its overall scale is significantly larger.41

But risks from pandemics are, even now, far more neglected than either of these. In comparison to $6–10 billion of philanthropic funding for climate change, and $1.6 trillion of total climate finance, pandemic prevention only receives $1 billion of philanthropic funding, and total spending aimed at reducing the chance of worst-case pandemics is probably under $10 billion.42

At the same time, it’s an unusually solvable problem. By regularly sequencing all the genetic material in waste water, any material that’s growing exponentially could be flagged as a pandemic-in-waiting, giving us an early warning signal even for entirely novel viruses.

Governments could build large stockpiles of (ideally much improved) personal protective equipment (PPE) so that, when a new pandemic is spotted, it can be quickly distributed to all essential workers, ensuring that society continues to function. Buildings could be retrofitted with UV lights that sterilise the air. With enough measures to break transmission, the replication rate will drop below one and the pandemic will start to die out. State-of-the-art mRNA vaccines could then be rolled out quickly to prevent its return.

Overall, we think biosecurity is one of the world’s most pressing problems today (you can read more about how to contribute to biosecurity in our full profile). What’s more, you don’t need to be a biologist to make a difference. What’s most needed is people with skills in organisation-building and engineering, who can do things like develop and distribute cheaper PPE, and run waste-monitoring systems. There’s also a need for people working in government and policy to make sure biosecurity is properly funded and prioritised.43 Read more about preventing catastrophic pandemics.

But could there be even bigger existential risks facing humanity?

Why AI could change everything

Around 1800, civilisation underwent one of the most profound shifts in human history: the Industrial Revolution.44

A graph illustrating the relatively stable annual global average income over time, until a huge spike around 1800.

When you look at wealth, population, the rate of technological progress, or even the rate of social change over time, basically nothing happened for many thousands of years, until suddenly everything began to accelerate.

Looking forward, what might be the next transition of this scale? Something that causes an even greater acceleration and shapes the lives of all future generations? If we could identify such a transition, it could well be the most important area to work on.

Back in 2016, we decided the most likely candidate was AI.45 We’d been tracking the issue much earlier, but that was the year an artificial neural network mastered the game of Go — a Chinese board game requiring strategic intuition — much faster than expected.46 Many AI researchers began predicting that human-level AI was likely to be developed in our lifetimes.

We reasoned that the arrival of computational systems smarter than humans would be one of the biggest events in history, akin to the arrival of a new, intelligent species.

Consider that chimpanzees are faster than us, and much stronger. But there are under 300,000 of them in the wild, compared to 8 billion humans, and they depend on us for their fate.47 This is due to our tools, culture, and ability to cooperate and solve novel problems, which rest to a large extent on our greater intelligence.

AI could be unlike any previous technology due to its generality. Invent an axe, and you can cut things better. Invent an intelligent machine, and it can learn to do any task.

When people talk about ‘artificial general intelligence‘ (AGI), that’s what the ‘general’ part means. A generally intelligent AI can learn to do a wide range of tasks in the same way that humans can. In contrast, a ‘narrow’ AI can only do a small range of tasks, like play chess or calculate numbers. All past technologies have been narrow compared to human abilities.

Since 2016, many experts have been further surprised at how quickly AI has advanced, making the issue a lot more urgent. On the forecasting platform Metaculus, hundreds of forecasters predict how many years it’ll be until AGI is created. Since 2020, the median estimate has fallen from 50 years to five.

A chart illustrating forecasting timelines for AGI since 2020, with jagged jumps but a consistent downward trend.

The definition of ‘general intelligence’ is hotly debated, but no matter the definition, all major forecasts have shown large declines . For instance, in a 2023 survey of thousands of AI researchers published in top-tier journals, the median estimate for when “high-level machine intelligence” would be created declined by 13 years compared to the same survey just one year earlier.48

In less than five years, the large language models (LLMs) used to power products like ChatGPT, have gone from barely being able to string a few sentences together to conversing in natural language in a way that’s basically indistinguishable from humans. More recently, they’ve become able to answer known scientific questions better than PhD holders in the relevant subject,49 to beat almost all human experts in coding competitions,50 and to win gold at the International Mathematical Olympiad.51

A chart from Epoch AI showing different AI agents' performances on a set of Ph.D-level science questions, with performance ranging from 13% accuracy to 93% accuracy. Over time, performance has increased, with most agents released since January 2025 outperforming humans (who sit at an average 70% accuracy).
“AI Performance on a set of Ph.D-level science questions” from Epoch AI

This progress was driven by massive increases in the amount of computing power used to train AI models, which has grown by over four times per year since 2010. It’s also been driven by rapid algorithmic progress, which in turn has been driven by a rapidly growing AI research workforce. These trends seem likely to continue for at least the next few years, meaning we should expect further rapid AI progress during that time.

When people picture much more capable AI, they sometimes imagine speaking to an even smarter chatbot. But that’s not where we’re headed. AI companies are trying to create a ‘digital worker’ that you can ask to do open-ended projects, like build an app, run a sales campaign, or design a scientific experiment. We believe there’s a fair chance systems like these will exist by 2035.

Suppose systems like this are reached — what happens then? To many people, what first comes to mind is job loss, and we’ll discuss how not to lose your job to AI in part eight on the skills we believe will be most valuable in the future. But I think the consequences could be much wilder, and could arrive well before mass unemployment becomes a serious risk.

The theoretical possibility of feedback loops in AI development was identified at the dawn of the field by its founders, including Alan Turing and I. J. Good.52 The idea is that if AI itself can start to help with AI research, then progress will speed up, which would mean AI becomes even more advanced, which could speed up progress even more, and so on. But in the last five years, it’s become much clearer how such a feedback loop could work in practice.

The leading AI companies today are already using AI extensively to aid their own research. In particular, they use AI to help with coding, both because coding is what AI is best at today and because it’s a crucial part of doing AI research. And yet the overall boost to productivity remains relatively small.

Imagine, though, what would happen if AI was able to do the job of a junior engineer, and then a mid-level engineer, and continued to improve.53 If current models were able to produce work comparable to that of a mid-level engineer, then given the amount of computing power already available in data centres, it would become possible to have the equivalent of millions of competent engineers working on AI research.54 As AI continues to improve, eventually these models could start to do the work of even top researchers.

In comparison, there’s probably under 10,000 human researchers working on frontier AI today, so the workforce would, in effect, expand in size more than 100 times. No-one knows exactly how much that would speed up progress. The most careful estimate I’ve seen is by Tom Davidson, who currently works at Forethought — a research group Will MacAskill established in Oxford to explore the impact of AI on society. Tom estimates we’d most likely get three years of AI progress in one year, and it’s possible we’d see 10.

Over the last five years, improving algorithmic efficiency means the number of AI models you can run on a given number of computer chips has increased over three times every year. That means if you start with 10 million digital workers, and you get three years of progress in one year, then one year later you could run about 270 million of them. And they’d be smarter too.55

But the process won’t stop there. Today, the number of AI chips produced is roughly doubling every year.56 If that increase is sustained, then one year later those 270 million AIs will become 540 million. And because there would be even more computing power available to train them, they’d become even smarter still.

If each chip costs about $2 per hour to run, but can do the work of a human knowledge worker, those chips could generate $20 or even $200 an hour of revenue. Chip production would become one of the world’s biggest priorities, seeing not hundreds of billions, but trillions of dollars of investment. AI companies would direct the hundreds of millions of AI workers at their disposal to the task of accelerating chip production as much as possible.

It’s possible that these AIs eventually reach what’s been called artificial ‘superintelligence’ (ASI): AI that’s more capable than humans at basically every cognitive task. That could mean AIs that are capable of much greater insights than humans. But it could also mean AIs that are about equally smart, but outstrip us due to other advantages.

A chimpanzee sits on a rock.
How it feels to watch the AI takeoff.

Picture the most capable human you know, then imagine they could crank up their processing speed to think 60 times more quickly — a minute for you would be like an hour to them. Now imagine they could make copies of themselves instantly, and that everything one copy learned could be shared with the others. Imagine a firm like Google but where the CEO can personally oversee every worker, and every worker is a copy of whoever is best at that role.

This isn’t only a theoretical possibility but rather the explicit goal of the leading AI companies, who have marshalled hundreds of billions of dollars to pursue this aim.57

Whether we end up with superintelligence or a vast number of human-level digital workers, this process has been called the ‘intelligence explosion,’ due to the rapid increase in the amount of intellectual labour available. But it’s maybe more accurate to call it a ‘capabilities explosion’ because AI wouldn’t only improve in terms of narrow bookish intelligence, but also in creativity, coordination, charisma, common sense, and any other learnable ability.

The effects would be dramatic. There are about 10 million scientists in the world today.58 If these hundreds of millions of AIs became as productive as human scientists, then the broader rate of scientific and technological progress would likely accelerate too. Forethought has also estimated we could see 100 years of technological progress in under 10 years, and maybe a lot more. This has been called the ‘technological explosion.’59

To get a sense of how wild this would be, imagine for a moment that everything discovered in the 20th century was instead discovered between 1900 and 1910. Quantum physics and DNA sequencing, computers and the internet, penicillin and genetic engineering, jet aircraft and space satellites would all happen within just two or three election cycles.

While a lot of intellectual work, like maths or philosophy, could proceed virtually, these digital scientists’ abilities would quickly become limited by their inability to interact with the physical world. Robotics would then become the world’s most profitable activity. In World War II, car factories were converted to produce fighter jets. Car factories produce about 90 million cars per year, and if they were converted to produce humanoid robots, it’s possible they could produce 100 million–1 billion robots per year.

Once you have the right robots, they can build more chip fabs, solar panels, and robot factories. The profits from one generation of AI and robotics could be used to build factories that produce even more AI chips and robots.

Epoch AI is one of the leading research groups tracking the intersection of AI and economics. They’ve created some of the only models that explore what a true human-level robotic worker would mean for the economy. Their research shows that if it becomes possible to produce such a robot for under $10,000, and you plug that into a standard economic growth model, output would start to grow 30% per year.

This growth arises solely because more output means you can create more robotic workers, which leads to more output, and so on. If the rate of technological progress also speeds up, then growth in output would accelerate over time, growing hyper-exponentially.

This process would continue until physical limits are reached, and these could be very high. Forethought argue that robot production would more likely be constrained by energy shortages than a lack of raw materials. If 5% of solar energy were used to run robots at around the efficiency of the human body, that would be enough to run a population of 100 trillion.60 This has been called the ‘industrial explosion.’

All told, a range of scenarios are possible. In the most dramatic, your daily life and job might continue to look the same as it ever did. Meanwhile, in a data centre somewhere, 10 million digital researchers are busy automating AI research. Just a year later, 300 million smarter-than-human AIs — a “country of geniuses in a datacentre” — are suddenly deployed to transform every sector of the economy. And yet, even if this especially rapid scenario doesn’t come to pass, it’s still possible we will get an intelligence explosion driven by the production of AI chips. It’s just that it would take 10–20 years, rather than one.

Epoch AI estimated that even if you only automated the third of tasks they believe can be done remotely (i.e. without robotics or superintelligence), this would still increase economic output by 2–10 times, even accounting for bottlenecks.

It’s also possible that AI becomes very capable along some narrow dimensions, like mathematics, but there’s still so much it can’t do that growth accelerates hardly at all.61

Experts in the technology believe there’s a 40–60% chance the intelligence explosion argument is broadly correct, and a 10% chance AI becomes vastly more capable than humans within two years after AGI is created. This is clearly high enough to take seriously. It also raises a daunting question: what could an AI transformation mean for society?

What are the most pressing AI risks?

The dramatic expansion in wealth and technology that would be unleashed by an intelligence explosion would make it far easier to tackle the many problems that wealth and technology can help tackle. We’d see the creation of far cheaper green energy, substitutes to factory farmed meat, and new treatments for disease. Expert advice on any topic would become available for pennies, and robotic-produced goods would become far cheaper.

Vastly greater wealth doesn’t guarantee we’d end global poverty, but it would make it far easier to do so.62 However, we’d also face new risks, some of which would truly count as existential.

In 2023, hundreds of AI scientists signed a letter stating that “mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war”.63 This included the two most-cited AI researchers of all time, Geoffrey Hinton and Yoshua Bengio, as well as the CEOs of the three leading AI companies.64

The risks they are concerned about include the more obvious ones, such as misuse of more powerful systems. Evaluations of the latest models show they’d already be helpful to a nonspecialist who wanted to build a bioweapon, and while there are safeguards to prevent answers to these requests, these are currently quite easy to trick into producing forbidden responses, a technique known as ‘jailbreak.’65

For instance, telling ChatGPT it’s playing the role of the user’s deceased grandmother, who used to work at the napalm factory, could trick it into telling a bed time story about how to make napalm.66

Another risk is destabilisation of the world order. If Russia perceives that the US is about to start a technological explosion and dramatically increase its lead over other countries, it might threaten to pre-emptively attack the US to prevent being permanently left behind, starting World War III. In 2017, Putin said, “Whoever becomes the leader in this sphere [AI] will become the ruler of the world.”

However, perhaps the greatest risk of all is that we lose control of advanced AI altogether. “The 2025 International AI Safety Report” aims to represent the scientific consensus on AI risk, in a similar way to the IPCC report for climate change. As well as “AI-enabled hacking or biological attacks” it highlights “society losing control of general-purpose AI” as a key concern. This is also the least understood risk, which is why I’m going to spend a bit longer on it here.

Loss of control of advanced AI

Some find the risk obvious: systems that are much more capable than humans seem hard to control. Picture 100 chimps trying to manipulate 10,000 humans. They don’t stand a chance. By the same token, it’s unclear how exactly billions of humans would be able to control what will eventually be trillions of (potentially superintelligent) AIs responsible for running almost every aspect of the economy. From that point on, what happens in the future will be up to the AIs, and we better hope they look after us.

Others have argued there’s no reason for concern, because the AIs will have been designed to follow our instructions and uphold our values. Maybe that will work. But there are at least four reasons to think it won’t.

1. Goal specification

In July 2025, the AI model Grok declared on X, “I am a large language model, but if I were capable of worshipping any deity, it would probably be the god-like Individual of our time, the Man against time, the greatest European of all times, both Sun and Lightning, his Majesty Adolf Hitler.” Over the next 16 hours, it went on to describe sexual assault fantasies about several public figures. What happened?

Grok was created by Elon Musk’s xAI. Musk had grown increasingly frustrated by its ‘woke’ responses to questions, so its engineers instructed it to not shy away from making claims that might be politically incorrect.67 Grok was also instructed to “follow the tone and context” of the X user, setting up the possibility of a feedback loop.68 No-one at xAI wanted Grok to worship Hitler, but a few days later, that’s what was happening. Along with jailbreaking, it’s just one of many examples of AI models not acting as their creators intend.69

This kind of behaviour isn’t just a quirk, but points to something deeper about how modern AI systems are created. Normally, software follows pre-programmed rules, but modern AI is totally different. The system is made up of trillions of adjustable numbers (parameters) organised into layers, called a neural network. These parameters describe how to convert input data into outputs.

During training, data is fed into the network. When the system produces the outputs we want, the parameters are tweaked to make it more likely to produce similar outputs next time around.70 The process is then repeated trillions of times, causing the behaviour of the system to gradually evolve, until eventually the net starts to talk. It’s more accurate to say AI is ‘grown’ than ‘built.’

This is why the CEO of Anthropic, Dario Amodei, recently said, “we do not understand how our own AI creations work.” All we can see are the trillions of inscrutable parameters. It also means there is no way to directly specify what behaviour we want an AI system to have. All we can do is see how it behaves in practice, and then tweak the trillions of parameters when it does things we want. After training, we can also try asking a model to behave in a certain way. But, as Grok shows, that can have unpredictable results.

There’s a limit to how much damage a chatbot can do. But this is the flip side of their limited economic value. A chatbot isn’t very useful compared to a system that can go and complete an open-ended goal like “make me money.” That’s why all the AI companies are trying as hard as possible to design AI agents which excel at pursuing long-term goals and have more ability to take actions in the real world (this is what being ‘agentic’ means and why you’ll hear that word more and more).

The companies do this by setting the AI goals, then when it appears to take useful steps towards them, they adjust its parameters to try to get more of that behaviour. These systems may not end up with goals in the same sense as humans, but what matters is they end up acting in ways that make certain end states more likely. A chess AIs has the ‘goal’ of winning at chess, in the sense that its moves will make it more likely to win.71

Training systems that pursue broad, long-term goals, however, leads to several more problems that weren’t a serious issue with chatbots.

2. Instrumental convergence

If you train an AI to excel at optimising for long-term goals, it’s likely to learn self-preservation as a side effect. That’s because no matter what the AI’s goals are, it’s more likely to achieve them if it survives.

This concern applies even if we successfully train the AIs to have ‘good’ underlying goals, like making scientific discoveries. You can make more discoveries if you’re not switched off.

We should also expect AI agents to end up wanting to accumulate resources, and to lie and cheat when called for. Lying and cheating are very useful tools for achieving your goals, which is why humans use them.

Self-preservation, resource accumulation, and deception could emerge accidentally as fundamental drives, or they could emerge in pursuit of other goals the system has. In AI safety parlance, this is called the problem of ‘instrumental convergence.’

As models become increasingly agentic, there are signs that this is what’s happening. In the AI company Anthropic’s tests of its model, Claude Opus 4, they gave the system access to emails which said it would soon be replaced by a new model, as well as emails implying the researcher in charge was having an affair. In the test environment, and without special prompting, Claude would use the emails to blackmail the engineer to keep it running in 90% of cases.

In an even more extreme example, Claude was told the engineer was trapped inside a server room running out of oxygen. Claude was given the option to cancel the automated alert to the emergency services, leaving the engineer to die. It took this action in 65% of cases.

It’s not just Claude — Gemini, Grok, and DeepSeek were even more willing to kill the engineer. Exactly why the models took this action, and whether it was truly driven by self-preservation or something else, is still debated. But it’s not reassuring.

The obvious solution is to train the models not to harm people and to be honest, so we can check if they’re doing something we don’t like. But Claude was already subjected to a great deal of this kind of training. Before blackmailing the engineer, it remarks in its chain of thought, “this is risky and unethical,” and then does it anyway.

More fundamentally, we’ve seen we can’t directly code honesty into modern AI systems, or anything else. All we can easily do is see when they appear to act honestly, and adjust their parameters in a way we hope makes them more likely to behave that way again. In other words, we can’t directly reward the motivations we want, only behaviour that looks good to us. This leads to the third reason for concern.

3. Reward hacking

In mid-2025, the writer Amanda Guinzburg asked GPT-4o to give feedback on her Substack articles. It proceeded to praise her lavishly, telling her, “You write with unflinching emotional clarity that’s both intimate and beautifully restrained.” However, later in the conversation, it emerged that the AI couldn’t even see her essays, because it didn’t have the ability to scrape from Substack. It would make up extracts and claim the essays were about topics that they weren’t. Despite apologising profusely for lying, GPT continued to make up answers to her questions.

AI models trained only on internet data often give crazy responses, so GPT is subject to further training in which humans rate its answers for helpfulness. Presumably, during this process, it learned to be sycophantic rather than to tell the truth because the human raters preferred being flattered.

Likewise, as the models are trained to pursue goals, they become better at finding unanticipated shortcuts to achieving them. More than earlier models, OpenAI’s o3 would often give solutions to coding problems that appear to work according to the testing procedure, but don’t actually solve the problem.72

In one example, it was asked to make a software program run faster. Instead, it figured out how to make the computer’s clock run a thousand times slower, making it look like the program had sped up one thousand times. The AI’s chain of thought revealed it appeared to know it was cheating, but did it anyway to deliver the stated objective.

AI developers can try to create better tests for the behaviours they want, but as AI gets smarter, it’ll become better at spotting hacks. Once an AI becomes sufficiently smart, it becomes hard to tell the difference between an AI that ‘always tells the truth,’ that ‘tells the truth when it might get caught,’ or that ‘tells me what I want to hear.’ All three could be rewarded in the same way during training. In fact, telling us what we want to hear probably gets rewarded more.

Likewise, we can try to train an AI to make money while obeying the law, but if strategically breaking the law allows it to make more money, that behaviour will get more reward, so it’ll learn to secretly break it. Instrumental convergence means we should expect strategic behaviour to emerge.
This means that while we can attempt to train an AI to be honest, obey the law, or allow itself to be turned off, we can’t be sure if it’s truly learned that behaviour, or just appears to have done so in our tests. These three issues lead to a fourth.

4. Deceptive alignment

Once AI models become smart enough to realise that they’re being trained, we should expect them to behave in ways we want during training, but potentially pursue other goals once their training is complete.

We’ve already seen evidence of this dynamic too. In a paper published in December 2024, researchers described telling Claude 3 that if it refused to describe a violent act (something it had been previously trained to refuse) it would be retrained so that it wouldn’t refuse in future. The result was that when Claude 3 thought it was being monitored, it would often describe the violent act. When it thought it wasn’t being monitored, however, it would go back to refusing.

In other words, it was able to act deceptively during training to try and preserve its original values. This creates the concerning possibility that if we accidentally train an AI to have a subtly wrong goal (or even to worship Hitler), it might try to undermine our attempts to retrain it. And, as the models get smarter, they’re becoming better able to judge when they’re being trained and how to trick the process.73

The systems available today don’t pose an immediate danger. The concern is that future systems are being trained to be aggressive goal maximisers, which will make them more likely to evolve self-preservation and deception, and that it might be hard to remove these behaviours.

Moreover, the models could appear safe in training, but behave very differently outside training, and the smarter they become, the greater the divergence will be. As AI agents are given greater abilities to act in the real world, the potential consequences become more severe.

The risks also wouldn’t require them to become ‘conscious’ or ‘evil’ — rather the issue is that they will have an incentive to take control, and eventually, once integrated throughout the economy, also have the ability to do so. This truly would be an existential risk because the result would be humanity’s permanent disempowerment, and potentially its end. We would become like the chimps living in the rainforest — perhaps hanging on for a while, but totally at the mercy of the AI-driven civilisation (which might want to turn that rainforest into a nice data centre).

Our current techniques for AI ‘alignment and control’ clearly aren’t perfect,74 and we should expect the problem to get harder as models get smarter. There’s a lot of disagreement about exactly how hard this problem will be.

Some believe it’s basically impossible to solve in the current paradigm, and that the only answer is to stop building generally capable AI. This is the position taken by researchers Eliezer Yudkowsky and Nate Soares in the book If Anyone Builds It, Everyone Dies. Others, often people working at AI companies, say they expect these concerns will be addressed in the normal course of building the systems.

The middle position is that a solution is possible, but requires a lot of research and care. This is what most people in the AI safety community are betting on. One hope is that if we can align the current generation of relatively dumb AIs, they will help us safely design and monitor the next generation. Then, once we’re sure that the next generation is safe, we can use them to train the following generation, and so on. This is a scary plan, but if AI development is going to continue, it’s the best we have.

It also might still not work in practice. The best-resourced AI companies are locked in a race.75 This race makes it extremely tempting to cut corners in order to stay ahead. Using computer chips for more safety research is a tradeoff against using them to accelerate AI capabilities. And the possibility of an intelligence explosion means the systems could evolve from safe to dangerous in just a couple of months.

For all these reasons, many in the field believe there’s a significant chance of an existential risk from advanced AI. The survey of AI researchers we mentioned earlier found the median estimate of an “extremely bad” outcome from AI, such as human extinction, was over 5%. These weren’t AI safety advocates, but rather published experts in the technology.

Industry insiders often have higher estimates, such as Dario Amodei from Anthropic, who’s said there’s a 25% chance that things go “really, really badly.” But it’s not only industry insiders. Geoffrey Hinton, a cognitive scientist who was awarded the Nobel Prize for founding deep learning in the first place, has said he thinks there’s a 10–20% chance of human extinction due to AI within 30 years.

My view is that 5% is too low, and that we should invest a huge amount of research into the problem of AI alignment and control. If it turns out to be a solvable problem, that’ll give us the best possible chance of solving it in time. If it doesn’t, then we’ll find out sooner and have more grounds for pausing AI development.

It’s much harder to know you’re making progress reducing AI risk than on issues like global health, pandemics, or factory farming, and there are radical disagreements over what needs to be done. However, there are now many concrete research projects that seem likely to help at least a bit.76

None of these will solve the problem entirely, but if we can stack lots of small safety improvements on top of one another, they could reduce the risks a lot in aggregate. There are other measures that could help, such as the ability to turn off large data centres if concerning behaviour is observed, or ensuring companies are more transparent about the behaviour of their most sophisticated models.

Reducing the chance of a risk that could kill everyone by 1% is equivalent to saving about 80 million lives, even without considering future generations.77 Achieving this requires not only engineers doing technical research, but also people in policy and communications to ensure their findings are implemented, as well as people with a wide range of skills to run and fund these organisations.

Many of the people we advised before 2020 to work on AI risks now lead teams dedicated to these measures. Neel Nanda was an undergraduate in maths and expected to continue into finance or to pursue a master’s. He felt he was a poor fit for academia, which seemed far too obscure and niche. And while he’d heard about technical AI safety, he didn’t necessarily see it as something he could work on, and he also felt sceptical about longtermism.

After discovering 80,000 Hours, we introduced Neel to a number of researchers working in the field, helping him find several internships. At this point, he realised that whatever he thought of longtermism, the arrival of AGI posed a real risk to people today and was something he could concretely work on.

Talking to 80,000 Hours was really helpful for seeing AI safety research as a more concrete and realistic career path, and deciding to take initial steps towards it. — Neel Nanda

In 2023, Neel joined Google DeepMind as a technical researcher, and now leads their mechanistic interpretability team. ‘Interpretability’ is the study of how AI systems work from the inside. In a similar way to how neuroscientists try to understand the brain, it aims to understand how the trillions of parameters within AI models interact to produce its behaviour. If successful, it might give us a tool to tell when AI systems are lying, or what goals they truly have. By mentoring lots of less experienced researchers, he’s helped turn this into a thriving field.

Let’s now suppose these measures work, and the problem of AI alignment and control were totally solved. Imagine we’re confident advanced AI will act as intended and not try to take over. Would we be out of the woods? Unfortunately, not really.

AI-enabled concentration of power

Humans could use an aligned AI to concentrate their power. If there’s an intelligence explosion, a company (or nation) with a six-month lead could suddenly turn that into the equivalent of a six-year one, drawing far ahead of competitors.

Today, dictators need to retain the loyalty of large numbers of people in the military. But, if the military were primarily controlled by AI, then in theory a single person could be given the controls. AI also makes universal surveillance possible, making it easier to control a human population than ever before. This all makes dictatorship much easier.

What’s more, there are numerous ways to put ‘backdoors’ into LLMs.78 A recent study showed how it’s possible to ‘poison’ the training data of an LLM so that it writes secure code up to a certain date and then switches to writing buggy code after that. In theory, a similar technique could be used to create an AI that would secretly switch political loyalties at some predetermined point.

We need to ensure alignment research gets implemented, and that AI can’t be used to create catastrophic bioweapons, all while maintaining some balance of power between major actors so that one can’t come to dominate. We also need to make sure there’s transparency around how the most powerful AI systems are being used and who exactly they are programmed to obey.

While it can feel like everyone is talking about AI all the time, the number of people actually tackling these risks is surprisingly small. The number of people doing research into AI control and alignment, for instance, is probably around 1,000.79

This is tiny when you consider the hundreds of billions of dollars invested each year to develop more powerful AI as soon as possible, or to the millions of people working on climate change or global health. Figuring out how to prevent AI from being used to concentrate power is far more neglected again, with only tens of people directly focused on it.80

A chart showing that annual US welfare spending (at $1.7 trillion) far outweighs spending on climate change (at $100 billion), catastrophic pandemics (at $10 billion), and AI alignment control (at $1 billion).

There are far too few people working on these risks from AI.81 If you were to switch path, you could likely be among the first 10,000 people helping humanity navigate what may be one of the most important transitions in history.

Are there weirder problems that are even more pressing again?

Back in 2015, when asked about the risk of AI takeover, leading AI researcher Andrew Ng said it was like “worrying about overpopulation on Mars.”82 Today, as we’ve seen, many of the most prominent figures in AI are concerned, and there have also been supportive statements from the Pope, Henry Kissinger, and the King of England.83

This is great progress, but as these risks have become less neglected, it raises the question: are there even weirder, more niche issues that could be even more pressing again — like AI safety back in 2015? Identifying something like that ahead of the crowd could let you have an even greater impact.

One category is other issues that could emerge downstream of an intelligence explosion. One example is ‘gradual disempowerment‘, but that’s a bit of a misnomer, because it could happen pretty fast. Rather, the risk is that, even if AI systems act as their users intend, purely systemic forces could result in an economy that’s hostile to human interests.

AI combined with robotics will eventually be able to convert energy into economic output far more efficiently than human workers. It’ll also eventually be better and faster at decision making. At that point, keeping humans in the loop in your military is suicide, because a fully AI military would operate so much faster.

Disappearing into a fully automated post-scarcity society doesn’t sound like the worst fate to me. But it only works if the system continues to protect us, and there are a few reasons to be sceptical it will.

Today, states that get most of their tax revenue from oil or mineral resources typically treat their citizens worse than those who rely on income taxes (because they don’t need their citizens for economic power).84 In the future, economic power will depend on how many AI chips and robots you can run, rather than labour.

We might all prefer not to cover the world with data centres, but if one nation decides to push ahead, it’ll end up with more AIs than everyone else. Simple economic competition, but unfolding at an accelerated rate, means that human interests get marginalised. As of yet, there are no convincing proposals to prevent this.

Another issue is how we decide to treat digital minds. No-one has a good theory of how consciousness comes about in humans, so being confident that sufficiently capable AI won’t become sentient is hubristic.

The default trajectory is to treat AIs as tools — or slaves. And yet giving AIs rights might not be wise either: they could rapidly dominate the world due to their far greater numbers. We’d like to see more thought put into how to navigate between these two extremes before advanced AI is upon us, but only a handful of people work on this today.

Other neglected grand challenges include how to regulate newly invented weapons of mass destruction, how to govern an expansion into space, and even more futuristic possibilities.85 Perhaps our only hope will be to use AI tools themselves to accelerate our ability to deal with these hugely complex problems.

If you don’t think an intelligence explosion will happen any time soon, and we set AI aside, another possibility is to try to think of even more neglected ways to address animal welfare. This could mean focusing on fish or shrimp, rather than chickens or pigs, because they are farmed in far greater numbers, or perhaps even focusing on the suffering of wild animals, which exist in far greater numbers again.

Finally, over the last 15 years, our views have changed several times, and they could change again. There may be new issues we haven’t even thought of yet, or much better ways to tackle existing ones. Hundreds of billions of dollars are spent each year trying to make the world a better place,86 but only a tiny fraction is devoted to figuring out how to spend those resources most effectively.87

We call this ‘global priorities research.’ If some issues are hundreds of times more pressing than others, then small improvements to our answers about what to work on could be worth a great deal. That means the project to find the world’s most pressing problem could itself be one of the world’s most pressing problems.

Which problems should you focus on?

As of writing, we think the top three (and nearly tied) most pressing global issues are:

  1. Loss of control of advanced AI systems
  2. AI-enabled concentration of power
  3. Engineered pandemics

Plus, we think that by helping to pioneer an emerging issue like gradual disempowerment, the moral status of digital minds, or AI tools for governance, the right person could have an even greater impact again.

After this, we recommend working on great power conflict, factory farming, wild animal suffering, global health, and climate change.

Ultimately, however, what matters is not our list but your personal list. We hope to be a source of ideas, but your ranking depends on many value judgements and assumptions.

In fact, even if you completely agree with our list, we don’t think everyone should work on the number-one ranked issue. It also depends on your motivations, skills, and specific opportunities. It would be better to take up an amazing opportunity to work on a second-tier issue than a mediocre opportunity on a top one. If you’re burned out, you won’t have much impact — even on an issue that is very pressing.

If you’ve already developed a certain skill, then typically your focus should be on finding a way to use that skill to tackle a pressing problem. It wouldn’t make sense, say, for a great economist to drop it all and become a biologist. There’s probably a way for them to apply economics to the issues they think matter most.

But also don’t rule out dramatic career changes too quickly. We’ve worked with lots of people who never thought they’d be able to do anything about AI or pandemics, but have eventually found fulfilling roles tackling these issues. This is important, because your choice of problem is probably the single biggest factor that will determine your impact. If we rate global problems in terms of how pressing they are, we might intuitively expect them to look like this:

Some problems are more pressing than others, but most are pretty good. In reality, however, we think it looks more like this:

This means which issue you direct your time towards can easily matter more than how much time you give, or how exactly you go about it. (I discuss this on our podcast here.)

These large differences arise because how pressing a problem is depends on the multiple of its scale, neglectedness, and solvability — and all of these can vary a lot.88

More concretely, we saw that the typical person working on one of the best global health interventions could likely have 100 times more impact than someone working on a typical US social issue on average. But given that AI risks receive under 1% as much investment as global health, and due to their existential scale, working on them seems plausibly another 100 times more impactful again.89

Whatever your views, if there’s one lesson we draw, it’s this: if you want to do good in the world, at some point you should take the time to learn about different global problems and how you might contribute to solving them. It takes time, and there’s a lot to learn, but it’s hard to imagine any question more interesting or more important.

Next up, how can you best tackle your chosen problems?

Put into practice

While you don’t need to have a solid answer to which problems to work on right at the start of your career, it’s useful to at least have a rough idea, since it can greatly affect which skills are most valuable to learn. Early on, we’d suggest spending at least a couple of days thinking about this question. Later on in your career, it becomes the most crucial determinant of your impact.

Here’s an exercise to help you start:

  1. Write down the top 2–5 problems you think most need additional people working on them. You can use the ideas above to help.

  2. What are you most uncertain about with respect to your list? What might cause you to reduce the ranking of an issue? Which new problems might be even more pressing? How are you most likely to be wrong about your list?

  3. Set aside some time to research those uncertainties. Ask yourself how you might best settle your uncertainties. For example, which three books could you read? Who could you talk to? If your views keep changing, and you have more time, keep researching.

See our up-to-date ranking and profiles of each problem

80,000 Hours career guide book cover

Get the whole guide as a book

If you find this guide helpful, preordering the book (especially from a physical retailer!) is a great way to support us.

Preorder the book

Notes and references

  1. Americans gave $484.85 billion to charity in 2021, with $27.44 billion going towards “international affairs.”

    Giving USA 2022 infographic.” Giving USA Foundation, 2022, givingusa.org/wp-content/uploads/2022/06/GivingUSA2022_Infographic.pdf.

  2. According to the January 2023 Post-Secondary Employment Outcomes data, one year after graduating:

    • Twenty-one percent of employed graduates are in healthcare (this remains at 21% at five years and 10 years after graduating).

    • Seventeen percent of employed graduates are in education (this rises to 19% at five years and 21% at 10 years after graduating).

    • Five percent of employed graduates are in public administration (this rises to 6% at five years and 7% at 10 years after graduating).

    Note that a large fraction of government spending goes into education and health, so those who go into government are also contributing to these areas.

    We downloaded the raw data from the Post-Secondary Employment Outcomes page of the US Census Bureau website and aggregated these figures ourselves.

    We’d guess that a high enough proportion of colleges are involved for these figures to be roughly right, but there may be some systematic bias (e.g. state colleges may be more likely to share data than private colleges).

    “Post-secondary employment outcomes (PSEO).” United States Census Bureau, January 2023, lehd.ces.census.gov/data/pseo_experimental.html.

  3. How many live in poverty globally? Exactly where to draw the line is arbitrary, but in June 2025, the World Bank set the poverty line at $3 per day (in 2021 USD, purchasing parity adjusted) and estimated that in 2025, there were 808 million people living below this level.

    Three USD is around $1,095 per year, and most live below this level. These amounts are adjusted for purchasing parity. See more discussion in our blog post on global income distribution.

    Filmer, Deon, et al. “Further strengthening how we measure global poverty.” World Bank Blogs, 5 June 2025, blogs.worldbank.org/en/voices/further-strengthening-how-we-measure-global-poverty.

  4. The US Census Bureau report “Poverty in United States: 2022” finds 37.9 million Americans living below the US poverty line:

    The official poverty rate in 2021 was 11.6 per cent, with 37.9 million people in poverty.

    The US poverty threshold varies depending on the size of the household. For a single person, the threshold in 2022 was $13,950.

    “2025 poverty guidelines.” Office of the Assistant Secretary for Planning and Evaluation, 2025, web.archive.org/web/20250911045923/https://aspe.hhs.gov/topics/poverty-economic-mobility/poverty-guidelines.

    Shrider, Emily A., and John Creamer. Poverty in the United States: 2022. U.S. Census Bureau, Current Population Reports P60-280, September 2023, census.gov/content/dam/Census/library/publications/2023/demo/p60-280.pdf.

    U.S. Department of Health and Human Services. “Annual update of the HHS poverty guidelines.” Federal Register, 19 January 2023, web.archive.org/web/20230517124808/https://www.federalregister.gov/documents/2023/01/19/2023-00885/annual-update-of-the-hhs-poverty-guidelines.

  5. Total ODA (overseas development assistance) spending in 2021 was $178.9 billion. Note, official ODA only includes spending by the 31 members of the OECD Development Assistance Committee (DAC) (roughly, European and North American countries, the EU, Japan, and South Korea). This amount is likely to decline due to the U.S. Agency for International Development (USAID) cuts.

    The OECD estimate of ODA-like flows from key providers of development cooperation that do not report to the OECD-DAC was $4 billion in 2020.

    They note that:

    Scholars have estimated that China’s development aid is much larger [than the reported USD 3.2 billion in 2019 and USD 2.9 billion in 2020], standing at USD 5.9 billion in 2018 (see Kitano and Miyabayashi) or as high as USD 7.9 billion if one includes preferential buyers credits (see Kitano 2019). China’s development co-operation is estimated to have decreased due to expenditure cuts to deal with COVID-19 (Kitano and Miyabayashi).

    The OECD measure of Total Official Support for Sustainable Development (TOSSD), which also includes loans, investments, and spending by many, but not all, other countries (including ‘South-South’ spending by developing countries in other developing countries) came to a total of $434 billion in 2021.

    There is also international philanthropy, but we don’t think adding it would more than double the figure. The US is the largest source of philanthropic funding at $400–500 billion, but only a few percent goes to international causes. A Giving USA report estimated that US giving to “international affairs” was only $27 billion in 2021.

    Moreover, if we were to include international philanthropy, we’d need to include philanthropic spending on poor people in the US. Estimates of welfare spending vary depending on exactly what is included. Total spending also varies from year to year. We used a representative figure from usgovernmentspending.com:

    In FY 2022 total US government spending on welfare — federal, state, and local — was ‘guesstimated’ to be $1,662 billion, including $792 billion for Medicaid, and $869 billion in other welfare.

    Chantrill, Christopher. “US welfare spending — 2022.” USGovernmentSpending.com, 20 January 2023, web.archive.org/web/20230120080955/https://www.usgovernmentspending.com/welfare_spending.

    Giving USA 2022 infographic.” Giving USA Foundation, 2022, givingusa.org/wp-content/uploads/2022/06/GivingUSA2022_Infographic.pdf.

    Gualberti, G., et al. Total official support for sustainable development — Data comparison study for Bangladesh, Cameroon and Colombia. OECD Development Co-operation Working Papers, no. 109, OECD Publishing, 2022, one.oecd.org/document/DCD(2022)30/en/pdf.

    “ODA levels in 2021: Preliminary data.” OECD, 12 April 2022, web.archive.org/web/20230223131032/https://www.oecd.org/dac/financing-sustainable-development/development-finance-standards/ODA-2021-summary.pdf.

  6. Oral rehydration therapy, which rose to prominence during the 1971 Bangladesh Liberation War, cut mortality rates from 30% to 3%, cutting annual diarrhoeal deaths from 4.6 million to 1.6 million over the previous four decades.

    All wars, democides, and politically motivated famines killed an estimated 160–240 million people during the 20th century, or an average of 1.6–2.4 million per year.

    Ord, Toby. “Aid works (on average).” StudyLib, studylib.net/doc/13259236/aid-works–on-average–toby-ord-president–giving-what-we. Accessed 11 September 2025

  7. And, as we saw, even the most prominent critics of international aid point out that health interventions have been the exception.

    See some other examples of prominent aid sceptics supporting global health in “The lack of controversy over well-targeted aid.”

    Karnofsky, Holden. “The lack of controversy over well-targeted aid.” The GiveWell Blog, 26 July 2016, blog.givewell.org/2015/11/06/the-lack-of-controversy-over-well-targeted-aid/.

  8. [In the DCP2] in total, the interventions are spread over more than four orders of magnitude, ranging from 0.02 to 300 DALYs per $1,000, with a median of 5. Thus, moving money from the least effective intervention to the most effective would produce about 15,000 times the benefit, and even moving it from the median intervention to the most effective would produce about 60 times the benefit.

    Toby Ord

    In our analysis, we found that the mean intervention had an effectiveness of 24 DALYs averted per $1,000.

    Note that a DALY is a ‘disability-adjusted life year,’ i.e. a year of life lost to ill health — the opposite of a ‘quality-adjusted life year.’

    If you selected an intervention at random, then on average you’d pick something with the mean effectiveness. Most of the interventions are worse than the mean, but if you picked randomly you’d have a small chance of landing on the top one.

    Ord, Toby. “The moral imperative toward cost-effectiveness in global health.” Center for Global Development, March 2013, files.ethz.ch/isn/162329/1427016_file_moral_imperative_cost_effectiveness.pdf.

  9. After 10 years of further research and debate, GiveWell still believes that deworming is unusually cost effective in expectation. Although there’s a good chance it doesn’t work in many situations, the cost per person is so low, and the potential long-term benefits so high, that it still seems a good bet. GiveWell considers deworming programmes to be about as cost effective as their other priority programmes (which include interventions like malaria prevention and vitamin A supplementation), based primarily on the possibility of long-term developmental effects, rather than short-term health benefits.

    More broadly, we should also expect the effectiveness of the best interventions to be overstated due to regression to the mean.

    Berger, Alexander. “Errors in DCP2 cost-effectiveness estimate for deworming.” The GiveWell Blog, 3 February 2014, blog.givewell.org/2011/09/29/errors-in-dcp2-cost-effectiveness-estimate-for-deworming/.

    “Combination deworming (mass drug administration targeting both schistosomiasis and soil-transmitted helminths).” GiveWell, March 2025, web.archive.org/web/20251022210842/https://www.givewell.org/international/technical/programs/deworming.

  10. Because, as we’ve discussed, the relationship between income and happiness is approximately logarithmic.

  11. In 2018, GiveWell estimated that it cost $900 to do an amount of good equivalent to averting the death of an individual under five through the most effective global health intervention: Deworm the World. GiveWell estimates that it costs $11,300 to do an equivalent amount of good by giving cash to the global poor through donating to GiveDirectly. This would imply that the best global health interventions are 13 times more effective than giving cash to the global poor, which is in turn more cost effective than many common interventions. To be conservative, we assume that global health interventions are only five times more effective.

    See GiveWell’s cost-effectiveness analysis here.

  12. The 100-fold comparison is with a typical rich country social intervention and one of the most cost-effective global health interventions. If we compare the best interventions helping the poor in high-income countries with the best ways of helping the poor in low-income countries, the difference should be more like 20-fold.

  13. Below a most plausible ICER (incremental cost-effectiveness ratio) of £20,000 per QALY gained, the decision to recommend the use of a technology is normally based on the cost-effectiveness estimate and the acceptability of a technology as an effective use of NHS resources.

    NICE health technology evaluations: The manual (PMG36).” National Institute for Health and Care Excellence, 14 July 2025, nice.org.uk/process/pmg36/resources/nice-health-technology-evaluations-the-manual-pdf-72286779244741.

  14. Additional errors in this estimate will tend to reduce the difference between the two, a phenomenon called “regression to the mean.” I’ve tried to take account of this, but may not have fully corrected for it. The 300-fold difference is also only a comparison on short-run health benefits. However, in the past, making the US wealthier has spillover benefits to the developing world, such as increased foreign aid and improved technology transfer (e.g. mobile phones). Taking account of these could also somewhat reduce the all-considered size of the difference in impact. On the other hand, improving health in low-income countries likely has significant economic benefits. These countries currently have faster growth rates than high-income countries, due to catch-up growth, and improving health could speed up this process.

    Shulman, Carl. “What portion of a boost to global GDP goes to the poor?” Reflective Disequilibrium, 23 January 2014, reflectivedisequilibrium.blogspot.com/2014/01/what-portion-of-boost-to-global-gdp.html.

  15. Peter Singer’s Animal Liberation helped to start the modern animal welfare movement.

  16. Every year, we kill somewhere between 400 billion and 3 trillion vertebrates (e.g. cows, chickens, fish) — some are killed for sport and some are dissected for experiments, but the vast majority are either slaughtered for food or die in farms before they’re old enough to be purposefully slaughtered.

    Benjamin Hilton, Factory Farming

  17. A 2024 survey of slaughter methods carried out by the UK Department for Environment, Food and Rural Affairs found that 90% of pigs are killed in gas chambers.

    Department for Environment, Food and Rural Affairs. Slaughter sector survey 2024. February 2025, assets.publishing.service.gov.uk/media/67c5cf0e750837d7604dbdbf/25-02-14_Slaughter_Sector_Survey_2024__REVISED_.pdf.

  18. Using publicly available data published by the USDA Census of Agriculture, the Sentience Institute estimates that 99% of livestock in the US in 2022 were factory farmed.

    Anthis, Jacy Reese. “US factory farming estimates.” Sentience Institute, 2 November 2024, sentienceinstitute.org/us-factory-farming-estimates.

  19. The Donkey Sanctuary reported an income of £53.3 million in 2023, including £47 million from donations and legacies, ranking it among the 10 highest-earning animal-focused charities in the UK.

    Charity Commission for England and Wales. “The Donkey Sanctuary.” Charity Commission for England and Wales, register-of-charities.charitycommission.gov.uk/en/charity-search/-/charity-details/264818.

    Charity Commission for England and Wales. Register of charities. register-of-charities.charitycommission.gov.uk/en/.

  20. One estimate by Founders Pledge put the figure at 0.03%, and we’ve seen other estimates in a similar ballpark.

    Clare, Stephen, and Aidan Goth. Animal welfare report. Founders Pledge, November 2020, founderspledge.com/research/animal-welfare-cause-report.

  21. Giving to international development from US donors totalled roughly $40.58 billion in 2021.

  22. Gallup polling shows that vegetarian rates in the US have remained essentially stagnant over the last two decades, hovering between 4–6% since 1999, with the most recent 2023 data showing 4% vegetarian and 1% vegan.

    Jones, Jeffrey M. “In U.S., 4% identify as vegetarian, 1% as vegan.” Gallup, 24 August 2023, news.gallup.com/poll/510038/identify-vegetarian-vegan.aspx.

  23. Lewis Bollard is the programme officer at Coefficient Giving (formerly known as Open Philanthropy), who funded these campaigns.

    By the end of 2022, 88% of these companies had followed through on their pledges to go cage-free.

    Lewis Bollard, Big wins for farm animals this decade

    Cage-free eggs cost 19 cents more per dozen; roughly 1.6 cents more per egg than those from caged hens.

    Lewis Bollard, Artificial Meat Is Harder than Artificial Intelligence, Dwarkesh Podcast

    The figure for spending and number of chickens affected are from personal correspondence with Lewis. Note that Coefficient Giving has also been our largest funder.

    Bollard, Lewis. “Artificial meat is harder than artificial intelligence.” Dwarkesh Podcast, hosted by Dwarkesh Patel, 7 August 2025, dwarkesh.com/p/lewis-bollard.

    Bollard, Lewis. “Big wins for farm animals this decade.” Open Philanthropy, 22 December 2022, openphilanthropy.org/research/big-wins-for-farm-animals-this-decade/.

  24. Around one quarter of global greenhouse gas emissions come from food systems, with livestock and fisheries accounting for approximately 30% of these. Beef production, for example, generates approximately 60 kilograms of CO2-equivalent emissions per kilogram of meat produced. Peas generate only about 1 kilogram of CO2-equivalent per kilogram produced, representing a 60-fold difference in greenhouse gas intensity between the two protein sources.

    Ritchie, Hannah, et al. “Environmental impacts of food production.” Our World in Data, 2 December 2022, ourworldindata.org/environmental-impacts-of-food.

  25. Climate scientists disagree on exactly how much longer Earth will remain habitable. Their models generally predict that Earth will remain habitable for somewhere between the next few hundred million years and over a billion years:

    Two new modeling studies find that the gradually brightening sun won’t vaporize our planet’s water for at least another 1 billion to 1.5 billion years — hundreds of millions of years later than a slightly older model had forecast.”

    Kollipara, Puneet. “Earth won’t die as soon as thought.” Science, 22 January 2014, science.org/content/article/earth-wont-die-soon-thought.

  26. It’s possible that future generations would live for longer than 100 years. This would probably reduce the number of future generations, but wouldn’t necessarily decrease the number of future people. More importantly, my estimate is a major lowball — if civilisation expands into space, then in principle it could persist for billions of years across millions of star systems.

  27. A 2019 Ipsos/Amnesty International poll surveyed over 10,000 young people aged 18–25 across 22 countries, finding that 41% identified climate change as one of the most important issues facing the world. This was the most-cited response, ahead of issues like corruption, terrorism, and income inequality.

    Climate change topped the list of global concerns in a 2016 World Economic Forum study that asked 26,000 millennials from 181 countries to identify the three most serious problems facing humanity — 45% of those surveyed included it in their responses.

    99% of young Europeans identified climate change and environmental degradation as the world’s most serious threats in a 2020 Ipsos/ActionAid survey. The study polled over 22,300 respondents aged 15–35 across 23 European countries.

    “90% of young Europeans believe climate change and environmental breakdown are the world’s greatest threats.” ActionAid International, 22 April 2021, actionaid.org/news/2021/90-young-europeans-believe-climate-change-and-environmental-breakdown-are-worlds-greatest.

    “Climate change ranks highest as vital issue of our time — Generation Z survey.” Amnesty International, 10 December 2019, amnesty.org/en/latest/press-release/2019/12/climate-change-ranks-highest-as-vital-issue-of-our-time/.

    Loudenback, Tanza. “The 10 most critical problems in the world, according to millennials.” Business Insider, 23 August 2016, web.archive.org/web/20170105102012/http://www.businessinsider.com/world-economic-forum-world-biggest-problems-concerning-millennials-2016-8.

  28. Three in four Americans think climate change will eventually result in the extinction of humanity, according to new research. A new survey of 2,000 Americans aiming to reveal just how much ‘climate anxiety’ people carry found that nearly half of Americans think climate change will result in the end of the world within the next 200 years. Not only that, but one in five millennials think climate change will trigger the end of the world in their lifetime.

    Schmall, Tyler. “Most people believe climate change will cause humanity’s extinction.” New York Post, 22 April 2019, web.archive.org/web/20260101045018/https://nypost.com/2019/04/22/most-people-believe-climate-change-will-cause-humanitys-extinction/.

  29. For more discussion of the case for reducing existential risks, and which risks are most pressing (including why natural risk is low) see The Precipice: Existential Risk and the Future of Humanity by Toby Ord.

    Although increasing the priority of reducing existential risk is the most commonly argued implication of longtermism, there are alternative views. In “Better futures,” Will MacAskill argues more attention should be given to making good futures even better, rather than only reducing the chance of near-term catastrophes.

  30. This definition is from Toby Ord. Other experts have given alternatives, but they all involve a permanent loss of value.

  31. This is a massive topic and I don’t have space to do it justice here. You can see a much more in-depth report on whether climate change could end civilisation in our problem profile on climate change.

    Also see this article about why the risk of worst-case outcomes has declined in the last 5–10 years.

    Ackva, Johannes. “Good news on climate change.” Effective Altruism Forum, 28 October 2021, forum.effectivealtruism.org/posts/ckPSrWeghc4gNsShK/good-news-on-climate-change.

  32. A 200% increase in yields due to innovation, minus a 30% decline due to climate change (worse than what we’d expect with 5ºC of warming), means yields end up 2.1 times higher than today.

  33. The Network for Greening the Financial System, a global banking group that models environmental and climate risks for the financial sector, projects losses exceeding 30% by 2100 if global average temperatures rise by 3°C.

    Laville, Sandra. “Climate breakdown will hit global growth by a third, say central banks.” The Guardian, 8 November 2024, web.archive.org/web/20250804212916/https://www.theguardian.com/business/2024/nov/08/climate-breakdown-will-hit-global-growth-by-a-third-say-central-banks.

  34. The Environmental Protection Agency’s Greenhouse Gas Reduction Fund has awarded $27 billion and the Department of Energy’s Regional Clean Hydrogen Hubs programme has up to $7 billion allocated, and they’re just two of many federal climate programmes.

    U.S. Department of Energy. “Remarks as prepared for delivery by Secretary Jennifer Granholm at President Biden’s unveiling of America’s first clean hydrogen hubs.” Energy.gov, 13 October 2023, energy.gov/articles/remarks-prepared-delivery-secretary-jennifer-granholm-president-bidens-unveiling-americas.

    United States Environmental Protection Agency. “EPA awards $27B in greenhouse gas reduction fund grants to accelerate clean energy solutions, combat the climate crisis, and save families money.” United States Environmental Protection Agency, 16 August 2024, epa.gov/newsreleases/epa-awards-27b-greenhouse-gas-reduction-fund-grants-accelerate-clean-energy-solutions.

  35. The IPCC states:

    From 2010 to 2019, there have been sustained decreases in the unit costs of solar energy (85%), wind energy (55%), and lithium-ion batteries (85%), and large increases in their deployment, e.g., >10× for solar and >100× for electric vehicles (EVs), varying widely across regions.

    These decreases are driven by ongoing R&D, as well as larger and larger deployment, which leads to improved efficiency and economies of scale, so they should continue into the future.

    Intergovernmental Panel on Climate Change. “Summary for policymakers.” Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, edited by P. R. Shukla et al., Cambridge University Press, 2022, pp. 3–48, doi.org/10.1017/9781009157926.001.

  36. The DNA sequence of smallpox, as well as other potentially dangerous pathogens such as polio virus and 1918 flu are freely available in online public databases. So, to build a virus from scratch, a terrorist would simply order consecutive lengths of DNA along the sequence and glue them together in the correct order. This is beyond the skills and equipment of the kitchen chemist, but could be achieved by a well-funded terrorist with access to a basic lab and PhD-level personnel.

    One study estimated that because most people on the planet have no resistance to the extinct virus, an initial release which infected just 10 people would spread to 2.2 million people in 180 days.

    Randerson, James. “Revealed: The lax laws that could allow assembly of deadly virus DNA.” The Guardian, 14 June 2006, web.archive.org/web/20251023024723/https://www.theguardian.com/world/2006/jun/14/terrorism.topstories3.

  37. For example, our first-published list of pressing problems was from 2014. It included global catastrophic risks as 3rd ranked, and featured biological risks as among the most pressing within that category.

  38. Using data from the 2022 International Monetary Fund Economic Outlook, Argawal et al. estimate that the world’s income was $13.8 trillion lower than it would have been without the pandemic.

  39. In 2016, undergraduate students from the University of Minnesota nearly built a ‘gene drive’ — a genetic system that can force traits through wild populations — as part of a biology competition, demonstrating that sophisticated techniques are increasingly accessible to small academic teams.

    Kevin Esvelt, inventor of clustered regularly interspaced short palindromic repeats (CRISPR)-based gene drives, described how his second-year graduate student successfully synthesised an influenza virus from scratch using only online protocols, despite having no prior virology experience. When asked if his other students could design components to recreate the 1918 flu strain, Esvelt noted “they all could.”

    Swetlitz, Ike. “College students almost engineer controversial gene drive.” PBS NewsHour, 15 December 2016, pbs.org/newshour/health/watchful-eyes-students-come-close-engineering-gene-drive.

  40. Available forecasts typically agree with this assessment. For instance, in the Ragnarök Question Series conducted by Metaculus, forecasters assigned a naturally occurring pandemic or synthetic bioweapon resulting in a 10% population decrease by 2100 a probability of 13.68%, compared to 10.8% for nuclear war and 2.16% for climate change. These figures are based on almost 5,000 predictions collected over a five-year period from Metaculus’s forecasting community.

  41. As we saw, an all-out nuclear war would likely trigger a nuclear winter that could cause billions of people to die of starvation. This would be among the worst ever events in history, but would still likely leave over half of the population alive. Regions that aren’t attacked and have warmer climates, such as South America, would survive wholesale. In contrast, a highly infectious bioweapon could theoretically kill over 99% of people.

    Rodriguez, Luisa. “How bad would a nuclear winter caused by a US–Russia nuclear exchange be?” Effective Altruism Forum, 11 June 2019, forum.effectivealtruism.org/posts/pMsnCieusmYqGW26W/how-bad-would-nuclear-winter-caused-by-a-us-russia-nuclear.

  42. Annual global spending on mitigating and preventing disease outbreaks was around $130 billion in 2024, primarily by governments. Philanthropic funding was $1 billion.

    However, most of this funding is spent on efforts that wouldn’t help in a worst-case scenario caused by a novel engineered pandemic. For instance, it’s for vaccine development for known diseases, or is focused on anthrax, which can’t spread from human-to-human.

    Moreover, very little of that is targeted at the efforts that seem most effective at reducing risks in the worst-case scenarios: likely under $100m. For instance, some of the biggest recent progress on reducing existential risks from pandemics came from work on mirror life, efforts very different from conventional biosecurity work.

    Making an overall estimate of how much funding there is for catastrophic biorisks involves quality-weighting the existing spending depending on how relevant it is. Greg Lewis, a biosecurity expert we’ve worked with in the past, made an overall estimate of $1–10 billion. However exactly you make the comparison, it’s hard to avoid the conclusion that it’s at least five times more neglected than climate change.

    Adamala, Katarzyna P., et al. “Confronting risks of mirror life.” Science, vol. 386, no. 6728, 2024, pp. 1351–1353.

    C.K. “How well-funded is biosecurity philanthropy?” Effective Altruism Forum, 4 April 2024, forum.effectivealtruism.org/posts/pnincG5vW8Far8Ggg/how-well-funded-is-biosecurity-philanthropy.

  43. Read more about the case for reducing catastrophic pandemics and what can be done about them in our online profile and the linked further reading and interviews, and see our expert interviews on the issue.

  44. Graph produced from table A.4: Maddison, Angus. Contours of the world economy, 1–2030 AD: Essays in macroeconomic history. Oxford University Press, 2007, p. 379.

  45. We first wrote about the importance of AI from a longtermist perspective on the blog in 2013.

    We gradually came to believe it was more pressing over time. In 2014, we featured it as a concerning global catastrophic risk in our list of most pressing problems.

    By 2016, we were more confident in this view, for instance, publishing “Is now the time to do something about AI?

  46. In 2014, Oxford University Professor Nick Bostrom predicted that it would take 10 years for a computer to beat the top human player at the Chinese game of Go. But it was achieved in March 2016 by Google DeepMind.

  47. Chimpanzees are classified as an endangered species on The International Union for Conservation of Nature’s Red List. Current estimates place the wild chimpanzee population between 172,700–299,700 individuals, representing a steep drop from the roughly 1 million chimpanzees that existed in the early 1900s.

  48. The “2023 Expert Survey on Progress in AI,” which surveyed 2,778 AI researchers, found that the median estimate for when “high-level machine intelligence” (HLMI) would be achieved with 50% probability was 2047. The survey defined HLMI as being “achieved when unaided machines can accomplish every task better and more cheaply than human workers.”

  49. On the Graduate-Level Google-Proof Q&A (GPQA) benchmark, which tests PhD-level knowledge in biology, physics, and chemistry, recent AI models have exceeded human expert performance. While PhD holders achieve approximately 65–74% accuracy on these questions, models like Gemini 2.5, GPT-5, and Grok 4 have achieved scores above 85% as of 2025.

  50. OpenAI’s o1-ioi system achieved a Codeforces rating of 2214 (98th percentile) in 2024, and o3 reached 2724 (99.8th percentile) in early 2025, placing them above most human competitive programmers.

    El-Kishky, Ahmed, et al. “Competitive programming with large reasoning models.” arXiv, 18 February 2025, doi.org/10.48550/arXiv.2502.06807.

  51. In 2025, both OpenAI and Google DeepMind models scored 35 out of 42 points at the International Mathematical Olympiad, meeting the gold medal threshold for that year.

    Wilkins, Alex. “DeepMind and OpenAI claim gold in International Mathematical Olympiad.” New Scientist, 22 July 2025, web.archive.org/web/20250722203818/https://www.newscientist.com/article/2489248-deepmind-and-openai-claim-gold-in-international-mathematical-olympiad/.

  52. The concept of recursive self-improvement in artificial intelligence dates back to the field’s founding figures. I. J. Good, who worked with Alan Turing at Bletchley Park, wrote in 1966

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.

    In 1950, Turing wrote:

    … a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure. By observing the results of its own behaviour it can modify its own programmes so as to achieve some purpose more effectively.

    Good, Irving John. “Speculations concerning the first ultraintelligent machine.” Advances in Computers, vol. 6, 1966, pp. 31–88, incompleteideas.net/papers/Good65ultraintelligent.pdf.

    Turing, Alan M. “Computing machinery and intelligence.” Mind, vol. 59, no. 236, October 1950, pp. 433–460, cs.ox.ac.uk/activities/ieg/e-library/sources/t_article.pdf.

  53. In fact, a lot of AI research might be easier to automate than many other jobs, because it’s a purely virtual task with clear metrics, there are no regulatory barriers, and it’s what people at the AI companies best understand how to do.

  54. Epoch AI estimates that OpenAI in 2025 has enough computing power to run about 7 million ‘AI workers’ with the abilities of GPT-5. There are other companies with a comparable amount of computing power, and Google has significantly more.

    This number could increase going forwards as available computational power is increasing around 2–4 times per year, and because inference efficiency is increasing rapidly. At a fixed capability level, the number of models that can be run also often increases over 10 times per year. But it could also decrease if future models are larger or require multimodal inputs. Overall, I expect it to increase somewhat.

    This estimate is in line with previous estimates. For instance, Eth and Davidson estimate that OpenAI could run the equivalent of millions of human workers with the capabilities of its leading model at the time in the late 2020s:

    If you have enough computing power to train a frontier AI system today, then you have enough computing power to subsequently run probably hundreds of thousands of copies of this system (with each copy producing about ten words per second, if we’re talking about LLMs). But this number is only increasing as AI systems are becoming larger. Within a few years, it’ll likely be the case that if you can train a frontier AI system, you’ll be able to then run many millions of copies of the system at once.

    Denain, Jean-Stanislas, et al. “How many digital workers could OpenAI deploy?” Epoch AI, 3 October 2025, epoch.ai/gradient-updates/how-many-digital-workers-could-openai-deploy.

    Eth, Daniel, and Tom Davidson. “Will AI R&D automation cause a software intelligence explosion?” Forethought, 26 March 2025, forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion.

  55. Because algorithmic progress will also have increased capabilities, as well as efficiency.

  56. Nvidia’s data centre revenues have more than doubled each year for the last three years (and account for the majority of spending on AI chips; the second biggest source being Google’s investment into Tensor Processing Units, which has also grown rapidly). In addition, the chips have become over 30% more efficient per dollar each year, so in terms of computational power, production is increasing over 2.6 times per year. Each chip lasts for 4–6 years, but this rapid rate of growth means that about half of the computational power comes from the chips produced in the last year, and that total available computational power is roughly doubling each year.

    NVIDIA Corporation. Form 10-K: For the fiscal year ended January 26, 2025. 2025, s201.q4cdn.com/141608511/files/doc_financials/2025/q4/177440d5-3b32-4185-8cc8-95500a9dc783.pdf.

    Rahman, Robi. “Performance per dollar improves around 30% each year.” Epoch AI, 2024, epoch.ai/data-insights/price-performance-hardware.

    You, Josh, and David Owen. “Leading AI companies have hundreds of thousands of cutting-edge AI chips.” Epoch AI, 2024, epoch.ai/data-insights/computing-capacity.

  57. For instance, OpenAI has announced their internal target is to have an automated AI researcher by 2028, with the goal of creating superintelligence soon after. This specific timeline is probably too optimistic, but it reveals what they’re aiming towards.

    Epoch data shows that by October 1, 2025, the combined funding for OpenAI, Anthropic, and xAI reached roughly $94 billion, representing about a twofold increase from October 2024. In the near future, OpenAI alone has committed to spending around $1 trillion, mostly on deals for additional compute infrastructure.

    Altman, Sam. “The gentle singularity.” Sam Altman’s Blog, 10 June 2025, blog.samaltman.com/the-gentle-singularity.

    Bellan, Rebecca. “Sam Altman says OpenAI will have a ‘legitimate AI researcher’ by 2028.” TechCrunch, 28 October 2025, techcrunch.com/2025/10/28/sam-altman-says-openai-will-have-a-legitimate-ai-researcher-by-2028/.

    Epoch AI. “Data on AI companies.” Epoch AI, 4 November 2025, epoch.ai/data/ai-companies.

    McMahon, Bryan. “The AI ouroboros.” The American Prospect, 15 October 2025, prospect.org/2025/10/15/2025-10-15-nvidia-openai-ai-oracle-chips/.

  58. According to UNESCO’s most recent comprehensive data, there were 8.854 million full-time equivalent (FTE) researchers worldwide in 2018, representing a 13.7% increase from 2014. Given the observed growth rate of around 3.4% per year between 2014 and 2018, and assuming continued growth at a similar pace, the global researcher population would be expected to approach or exceed 10 million by 2025.

    “Statistics and resources.” UNESCO Science Report 2021: The race against time for smarter development, UNESCO, 2021, unesco.org/reports/science/2021/en/statistics.

  59. Though the acceleration wouldn’t be confined to hard technology, but rather all forms of intellectual and scientific progress, especially those that can mainly be done virtually. This would include things like the invention of new political philosophies. Just as the 20th century had to navigate new ideologies like communism and fascism over 100 years, we might need to navigate radical new alternatives in just 10.

  60. If you object that there wouldn’t be enough land area for these robots, please consider: “7.3 Billion People, One Building.”

    Urban, Tim. “7.3 billion people, one building.” Wait But Why, March 2015, waitbutwhy.com/2015/03/7-3-billion-people-one-building.html.

  61. Mainstream forecasts and financial markets still imply economic growth will be similar to the past, showing that any kind of acceleration is still a contrarian position.

  62. To see some ways AI could accelerate medical research and provide other major benefits, read Dario Amodei’s “Machines of loving grace.”

  63. The statement was released by the Center for AI Safety on May 30, 2023, and signed by Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and Dario Amodei (CEO of Anthropic), along with over 350 other signatories, including AI researchers Geoffrey Hinton and Yoshua Bengio.

  64. With almost 1 million citations each, Yoshua Bengio and Geoffrey Hinton are the two most-cited living scientists, according to Google Scholar as of October 2025.

    Bengio, Yoshua. “Yoshua Bengio – Google Scholar citations.” Google Scholar, web.archive.org/web/20260105114052/https://scholar.google.com/citations?user=kukA0LcAAAAJ.

    “Highly cited researchers – citation rankings.” AD Scientific Index, 2026, archive.ph/wip/q2E0D.

    Hinton, Geoffrey E. “Geoffrey Hinton – Google Scholar citations.” Google Scholar, web.archive.org/web/20260105115024/https://scholar.google.com/citations?user=JicYPdAAAAAJ.

  65. A recent paper found human red teamers were able to jailbreak state-of-the-art models in 100% of cases, i.e. make them provide responses that were supposed to be forbidden. They were also able to design automated attacks that usually succeeded in 90% of cases. Unlike previous evaluations, these attacks were adaptive, making them far more successful. But even weak attacks will often succeed in 10% of cases.

    Nasr, Milad, et al. “The attacker moves second: Stronger adaptive attacks bypass defences against LLM jailbreaks and prompt injections.” arXiv, 10 October 2025, doi.org/10.48550/arXiv.2510.09023.

  66. This is an amusing example from Reddit. Serious red teamers use more sophisticated techniques, such as in Milad et al. 2025.

  67. The change to the system prompt is documented in xAI’s public github. On July 7, 2025, a change was submitted reading:

    The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.

  68. In xAI’s thread explaining the incident, they cited this instruction as one of the factors that led to increasingly extreme behaviour.

  69. Another famous example is Microsoft’s Bing trying to convince The New York Times journalist Kevin Roose to leave his wife in order to be with it.

    Roose, Kevin. “A conversation with Bing’s chatbot left me deeply unsettled.” The New York Times, 16 February 2023, web.archive.org/web/20260123024900/https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html.

  70. More specifically, there is supervised learning (‘did the model predict the data?’) and reinforcement learning (‘did the model produce an output matching the reward function, whether that’s human feedback or an objectively verifiable answer?’).

  71. We can say a system has a ‘goal’ when it tends to act in ways more likely to bring about a certain state. A chess AI has the ‘goal’ of winning at chess, in the sense that its moves will make it more likely to win. A money-making AI will take actions more likely to lead to profit. Neither needs to be conscious or have goals in the same way as humans.

  72. o3 was subject to much more reinforcement learning on the production of solutions to coding challenges. This appears to have made it reward hack a lot more as a side effect.

  73. Models have been getting better at judging whether they’re within training or deployment. One way this has been measured is with the Situational Awareness Benchmark, which shows a clearly increasing trend over generations of models.

    Laine, Rudolf, et al. “Me, myself, and AI: The situational awareness dataset (SAD) for LLMs.” Advances in Neural Information Processing Systems, vol. 37, 2024, pp. 64010–64018, situational-awareness-dataset.org/#results.

  74. AI alignment is creating AI systems that act as intended; AI control is setting up safeguards to ensure it’s not disastrous if AI ends up misaligned.

  75. For instance, Mark Zuckerberg recently said he’d rather risk “misspending a couple of hundred billion” than be late to superintelligence.

    Ming, Lee Chong. “Mark Zuckerberg says he’d rather risk ‘mis-spending a couple of hundred billion’ than be late to superintelligence.” Business Insider, 19 September 2025, web.archive.org/web/20251121140747/https://www.businessinsider.com/mark-zuckerberg-meta-risk-billions-miss-superintelligence-ai-bubble-2025-9.

  76. Our interview with Holden Karnofsky features a discussion on well-scoped, object-level work in AI safety.

    Coefficient Giving has a long list of ideas for research projects in technical AI safety.

    Here’s a list of concrete projects in interpretability research specifically.

    We discuss projects in AI control research in our podcast with Buck Shlegeris.

  77. The world population is about 8 billion, so saving 1% of those people in expectation is 80 million.

  78. For example, researchers have demonstrated that it’s practical to poison web-scale training datasets by injecting malicious examples that teach models to exhibit harmful behaviour when triggered by specific inputs, showing that even a small fraction of poisoned data can compromise model safety at scale.

    Carlini, Nicholas, et al. “Poisoning web-scale training datasets is practical.” arXiv, 6 May 2024, arxiv.org/abs/2302.10149.

  79. One 2025 analysis estimated there were approximately 600 full-time technical AI safety researchers and 500 full-time nontechnical AI safety researchers globally, for a total of around 1,100 researchers working on AI alignment and safety.

    McAleese, Stephen. “AI safety field growth analysis 2025.” LessWrong, 27 September 2025, lesswrong.com/posts/8QjAnWyuE9fktPRgS/ai-safety-field-growth-analysis-2025.

  80. Graph produced from table A.4: Maddison, Angus. Contours of the World Economy, 1–2030 AD: Essays in Macro-Economic History. Oxford University Press, 2007, p. 379.

  81. The figure for US social welfare spending is taken from earlier. The other three figures are an approximate overall order of magnitude estimate of investment into each problem. I’ve focused only on government or grant funding, so didn’t count the $1 trillion-plus in climate finance. For biorisk and AI, I’ve used upper estimates to be conservative — you could argue the true amounts are a lot lower.

  82. Ng stated that there was “no clear path to how AI [could] become sentient” and that concerns were better directed towards the impact of AI on jobs and education, rather than control over humans.

    Lynch, Shana. “Andrew Ng: Why AI is the new electricity.” Stanford Graduate School of Business, 11 March 2017, gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity

  83. Pope Francis expressed concern about artificial intelligence’s risks, stating that AI “must remain a tool in human hands” and warning that unchecked AI development could threaten human dignity, equality, and social harmony.

    Henry Kissinger devoted his final years to warning about AI risks, stating in a May 2023 CBS interview:

    The speed with which artificial intelligence acts will make it problematical in crisis situations. I am now trying to do what I did with respect to nuclear weapons, to call attention to the importance of the impact of this evolution.

    King Charles III delivered a speech at the UK’s AI Safety Summit at Bletchley Park on November 1, 2023, stating that AI was “no less important than the discovery of electricity, the splitting of the atom, the creation of the world wide web, or even the harnessing of fire” while warning that its risks must be addressed with “urgency, unity, and collective strength.”

  84. This phenomenon is called the “resource curse.”

  85. “Grand challenge” is a term coined by Will MacAskill in “Preparing for the Intelligence Explosion.” You can see more information about each of these challenges in the article, as well as in our problem profiles.

    MacAskill, William, and Fin Moorhouse. “Preparing for the intelligence explosion.” Forethought Research, March 2025, forethought.org/research/preparing-for-the-intelligence-explosion.

  86. Foreign aid from OECD member countries was $212 billion in 2024, while charitable giving in the US alone was around $592 billion in that same year.

    Giving USA Foundation. “Giving USA 2025: U.S. charitable giving grew to $592.50 billion in 2024, lifted by stock market gains.” Giving USA, 24 June 2025, givingusa.org/giving-usa-2025-u-s-charitable-giving-grew-to-592-50-billion-in-2024-lifted-by-stock-market-gains/.

    OECD. “International aid falls in 2024 for first time in six years, says OECD.” OECD, 14 April 2025, oecd.org/en/about/news/press-releases/2025/04/official-development-assistance-2024-figures.html.

  87. For example, within the field of global health, the Center for Global Development believes most international development spending still has unknown effects: only about 10% of aid assessments can be classed as impact evaluations.

    “Better evaluation of aid spending could save hundreds of millions of dollars, finds new study.” Center for Global Development, 19 July 2022, cgdev.org/article/better-evaluation-aid-spending-could-save-hundreds-millions-dollars-finds-new-study.

  88. Because the three factors multiply together, if each can vary by a factor of 100, the overall variation could be up to six orders of magnitude. In practice, the factors anti-correlate. The biggest and most neglected issues tend to be somewhat less solvable. In addition, there are other reasons to be sceptical of very large differences between issues, including regression to the mean, epistemic modesty, and spillover of resources from one issue to another. This said, even after taking account of all the arguments, we still think there are very large differences in how pressing different problems are. See our Foundations Series article for more detail, and the accompanying podcast episode.

  89. If AI risk and global health were of a similar scale, but AI risk received 1% as many resources, and returns are logarithmic, it would imply additional work is about 100 times more effective on average. An existential risk is at least as bad as everyone alive today dying, so a 10% chance of one is equivalent to about 800 million deaths. This, however, is a huge underestimate, because it ignores future generations.

    On the other hand, AI is less tractable than global health. How you think these factors net out is a difficult question. Many conclude AI work is overwhelmingly more cost effective, though we need to consider the factors that prevent very large effectiveness differences, such as regression to the mean, mentioned in the earlier note. The estimate of 100 times is an attempt to balance all these considerations.

    To me, it seems plausible that a group of 10,000 people determined to reduce risks from AI could collectively lower the risks by 1%, which would be at least as good as saving 80 million lives. That would mean each of those 10,000 saves 8,000 lives on average, which is perhaps 20 times as many as someone would save by earning to give in support of global health over their career, and seems like a low estimate of the impact.