AI-enhanced decision making

Summary

The arrival of AGI could “compress a century of progress in a decade”, forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right.

But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

We’d be excited to see some more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful.

That said, this is not a mature area. There’s significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks.

So our guess is that, at this stage, it’d be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. If the field proves promising, they could pave the way for more people to get involved later.

Our overall view

Sometimes recommended

We’d love to see more people working on this issue. But you might be able to do even more good working on one of our top priority problem areas.

Profile depth

Medium-depth 

This is one of many profiles we've written to help people find the most pressing problems they can solve with their careers. Learn more about how we compare different problems and see how this problem compares to the others we've considered so far.

Why advancing AI decision making tools might matter a lot

Humans often make big mistakes.

Our institutions ignored climate scientists for decades, responded ineffectively to early COVID-19 warnings, and have rushed into countless wars that all parties later regretted. It’s striking how far our actual decisions sometimes fall short of what, in hindsight, looks obviously necessary.

Why does this keep happening? Sometimes we misunderstand the facts or fail to predict challenges ahead of us. Other times, we know there’s a problem, but we don’t take sufficient action or coordinate on a response.1

We’re now rapidly developing advanced AI systems that could transform every aspect of society, making good decision making even more critical. Soon, we could be dealing with:

  • A whole new population of extremely capable agents, potentially with different goals and interests to humans
  • A totally reshaped labour market where AI systems, rather than humans, drive much or all economic progress
  • AIs developing new advanced technologies — including weapons — faster than we can study their risks
  • Societal and geopolitical tensions over who controls or receives the benefits of advanced AI, possibly escalating into conflict

Advanced AI systems could also produce ideas and economic outputs much faster than humans, potentially compressing a century’s worth of progress into a decade — which means decisions that once played out over years might need to be made in a matter of months.

So the chance of missteps is high. And as we’ve argued elsewhere, the stakes could be existential.

If we want to navigate this period well, we’ll need to think more clearly, act more wisely, and coordinate more effectively than before. And that’s a tall order.

AI tools could help us make much better decisions

The development of advanced AI could both make decision making more challenging and raise the stakes of humanity’s future decisions. But AI is not a monolith, and — perhaps counterintuitively — we think certain AI tools could actually be part of the solution.

AI systems are capable of things humans simply aren’t — they can absorb far more information, process it at vastly higher speeds, and improve their performance by practicing the same task millions of times.

They’ve already beaten the best humans at strategy games like Go, and they’re now also performing impressively on complex reasoning and problem-solving tasks. And if you’ve ever used “deep research” tools from AI companies like OpenAI and Google DeepMind, you know current models can synthesise huge amounts of information into coherent conclusions much faster than even the greatest human minds.

Given this, we think we’re within reach of having AI tools that can seriously improve human decision making — some may even be buildable with today’s technology.

Two kinds seem especially promising:2

  • Epistemic tools, which help us understand what’s true and what’s likely to happen. For example:
    • AI fact checkers may be more reliable and impartial evaluators of information than humans are. Society currently struggles to converge on matters of fact — consider how often political disagreements come down to a dispute over facts, or how easily misinformation spreads online. We’ll need to get a lot better at this if we’re to navigate epistemic disruption from advanced AI.
    • AI forecasting systems could help institutions make better predictions about world events and model the effects of different policies.3 (For a concrete illustration, see the sketch after this list.)
    • More speculatively, AI tools for moral progress could help us reason through complex ethical questions and potentially come to more agreement as a society.
  • Coordination tools, which help groups work together and make better collective decisions, even if they have competing interests. For example:
    • AI negotiation tools could find mutually beneficial agreements that might otherwise be missed — perhaps by rapidly simulating thousands of hours of negotiation and testing out a vast number of agreements before making a proposal.
    • AI-enabled verification systems could reliably and impartially monitor compliance with agreements, overcoming the trust barriers that often prevent groups from cooperating.
    • Structured transparency tools could enable tightly controlled information sharing, allowing parties to detect specific threats from each other — like whether someone is building dangerous weapons — without the broader privacy costs of ordinary surveillance.
    • There’s ongoing research in the field of “Cooperative AI” exploring more ways to use AI for improved coordination.
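
To make the forecasting idea above concrete, here’s a minimal, hypothetical sketch of what the core loop of such a tool might look like. Everything here is illustrative: `query_model` is a stand-in for a real language model API, and the aggregation step (taking the median of several independent estimates) is just one simple, well-known way to reduce the variance of probabilistic forecasts.

```python
# Illustrative sketch of an AI forecasting aggregator (all names hypothetical).
import random
import statistics

def query_model(question: str) -> float:
    """Stand-in for a language model API call that returns a probability.

    A real implementation would prompt a model to reason about the question
    and output a single calibrated probability between 0 and 1.
    """
    return random.uniform(0.3, 0.7)  # stub value; replace with a real API call

def aggregate_forecast(question: str, n_samples: int = 9) -> float:
    """Query the model several times and take the median estimate.

    Aggregating independent samples is a standard way to smooth out
    noise in individual probabilistic forecasts.
    """
    samples = [query_model(question) for _ in range(n_samples)]
    return statistics.median(samples)

print(f"{aggregate_forecast('Will event X happen by 2030?'):.2f}")
```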

We think these applications target some of the most common failures of human decision making. We often get led astray by false information, incorrectly predict how things will unfold, or fail to prevent outcomes no one wanted because we can’t cooperate.

Another virtue of these applications is that they seem to be more useful for enabling good outcomes than bad ones, overall. As a general rule of thumb, it seems empowering people to better understand the world and coordinate with each other is usually good for humanity — at least under the assumption that people are usually well intentioned.4

Of course, this assumption doesn’t always hold. We do think there’s some risk of people deliberately using even these AI tools to cause harm — a possibility we address later on.

We might be able to differentially speed up the rollout of AI decision making tools

Right now, only a handful of projects are building the kinds of AI tools we described above — a drop in the ocean compared to the billions invested in developing more broadly capable AI agents.

Plus, there’s often a lag between society having the ability to build a product and it actually being built and successfully rolled out. Consider COVID-19 vaccines: although the underlying mRNA technology for these vaccines was proven in the mid-2000s, they didn’t actually arrive until late 2020 — almost a year into the pandemic.

This points to an opportunity: we might be able to accelerate the development and adoption of AI decision making tools, which would mean getting their benefits faster. And even a small speedup could be consequential — for example, getting sophisticated verification tools just a few months earlier could mean critical safety commitments get nailed down before we develop dangerous AI systems, instead of arriving too late to make a difference.

What we’re pointing to here is one form of “differential technology development”: influencing the order in which different technologies emerge in order to make the world safer. In this case, the idea is to speed up the development of certain safety-promoting AI capabilities so they’re available before we have to contend with other, riskier AI capabilities.5

Because we’ve seen so few projects in this direction so far, there’s still lots of low-hanging fruit to pick. Later on, we describe some work we think could be useful.

What are the arguments against working to advance AI decision making tools?

Having said all this, there are some objections we think people should really consider when deciding whether to work in this area.

Huge AI companies are racing to develop models that excel at all kinds of complex reasoning — and they’re making rapid progress. Meanwhile, there are growing market incentives to build AI products for specific, commercially valuable tasks, which might include some of the applications above.

So AI decision making tools might get developed anyway by people trying to make money — meaning it might not be a good use of time for people who want to do good with their careers. Why not just wait for this to happen, and do something else with your time?

This does seem right to some extent. As a general rule, focusing on something that’s already commercially incentivised will probably reduce the counterfactual impact of your work.

But we think there are ways you could still make a meaningful difference here — especially if you focus on gaps in the market when deciding what project to pursue.

First, your work could still help society achieve the benefits of these tools sooner than they would otherwise have arrived.

You might speed things up directly (for example, by successfully building a specific tool before anyone else gets there). And if your work does get overtaken by another project, it could still have compounding effects that speed up the arrival of future tools. For instance, if it attracts more investment or builds relevant knowledge, your project could enable others to achieve a certain milestone faster — which could in turn bring forward the next milestone, and so on.

And as we’ve said, even a small speedup could make a big difference here.

Importantly, although we think frontier models will eventually excel at tasks in epistemics and coordination, simply waiting for good decision making tools to get rolled out could mean getting them once AGI has already arrived. By then, it might be too late to use them to avoid a catastrophe.

Second, you might be able to focus on products that are less incentivised by the market.

For example, while advanced AI forecasting tools might get built by default for profitable uses like financial trading, there’s much less commercial pressure to develop AI systems that are good at predicting other things, or to create sophisticated tools for reasoning about ethics.6

There’s a separate concern: by accelerating progress on these tools, you might also increase knowledge, hype, and investment in AI R&D more broadly. This could bring about AGI sooner, giving us less time to prepare.

Your work could also enhance certain dangerous capabilities. For example, we think AI systems that excel at planning pose risks of disempowering humans — and developing systems that are great at forecasting might dangerously boost AI planning capabilities.

We’ve explored these concerns elsewhere, and there’s a lot to say on the subject. But in this context, it’s worth bearing in mind:
  • Although projects in this area might contribute to AI hype to some degree, these effects will probably be insignificant compared to the billions of dollars already being invested in building AGI. By contrast, you could have an outsized impact on humanity’s ability to make wise decisions.
  • You might be able to (and should probably try to) target lower-risk applications that don’t directly feed the development of dangerous capabilities. For example, AI fact-checking tools seem much safer to build than tools leveraging strategic planning or persuasion.
  • If these tools seriously improve our ability to navigate the world’s biggest challenges, some speedup in the arrival of dangerous AI capabilities could still be worth it overall.

There’s also a role for interventions that slow down progress on dangerous technologies — whether that’s through regulations that allow companies to take their time on safety without bearing the costs of unilateral slowdowns, or perhaps even campaigning to pause frontier AI development altogether. But speeding up progress on safety-promoting technologies can happen at the same time. It might also be easier: while slowing down requires agreement from officials or companies, you can just decide to develop a new tool without consensus. And you’ll likely face less pushback, since your strategy won’t mean forgoing or delaying the benefits of future AI (or threatening powerful companies’ bottom lines).

Like many technologies, AI tools for epistemics and coordination could be used to cause harm.

After all, getting better at understanding the world and coordinating with others typically makes you better at achieving your goals. And since people sometimes have goals that are harmful to others, these tools will sometimes help people do bad things more effectively.

For example, groups with access to tools that enhance their negotiation or forecasting abilities could use them to illegitimately gain strategic advantages over those who don’t have such tools. In extreme cases, this could potentially even enable a dangerous power grab.

We’d guess that actors with genuinely malicious intentions are just not that common.7 Broadly speaking, it seems most harmful decisions don’t happen because people really want to cause harm, but because we misunderstand a situation, don’t realise the consequences our actions could have, or fail to find a solution that’s less costly for everyone involved — failures that AI decision making tools could help us overcome.

And as we said earlier: a general, commonsense rule of thumb here is that empowering humans to understand the world and coordinate better seems to usually be a good thing for humanity.

So our guess is that overall, AI decision making tools will help us prevent bad outcomes more often than they’ll enable them. This is one of the key reasons we’re broadly enthusiastic about these tools.

But this is a generalisation, and won’t hold true for every AI decision making tool you could create.8 So if you’re deciding whether to build or promote a new tool, you should factor in its specific misuse risks — and whether it might actually favour harmful uses over beneficial ones. And these are difficult questions, so you should get help when trying to answer them.

The most extreme risks here — like the chance of enabling a power grab — also highlight the importance of getting AI decision making tools into enough hands. By default, the most powerful actors will have access to better technologies than everyone else. But if we can make decision making tools widely accessible and equip key institutions to use them, we could prevent any single group from gaining dangerous advantages over others. Making this happen, though, may require dedicated effort.

So should you work on this?

Bottom line: it’s complicated, but if you’re a good fit, working on this could have a lot of upside.

For the reasons above, it does seem that some work in this space will end up having very little impact — and some could even have negative effects.

You’re more likely to avoid the pitfalls if you can prioritise AI decision making projects that are:

  • Underincentivised by the market
  • Less likely to drive the development of other, dangerous AI capabilities
  • More useful for beneficial purposes than for harmful ones, or more robust to misuse

But deciding what projects to pursue on this basis is much easier said than done. And because there aren’t many concrete job opportunities here, working in this area may also require a more entrepreneurial approach than you’d need for tackling many other pressing problems.

So overall, we don’t think we can recommend this work as widely as we recommend working in more mature areas where the paths to impact are better tested and more clearly mapped out.

Still, we think efforts to advance AI decision making tools could be very impactful for the right person. If you’re especially good at navigating ambiguity, have an entrepreneurial mindset, and have strong judgement about what projects to prioritise, this could be a great fit. At this stage, we’d be excited to see perhaps a few hundred more people working in this area.

If you’re interested in being one of those people, we recommend building a network in AI safety and finding people who can help you think through specific project ideas first.

It’s also worth noting that some researchers — like the authors of this article from Forethought — feel more optimistic than we do about having many more people working in this area. So it’s possible we’re underrating it!

In any case, we also recommend keeping up to date with the evolving landscape of AGI challenges, and being ready to pivot if other needs become more pressing.

How to work in this area

Here are the top recommendations we’ve seen for people who want to speed up the development and adoption of AI decision making tools.

Help build AI decision making tools

The most direct thing you can do is work somewhere that’s building the tools themselves. You can find some relevant organisations and research projects that are hiring on our job board below. But since the field is currently small, you might consider founding your own project instead.

Either way, there’s lots to do here — not just the core engineering work, but also making demos, getting stakeholders on board, designing user interfaces that are appealing to decision makers, doing market research to tailor products to user needs, and ensuring projects operate efficiently.

This means you don’t need to be a technical expert to join or found projects of this kind: they also need great operations staff, product managers, and more.

Complementary work

There are other ways you can support these efforts without getting directly involved in building the tools. For example, you could:

  • Measure and steer these beneficial capabilities:
    • Design benchmarks or evaluations for the AI capabilities that would most help decision making (see the sketch after this list for one example).
  • Work on supporting tech and infrastructure:
    • Develop complementary technologies that help remove barriers to adoption — for example, by addressing users’ privacy or security concerns.
    • Curate and manage data sets that can be used to train specialised AI decision making tools — for example, data about past mistakes in forecasting or negotiation, or high-quality research notes from fields where specialised decision making tools could be very helpful.
    • Create infrastructure — like online databases or directories — to help people share resources and collaborate on projects.
  • Help with implementation:
    • Help integrate these tools into existing decision making processes at key institutions, including educating stakeholders on how to use them.
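
To make the benchmark-design idea above a bit more concrete, here’s a minimal sketch of scoring a model’s forecasts with the Brier score, a standard calibration metric: the mean squared error between predicted probabilities and realised outcomes. The question data here is a hypothetical placeholder; a real evaluation would use a large set of resolved forecasting questions.

```python
# Sketch: scoring probabilistic forecasts with the Brier score.
# Lower is better: 0 is perfect, and always guessing 0.5 scores 0.25.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: (model's predicted probability, actual outcome).
resolved = [
    (0.80, 1),  # model said 80%; the event happened
    (0.30, 0),  # model said 30%; the event didn't happen
    (0.60, 0),  # model said 60%; the event didn't happen
]

forecasts = [f for f, _ in resolved]
outcomes = [o for _, o in resolved]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # ~0.163 here
```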

Position yourself to help in future

If you’re not currently able to work on any of this — or just don’t feel it’s the best option right now — you can still position yourself to help in future by:

  • Working at or founding any (non-harmful) company, especially a technology company, so that you can learn and practice the skills of founding projects — or see the career capital steps listed in our founder profile.
  • Developing expertise in fields where these AI tools could be especially impactful, like forecasting or diplomacy.
  • Joining key institutions (like government agencies or international bodies) that might benefit a lot from AI decision making tools — and staying current with the technologies while you’re there, so you can help integrate them later on.

What opportunities are there?

The field is currently small, but our job board features some relevant opportunities — including open positions, funding, and fellowships.

Want one-on-one advice on pursuing this path?

If you think this path might be a great option for you, our team might be able to advise you on your next steps.

We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

Learn more

For specific project ideas, we recommend Lukas Finnveden’s list of ideas for AI “epistemics” projects. We also think the examples listed on the Future of Life Foundation fellowship page are great, although applications to this fellowship are now closed.

Acknowledgements

This profile draws extensively from Forethought’s article “AI Tools for Existential Security”.

Many thanks to Arden Koehler, Lizka Vaintrob, Niel Bowerman, and Max Dalton for input.

Notes and references

    1. For example, it seems that wars are sometimes driven by failures to coordinate effectively. Political scientist James D. Fearon argues that when leaders choose to go to war over finding a solution that’s less costly for all parties, it’s often due to difficulties with sharing information or making reliable commitments.

    2. There are lots of other AI tools that could benefit humanity in some way.

      A cluster of promising examples are what Forethought calls “risk-targeted applications”: AI tools designed to address specific existential threats, rather than improving human decision making more generally. For example, we could design AI tools that help us align frontier models with human values, or detect threatening pathogens before they cause a pandemic.

      These aren’t directly relevant for improving human decision making more broadly, but we’re excited by their potential to address other pressing problems. We’ve discussed some of these as possible interventions in our problem profiles on power-seeking AI and pandemics.

      We feel less excited by some other beneficial applications of AI. For example, while we think AI tools for improving healthcare and education are potentially useful, they’re currently more incentivised by the market than the applications we discuss in this article — and seem to have less potential to steer us away from existential scale risks. That said, they at least seem robustly beneficial.

    3. For more on the value of improved forecasting, see our podcast episodes with Philip Tetlock (part 1 and part 2).

    4. This assumption at least seems reasonable. Broadly speaking, it seems that most harmful decisions people make don’t stem from a desire to cause harm. Instead, when companies pollute rivers or governments start unnecessary wars, it’s usually because of some failure in fact checking, prediction, negotiation, or something else under this wide umbrella of ‘epistemics and coordination.’ If this is true, widespread access to better decision making tools of this kind should prevent more bad outcomes than it causes.

      It seems especially likely that people will prefer using these tools in cooperative ways if we have abundant resources in the future (which could be the case if AI automation supercharges our economic production). See Eric Drexler’s article on “Paretotopian Goal Alignment” for more.

    5. There are other forms of differential technology development that could help to mitigate the risks from advanced AI. For example, in other articles, we highlight interventions that would slow down the development of certain dangerous AI capabilities — for example, efforts to strictly regulate or pause progress on frontier AI models.

      For more comparison between the strategy discussed in this article and other forms of differential technology development, see appendix 3 and appendix 5 to Forethought’s article on “AI Tools for Existential Security.”

    6. In general, the social benefit of some of these technologies might outstrip commercialisable consumer demand. And not every technology that can be commercialised is created — certainly not right away. (The COVID-19 example above illustrates this!)

      You might disagree with this if you expect the market to exhibit a very efficient form of technological determinism — but this just seems too extreme to us.

    7. Plus, it seems especially likely that people will mostly want to use AI decision making tools for good if they believe we’ll have abundant resources in the future (which could be the case if AI automation supercharges our economic production).

      There’s some support for this idea in Eric Drexler’s article on “Paretotopian Goal Alignment” — he argues that the prospect of greatly increased resources could encourage people to choose cooperation over conflict, even if they have competing interests.

    8. The core idea here maps to this argument from Vitalik Buterin’s “Coordination, Good and Bad” about the benefits and pitfalls of improving human coordination:

      While it is emphatically true that “everyone coordinating with everyone” leads to much better outcomes than “every man for himself”, what that does NOT imply is that each individual step toward more coordination is necessarily beneficial.

      Buterin then goes on to discuss the risks of “unbalanced” coordination, including harmful collusion — and presents some strategies for avoiding them.