Macrostrategy research

Summary

In a nutshell:

We have a lot of unanswered questions about what the biggest threats facing humanity are, what work will matter most in the coming decades, and what it would even look like for things to ‘go well.’ Macrostrategy researchers try to answer big questions like these, which stake out new, uncertain territory.

We’re especially excited about macrostrategy research that focuses on the future of AI. Without this kind of work, we could easily fail to anticipate serious issues raised by the development of advanced AI, as well as lose out on opportunities for flourishing in a world with transformative technology.

Pros:

  • A real chance to help shape humanity’s long-term trajectory
  • Extremely interesting and creative work, at the frontier of some of the hardest questions out there
  • Very neglected: there are probably only dozens of dedicated researchers in the world, so additional effort could go a long way

Cons:

  • Very few organisations and jobs
  • Hard to be confident you’re making progress, as you often can’t verify your conclusions against reality
  • Potentially less direct influence on decision makers than (for example) careers in AI governance or technical safety research

Key facts on fit:

You’ll need to be excellent at doing novel research, comfortable sitting with messy, ill-defined questions, and able to make progress on them independently — often without clear frameworks or established methods. Strong writing is also essential. The best candidates tend to be creative, analytically sharp, and great at reasoning under uncertainty.

If you want to do macrostrategy research focused on the future of AI, then you’ll also need a strong understanding of AI and its dynamics.

Previous research experience is very helpful. But even if you’ve had research positions before, we’d recommend testing your fit for this type of research before applying to jobs — see our suggestions below.

Recommended

If you are well suited to this career, it may be the best way for you to have a social impact.

Review status

Based on an in-depth investigation 

What is macrostrategy research, and why work on it?

Our website covers many of the world’s biggest challenges — but it’s not the whole picture. That’s because:

  • Realistically, there’s no way we’ve already identified every important issue or opportunity coming up for humanity.
  • We probably aren’t prioritising between the things we have identified in the best possible way. After all, ranking problems requires making judgement calls on some deeply uncertain questions.
  • As a society, we don’t have a concrete picture of what a great future would really look like — and we aren’t even in agreement on how to assess how good different outcomes are. So we don’t know exactly what we should be aiming towards.

Some researchers are trying to fill in these gaps by exploring new issues, proposing priorities, and developing theories about the future we should aim for. This loosely connected set of efforts is what we’re calling ‘macrostrategy research’ — though (unfortunately!) there’s no single agreed term for it.

The prospect of advanced AI changing the world in unprecedented ways, and potentially doing so very rapidly, means ‘macrostrategy researchers’ could have an unusually large impact. Their work could help us anticipate serious threats raised by future AI systems, and avoid losing out on opportunities to flourish in the age of AI — perhaps shaping what life looks like for all future generations.

Today, a macrostrategy researcher might focus on:

These research areas all stake out new, uncertain territory, and are usually several steps away from ideas that are immediately policy-relevant or actionable. This means they tend to require bigger-picture, multidisciplinary, and strategic thinking — as well as a willingness to deal with the unknown.

Clearly, macrostrategy research can cover a lot of ground. (It even has significant overlap with what was traditionally called ‘global priorities research’.) But we’re most eager to see people doing macrostrategy research relating to the future of AI.1 The clearest example we know of here is the work being done at Forethought, a research group dedicated to navigating the full range of challenges presented by ‘explosive’ AI progress.

In fact, it’s becoming increasingly difficult to do serious macrostrategy research without devoting some attention to AI. The way that the future of AI unfolds — how AI gets developed and used, and the risks and opportunities it brings with it — could shape the answer to any of the questions above.

Why work on macrostrategy research?

There are several reasons to think macrostrategy research could be a good use of your career, if you’re well suited to the work:

  • It has a track record of influence. In fact, many of the global issues we view as most pressing are things people probably wouldn’t be thinking about if it weren’t for researchers exploring bigger-picture questions that weren’t seen as immediately policy-relevant five or 10 years ago. (We called this work ‘global priorities research’ at the time.) For example:2
  • It’s really neglected. There are probably only dozens of people doing dedicated, AI-relevant macrostrategy research, even though there are so many important questions to cover.3 That means additional effort could go a long way.
  • It’s really interesting — for the right person! If you’re motivated by thinking deeply about the world, exploring lots of new ideas, and charting out uncertain territory, there could hardly be a more fascinating career.

What are the downsides to working on macrostrategy?

You might struggle to get a job

There are currently very few organisations doing dedicated macrostrategy research — and those teams tend to be very small. We highlight relevant positions and funding opportunities on our job board below, but we wish there were more to highlight!

People can work independently on macrostrategy research, or work on relevant questions from within academia. But our guess is that these aren’t usually the most effective ways to do it, because the lack of institutional support means that your work won’t receive the kind of feedback (or pathways to impact) that it might inside a think tank or other dedicated organisation.

All in all, that means there aren’t that many jobs — and employers at these organisations are very selective.

It may be difficult to make progress

You might think it’s hard for dedicated macrostrategy research to do much better than our current best knowledge about these questions. There’s enormous uncertainty over the trajectory of AI progress and how it will change the world, and not much historical precedent to draw on. Plus, the AI systems and social or political dynamics you’ll often be theorising about don’t even exist yet, which makes it hard to check whether your conclusions are actually tracking reality.

It’s reasonable to worry about this. But even with this deep uncertainty, we do think AI macrostrategy research has already had positive effects on the world.

Partly motivated by macrostrategy research, the range of AI risks people are actively working on has expanded well beyond the field’s original focus on misalignment. For example, there are now some efforts to prevent AI systems being used to engineer pandemics, and to measure disempowerment by AI within society. Given how unsure we are about what risks AI systems will actually pose, these developments feel robustly valuable — by covering more bases, we’re becoming more prepared for a wider range of ways the future could go.

We feel hopeful that further macrostrategy research can continue strengthening our portfolio of efforts on global issues.

It might also get easier to make progress on macrostrategy questions over time, with the development of sophisticated AI tools for research and forecasting.

It might be hard to influence decision makers

As well as doing great research, you also need to make your research matter.

This might be particularly challenging in macrostrategy research, because the topics are often more abstract and don’t always suggest immediate policy solutions.

Compared to being an AI governance researcher, you’ll probably be less involved in policy circles. And compared to working on technical safety research at a major AI company, you’ll have less say in how the technology is developed.

That said, there are some previous examples of policy makers and AI companies acting on macrostrategy research,4 and we do expect there to be more in future.

And “influencing decision makers” doesn’t have to mean directly advising governments or AI companies. As a macrostrategy researcher, your work could potentially appeal to the broad group of people concerned with making the future of AI go well — including grantmakers, fieldbuilders, and nonprofit founders. These people are shaping the ideas and priorities that eventually filter up to decision makers, even if they aren’t directly setting policies.

It might be better to focus on more speculative issues later

Some of the risks researchers have already identified look like they’re beginning to play out. For example, we’re seeing early warning signs that AI systems might seize power from humans, examples of AI being used to execute sophisticated cyberattacks, and conflict between companies and governments over who has ultimate control of AI.

You might think we should focus on ensuring humanity survives these threats first, and only begin to explore new territory later on. It can feel strange to ask questions like “how can we flourish as a society?” when we’re still working out how to avoid going extinct, creating huge amounts of suffering, or falling to authoritarianism.

There’s also a chance it’ll be easier to make progress on macrostrategy questions later on than it is today: we’ll have more information to help us make good predictions about the future of AI, and we might even have very sophisticated AI tools to help us with the hardest reasoning.

The thing is: no one really knows which risks will turn out to be most urgent, or what the ideal window is for taking action on each issue. The challenges we’re currently focusing on may end up being non-issues; risks that seem speculative now might confront us suddenly. (In fact, part of the role of macrostrategy research is to give us more clarity on what’s actually most urgent and important.) And it seems unwise to bank on it being easy and effective to automate macrostrategy research soon, given how jagged AI development tends to be.

And although many macrostrategy questions probably aren’t the most urgent questions for humanity to answer, we still think it’s good for some people to be looking ahead. It makes sense to take a portfolio approach to preparing for advanced AI, rather than putting all our eggs into one basket and just hoping we made the right call.

Would you be a good fit for macrostrategy research?

What skills and traits are needed to succeed?

Here are the attributes we think are most important for being a great macrostrategy researcher:

  • You’ll need to be excellent at doing novel research. That involves:
    • Having great research taste — that is, the ability to notice which questions actually matter.
    • Being able to take on messy, ill-defined questions and come up with reasonable assessments about them using a variety of research methods. (Read more about predicting success in research.)
  • You’ll need to like spending time on abstract, speculative work that doesn’t have clear feedback loops (i.e. you often won’t be able to check your conclusions against reality).
  • Because these research questions haven’t been explored by many people, there won’t always be more experienced researchers to turn to for guidance. So you’ll need to have the drive to stick with tough problems while working largely independently.
  • To communicate your research ideas effectively, you’ll need to be a strong writer.
  • For macrostrategy research focused on the future of AI, you’ll need a strong understanding of AI and its dynamics.
  • Quantitative analysis or forecasting skills can be helpful (depending on the kinds of questions you’re trying to answer), but are not always essential.
  • The best candidates will also be:
    • Highly creative, curious, and open to unusual ideas.
    • Unusually analytically intelligent.
    • Willing to change their minds when presented with new, compelling evidence or arguments.
    • Great at noticing when they’re uncertain about something, and taking those uncertainties into account when drawing conclusions.
    • Strongly focused on making the world better.

We recommend testing your fit first if you’re not ready to apply to roles yet. You can find some ideas below, or learn more about assessing personal fit.

What experience is useful?

There aren’t any essential prerequisites for getting a job as a macrostrategy researcher, unless you’re trying to work in academia, in which case you’ll need a PhD. The organisations that hire for these roles often take a ‘non-credentialist’ approach, and are more interested in whether you fit the skills profile above.

But there are some experiences that could demonstrate your fit. These include:

  • Previous research experience — for example, a competitive research fellowship, postgraduate study, or having worked at a think tank or research institute (bonus points if you’ve done research in an interdisciplinary setting, or you’re used to working in areas that are speculative or poorly understood)
  • A track record of writing, whether that’s blog posts, academic papers, journalism, or something else
  • Academic success — for example, having gone to a top postgraduate programme in any field

As for your academic background:

  • It’s far more important that you have the traits we described above than any particular academic background. But depth in any highly relevant discipline — like philosophy, economics, physics, maths, international relations, politics, history, or (of course!) AI — could be helpful.
  • If you want to do macrostrategy research as an academic:
    • In theory, you could try to answer macrostrategy research questions within a variety of disciplines — but you’ll need to find an advisor who will help you develop these unusual ideas, which can be hard.
    • The entry point with least resistance might be a PhD in philosophy, which tends to offer a fair amount of flexibility. Bear in mind that academic research can be a challenging path.

Non-research paths

You don’t have to be a researcher to join an organisation doing macrostrategy work. These organisations are often in need of research managers, operations staff, communications specialists, and more.

In our series of career reviews, you’ll find dedicated articles covering the skills and experiences needed to succeed in other roles like these.

Note: even if you’re looking for non-research roles at organisations doing macrostrategy work, it’ll still be useful to have some familiarity with the research they do.

Example people

Top organisations

You could pursue macrostrategy research from a variety of institutional homes — academia, research nonprofits, think tanks, grantmakers, and even ‘preparedness’ teams at some AI companies. You won’t always be able to focus exclusively on macrostrategy research in these roles, though.

Here are some organisations we know of where you might be able to work on macrostrategy research:

Note that many places won’t use the term ‘macrostrategy’ to describe their work, because there’s no single agreed term for this kind of research!

Next steps

If you’re ready to apply for jobs

You can find some relevant organisations and research projects that are hiring on our job board below.


Or, if you’ve got a project idea in mind, have already spent time working in this or a related field, and feel excited to build something new: you could consider founding your own organisation.

If you need to, test your fit or build career capital first

We’d recommend starting by:

  • Reading widely and developing your own ideas. For example, try forming opinions on how an intelligence explosion might unfold, what’s most important to get right for the future of AI to go well, or where current thinking on these topics falls short. Come up with ideas for research questions to investigate.
  • Doing some writing. Start your own blog or Substack, or consider writing on forums where you’ll find other macrostrategy researchers (like the EA Forum and LessWrong).
  • Trying out a small project. You could pick a research question and spend 10 or 20 hours trying to make progress on it. You probably won’t come up with any earth-shattering insights, but you’ll get a feel for the kinds of thinking that macrostrategy requires — and maybe you’ll get an interesting blog post out of it.
  • Reaching out to people in the field. Ask them for feedback on your research ideas, writing, or any small projects you’ve worked on. You can find tips for meeting people here, and advice for reaching out to people here.

If you feel more confident you want to go down this path, you could try some of these higher-commitment options for gaining career capital:

  • Apply to a research fellowship. The MATS programme’s Forethought stream is directly focused on AI macrostrategy. Fellowships run by Constellation, the Future Impact Group, and Foresight might also provide opportunities to research relevant topics around AI, existential risk, and building a flourishing future. And even research programmes that aren’t as directly relevant to AI macrostrategy could still help you build valuable research skills.
  • Take a course. For example, to better understand the strategic landscape of AI, we recommend BlueDot’s AGI Strategy course. If you’re keen to think carefully about what a great future could look like and how to get there, you could try a worldbuilding course.

Before you apply to macrostrategy positions, it might also be valuable to try other research roles related to AI risk — like being an AI governance researcher or technical AI safety researcher — where you’ll deal with more concrete research topics and get better feedback loops on your work.

If you want to learn more

Examples of recent AI macrostrategy research

Influential older macrostrategy research

Research agendas and reports on research progress

Working in macrostrategy

Podcast episodes

Speak with us

Want one-on-one advice on pursuing this path?

We think this path could easily be a top option for people who are a good fit. If you think macrostrategy research might be a great fit for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.


Notes and references

1. But note that one question for macrostrategy researchers is whether we’re right to be so focused on the risks of AI compared to other issues facing the world.

2. For examples that aren’t mentioned in this list, see the successes retrospectively highlighted by the Future of Humanity Institute — a former research group at the University of Oxford which was known, among other things, for global priorities research.

3. See, for example, this research agenda published by the Global Priorities Institute in 2024. The institute has since shut down, but these research areas remain important.

4. Indeed, some decision makers have taken note of compelling but speculative ideas presented to them before.

Some ideas from the Future of Humanity Institute (FHI) — an early example of an organisation doing macrostrategy research — seem to have been taken seriously in policy contexts. After writing The Precipice, FHI researcher Toby Ord was invited to advise the UN Secretary General’s Office on existential risk. He contributed to a major UN report which called for AI regulation, new international institutions to govern existential risks, and a dedicated lab for monitoring long-term issues — proposals in keeping with many ideas from Ord’s macrostrategy research.

A much less encouraging example comes from before the field of macrostrategy research even existed: the idea of overpopulation as a threat to humanity’s survival. After Paul Ehrlich’s 1968 book The Population Bomb painted a vivid picture of imminent global famine, several governments took action. President Nixon signed legislation establishing the Commission on Population Growth and the American Future; India embraced policies that, in many states, required sterilisation to obtain water, electricity, ration cards, medical care, and pay raises; and China introduced its one-child policy.

We know now that these fears were wildly overblown, and the measures taken were unnecessary and often very harmful. But the point here is that researchers painting a vivid picture of a potential threat has ended up motivating action before — for better or worse.