Macrostrategy research
Review status: Based on an in-depth investigation
What is macrostrategy research, and why work on it?
Our website covers many of the world’s biggest challenges — but it’s not the whole picture. That’s because:
- Realistically, there’s no way we’ve already identified every important issue or opportunity coming up for humanity.
- We probably aren’t prioritising between the things we have identified in the best possible way. After all, ranking problems requires making judgement calls on some deeply uncertain questions.
- As a society, we don’t have a concrete picture of what a great future would really look like — and we aren’t even in agreement on how to assess how good different outcomes are. So we don’t know exactly what we should be aiming towards.
Some researchers are trying to fill in these gaps by exploring new issues, proposing priorities, and developing theories about the future we should aim for. This loosely connected set of efforts is what we’re calling ‘macrostrategy research’ — though (unfortunately!) there’s no single agreed term for it.
The prospect of advanced AI changing the world in unprecedented ways, and potentially doing so very rapidly, means ‘macrostrategy researchers’ could have an unusually large impact. Their work could help us anticipate serious threats raised by future AI systems, and avoid losing out on opportunities to flourish in the age of AI — perhaps shaping what life looks like for all future generations.
Today, a macrostrategy researcher might focus on:
- Sketching out how critical periods for humanity, such as an intelligence explosion, might play out — the AI Futures Project has done lots of work in this direction.
- Scoping out neglected or emerging challenges that could have a big impact on the world (like the intelligence curse, and how to govern resources if civilisation expands into space).
- Making a case for how to prioritise work on the most important issues and opportunities that have already been identified (like Nick Bostrom’s influential 2013 paper arguing that we should prioritise work on problems with existential-scale stakes).
- Exploring what it might mean for things to ‘go well’ or ‘go badly’ for civilisation — including what the right moral theories are for judging how ‘well’ things are going, whether avoiding bad outcomes should really be our focus if we’re trying to make the future better, and what, concretely, the features of a good or bad world might be.
- Surfacing the novel tools and concepts needed to steer us towards a flourishing world (like the idea of acausal trade, or new governance structures for digital beings).
These research areas all stake out new, uncertain territory, and are usually several steps away from ideas that are immediately policy-relevant or actionable. This means they tend to require bigger-picture, multidisciplinary, and strategic thinking — as well as a willingness to deal with the unknown.
Clearly, macrostrategy research can cover a lot of ground. (It even has significant overlap with what was traditionally called ‘global priorities research’.) But we’re most eager to see people doing macrostrategy research relating to the future of AI.1 The clearest example we know of here is the work being done at Forethought, a research group dedicated to navigating the full range of challenges presented by ‘explosive’ AI progress.
In fact, it’s becoming increasingly difficult to do serious macrostrategy research without devoting some attention to AI. The way that the future of AI unfolds — how AI gets developed and used, and the risks and opportunities it brings with it — could shape the answer to any of the questions above.
Why work on macrostrategy research?
There are several reasons to think macrostrategy research could be a good use of your career, if you’re well suited to the work:
- It has a track record of influence. In fact, many of the global issues we view as most pressing are things people probably wouldn’t be thinking about if it weren’t for researchers exploring bigger-picture questions that weren’t seen as immediately policy-relevant five or ten years ago. (We called this work ‘global priorities research’ at the time.) For example:2
- The idea that AI systems might be conscious or deserve moral consideration was pioneered by researchers like Carl Shulman and Nick Bostrom, who were searching for neglected issues of a potentially enormous scale. Now ‘AI welfare’ is on the agenda at Anthropic, and the prospect of AI consciousness is being considered by researchers at other major AI companies.
- The importance of taking the interests of all future generations into account when comparing different global problems was first emphasised by philosophers Will MacAskill and Toby Ord, who were thinking about what it would mean to make the world better.
- The field of wild animal welfare came about because researchers like Yew-Kwang Ng, a professor of economics at Monash University, began to consider whether it could present an unusually large opportunity to have an impact compared to existing areas of work.
- It’s really neglected. There are probably only dozens of people doing dedicated, AI-relevant macrostrategy research, even though there are so many important questions to cover.3 That means additional effort could go a long way.
- It’s really interesting — for the right person! If you’re motivated by thinking deeply about the world, exploring lots of new ideas, and charting out uncertain territory, there could hardly be a more fascinating career.
What are the downsides to working on macrostrategy?
You might struggle to get a job
There are currently very few organisations doing dedicated macrostrategy research — and those teams tend to be very small. We highlight relevant positions and funding opportunities on our job board below, but we wish there were more to highlight!
People can work independently on macrostrategy research, or work on relevant questions from within academia. But our guess is that these aren’t usually the most effective ways to do it, because the lack of institutional support means that your work won’t receive the kind of feedback (or pathways to impact) that it might inside a think tank or other dedicated organisation.
All in all, that means there aren’t that many jobs — and employers at these organisations are very selective.
It may be difficult to make progress
You might think it’s hard for dedicated macrostrategy research to do much better than our current best knowledge about these questions. There’s enormous uncertainty over the trajectory of AI progress and how it will change the world, and not much historical precedent to draw on. Plus, the AI systems and social or political dynamics you’ll often be theorising about don’t even exist yet, which makes it hard to check whether your conclusions are actually tracking reality.
It’s reasonable to worry about this. But even with this deep uncertainty, we do think AI macrostrategy research has already had positive effects on the world.
Partly thanks to macrostrategy research, the range of AI risks people are actively working on has expanded well beyond the field’s original focus on misalignment. For example, there are now some efforts to prevent AI systems being used to engineer pandemics, and to measure disempowerment by AI within society. Given how unsure we are about what risks AI systems will actually pose, these developments feel robustly valuable — by covering more bases, we’re becoming more prepared for a wider range of ways the future could go.
We feel hopeful that further macrostrategy research can continue strengthening our portfolio of efforts on global issues.
It might also get easier to make progress on macrostrategy questions over time, with the development of sophisticated AI tools for research and forecasting.
It might be hard to influence decision makers
As well as doing great research, you need to make your research matter.
This might be particularly challenging in macrostrategy research, because the topics are often more abstract and don’t always suggest immediate policy solutions.
Compared to being an AI governance researcher, you’ll probably be less involved in policy circles. And compared to working on technical safety research at a major AI company, you’ll have less say in how the technology is developed.
That said, there are some previous examples of policy makers and AI companies acting on macrostrategy research,4 and we do expect there to be more in future.
And “influencing decision makers” doesn’t have to mean directly advising governments or AI companies. As a macrostrategy researcher, your work could potentially appeal to the broad group of people concerned with making the future of AI go well — including grantmakers, fieldbuilders, and nonprofit founders. These people are shaping the ideas and priorities that eventually filter up to decision makers, even if they aren’t directly setting policies.
It might be better to focus on more speculative issues later
Some of the risks researchers have already identified look like they’re beginning to play out. For example, we’re seeing early warning signs that AI systems might seek power over humans, examples of AI being used to execute sophisticated cyberattacks, and conflict between companies and governments over who has ultimate control of AI.
You might think we should focus on ensuring humanity survives these threats first, and only begin to explore new territory later on. It can feel strange to ask questions like “how can we flourish as a society?” when we’re still working out how to avoid going extinct, creating huge amounts of suffering, or falling to authoritarianism.
There’s also a chance it’ll be easier to make progress on macrostrategy questions later on than it is today: we’ll have more information to help us make good predictions about the future of AI, and we might even have very sophisticated AI tools to help us with the hardest reasoning.
The thing is: no one really knows which risks will turn out to be most urgent, or what the ideal window is for taking action on each issue. The challenges we’re currently focusing on may end up being non-issues; risks that seem speculative now might confront us suddenly. (In fact, part of the role of macrostrategy research is to give us more clarity on what’s actually most urgent and important.) And it seems unwise to bank on it being easy and effective to automate macrostrategy research soon, given how jagged AI development tends to be.
And although many macrostrategy questions probably aren’t the most urgent questions for humanity to answer, we still think it’s good for some people to be looking ahead. It makes sense to take a portfolio approach to preparing for advanced AI, rather than putting all our eggs in one basket and just hoping we made the right call.
Would you be a good fit for macrostrategy research?
What skills and traits are needed to succeed?
Here are the attributes we think are most important for being a great macrostrategy researcher:
- You’ll need to be excellent at doing novel research. That involves:
- Having great research taste — that is, the ability to notice which questions actually matter.
- Being able to take on messy, ill-defined questions and come up with reasonable assessments about them using a variety of research methods. (Read more about predicting success in research.)
- You’ll need to like spending time on abstract, speculative work that doesn’t have clear feedback loops (i.e. you often won’t be able to check your conclusions against reality).
- Because these research questions haven’t been explored by many people, there won’t always be more experienced researchers to turn to for guidance. So you’ll need to have the drive to stick with tough problems while working largely independently.
- To communicate your research ideas effectively, you’ll need to be a strong writer.
- For macrostrategy research focused on the future of AI, you’ll need a strong understanding of AI and its dynamics.
- Quantitative analysis or forecasting skills can be helpful (depending on the kinds of questions you’re trying to answer), but are not always essential.
- The best candidates will also be:
- Highly creative, curious, and open to unusual ideas.
- Unusually analytically intelligent.
- Willing to change their minds when presented with new, compelling evidence or arguments.
- Great at noticing when they’re uncertain about something, and taking those uncertainties into account when drawing conclusions.
- Strongly focused on making the world better.
If you’re not ready to apply to roles yet, we recommend testing your fit first. You can find some ideas below, or learn more about assessing personal fit.
What experience is useful?
There aren’t any essential prerequisites for getting a job as a macrostrategy researcher, unless you’re trying to work in academia, in which case you’ll need a PhD. The organisations that hire for these roles often take a ‘non-credentialist’ approach, and are more interested in whether you fit the skills profile above.
But there are some experiences that could demonstrate your fit. These include:
- Previous research experience — for example, a competitive research fellowship, postgraduate study, or having worked at a think tank or research institute (bonus points if you’ve done research in an interdisciplinary setting, or you’re used to working in areas that are speculative or poorly understood)
- A track record of writing, whether that’s blog posts, academic papers, journalism, or something else
- Academic success — for example, having gone to a top postgraduate programme in any field
As for your academic background:
- The traits we described above matter far more than any particular academic background. But depth in any highly relevant discipline — like philosophy, economics, physics, maths, international relations, politics, history, or (of course!) AI — could be helpful.
- If you want to do macrostrategy research as an academic:
- In theory, you could try to answer macrostrategy research questions within a variety of disciplines — but you’ll need to find an advisor who will help you develop these unusual ideas, which can be hard.
- The path of least resistance might be a PhD in philosophy, which tends to offer a fair amount of flexibility. Bear in mind that academic research can be a challenging path.
Non-research paths
You don’t have to be a researcher to join an organisation doing macrostrategy work. These organisations are often in need of research managers, operations staff, communications specialists, and more.
In our series of career reviews, you’ll find dedicated articles covering the skills and experiences needed to succeed in other roles like these.
Note: even if you’re looking for non-research roles at organisations doing macrostrategy work, it’ll still be useful to have some familiarity with the research they do.
Top organisations
You could pursue macrostrategy research from a variety of institutional homes — academia, research nonprofits, think tanks, grantmakers, and even ‘preparedness’ teams at some AI companies. You won’t always be able to focus exclusively on macrostrategy research in these roles, though.
Here are some organisations we know of where you might be able to work on macrostrategy research:
- Forethought (this is the clearest example today of an organisation focused on AI macrostrategy)
- The AI Futures Project — which has existing work forecasting the future of AI, and upcoming work on a positive vision for AGI
- ACS Research, whose researchers explore neglected issues like gradual disempowerment
- Redwood Research — though this organisation is primarily focused on controlling and aligning AI systems, its researchers have also published some work on bigger-picture questions around alignment strategy
- The Center on Long-Term Risk, which is currently doing macrostrategy work under its ‘strategic readiness’ agenda
- Macroscopic Ventures — which does research and grantmaking on topics like concentration of power and avoiding worst-case futures
- Longview Philanthropy, particularly for its grantmaking research into ‘better futures’
- Coefficient Giving, especially its Short Timelines Special Projects team
- Coefficient Giving is open to receiving Expressions of Interest from people interested in macrostrategy research.
- The Future of Life Institute’s Futures Team
- The Foresight Institute’s Existential Hope team
- Rethink Priorities:
- Worldview Investigations Team (though its work is not primarily about AI and is closer to traditional ‘global priorities research’)
- Rethink’s new AI strategy team (which looks to be focusing on AI macrostrategy, but has not yet published anything)
Note that many places won’t use the term ‘macrostrategy’ to describe their work, because there’s no single agreed term for this kind of research!
Next steps
If you’re ready to apply for jobs
You can find some relevant organisations and research projects that are hiring on our job board below.
Or, if you’ve got a project idea in mind, have already spent time working in this or a related field, and feel excited to build something new: you could consider founding your own organisation.
If you need to test your fit or build career capital first
We’d recommend starting by:
- Reading widely and developing your own ideas. For example, try forming opinions on how an intelligence explosion might unfold, what’s most important to get right for the future of AI to go well, or where current thinking on these topics falls short. Come up with ideas for research questions to investigate.
- Doing some writing. Start your own blog or Substack, or consider writing on forums where you’ll find other macrostrategy researchers (like the EA Forum and LessWrong).
- Trying out a small project. You could pick a research question and spend 10 or 20 hours trying to make progress on it. You probably won’t come up with any earth-shattering insights, but you’ll get a feel for the kinds of thinking that macrostrategy requires — and maybe you’ll get an interesting blog post out of it.
- Reaching out to people in the field. Ask them for feedback on your research ideas, writing, or any small projects you’ve worked on. You can find tips for meeting people here, and advice for reaching out to people here.
If you feel more confident you want to go down this path, you could try some of these higher-commitment options for gaining career capital:
- Apply to a research fellowship. The MATS programme’s Forethought stream is directly focused on AI macrostrategy. Fellowships run by Constellation, the Future Impact Group, and Foresight might also provide opportunities to research relevant topics around AI, existential risk, and building a flourishing future. And even research programmes that aren’t as directly relevant to AI macrostrategy could still help you build valuable research skills.
- Take a course. For example, to better understand the strategic landscape of AI, we recommend BlueDot’s AGI Strategy course. If you’re keen to think carefully about what a great future could look like and how to get there, you could try a worldbuilding course.
Before you apply to macrostrategy positions, it might also be valuable to try other research roles related to AI risk — like being an AI governance researcher or technical AI safety researcher — where you’ll deal with more concrete research topics and get better feedback loops on your work.
If you want to learn more
Examples of recent AI macrostrategy research
- We recommend checking out the materials on Forethought’s website, particularly any research tagged as ‘macrostrategy’ (you can filter by topic on this page). Some examples include:
- Preparing for the intelligence explosion argues we need to prepare now for a broad range of neglected AI challenges beyond misalignment.
- The Better Futures series focuses on what a genuinely good post-AGI future might look like, and what we can do now to improve our chances of getting there.
- Research by the AI Futures Project, such as AI 2027 and the updated model released in December 2025 — both efforts to paint a big picture of how AGI development could go.
- Work by researchers at ACS Research, such as Gradual disempowerment — another effort to scope out highly neglected risks from AI.
- This blog post by Ryan Greenblatt at Redwood Research outlining different high-level strategies for handling misalignment risks depending on the level of political will that exists.
- Various publications labelled ‘prioritization and macrostrategy’ from the Center on Long-Term Risk.
- We also recommend following the work of independent researchers like Nick Bostrom, Eric Drexler, Richard Ngo, and Joe Carlsmith.
Influential older macrostrategy research
- Nick Bostrom: Superintelligence: Paths, Dangers, Strategies, Existential risk reduction as global priority, and Crucial considerations and wise philanthropy
- Eric Drexler: Reframing superintelligence and this talk on Paretotopian Goal Alignment
- Sharing the world with digital minds by Carl Shulman and Nick Bostrom, an example of early work on how to think about the moral status of digital minds
- The Most Important Century, a series by Holden Karnofsky and other authors
Research agendas and reports on research progress
- This brief summary of Forethought’s research agenda, though you can learn more on the Forethought website
- The Global Priorities Institute’s 2024 research agenda, published before it shut down
- The Future of Humanity Institute’s retrospective report on its research contributions — though this institute has also shut down, examples of its previous research projects will give you a sense of what macrostrategy researchers today are thinking about and drawing upon
Working in macrostrategy
Podcast episodes
- Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared
- Carl Shulman on the economy, government, and society after AGI — Part 1 and Part 2
Speak with us
Want one-on-one advice on pursuing this path?
We think this path could easily be a top option for people who are a good fit. If you think macrostrategy research might be a great fit for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.
Notes and references
- But note that one question for macrostrategy researchers is whether we’re right to be so focused on the risks of AI compared to other issues facing the world.↩
- For examples that aren’t mentioned in this list, see the successes retrospectively highlighted by the Future of Humanity Institute — a former research group at the University of Oxford which was known, among other things, for global priorities research.↩
- See, for example, this research agenda published by the Global Priorities Institute in 2024. The institute has since shut down, but these research areas remain important.↩
- Indeed, some decision makers have taken note of compelling but speculative ideas presented to them before.
Some ideas from the Future of Humanity Institute (FHI) — an early example of an organisation doing macrostrategy research — seem to have been taken seriously in policy contexts. After writing The Precipice, FHI researcher Toby Ord was invited to advise the UN Secretary General’s Office on existential risk. He contributed to a major UN report which called for AI regulation, new international institutions to govern existential risks, and a dedicated lab for monitoring long-term issues — proposals in keeping with many ideas from Ord’s macrostrategy research.
A much less encouraging example comes from before the field of macrostrategy research even existed: the idea of overpopulation being a threat to humanity’s survival. After Paul Ehrlich’s 1968 book The Population Bomb painted a vivid picture of imminent global famine, several governments took action. President Nixon signed legislation establishing the Commission on Population Growth and the American Future; India embraced policies that, in many states, required sterilisation to obtain water, electricity, ration cards, medical care, and pay raises; and China introduced its one-child policy.
We know now that these fears were wildly overblown, and the measures taken were unnecessary and often very harmful. But the point here is that researchers painting a vivid picture of a potential threat has ended up motivating action before — for better or worse.↩