Research questions that could have a big social impact, organised by discipline

The Royal Portuguese Cabinet of Reading, Rio de Janeiro, Brazil

About these research questions

People frequently ask us what high-impact research in different disciplines might look like. This might be because they’re already working in a field and want to shift their research in a more impactful direction. Or maybe they’re thinking of pursuing an academic research career and they aren’t sure which discipline is right for them.

Below you will find a list of disciplines and a handful of research questions and project ideas for each one.

They are meant to be illustrative: to help people who are working in these disciplines, or considering doing so, get a sense of what approaching them from a longtermist perspective might look like. They also represent projects that we think would be genuinely useful to pursue.

These lists are not meant to be exhaustive; nor are they meant to express a considered view on what we think the most valuable questions and projects in each area are.

We’ve categorised the entries by discipline, though even if you’re already a researcher in one discipline we’d encourage you to consider questions and projects from others as well. Working at the intersection of two fields, and using the tools from one to tackle questions from another, can be good ways to increase your impact, as these interfaces are often more neglected.

There is some overlap between the disciplines listed below, and some repetition of questions that seemed like particularly good examples of research in more than one field.

This article is a work in progress — we hope to add and refine entries to these lists over time.

Note February 2022: A new list of questions that seem particularly high impact to us has just come out: Important, actionable research questions for the most important century. They don’t all fit neatly into the categories below, as many are interdisciplinary and somewhat different from the kind of work done in most academic departments, so we haven’t integrated them below. But check them out, and if you think you might be a good fit for working on one of them we encourage you to explore it!

What are these lists based on?

Our primary strategy in compiling these lists was to look through formal and informal collections of high-impact research questions put together by others in the effective altruism community or by people working on our priority problems. We’ve linked to these sources throughout, as well as at the end of this article. One reason we’ve used informal sources is that we’re interested in questions and projects that seem high-impact in part because they’re less well-researched by established academics.

When choosing between a question or project that seemed higher impact from a longtermist perspective and one that struck us as more illustrative, we often chose the latter.

We’ve lightly edited or changed many of the questions and project descriptions we got from other sources, which we note in parentheses. If there is no adaptation or change indicated, the entry is an exact quote. Often you can find more context for these questions and projects — including existing literature and additional questions — in the linked sources. If there is no source indicated, we wrote the entry ourselves.

Biology and genetics

  • How are average welfare levels distributed across different species (in the wild or in captivity)?
  • What’s the average lifespan of the most common species of wild animals? What percentage die from various causes, and how slow and painful are those deaths? (Adapted from Lewis Bollard, EA summit project ideas)
  • How do our best AI systems so far compare to animals and humans, both in terms of performance and in terms of brain size? What do we know from animals about how cognitive abilities scale with brain size, learning time, environmental complexity, etc.? (Richard Ngo, Technical AGI safety research outside AI)
  • Research and develop methods for genetically engineering or breeding crops that could thrive in the tropics during a nuclear winter scenario (Adapted from ALLFED, Effective Theses Topic Ideas)
  • What future possibilities are there for brain-computer interfacing and how does this interact with issues in AI safety? (Ryan Carey, comment on Concrete projects list)
  • Research the genetic basis of hedonic set point, e.g. develop a model to predict hedonic set point using SNPs available on 23andme. Some researchers think promising candidates for such a study include the SCN9A, FAAH, and FAAH-OUT genes. (Adapted from Qualia Research Institute, Volunteer page; see the toy sketch after this list.)
  • What’s the minimum viable human population (from the perspective of genetic diversity)? (Michael Aird, Crucial questions for longtermists)
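
As a toy illustration of the hedonic set point entry above, here is roughly what the simplest additive model might look like. This is a sketch under invented assumptions: the genotype matrix, effect sizes, and sample are simulated placeholders (not real findings about SCN9A, FAAH, or FAAH-OUT), and a real study would need to handle ancestry structure, measurement error, and multiple testing.

```python
# Toy sketch: predicting a hedonic set point from SNP genotype counts.
# All data here are simulated placeholders, not real genetic findings.
import numpy as np

rng = np.random.default_rng(0)

# Each row is a person; each column counts minor alleles (0, 1, or 2)
# at one candidate SNP (stand-ins for variants in SCN9A, FAAH, FAAH-OUT).
n_people, n_snps = 500, 3
genotypes = rng.integers(0, 3, size=(n_people, n_snps))

# Simulate self-reported wellbeing with small per-allele effects plus noise.
true_effects = np.array([0.3, -0.2, 0.4])
wellbeing = genotypes @ true_effects + rng.normal(0.0, 1.0, n_people)

# Fit an additive linear model by least squares, with an intercept column.
X = np.column_stack([np.ones(n_people), genotypes])
coefs, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
print("intercept:", round(coefs[0], 2))
print("estimated per-allele effects:", np.round(coefs[1:], 2))
```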

Business and organisational development

  • How should altruistic/philanthropic actors coordinate when there are projects they’d both like to see happen but would both prefer that the other do/fund? (Adapted from Luke Muehlhauser (writing for Open Philanthropy), Technical and Philosophical Questions That Might Affect our Grantmaking; see the toy example after this list.)
  • What forecasting methods used by private corporations can be adapted for use by altruistic actors?
  • What is the state of the art for making crucial data and information visible and salient to leaders at large private corporations? Can these techniques be adapted into interfaces that keep relevant decision makers up to date with the most important information about deployed AI? (Inspired by Richard Ngo, Technical AGI safety research outside AI)
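
To make the coordination question above concrete, here is a toy game (all payoffs invented): a project is worth 10 to each of two funders and costs 6, and whoever funds pays the full cost. A brute-force check of best responses finds that the only pure equilibria are the awkward asymmetric ones, where each funder would rather the other pay.

```python
# Toy 'who should fund it?' game between two philanthropists.
# All payoff numbers are invented for illustration.
from itertools import product

BENEFIT, COST = 10, 6  # value of the project to each actor; cost of funding it

def payoff(me, other):
    funded = me == "Fund" or other == "Fund"
    return (BENEFIT if funded else 0) - (COST if me == "Fund" else 0)

actions = ["Fund", "Wait"]
for a, b in product(actions, actions):
    # A profile is a pure Nash equilibrium if neither side gains by deviating.
    a_best = payoff(a, b) >= max(payoff(x, b) for x in actions)
    b_best = payoff(b, a) >= max(payoff(x, a) for x in actions)
    tag = "  <- equilibrium" if a_best and b_best else ""
    print(f"A={a:4} B={b:4}  payoffs=({payoff(a, b)}, {payoff(b, a)}){tag}")
```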

China studies

  • What are Chinese computer scientists’ views on AI progress and the importance of work on safety? (You might try running a survey similar to this one from 2016, but focusing on AI experts in China.) (Adapted from Ben Todd, A new recommended career path for effective altruists: China specialist)
  • How does the Chinese government shape its technology policy? What attitudes does it have towards AI (including AI safety), synthetic biology, and regulation of emerging technology? (Adapted from Ben Todd, A new recommended career path for effective altruists: China specialist)
  • How does China’s nuclear no-first-use policy affect global stability and potential strategic doctrines for emerging technologies? (Adapted from personal correspondence with an expert)
  • Why has Mohism almost died out in China, relative to other schools of thought? (Adapted from personal correspondence with an expert)
  • Why have certain aspects of Chinese civilisation been so long-lasting? Are there any lessons we can draw from this about what makes for highly resilient institutions, cultures, or schools of thought? (Inspired by personal correspondence with an expert)

Climate studies and earth sciences

  • Under what scenarios could climate change be an existential catastrophe? E.g. through runaway or moist greenhouse effects, permafrost, methane clathrate, or cloud feedbacks? How likely are these scenarios? (Toby Ord, The Precipice, Appendix F)
  • More generally, what environmental problems — if any — pose existential risks? (Adapted from Effective Thesis)
  • How frequent are supervolcanic eruptions and what size of eruption could cause a volcanic winter scenario? (Adapted from Toby Ord, The Precipice, Appendix F)
  • Improve modelling of nuclear winter and of the climate effects of asteroids, comets, and supervolcanoes. (Adapted from Toby Ord, The Precipice, Appendix F)
  • What are potential risks from geoengineering technologies and which of these technologies — if any — might be promising for mitigating climate change?

Cognitive science and neuroscience

  • What traits are the best indicators of sentience in different animal species? What do these measurements suggest about the distribution of sentience across species?
  • What are the best and cheapest underexplored treatments for cluster headaches and other extremely painful conditions? What does this imply about the causes of extreme suffering? (Adapted from Qualia Research Institute, Volunteer page)
  • Which features of the human brain are most important for intelligence? How important is computational power vs. brain architecture vs. accumulated knowledge?
  • What potential nootropics or other cognitive enhancement tools are most promising? E.g. does creatine actually increase IQ in vegetarians?

Economics

  • What is the effect of economic growth on existential risk? How desirable is economic growth after accounting for this and any other side effects that might be important from a longtermist perspective? (See a recent paper by Leopold Aschenbrenner for some initial work on this question.)

  • What’s the best way to measure individual wellbeing? What’s the best way to measure aggregate wellbeing for groups?

  • What determines the long-term rate of expropriation of financial investments? How does this vary as investments grow larger? (Michael Aird, Crucial questions for longtermists)

  • What can economic models — especially models of economic growth — tell us about recursive self-improvement in advanced AI systems? (Adapted from AI Impacts, Promising research projects; see the toy model after this list.)
  • Can concerns about unaligned artificial intelligence or economic dominance by influence-seeking agents be reformulated in terms of standard economic ideas, such as principal-agent problems and the effects of automation? (Adapted from Richard Ngo, Technical AGI safety research outside AI)
  • Of the comprehensive macroeconomic indices already available to us, which serve best as proxies for long-term expected global welfare (including but not limited to considerations of existential risks)? What would be the broad policy implications of targeting such indices instead of GDP per capita? (Global Priorities Institute, Research Agenda)
  • What is the optimal design of international institutions that are formed to increase global public goods or decrease global public bads? (Global Priorities Institute, Research Agenda)
  • Economists may also be interested in working on questions in the global priorities research section, below.
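
As a gesture at the growth-modelling question above, here is a toy simulation (parameters invented) of the standard observation behind ‘recursive self-improvement’ models: if capability feeds back into its own growth rate strongly enough, growth becomes faster than exponential.

```python
# Toy feedback-growth model: capability A grows at rate g * A**phi.
# phi < 1 gives slowing growth, phi = 1 exponential growth, and phi > 1
# faster-than-exponential growth. All parameters are invented.
def years_to_reach(target, phi, g=0.05, a0=1.0, dt=0.01, max_t=1000.0):
    a, t = a0, 0.0
    while a < target and t < max_t:
        a += g * a**phi * dt  # simple Euler integration step
        t += dt
    return t if a >= target else None

for phi in (0.7, 1.0, 1.3):
    t = years_to_reach(target=1000.0, phi=phi)
    label = f"{t:.0f} 'years'" if t is not None else "more than 1000 'years'"
    print(f"phi={phi}: reaches 1000x the baseline in {label}")
```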

Epidemiology, synthetic biology, and medicine

  • Research and development of techniques for screening DNA synthesis requests for dangerous pathogens (especially techniques that won’t create infohazards by being reverse engineered). (See the minimal sketch after this list.)

  • Research and development into platforms that might decrease the time it takes to go from novel pathogen to vaccine. (Cassidy Nelson, 80,000 Hours podcast interview)

  • What broad-spectrum drugs, especially antivirals, are most promising for tackling novel pathogens? (Cassidy Nelson, 80,000 Hours podcast interview)

  • Roll out genetic sequencing-based diagnostics that let you test someone for all known and unknown pathogens in one go. (Cassidy Nelson, 80,000 Hours podcast interview)

  • Is extreme human life extension possible? If so, what research is most promising for reaching that goal? (Adapted from Effective Thesis)
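
For the DNA synthesis screening entry above, here is a deliberately minimal sketch of the core idea: flag any order that shares subsequences with a database of sequences of concern. Everything here (the window size, the ‘database’, exact matching) is a simplified placeholder; real screening systems use much longer windows, fuzzy matching, and carefully curated databases.

```python
# Minimal sketch of k-mer screening for DNA synthesis orders: flag an order
# if any window of length K exactly matches a sequence of concern.
# The 'hazard' sequence below is an arbitrary placeholder, not pathogen DNA.
K = 12  # illustrative window size; real screens use much longer windows

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

hazard_db = {"ATGCGTACGTTAGCATCGGA"}  # placeholder database
hazard_kmers = set().union(*(kmers(s) for s in hazard_db))

def screen_order(order_seq):
    """Return the flagged k-mers an order shares with the hazard database."""
    return kmers(order_seq) & hazard_kmers

print(screen_order("CCCCATGCGTACGTTAGCATCGGACCCC"))  # overlaps -> flagged
print(screen_order("CCCCCCCCCCCCCCCCCCCC"))          # clean -> empty set
```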

Global priorities research

  • How likely would catastrophic long-term outcomes be if everyone in the future acts for their own self-interest alone? (Adapted from Michael Aird, Crucial questions for longtermists)
  • How should altruistic/philanthropic actors coordinate when there are projects they’d both like to see happen but would both prefer that the other do/fund? (Inspired by Luke Muehlhauser (writing for Open Philanthropy), Technical and Philosophical Questions That Might Affect our Grantmaking)

  • What is the expected value of the continued existence of the human race? Might this expected value be negative, or just unclear? How do our answers to these questions vary if we (i) assume utilitarianism; (ii) assume a non-utilitarian axiology; (iii) fully take axiological uncertainty into account? (Global Priorities Institute, Research Agenda)

  • Assuming that there is a single, context-independent welfare level corresponding to a life’s having zero contributive value to social welfare, what kinds of lives have zero welfare in this contributive sense? (Global Priorities Institute, Research Agenda)
  • Could advances in AI lead to risks of very bad outcomes, like suffering on a massive scale? Is it the most likely source of such risks? (Adapted from Michael Aird, Crucial questions for longtermists)
  • One common view is that we should favour interventions that have more evidential support, all else being equal. On the face of it, this conflicts with the maximisation of expected value if one would prefer an intervention with much stronger evidence but a (possibly infinitesimally) small reduction in expected value (if ‘all else being equal’ means: ‘expected value being equal’). On the other hand, it also seems reasonable to place some value on the uncertainty of an intervention. What is the correct response to this mean-variance tradeoff? (Global Priorities Institute, Research Agenda; see the numerical illustration after this list.)
  • How much do global issues differ in how cost-effective the most cost-effective interventions within them are?
  • Many of the questions under economics, philosophy, history, and other disciplines could also be considered global priorities research.
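
Here is a tiny numerical illustration (numbers invented) of the mean-variance tradeoff question above: two interventions with nearly identical expected value but wildly different evidential profiles.

```python
# Two hypothetical interventions: outcomes (units of 'good done') -> probability.
# All numbers are invented for illustration.
well_evidenced = {10.0: 1.0}                    # robust evidence, near-certain payoff
speculative    = {0.0: 0.999, 10_050.0: 0.001}  # long shot with enormous upside

def mean(dist):
    return sum(x * p for x, p in dist.items())

def variance(dist):
    m = mean(dist)
    return sum(p * (x - m) ** 2 for x, p in dist.items())

for name, dist in [("well-evidenced", well_evidenced),
                   ("speculative", speculative)]:
    print(f"{name:15} EV={mean(dist):8.2f}  variance={variance(dist):12.2f}")
# A pure expected-value maximiser picks the speculative option (EV 10.05 vs 10.00)
# even though it fails 99.9% of the time, which is the tension the question raises.
```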

History

Law

  • What case studies are there for legal rights or other protections being won for beings that didn’t and wouldn’t ever have the right to vote, and what lessons do these have for animal welfare?
  • What lessons for AI can be drawn from the regulation of other dual-use technologies? (Effective Thesis)
  • What legal obstacles are there to setting up a stable long-term financial investment fund that will not be appropriated for centuries or longer?
  • What legal strategies might help make prediction markets more viable?
  • What rights might digital minds have under current law? What characteristics would affect this?
  • What legal tools could states use to close and/or exert control over AI companies? (Adapted from Allan Dafoe, AI Governance: A Research Agenda)

Machine learning, artificial intelligence, and computer science

  • Analyze the performance of different versions of software for benchmark problems, like SAT solving or chess, and determine the extent to which hardware and software progress facilitated improvement. (AI Impacts, Possible Empirical Investigations)
  • How can we make AI systems more robust to distributional shifts (i.e. how do we make sure they fail gracefully if they encounter a context that differs radically from their training environment)? For example, can we develop reinforcement learning agents that consistently notice when they are out of distribution and ask for guidance? (Adapted from Amodei et al., Concrete Problems in AI Safety)
  • Designing techniques to avoid reward hacking. For example, using an adversarial reward-checking agent to try to find scenarios that the ML system claims are high reward but a human labels as low reward. (Adapted from Amodei et al., Concrete Problems in AI Safety)
  • Improving ML systems’ ability to avoid negative side effects without having to hard code them into the system’s loss function. For example, one could try to define an ‘impact regularizer’ that would penalise making big changes to the environment. How might such an idea be formalised? (Adapted from Amodei et al., Concrete Problems in AI Safety; see the toy sketch at the end of this list.)
  • How can we design ML systems that are more transparent and whose models are more easily interpretable by humans? For example, see Chris Olah’s research into designing better visual representations of deep learning. (Evan Hubinger, Chris Olah’s Views on AGI Safety)

  • If you have a system that can make decisions as well as a human can, how can you use that system to build more powerful systems which make much better decisions than humans while still preserving its alignment with human values? An answer would make it more likely that we can use powerful general-purpose ML to make decisions that are actually good for us, rather than decisions that just seem good. (Personal correspondence with Buck Shlegeris) For more information, see OpenAI’s blog post on “AI Safety via Debate” and/or Paul Christiano’s work on iterated distillation and amplification, which is discussed in our interview with Paul.

  • Currently, formal models of agents and decision making are based on clearly false simplifying assumptions about the world; for example, that the agent itself has a mind that isn’t part of the physical universe. These limitations of our models sometimes mean that they suggest absurd things about what rational agents would do in various situations. If we want to be able to reason carefully about the behaviour of artificially constructed intelligent agents, it might be helpful to have more usable formal models. (Not everyone thinks that it’s important for us to be able to reason formally about agents this way.) (Personal correspondence with Buck Shlegeris). See the Machine Intelligence Research Institute (MIRI)’s work on embedded agency for more.

  • What types of AI (in terms of architecture, subfield, application, etc.) are most likely to contribute to reaching artificial general intelligence? What AI capabilities would be necessary or sufficient, individually or collectively? (Adapted from Future of Life Institute, A survey of research topics for robust and beneficial AI)

  • How quickly might those capabilities arise?

  • How can “progress” in AI research effectively be tracked and measured? What progress points would signal important technological milestones or the need for a change in approach? (CNAS, Artificial Intelligence and Global Security Initiative Research Agenda) (See also: the “Assessing AI Progress” section of Centre for the Governance of AI’s research agenda (page 21))
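
On the formalisation question raised in the side-effects entry above, here is one toy sketch: score an action by its task reward minus a penalty on how far it moves the world from an ‘inaction’ baseline. This is a simplified cousin of proposals like relative reachability and attainable utility preservation; the environment, states, and numbers are all invented.

```python
# Toy 'impact regularizer': task reward minus a penalty on deviation from
# the state the world would have reached had the agent done nothing.
# States are abstract feature vectors; all numbers are invented.
import numpy as np

LAMBDA = 0.5  # how strongly to penalise side effects

def regularized_return(reward, state_after_action, state_after_noop):
    """Task reward minus an impact penalty (L1 distance from the baseline)."""
    impact = np.abs(state_after_action - state_after_noop).sum()
    return reward - LAMBDA * impact

noop_state = np.array([0.0, 0.0, 0.0])  # the world if the agent does nothing
plans = {
    "careful":  (5.0, np.array([1.0, 0.0, 0.0])),  # achieves goal, small footprint
    "reckless": (6.0, np.array([4.0, 3.0, 2.0])),  # a bit more reward, big mess
}
for name, (reward, state) in plans.items():
    print(name, regularized_return(reward, state, noop_state))
# careful: 5 - 0.5*1 = 4.5; reckless: 6 - 0.5*9 = 1.5, so the penalty
# flips the ranking toward the low-impact plan.
```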

Philosophy

  • Assuming we are uncertain about what moral theory to believe, is there anything wrong with some views with very high stakes dominating our uncertainty-adjusted moral conclusions (‘fanaticism’)? For example, if you think theory A is very unlikely to be true, but it says action X is extremely valuable, while all other theories think X is just slightly bad, is there anything wrong with concluding you should do X? If so, what plausible view of moral uncertainty would allow us to avoid this conclusion? (Adapted from Global Priorities Institute, Research Agenda; see the worked example after this list.)
  • What concerns are there with representing people as having utility functions? What alternatives are there?
  • What sorts of entities have moral status? Controversial categories include nonhuman animals (including insects), the dead, the natural environment, and current or potential artificial intelligence. (Adapted from Will MacAskill, The most important unsolved problems in ethics)
  • Although it has been frequently argued that advanced AI goals should reflect ‘human values’, which particular values should be preserved (given that there is a broad spectrum of inconsistent views across the globe and across time about what these values should be)? (Adapted from Future of Life Institute, A survey of research topics for robust and beneficial AI)
  • What are the best heuristics for reliably identifying experts on a topic, or choosing what to believe when apparent experts disagree?

  • What’s the chance that the people making the decision in the future about how to use our ‘cosmic endowment’ are such that we would be happy, now, to defer to them? (Global Priorities Institute, Research Agenda)

  • How can we distinguish between AIs helping us better understand what we want and AIs changing what we want (both as individuals and as a civilisation)? How easy is the latter to do; and how easy is it for us to identify? (Richard Ngo, Technical AGI safety research outside AI)
  • Philosophers may also be interested in working on questions in the global priorities research section, above.
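
To put the fanaticism worry from the first question above into numbers (all invented), here is the expected choiceworthiness arithmetic with one tiny-credence, high-stakes theory in the mix.

```python
# Worked toy example of 'fanaticism' under moral uncertainty.
# Credences and choiceworthiness values are invented for illustration.
credences = {"theory_A": 0.001, "all_other_theories": 0.999}

# How good each theory says action X is: theory A says astronomically
# valuable; every other theory says slightly bad.
value_of_X = {"theory_A": 1_000_000.0, "all_other_theories": -1.0}

expected = sum(credences[t] * value_of_X[t] for t in credences)
print(f"Expected choiceworthiness of X: {expected:.2f}")
# = 0.001 * 1,000,000 + 0.999 * (-1) ~= 999, so naive maximisation says do X,
# even though theories holding 99.9% of our credence judge X (slightly) bad.
```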

Physics and astronomy

  • Research the deflection of 1 km+ asteroids and comets, perhaps restricted to methods that couldn’t be weaponised (such as those that don’t lead to accurate changes in trajectory). (Toby Ord, The Precipice, Appendix F; see the back-of-envelope estimate after this list.)
  • Improve our understanding of the risks from long-period comets. (Toby Ord, The Precipice, Appendix F)
  • Improve our modelling of impact winter scenarios, especially for 1–10 km asteroids. Work with experts in climate modelling and nuclear winter modelling to see what modern models say. (Toby Ord, The Precipice, Appendix F)
  • What would be required for humans to settle other planets or to use resources from outside Earth?
  • How likely is the existence of extraterrestrial life? (Center on Long-Term Risk, Open Research Questions)
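
For intuition on the deflection entries above, here is a standard back-of-envelope estimate (ignoring real orbital mechanics, which matters a great deal in practice): to first order, a velocity change dv applied a time t before impact shifts the arrival point by about dv × t, so the required nudge shrinks sharply with warning time.

```python
# Back-of-envelope asteroid deflection: to first order, a velocity change dv
# applied t seconds before impact shifts the arrival point by about dv * t.
# This ignores orbital mechanics, which can change the answer substantially.
EARTH_RADIUS_M = 6.371e6
SECONDS_PER_YEAR = 3.156e7

def required_dv(lead_time_years, miss_distance_m=EARTH_RADIUS_M):
    """Velocity change (m/s) to shift arrival by one miss distance."""
    return miss_distance_m / (lead_time_years * SECONDS_PER_YEAR)

for years in (1, 10, 50):
    print(f"{years:2d} years of warning: dv ~ {required_dv(years) * 100:.2f} cm/s")
# With decades of warning, nudges of a couple of centimetres per second or less
# suffice in this toy estimate, which is one reason early detection
# (e.g. of long-period comets) matters so much.
```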

Political science, international relations, and security studies

  • What types or features of institutions could help enable the representation of the interests of future generations and/or sentient nonhumans in political processes?
  • How feasible is the eventual rise of a global and potentially long-lasting totalitarian regime? What are potential predictors of such a regime (perhaps improved surveillance technologies or genetic engineering)? (Adapted from Michael Aird, Crucial questions for longtermists)
  • How could AI transform domestic and mass politics? E.g. will AI-enabled surveillance, persuasion, and robotics make totalitarian systems more capable and resilient? (Allan Dafoe, AI Governance: A Research Agenda)
  • How will geopolitical, bureaucratic, cultural, or other factors affect how actors choose to adopt AI technology for military or security purposes? (CNAS, Artificial Intelligence and Global Security Initiative Research Agenda)
  • Will AI come to be seen as one of the most strategically important parts of the modern economy, warranting massive state support and intervention? If so, what policies might this cause countries to adopt, and how will this AI nationalism interact with global free trade institutions and commitments? (Adapted from Allan Dafoe, AI Governance: A Research Agenda)

  • What are the conditions that could spark and fuel an international AI race? How great are the dangers from such a race, how can those dangers be communicated and understood, and what factors could reduce or exacerbate them? What routes exist for avoiding or escaping the race, such as norms, agreements, or institutions regarding standards, verification, enforcement, or international control? (Allan Dafoe, AI Governance: A Research Agenda)

Psychology

  • How well does good forecasting ability transfer across domains? (Inspired by Scott Alexander, answer to What are the open problems in human rationality?)
  • What’s the best way to measure individual wellbeing — across people, and for different kinds of sentient beings? (Adapted from Happier Lives Institute, Research agenda)
  • What are the best ways to encourage compliance with safety and security norms and/or create a culture of safety and security among scientists who work with dangerous pathogens or other dual-use technologies? (Adapted from Effective Thesis)
  • What potential nootropics or other cognitive enhancement tools are most promising? E.g. does creatine actually increase IQ in vegetarians?
  • Develop more reliable and tamper-proof measures for so-called ‘dark tetrad’ traits — psychopathy, Machiavellianism, sadism, and narcissism. (Adapted from David Althaus and Tobias Baumann, Reducing long-term risks from malevolent actors)
  • Some expect that as AI advances it might engage in behaviour that we experience as manipulative. What are the best defenses against these new possible sorts of manipulation? (Adapted from Future of Life Institute, A survey of research topics for robust and beneficial AI)

Public policy

Science policy/infrastructure and metascience

Sociology

  • Generate case studies of successes and failures by social movements (e.g. the anti-GMO movement, the anti-nuclear weapons movement, or the LGBTQ movement) — what happened and how? (Inspired by Sentience Institute, Research agenda)

  • What were the biggest and most long-lasting changes in cultural value systems throughout history? How did they happen and why? (Inspired by Effective Thesis)

  • What are the best ways to encourage compliance with safety and security norms and/or create a culture of safety and security among scientists who work with dangerous pathogens or other dual-use technologies? (Adapted from Effective Thesis)

  • Why are some values, institutions, and organisations extremely durable — lasting hundreds of years (e.g. academia) — whereas others change frequently? What are the social mechanisms that explain this? (Effective Thesis)
  • Case studies into how institutions and organisations make big, important decisions, or respond to catastrophes.

Statistics and mathematics

  • When estimating the chance that now (or any given time) is a particularly pivotal moment in history, what is the best uninformative prior to update from? For example, see our podcast with Will MacAskill and this thread between Will MacAskill and Toby Ord for a discussion of the relative merits of using a uniform prior vs. a Jeffreys prior.
  • Toby Ord has argued that because the human species has survived for a long time, we can conclude the natural per-century human existential risk is low (or else it’d be extremely unlikely that we’d be here). Does this argument still hold if we assume there are millions of potentially intelligent species evolving throughout the history of the universe, and only those that survive about as long as we have become advanced enough to ask questions about how high natural extinction risk is? (Some experts believe this issue has been adequately resolved. See the simulation sketch after this list.)

  • Currently, formal models of agents and decision making are based on clearly false simplifying assumptions about the world; for example, that the agent itself has a mind that isn’t part of the physical universe. These limitations of our models sometimes mean that they suggest absurd things about what rational agents would do in various situations. If we want to be able to reason carefully about the behaviour of artificially constructed intelligent agents, it might be helpful to have more usable formal models. (Not everyone thinks that it’s important for us to be able to reason formally about agents this way.) (Personal correspondence with Buck Shlegeris). See the Machine Intelligence Research Institute (MIRI)’s work on embedded agency for more.

  • Many statisticians may also be interested in the ML questions we list above.
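
Here is a Monte Carlo sketch of the observation-selection effect in the second question above (all parameters invented): simulate many species with different true per-century extinction risks, then look only at those that, like us, survived long enough to ask the question.

```python
# Monte Carlo sketch of survivorship bias in estimating natural extinction risk.
# All parameters (risk range, horizon, population size) are invented.
import random

random.seed(0)
CENTURIES = 2_000   # survival horizon, roughly 'about as long as we have'
N_SPECIES = 200_000

all_risks, survivor_risks = [], []
for _ in range(N_SPECIES):
    risk = random.uniform(0.0, 0.002)  # true per-century extinction risk
    all_risks.append(risk)
    # Chance of surviving every one of the CENTURIES centuries:
    if random.random() < (1 - risk) ** CENTURIES:
        survivor_risks.append(risk)

print(f"mean true risk, all species: {sum(all_risks) / len(all_risks):.5f}")
print(f"mean true risk, survivors:   {sum(survivor_risks) / len(survivor_risks):.5f}")
# Survivors are systematically the low-risk species: exactly the selection
# effect the question asks us to account for when reading our own track record.
```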

Sources

Thank you to everyone who put together lists of research questions — original or compiled from elsewhere — that they thought could be promising for people to work on. This article relied on their efforts more than our own. Michael Aird’s Effective Altruism Forum post A Central directory for open research questions was particularly helpful in putting this article together.

All other sources are linked where they’re cited throughout the article.

Want to work on one of the research questions above?

You could be a good fit to receive one-on-one careers advising from us. We can help you analyse your plan, narrow down your options, and make connections. It’s all free.


Thinking of going into academia?

Check out our article on academic research as a career path.
