Whole brain emulation
Our overall view
Sometimes recommended
We’d love to see more people working on this issue. But you might be able to do even more good working on one of our top priority problem areas.
Profile depth
Exploratory
Why might whole brain emulation be a pressing issue?
Whole brain emulation is a strategy for creating a kind of artificial intelligence by replicating the functionality of the human brain in software. It seems likely that we’ll be able to emulate brains this century — unless other forms of AI are created first, which then change our trajectory.1
Successful whole brain emulation could enable dramatic new forms of intelligence — so steering the development of this technique could be crucial (see our full profile on preventing AI-related catastrophes for more). If digital people are created through whole brain emulation, this would likely cause rapid economic growth, and — because digital people won’t die or physically age — their existence could lock in values for an extended period of time. In the worst cases, this could involve locking in stable totalitarianism or bringing about other suffering risks.
While we’re reasonably confident that it could be important to research the governance of whole brain emulation, we’re not sure that accelerating this technology would be a good thing overall. Here are a few relevant considerations:2
- Whole brain emulations could be more interpretable than other forms of human-level artificial intelligence. We have a huge amount of experience understanding human intelligence. This means that we may be more able to understand whole brain emulations than other forms of AI (and as a result prevent any unintended behaviour).
- Whole brain emulations could inherit human motives. For many forms of transformative AI, it's unclear whether these systems have goals at all, and if they do, it's unclear what those goals are. But when emulating the entirety of a human brain, it seems more likely that the emulation will have motives similar to the original brain's. However, it's not clear those motives would persist in the unfamiliar, alien digital environment in which the emulated brain would find itself. Also, humans can be untrustworthy, selfish, and cruel, so it's far from certain that inheriting human motives would be an entirely good thing.
- The development of whole brain emulation is more predictable than the development of other forms of AI. We are very likely to emulate the brains of simple organisms like the roundworm C. elegans before we’re able to emulate human brains. This means we’d have some warning about the arrival of whole brain emulation, giving society some time to prepare and adapt. This could mean that the technology is more likely to have positive effects overall.
- Speeding up the arrival of transformative AI may not be positive. Doing research into whole brain emulation speeds up the arrival of whole brain emulation. Since whole brain emulation is a form of transformative AI, this research speeds up the arrival of transformative AI — which could be very dangerous.
- The technology developed for whole brain emulation could be used to develop less safe types of AI. We might be able to develop whole brain emulation by scanning and accurately simulating human brains. If so, progress is likely to be constrained by our scanning and simulation capabilities rather than by any lack of neuroscientific knowledge. But as we gather this data, it seems quite possible that it would yield new insights into neuroscience, which could help us develop neuromorphic AI (AI inspired by, but not wholly based on, the human brain). Bostrom (2014) argues that neuromorphic AI is less safe than other forms of AI.
- Successful whole brain emulation may not fully remove the risks of other forms of AI. Whole brain emulations may have capabilities exceeding humans, because emulations can be copied or run at faster speeds. That said, whole brain emulations may be less intelligent than other forms of AI, so we may still develop these other forms of AI. This means that overall we would have had to deal with two dangerous transitions: the development of whole brain emulation, and the development of another more capable form of AI. This could mean the total risk is higher if whole brain emulation is developed first.
Overall, working on whole brain emulation seems like it could be extremely valuable if we're pessimistic about existential risks from other forms of AI, and if research into whole brain emulation makes it substantially more likely to arrive before those other forms.
All that said, we should be wary of another concern: even if whole brain emulation could reduce existential risks from AI, developing it could entail creating artificial sentience, which carries a whole set of other challenges.
Attempts to better answer the question of whether we should be working on whole brain emulation may be extremely valuable, and we’re not aware of anyone currently working on this problem full time.
Learn more about whole brain emulation
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (2014) discusses whole brain emulation in detail.
- The Digital People FAQ by Holden Karnofsky (2021) considers what a world with whole brain emulation would look like.
- Whole Brain Emulation: A Roadmap by Anders Sandberg and Nick Bostrom (2008) details how we might expect to see this technology developed.
- Podcast: Jonathan Birch on the edge cases of sentience and why they matter
- Podcast: Anil Seth on the predictive brain and how to study consciousness
- Podcast: Robert Long on why large language models like GPT (probably) aren’t conscious
- Podcast: Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe
Read next: Explore other pressing world problems
Want to learn more about global issues we think are especially pressing? See our list of issues that are large in scale, solvable, and neglected, according to our research.
Notes and references
- Eth et al. (2013), “The prospects of whole brain emulation within the next half-century.” Sciendo.
In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE’s future to be the development of advanced miniscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends.↩
- These considerations were adapted from Bostrom (2014), *Superintelligence: Paths, Dangers, Strategies*, Chapter 14: The strategic picture.↩