China-related AI safety and governance paths

Expertise in China and its relations with the world might be critical in tackling some of the world’s most pressing problems. In particular, China’s relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP.1 These considerations led us to publish a guide to improving China–Western coordination on global catastrophic risks and other key problems in 2018. Since then, we have seen an increase in the number of people exploring this area.

China is one of the most important countries developing and shaping advanced artificial intelligence (AI). The Chinese government’s spending on AI research and development is estimated to be on the same order of magnitude as that of the US government,2 and China’s AI research is prominent on the world stage and growing.

Because of the importance of AI from the perspective of improving the long-run trajectory of the world, we think relations between China and the US on AI could be among the most important aspects of their relationship. Insofar as the EU and/or UK influence advanced AI development through labs based in their countries or through their influence on global regulation, the state of understanding and coordination between European and Chinese actors on AI safety and governance could also be significant.

That, in short, is why we think working on AI safety and governance in China and/or building mutual understanding between Chinese and Western actors in these areas is likely to be one of the most promising China-related career paths. Below we provide more arguments and detailed information on this option.

If you are interested in pursuing a career path described in this profile, contact 80,000 Hours’ one-on-one team and we may be able to put you in touch with a specialist advisor.

In a nutshell: We’d be excited to see more people build expertise to do work in or related to China in order to reduce long-term risks associated with the development and use of AI.

There are arguments both for and against pursuing this career path: on the one hand, effective coordination is imperative for avoiding dangerous conflicts and racing dynamics; on the other, this path involves major risks and complexities.

Some promising career paths to aim for include:

  • Technical AI safety research
  • Safety policy advising in an AI lab
  • Research at a think tank or long-term focused research group
  • Translation and publication advising

Personal fit for these roles both in the West and in China depends heavily on experience, networking ability, language, and citizenship.

Sometimes recommended — personal fit dependent

This career will be some people's highest-impact option if their personal fit is especially good.

Review status

Based on a shallow investigation 

Why pursue this path?

1. Safely managing the introduction of AI may require unprecedented international coordination.

As discussed elsewhere, without careful design, AI might act in ways unintended by humans, with potentially catastrophic effects. Even if we can control advanced AI, it could be very economically and socially disruptive (in good or bad ways), and could be used as a destabilising weapon of war.3

Because reducing existential risks from advanced AI benefits everyone, whether or not they contributed to the reduction, it has the character of a global public goods problem. Steps to reduce existential risk tend to be undersupplied by the market, since each actor that could take such steps would capture only a small portion of the value (even if the actor is a large country), while bearing all the costs (e.g. slower local AI progress).
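
To make the undersupply argument concrete, here is a minimal back-of-the-envelope sketch (our own illustration; the quantities $B$, $C$, and $s$ are assumptions rather than figures from the sources cited here). Suppose a risk-reducing measure costs an actor $C$ and creates a total benefit $B$ for the world, of which the actor captures only a share $s$ (say, roughly its share of the world economy). A self-interested actor takes the measure only if

$$sB > C$$

So any measure with $sB < C < B$ is worth taking from the world's perspective but not from any individual actor's, and since $s$ is well below 1 even for the largest countries, many such measures go unfunded without coordination.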

Past experience shows the provision of global (and intergenerational) public goods is possible with enough effort and coordination — for instance, in the case of ozone layer protection.

However, coordination to ensure safe advanced AI may be more challenging to achieve. This is partly because AI is anticipated to bring major advantages, so actors such as governments and companies around the world may be motivated to be the first to develop and deploy advanced AI systems in order to capture most of these benefits.4 Actors competing on speed and/or performance may cut corners on safety, leading to a “race to the bottom.”5
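
A stylised two-player game (again, our own toy illustration with made-up payoffs, not a model from the cited sources) shows how such a race to the bottom can arise even when both actors would prefer mutual caution. Each actor chooses to develop carefully or to cut corners, with payoffs listed as (row player, column player):

$$
\begin{array}{c|cc}
 & \text{Careful} & \text{Cut corners} \\
\hline
\text{Careful} & (3,\,3) & (1,\,4) \\
\text{Cut corners} & (4,\,1) & (2,\,2)
\end{array}
$$

Whatever the other actor does, each does better by cutting corners ($4 > 3$ and $2 > 1$), so the equilibrium is mutual corner-cutting at $(2,\,2)$, even though mutual caution at $(3,\,3)$ leaves both better off. Coordination mechanisms, such as verifiable agreements, are precisely what allow actors to escape this kind of equilibrium.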

Unprecedented global coordination may therefore be required, and it seems beneficial to have a lot of people doing thoughtful work to encourage this.

2. Coordination between China and the US on AI safety would require deliberate effort.

It’s a challenging period for China–US relations, including in the area of AI. There are a couple of theoretical reasons to think that these relations may remain strained:

  • Power dynamics: according to power transition theory, there is potential for conflict when a dominant nation and a challenger reach relative equivalence of power, especially when the challenger is dissatisfied with the status quo. There is significant debate over whether this theory is robust and whether it applies to China–US relations in the 21st century. However, if both of these conditions are true, it suggests great effort may be required to avoid conflict and engender coordination between the two countries.
  • Differences in ideology and regime type: political leaders in China and the US diverge on a number of ideas about governance, values, and legitimate uses of technology. These divides can make it difficult for actors to see one another’s perspectives and come to agreements.6

3. There is growing interest in AI safety and governance in China.

The number of organisations doing potentially promising work on AI safety and governance in and with China appears to have increased since we published our earlier guide on China-related careers in February 2018. For instance, the Beijing Academy of AI (BAAI), which was established towards the end of that year and is backed by the Chinese Ministry of Science and Technology, has advertised for researchers with experience in interpretability and in adversarial attack and defence for natural language processing. The Basic Theory Research Center, established in 2019 under Tsinghua University’s Institute for AI, is exploring robust and explainable AI theory and methods. We provide more examples of companies doing relevant work below.

Sets of principles for AI development published by BAAI and by China’s National New Generation Artificial Intelligence Governance Expert Committee have considerable similarities with the Asilomar AI Principles, which have been endorsed by thousands of experts worldwide, including the heads of leading international AI labs. In a 2021 article, top Chinese researchers — including a member of the AI Governance Expert Committee established by China’s Ministry of Science and Technology — discussed safety risks from artificial general intelligence and potential countermeasures.

Further evidence of attention to AI safety and governance in China comes from a survey of 49 Chinese AI researchers conducted by the Center for Security and Emerging Technology in early 2020. It found that 71% of respondents thought the general level of concern in China about AI safety was moderate, and 6% thought concern was significant. Furthermore, 96% of respondents agreed on the need for international conventions and safeguards to ensure the safe development of a “next generation” of AI.7

This survey suggests the problem is at least somewhat tractable, certainly compared to a hypothetical scenario in which Chinese actors showed no concern about AI safety at all. That said, we are still highly uncertain about the overall tractability of increasing coordination to reduce AI risk, as we are for the wider category of work to positively shape the development of AI.

4. You could help pioneer the subfield of China–Western coordination on AI safety and governance.

Despite the upward trend in work on AI safety and governance in general (including in China), work in these fields still seems highly neglected.

Based on the number of employees at relevant organisations we have identified in the field of technical AI safety, there are likely to be only a few hundred full-time technical AI safety researchers worldwide.

The pool of those working in AI governance is likely to be broader, since it encompasses both policy practitioners and researchers, but our best guess is that fewer than 1,000 people work full-time in the field of AI governance worldwide. And we’d guess that of these, less than 10% are actively working towards improving China–Western coordination.8 That would come to fewer than 100 people.

These are all rough figures and, as stated above, the field is quickly evolving. Nonetheless, it seems fair to conclude that as of 2021, at most around 100 people are working full-time to improve China–Western coordination on AI safety and governance. That is a very small number given the magnitude of this subproblem, so, all else equal, the marginal impact of additional people working on it should be high (though see some caveats below).

5. Gaining and leveraging expertise on China is a promising option in general.

As we talked about in our earlier guide to China-related careers, China has a crucial role in almost all our priority problem areas. This flows largely from it being home to a fifth of the world’s population, its status as the world’s second-largest economy, and its significance as a nuclear and military power. Despite this, there is relatively poor understanding and coordination between China and the West.

In terms of career capital, a path aimed at improving China–Western coordination on AI leaves you with lots of backup options. Many people recognise the importance of better China–Western coordination more broadly, there is a lot of Western interest in China affairs in general, and China expertise is only growing in importance and demand. So even if your plans for improving coordination on AI don’t work out, you can still use the expertise you will have gained in China–Western relations to help solve other pressing problems.

Arguments against pursuing this path

1. It’s far from guaranteed you’d have a large impact in this career.

As with career paths in US AI policy, to have an outsized impact in this path you would need to:

  • Be in the right place at the right time. In other words, be in a position to influence key decisions about the design or governance of AI systems when AI technologies are advanced enough that those decisions need to be made.
  • Have good judgement and sufficient expertise to influence those decisions in a positive direction. It can be difficult to predict in advance what expertise will be needed, or to know which actions will have the best long-term impact. Figuring these things out is a lot of the value you would contribute — and it won’t be easy.9

Plus:

  • The development and societal integration of AI would need to have major potential downsides and/or upsides. If future technical or political developments are such that the potential for negative consequences or lost value from advanced AI is not as significant as it currently seems, AI safety and governance would become a less important area to work on.
  • Coordination problems would need to be solvable, but not solved by default. If we are unable to make progress on international coordination issues around advanced AI systems (for instance, because the benefits of being ahead in its development are too substantial), then improving coordination becomes less tractable. Conversely, working on this problem has less impact if the problem is likely to solve itself through some other mechanism.

Meeting all four of these conditions in concert seems somewhat unlikely, so we think the median outcome for an individual working on China–Western coordination on AI safety and governance will be to have little impact.

However, in scenarios in which all four of the above are satisfied, your impact could be very large indeed, and so we still recommend this career because of its large expected impact and good backup options.

2. There is a possibility of doing harm.

At 80,000 Hours, we encourage people to work on problems that are neglected by others and large in scale. Unfortunately, this often also means that people could accidentally do more damage if things don’t go well.

It seems to us that China–Western coordination on AI safety and governance is an issue for which this is a danger.

This article lists six ways people trying to do good can unintentionally set back their cause. Many of them apply to this career path.

One particularly easy way to have a substantial negative impact is to act unilaterally in contexts where even one person mistakenly taking a particular action could impose widespread costs on the field or the world as a whole. So if you pursue this path, it seems wise to develop and consult a good network and a strong knowledge base, and to avoid acting unilaterally even when you think it is justified.

In addition, people pursuing this path could cause harm by supporting the development and deployment of AI technologies in ethically questionable ways (even if unintentionally). This links to the next point.

3. There are potential abuses of AI-enabled technology.

Given the powerful economic and national incentives at play, the dual-use nature of the technology,10 and the complexity of AI supply chains, your efforts to increase beneficial coordination on AI could indirectly cause damage or be used by other interested actors in a harmful way. For example, if you help smooth the way for a certain useful compromise, what harmful uses of AI might result?

People working to develop, procure, or regulate AI in highly charged, international contexts will therefore likely have to grapple with challenging ethical judgement calls with many relevant considerations. For instance:

  • What is the ratio of expected benefits to harms involved in working on a particular project or at a particular organisation?
  • Have you factored in the potential risk to your reputation, which could affect your ability to have an impact in the future?
  • What is the chance that the same or worse would happen if you were not involved?

It will be hard to think through these questions, and there is always the possibility of reaching wrong answers.

4. There are complex international political considerations and uncertainties.

The evolution of China–US relations and international politics more broadly introduces constraints and uncertainties in careers aiming to improve China–Western coordination on AI safety and governance.

For those considering technical safety research:

  • In recent years, authorities in the US have increased scrutiny of Chinese researchers’ backgrounds. This could affect Chinese nationals who plan to study and work in AI in the US.
  • Under UK rules since September 2020, students from certain countries (including China) need to undergo security vetting if they wish to pursue postgraduate studies in AI and other “sensitive subjects” that could be used in “Advanced Conventional Military Technology.”

For those considering public sector roles:

  • Studying abroad for over six months may disqualify people from applying for certain roles in the Chinese public sector, such as police positions in the public security or judicial administration systems.11
  • Meanwhile, Helen Toner, the Director of Strategy at the Center for Security and Emerging Technology at Georgetown University, has suggested that spending a significant amount of time in China or maintaining close and ongoing contact with Chinese nationals could make it harder to get a security clearance in the US.
  • On the other hand, the Boren Awards funding programme, which invests in “linguistic and cultural knowledge for aspiring federal government employees,” lists China as one of the countries it prefers applicants to study in.
  • We’ve been told that having a background in China — and possibly even just visiting — could exclude you from some Western government jobs.

We are therefore uncertain about the extent to which time in China would damage someone’s career prospects in governments and think tanks in the US and other Western countries. This will likely depend on the particulars of the case; for instance, individuals who worked for foreign media outlets in China will likely face fewer challenges than those with experience at organisations affiliated with the Chinese government.12

It is therefore worth carefully considering the potential implications for your long-term career options before changing your location or organisation. Ideally, speak to people inside organisations you might want to work at to get their views.

For those considering think tank roles:
In March 2021, China introduced sanctions against certain political and academic figures and institutions from Europe and the UK, prohibiting them from travelling to China or doing business with the country. These measures were a response to sanctions imposed on Chinese officials by the EU and UK (and other Western countries).13 The inclusion of the Mercator Institute for China Studies (Europe’s leading think tank on China) among those sanctioned suggests that those pursuing China-focused think tank roles in the West could face hurdles as a result of the political environment.

The section below details further challenges associated with specific types of roles, along with the opportunities for impact they offer.

Ideas for where to aim longer term

In this section we set out some options to aim for, which have been researched enough for us to feel comfortable recommending them.14 How you should prioritise between these depends mostly on your comparative advantage and personal fit.

Because some paths are more suitable for people of a particular citizenship, we have split these ideas into three subsections to help you find the options most likely to be available to you.

Paths that are likely most suitable for people of Chinese citizenship and/or heritage

The first two paths in this category relate to helping build the field of technical AI safety in industrial and academic Chinese labs. Being productive and progressing quickly in these settings will likely be difficult without strong Chinese language skills, and foreign scientists working in China have said that assimilating into local culture is a tough task. Admittedly, it seems possible to work at the US offices of Chinese labs without Chinese language skills — for instance, many of the co-authors of Deep Speech 2 at Baidu’s Silicon Valley AI Lab are not Chinese. But you are likely to have greater potential for helping build the field of technical AI safety in Chinese labs if you can communicate effectively in Chinese.

The third path in this category — researching AI governance at Chinese think tanks — is likely to require Chinese citizenship, particularly for government-affiliated institutes.

AI safety researchers and engineers at top Chinese AI companies

This path would involve doing research on AI safety-related topics, such as alignment, robustness, or interpretability, at top Chinese AI companies. This could be valuable for encouraging safer development and deployment of advanced AI systems. In the long run, you could aim to progress to a senior position and promote increased interest in and implementation of AI safety measures internally. You could also potentially help promote AI safety among other Chinese researchers, if you are able to publish research or communicate about it at conferences.

If your comparative advantage and personal fit are such that you would like to aim for this route over technical AI safety roles elsewhere, we recommend first trying to gain career capital by studying at top graduate schools and working at top AI safety companies and labs, regardless of location.

In China, companies that may have relevant positions include Baidu, Tencent, Huawei, and RealAI (a startup backed by the Honorary Dean of Tsinghua University’s Institute of AI). All four have done work on adversarial attacks, for instance.15

If you have strong research credentials and communication skills, you could consider communicating safety ideas among Chinese AI researchers by organising workshops and doing translations.

In addition to research credentials, people pursuing this path need patience and strong interpersonal skills to be able to progress to influential positions from which they can promote AI safety.

AI safety researchers and professors at top Chinese AI academic labs

These roles would also involve doing research related to AI safety, but at top Chinese university labs. This could be valuable both for making progress on technical safety problems and for encouraging interest in AI safety among other Chinese researchers — especially if you progress to take on teaching or supervisory responsibilities.

Options include:

If you want to aim for this route, we recommend first gaining career capital by studying at top graduate schools and working at top AI safety labs, regardless of location. Over time, you could try to build a Chinese network by collaborating with sympathetic Chinese AI labs and overseas Chinese professors. As in industrial labs, you could pursue field-building opportunities if you are a good fit.

Given the fairly early stage of the AI safety field in China, if you are not open to working outside China, it could be challenging in the short term to find suitable professors and/or mentors.

AI governance researchers at Chinese think tanks

This path involves analysing technical developments and policy responses in other countries in order to produce safety policy recommendations for the Chinese government or more public-facing research. This is another area where strong ethical judgement may be especially important.

This could be particularly high impact because think tanks in China are mostly affiliated with the government in some way, and thus have a more direct route to informing government policy than many think tanks in countries like the US and UK.

Below are some options for institutes doing AI-related work (as of 2021), with the first five likely to be closer to policymaking:

Candidates with the best chance of securing jobs at such think tanks would have studied at top science and technology policy graduate schools.

Paths for which citizenship is less likely to be an important consideration

It’s important for both Chinese and Western citizens to be involved in supporting China–Western coordination on AI safety and governance. Your citizenship is not likely to bar you from consideration for the following paths, provided you have the legal right to work in your intended country of employment and any necessary language skills.16

AI governance researchers at key long-term-focused research groups

This path would involve conducting research with a longer-term focus and potentially less immediate policy relevance than at a standard think tank. This could be either:

  • Descriptive research, such as analysing the political and economic forces that might influence the deployment of any advanced AI technology in China.
  • Prescriptive research, such as making recommendations for reducing the risks of racing dynamics between Chinese and non-Chinese companies.

This path will likely allow more emphasis on finding disinterested avenues to international coordination compared to work at national security–related think tanks, which may face more pressure to respond to policymakers’ desire to gain or maintain strategic advantage.

Potential organisations to aim for include:

We are uncertain how much capacity such organisations have for China-focused full-time staff. If you are unable to join these organisations full-time, you could consider volunteering or contracting with them on China-related projects. To add value, you would likely need to be able to propose and drive projects with little direction.

AI safety and interdisciplinary researchers focusing on problems of cooperation

This path would involve conducting research into problems of cooperation, in both AI and other disciplines. According to “Open Problems in Cooperative AI,” an article co-authored by research scientists at DeepMind and Professor Allan Dafoe:

Central questions include: how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms.

However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.

Cooperative AI emerges in part from multi-agent research, a field whose importance other AI safety–focused groups and researchers have also highlighted, including researchers at UC Berkeley’s Center for Human-Compatible Artificial Intelligence, at Université de Montréal’s Mila, and at the Center on Long-Term Risk.

In Human Compatible: Artificial Intelligence and the Problem of Control, Stuart Russell suggested that:

We will need to generalize inverse reinforcement learning from the single-agent setting to the multi-agent setting—that is, we will need to devise learning algorithms that work when the human and robot are part of the same environment and interacting with each other.

(See our interview with Stuart for more.)

We think research of this kind could help inform governance and technical proposals for improving coordination between principals and agents in different countries. While we have not explored related research in China in depth, labs that seem to be conducting relevant work include:

AI-focused translators and publishing advisors

This kind of role would support the translation of AI-related publications to facilitate information exchange between China and the English-speaking world. Building a common understanding of the risks and opportunities associated with advanced AI seems to be an important precursor to establishing any agreements or mechanisms for jointly mitigating risks.

Jeff Ding has highlighted that while AI-related developments covered in Western outlets tend to be quickly translated for a Chinese audience, the reverse is not true. Valuable initiatives helping to reduce this imbalance include:

Alternatively, native speakers of Mandarin could consider acting as translation advisors for publishing houses and authors seeking to translate English books related to AI safety into Chinese. Examples of publications that have benefited from the support of such advisors include the Chinese versions of Life 3.0 and Human Compatible.

For this path, it could be hard to find full-time roles that allow for specialisation in AI safety and governance, so you may want to explore relevant work that can be done on a contract basis.

Experience in technology-related translation and/or relevant connections would be valuable for pursuing these options.

Paths for which the relevance of citizenship is unclear

While the top leaders of Chinese AI companies and labs are overwhelmingly Chinese nationals, we are aware of non-Chinese citizens based in Chinese offices who have experience working on global public policy or advising executives in Chinese companies. Non-Chinese nationals wishing to gain influence in a Chinese company will likely find this easier the larger the company’s international business and the stronger their Mandarin skills.

While we are not currently aware of restrictions on Chinese citizens at top AI labs in the US or UK, it is possible that this could change in the future (see the section on international politics above).

Some US think tank jobs related to national security require a security clearance. US citizenship is required to obtain one, and significant ties with China can sometimes disqualify even US citizens. However, we expect that only a small fraction of roles at the think tanks mentioned below would require clearances. We are unsure whether any roles in UK think tanks have security clearance requirements.

AI governance or policy advisors at top Chinese AI labs

A role in this category would likely involve assessing and mitigating risks from a lab’s development and deployment of AI. It could involve working with other internal stakeholders to design and implement governance and accountability mechanisms to reduce risks from unsafe AI systems.

It might also involve working with people outside the lab — for instance, discussing new regulatory proposals with the government, or working with other organisations in China or overseas to develop standards and best practices. If you help your organisation take a lead on AI governance mechanisms, you could also have a positive influence on other Chinese AI labs.

The Tencent Research Institute is an example of a team doing work related to AI governance. For Chinese labs that don’t have policy research teams or have not yet done much work on AI governance issues, you could try to encourage more discussion of these topics from a position on teams related to government relations, industry research, legal, or strategy, or in the office of the CEO.17

It could be difficult to get staff with diverse interests and goals to take meaningful action to reduce risks from AI. This may be especially true if such action is expected to negatively impact profit. It can be hard to identify from the outside which teams and roles will genuinely have the resources and backing from leadership to drive positive change. It sometimes takes high-profile incidents to get a lab to take safety and security seriously, so having impact with this path may depend on patiently building your network and resources and acting strategically when the right moment arises. This also means you need to keep your focus on safety even if your everyday work and colleagues are focused on other things most of the time, which can be challenging.

AI governance or policy advisors at top AI labs in the US and UK

Roles in this category might involve risk mitigation and stakeholder engagement tasks similar to those set out in the section above, but with more focus on trying to build understanding of and communication with Chinese actors. While it might be rare for top US and UK AI labs to advertise AI governance and policy roles that concentrate on China, you might be able to create opportunities if you have a China-related background. For instance, you could start knowledge-sharing and research collaborations with Chinese labs or academics in the area of AI safety, or advise internally on how initiatives and announcements might be perceived by Chinese stakeholders.

Some of the leading labs working on advanced AI (such as OpenAI, DeepMind, Facebook AI Research, and Google Brain) are currently based in the US and the UK. What they do matters not only for the systems they are building, but also for shaping the perceptions and behaviours of other AI labs around the world (including in China), where they are well known and well regarded. This makes working with these Western labs high leverage. (Read more about whether to work in a leading AI lab.)

Aside from policy roles in these labs, there could be other opportunities to engage with China in Western labs with a role in AI development. You could work in government relations for Microsoft or Google in Beijing, or work in a role in a Western company that involves international engagement. For instance, Microsoft has a representation office to the United Nations, and several Western companies are represented on the World Economic Forum’s AI Council.

However, we don’t know how many roles involve the opportunity to work on or engage with China, at least in the short term. Some Western companies may be cautious about operating in or engaging with China, for fear of negative media or political reaction.

China AI analysts at top think tanks in the US and the UK

This path would likely involve researching AI developments in China, making recommendations to policymakers in the US and UK, and communicating the outputs of research in written reports and oral briefings.

This could provide a route to impact not only through directly influencing government decisions, but also through shaping the wider public conversation on AI and China, as think tank researchers are often cited in media reports. In addition, think tanks may lead or participate in track-two (unofficial) dialogues with counterparts in other countries, which can help build trust and communication, and foster mutual understanding.

Think tanks with relevant research directions include:

Experts from CSET, CNAS, and GovAI have testified before the U.S.-China Economic and Security Review Commission.

Mandarin language skills and an academic background in international relations, China studies, and/or security studies would be helpful for pursuing this path.

Alternative routes that we haven’t investigated

In the policy sphere, it could be impactful to advise on China and technology policy within the US, UK, or EU governments. While we have written a career profile on US AI policy, we have not investigated specific China-focused roles within the governments of the US or other countries. Nor have we investigated the path of working on science and technology policy within the Chinese government, though this could be worth exploring if you are a Chinese citizen.

Advising parts of international organisations focused on AI, such as the UN Secretary-General’s High-level Panel on Digital Cooperation or the OECD’s AI Policy Observatory, could also provide opportunities for impact.

In industry, it could be worth exploring opportunities in Chinese semiconductor or cloud computing companies. This is based on our view that building expertise in AI hardware is potentially high impact.

There may also be relevant positions in the philanthropic sector. For instance, Schmidt Futures has advertised research roles that address technology relations with China.

Finally, it could be valuable to work on improving great power relations more broadly, for instance through academic groups and foundations focused on China–US relations, journalism, or international relations research.

We have not investigated any of these paths in detail.

Personal fit

In other guides, we set out personal fit considerations for technical AI safety research, US AI public policy careers, and think tank research, many of which apply to the paths discussed here.

Rather than repeating ourselves, below we highlight some skills and traits that seem particularly relevant to China-related AI safety and governance paths.

Bilingualism and/or cross-cultural communication skills

People with strong Mandarin and English will have a greater chance of success in this area. Proficiency in French or German could also open up opportunities to support coordination between China and the EU.

Beyond bilingualism, the ability to communicate and empathise across cultures is important for any role involving international coordination. Experience living abroad and/or working in teams of people with highly diverse backgrounds may be useful in this respect.

It may be particularly challenging to maintain cross-cultural empathy if you are surrounded by colleagues with a non-empathetic view of people from the other country. In such situations, it seems important to:

  • Be able to resist influence by osmosis from your social surroundings
  • Actively work to cultivate and maintain cognitive empathy

Strong networking abilities (especially for policy roles)

Networking is important to career advancement in many cultures, and success in policy and strategy in particular requires the ability to build and make good use of connections.

We have some practical advice on building and maintaining a strong network of relationships with others here and also here.

Good judgement and prudence

Good judgement is important to pursuing all of our recommended career paths, but we are including it here to really emphasise its importance.

Many of the paths described above involve challenges like working with people who might have their own agendas separate from AI safety, dealing with incentives that might point you toward less of a focus on safety, and navigating delicate and high-stakes relationships and situations.

The risk of information hazards also demonstrates the need for good judgement and prudence. Openness in AI development could contribute to harmful applications of the technology and intensify competitive racing dynamics. It is important to recognise when information could be hazardous, to judge impartially the risks and benefits of wider disclosure, and to exercise caution when making decisions.

All this makes good judgement — the ability to think through complex considerations and make measured decisions — particularly crucial.

Keep in mind too that good judgement often means knowing when to find someone else with good judgement to listen to, and avoiding making decisions unilaterally. (See our separate piece on good judgement for more.)

Learn more

If you are thinking about pursuing a career working on this problem, but would like to build more general understanding and career capital first, you could consider:

Speak with a specialist advisor

If you are interested in pursuing a career path described in this profile, contact 80,000 Hours’ one-on-one team and we may be able to put you in touch with a specialist advisor.




Notes and references

  1. According to the most recent (2019) World Bank figures.

  2. CSET Issue Brief: Chinese Public AI R&D Spending: Provisional Findings

  3. See AI Governance: Opportunity and Theory of Impact on the EA Forum for further discussion of risks.

  4. For instance, a report by the US National Security Commission on AI published in March 2021 asserts that “whoever translates AI developments into applications first will have the advantage” (p. 28). China’s 2017 New Generation Artificial Intelligence Development Plan said that China must “firmly seize the strategic initiative in the new stage of international competition in AI development.” However, it is possible that AI companies may not strongly value being the first to develop a particular AI system; entering a market after the front-runner can have advantages such as free-riding on the front-runner’s R&D or benefiting from more information about the relevant market.

  5. See for instance, AI and International Stability: Risks and Confidence-Building Measures by Michael Horowitz and Paul Scharre. However, while there are first-mover advantages in many industries (for example, pharmaceuticals), they don’t always lead to a race to the bottom on safety, because of mechanisms such as market forces, regulation, and legal liability.

  6. There is debate among scholars over the relative importance of ideological differences in China–US tensions. See “The emerging ideological security dilemma between China and the U.S.” by Dalei Jie at Peking University, and the blog post “Are the United States and China in an Ideological Competition?” from the Center for Strategic and International Studies.

  7. See page 43 of China AI-Brain Research: Brain-Inspired AI, Connectomics, Brain-Computer Interfaces from the Center for Security and Emerging Technology.

  8. This is based on the number of people working on China–Western coordination at prominent AI governance organisations such as GovAI and the Partnership on AI. As of October 2021, our count suggests three out of 60 staff members (including affiliates and research fellows) at these two organisations are working on China-related topics.

  9. To illustrate this, it seems plausible that a smart, motivated person would have struggled to make the Industrial Revolution go better or worse from a long-term perspective.

  10. A dual-use technology is one that can be simultaneously applied in both commercial and defence domains. The term can also be used more generally to describe technology that is both beneficial and harmful, depending on how it is used. (See page 39 of Jade Leung’s 2019 PhD thesis, Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies.) Both these interpretations can be seen as applicable to AI.

  11. See, for instance, 出国留学的人还能考公务员吗? (“Can people who have studied abroad still take the civil service exam?”) (in Mandarin).

  12. An example of someone who reached an influential position in the US government after spending time in China is Matthew Pottinger. Pottinger, who was Deputy National Security Adviser from September 2019 to January 2021, was a journalist in Beijing with The Wall Street Journal and Reuters from 1998 to 2005.

  13. For the Chinese Foreign Ministry Spokesperson’s remarks on the sanctions, see here. Criticism of the escalation from the Center for Strategic & International Studies can be found here.

  14. Some other ideas that may be promising but that we have not had a chance to investigate in detail are treated in the following section.

  15. See examples for Baidu, Huawei, and Tencent.

  16. Foreign citizens should note that they will generally need two years of relevant work experience to get a work visa in China (though this requirement is reduced to one year for master’s graduates from Chinese universities and well-known overseas universities).

  17. For instance, BAAI has advertised positions for industry researchers covering technical trends in AI, including developments at top Western labs. To the extent that Western labs are producing quality technical safety research, industry researchers in Chinese labs could have the opportunity to showcase such research and encourage its application in their own organisations.