#227 – Helen Toner on the geopolitics of AI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, “the US and Chinese governments are barely talking at all.”

In her role as a founder, and now leader, of DC’s top think tank focused on the geopolitical and military implications of AI, Helen has been closely tracking the US’s AI diplomacy since 2019.

“Over the last couple of years there have been some direct [US–China] talks on some small number of issues, but they’ve also often been completely suspended.” China knows the US wants to talk more, so “that becomes a bargaining chip for China to say, ‘We don’t want to talk to you. We’re not going to do these military-to-military talks about extremely sensitive, important issues, because we’re mad.'”

Helen isn’t sure the groundwork exists for productive dialogue in any case. “At the government level, [there’s] very little agreement” on what AGI is, whether it’s possible soon, whether it poses major risks. Without shared understanding of the problem, negotiating solutions is very difficult.

Another issue is that so far the Chinese Communist Party doesn’t seem especially “AGI-pilled.” While a few Chinese companies like DeepSeek are betting on scaling, she sees little evidence Chinese leadership shares Silicon Valley’s conviction that AGI will arrive any minute now, and export controls have made it very difficult for them to access compute to match US competitors.

When DeepSeek released R1 just three months after OpenAI’s o1, observers declared the US–China gap on AI had all but disappeared. But Helen notes OpenAI has since scaled to o3 and o4, with nothing to match on the Chinese side. “We’re now at something like a nine-month gap, and that might be longer.”

To find a properly AGI-pilled autocracy, we might need to look at nominal US allies. The US has approved massive data centres in the UAE and Saudi Arabia with “hundreds of thousands of next-generation Nvidia chips” — delivering colossal levels of computing power.

When OpenAI announced this deal with the UAE, they celebrated that it was “rooted in democratic values,” and would advance “democratic AI rails” and provide “a clear alternative to authoritarian versions of AI.”

But the UAE scores 18 out of 100 on Freedom House’s democracy index. “This is really not a country that respects rule of law,” Helen observes. Political parties are banned, elections are fake, dissidents are persecuted.

If AI access really determines future national power, handing world-class supercomputers to Gulf autocracies seems pretty questionable. The justification is typically that “if we don’t sell it, China will” — a transparently false claim, given severe Chinese production constraints. It also raises eyebrows that Gulf countries conduct joint military exercises with China and their rulers have “very tight personal and commercial relationships with Chinese political leaders and business leaders.”

In today’s episode, host Rob Wiblin and Helen discuss the above, plus:

  • Ways China exaggerates its chip production for strategic gain
  • The confusing and conflicting goals in the US’s AI policy towards China
  • Whether it matters that China could steal frontier AI models trained in the US
  • Whether Congress is starting to take superintelligence seriously this year
  • Why she rejects ‘non-proliferation’ as a model for AI
  • Plenty more.

CSET is hiring! Check out its careers page for current roles.

This episode was recorded on September 25, 2025.

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

The interview in a nutshell

Helen Toner, the interim executive director of the Center for Security and Emerging Technology (CSET) and a former board member of the OpenAI nonprofit, discusses the complexities of AI strategy and governance. Some key points she raises are:

  • US–China AI policy has not been driven by a clear and consistent set of goals, particularly regarding semiconductor export controls.
  • Global AI strategy involves difficult trade-offs between soft power, national security, and the immense risk of concentrating power in the hands of a few actors.
  • We need more robust frameworks for managing AI risk, focusing on societal adaptation and the fundamental challenge of making AI systems steerable.

US–China AI policy is riddled with unclear and shifting goals

CSET was influential in the analysis that led to US export controls on certain kinds of semiconductor technology bound for China, but Helen believes the strategy has since become muddled.

  • A key distinction in export controls: The initial, strategically sound case was for controlling semiconductor manufacturing equipment (SME), which creates a true chokepoint to slow China’s domestic development. However, policy focus has mistakenly shifted to controlling the chips themselves, and the original SME controls have been poorly implemented.
  • The “AI race” is a misleading metaphor: Helen argues against viewing the situation as a clear race with a finish line. She sees it as an ongoing competition where treating it as a winner-take-all race can lead to risky decisions.
  • China’s strategy is multifaceted: China is pursuing both practical AI applications through its “AI Plus” plan and the development of general-purpose AI/AGI. However, Helen is sceptical that the Chinese leadership is as “AGI-pilled” — convinced of the imminent arrival of transformative AGI — as the leading US companies are.
  • Diplomacy is stalled: Meaningful government-to-government talks on AI safety are hindered by a lack of shared concern at the government level and the broader dysfunctional state of the US–China relationship.

Global AI strategy involves complex power dynamics and risky tradeoffs

Helen outlines how the global expansion of AI capabilities — from software to hardware — creates new strategic dilemmas.

  • Data centres in the Gulf are a double-edged sword:
    • The pro: Building data centres in places like the UAE allows the US to expand compute capacity by trading American chips for the Gulf’s abundant land and energy.
    • The cons: These countries are autocracies with close ties to China. Handing autocratic governments access to world-class supercomputers is a major concern, and the argument that “if the US doesn’t sell, China will” is false because China cannot match the US’s supply.
  • Model theft is a critical vulnerability: Helen states bluntly that the US is not on track to prevent China from stealing its most advanced models. This vulnerability undermines the “move fast at all costs” argument, as any lead could be quickly erased by theft.
  • Open source for soft power: A key argument for promoting US open-source models is about “soft power,” similar to the global influence of Hollywood. Helen supports developing high-quality open models but believes the most advanced frontier systems should be kept closed, at least initially, to allow time to study and mitigate novel risks.

We need better frameworks for managing AI risk beyond simple “non-proliferation” or “alignment”

Helen proposes shifting the discourse on AI safety to focus on more practical and nuanced concepts.

  • “Adaptation buffers” over non-proliferation: For misuse risks like AI-assisted bioweapons, trying to prevent models from spreading is a losing battle that would require authoritarian oversight. A better approach is to use the “adaptation buffer” — the time between foreseeing a dangerous capability and its widespread availability — to build societal resilience and defences.
  • The risk of power concentration is severe: AI is a naturally power-concentrating technology due to its capital intensity. While some argue for concentrating development to manage existential risks, Helen warns that this ignores the immense historical dangers of unchecked power. The challenge is to navigate the middle ground between the risks of concentration and the risks of diffusion.
  • “Steerability” over “alignment”: Helen suggests the term “alignment” is misleading because it implies there is a single, clear objective to align to. “Steerability” better captures the core technical problem: can we reliably direct an AI system to behave as intended in the first place?

Highlights

Why don’t China and the US talk more?

Rob Wiblin: Something I’ve been a bit confused by is that there doesn’t seem to be any direct diplomacy going on trying to move us any closer towards having some sort of treaty or agreement with China on governance of AGI, or preventing AI from being integrated into the military prematurely, before either country feels comfortable that they actually have a grip on this technology and understand its pros and cons — let alone a treaty to govern superintelligence and say that maybe we should be cautious about that one, even if we’re barrelling ahead on all the applications that we want in the economy right now.

Am I right that there is very little inter-country discussion of that? And if so, do you think that’s a mistake?

Helen Toner: I think you’re right. Especially at what gets called the track one level, meaning official government-to-government talks.

I think there’s a few reasons for it. One basic reason, one sort of AI-specific reason is I don’t think that the groundwork is there in terms of concern about this as a problem, or concern about this as a major problem to prioritise. What is AGI? What is superintelligence? Are they things we could build ever? Are they things we might build soon? Would that be good or bad, and for whom? I think there’s, especially at the government level, very little agreement on those questions — in contrast to perspectives in the AI safety community, for example.

Rob Wiblin: Or the companies.

Helen Toner: Or many of the companies. That’s right. Well, in the companies, it’s, “We can build this; we will build it soon.” And then in the AI safety community, it’s, “…and it’s going to be really bad, so we should be talking about it.” But I think that is a relatively insular set of beliefs still. That’s kind of the AI-specific reason that the groundwork is not there.

I think it’s also really important to look at the broader relationship as well, and understand that from the perspective of a diplomat — or certainly of a president or a chairman — this has to slot in with everything else that is going on in the relationship.

One thing is the US and Chinese governments are barely talking at all. Over the last couple of years there have been some direct country-to-country talks on some small number of issues, but they’ve also often been completely suspended. So for a period, I think it was after Nancy Pelosi went to Taiwan, I think just basically all talks were suspended.

Rob Wiblin: Isn’t that just crazy? It’s mind-blowing to me. This is the two most powerful countries in the world. They’ve got so many things to talk about!

Helen Toner: Yeah. My understanding of this is that usually the US is the one that has more appetite to talk. And China knows that, so that becomes a bargaining chip for China to say, “We don’t want to talk to you. We’re not going to do these military-to-military talks about extremely sensitive, important issues, because we’re mad. And if you want us to do them, then you have to give us something in return to come back and join these talks.”

Rob Wiblin: And I guess you don’t want to give in to that.

Helen Toner: Well, yeah. I mean, if the whole point is that it’s mutually beneficial, then we don’t want to be acting as though we’re making a concession, or that they’re making a concession by letting us talk. There’s a lot of context here, and a lot of baggage.

A couple of other pieces of context and baggage I would give: one is I think a fair amount of scepticism among US diplomats. Diplomats in general are usually pro-engagement, pro-negotiation, pro-conversation: that’s why they become diplomats. I hear among US diplomats a lot of scepticism about the value of that with China, based on their track record of what happens when we do that.

An important example: in 2015, President Obama and Chairman Xi had a long discussion about this problem of China spying on US companies and stealing their trade secrets, which is different from the long-established “countries spy on each other for strategic reasons.” China was doing something different, namely corporate espionage — where they take trade secrets, benefit economically, hand things to their companies: very much not OK, very much out of the norm internationally.

In 2015, at the end of a long series of discussions, there was a big deal signed between Obama and Xi saying that China was going to stop doing that. And the consensus is that they basically started again a few months later. They stopped very briefly and then restarted. That’s one emblematic example that’s in cyberspace. A lot of the people who do AI stuff have previously worked in cyberspace, so that’s a very salient example for them.

There’s also scepticism about whether they’re a good negotiating partner. Another example I would add there is the big example of track one government-to-government talks on AI in Geneva last year, this sort of initial foray into this conversation. My sense is that that didn’t go terribly, but it also didn’t go great.

And part of the reason — which again is sort of emblematic of US–China negotiations more generally — was that the US sent over some really technical, well-informed people, some of their best AI policy people, to have quite in-depth conversations. And the Chinese sent their America specialists — so their US diplomats who knew almost nothing about AI, who were just specialists in trying to handle the Americans. The term that sometimes gets used is they sent their “barbarian handlers” — meaning they specialise in going out there and playing nice with the foreigners.

So again, I don’t think that dialogue went terribly, but it wasn’t a good start. And again, it suggests that the groundwork is not there in terms of this actually being a priority that the Chinese government actually wants.

Are we even in a race with China?

Helen Toner: I don’t think it’s that clear that we’re racing with China towards AGI. I think there’s sometimes a set of background assumptions here around even the language of a race. I’m perfectly happy to say that we’re competing with China on AI. The thing with a race is there’s a finish line, and whoever crosses the finish line first wins for real. And if you cross the finish line second, then that’s useless and doesn’t matter.

And I think sometimes, in some AI circles, there’s an assumption that AI or AGI really is a race with a clear finish line — and the finish line is whoever builds self-improving AI first. Because then, if it’s true that once you get to a certain level of AI, then your AI can improve itself or can improve the next generation and you have this kind of compounding improvement, then that could genuinely be a situation where whoever gets to a certain point first then ultimately wins.

I don’t think it’s at all clear that is actually what it’s going to look like, versus the systems get more and more advanced, they’re used in more and more ways, they’re sort of diffused through multiple different applications. And in that case, I think we’re in this state of ongoing competition with China, but not necessarily a heated race, where whoever is a hair ahead at the very end at the finish line ultimately wins the future or something.

I think the shape of the competition is actually pretty unclear, and when people treat it as though it is very obviously just a winner-take-all race, that is a pretty risky proposition — because it implies that certain kinds of tradeoffs and certain kinds of decisions are obviously a good idea, when in fact I think it’s not at all clear.

Why is the Trump administration sabotaging the US high-tech sector?

Rob Wiblin: Despite being pretty bullish about AI, the Trump administration has done a bunch of stuff that I think is kind of inconsistent with wanting to win any sort of AI race. … There’s a bunch of stuff that you probably would be doing, like encouraging high-skilled immigration, if you wanted to be as far ahead of China as you possibly could be. Those things aren’t happening; indeed, it’s kind of going the other way. How can you make sense of that?

Helen Toner: I make sense of it by there being sort of different factions inside the Trump administration — different specific officials with different agendas and different priorities — and them not necessarily coming together into a coherent policy vision.

So for example, on immigration, that was one of the topics CSET looked at earliest. And when we were in the space in 2019, there was sort of this common wisdom that high-skilled immigration was good economically because it benefited US companies and helped the economy grow, but it was only a downside from a national security perspective because people could leak information, steal information, or at a minimum just get educated here and then go back to their home countries and benefit their home countries.

And some of our earliest work was on understanding actually the national security benefits of having high-skilled immigrants here, including just the fact that the US’s high-tech ecosystem is so driven by immigrants. Depending on which numbers you’re looking at, they’re at a minimum something like 30% or 40%, sometimes up to well over half of any given pool. I think it’s something like half of the founders of top startups are foreign born, things like that.

And that makes sense if you look at how the US is going to compete internationally: we just have a much smaller pool of domestic workers than somewhere like China or somewhere like India. So the fact that we can import them and that we can draw the best talent to this country is a huge asymmetric advantage.

But that is just obviously in contrast with the Trump administration and the Trump MAGA movement’s perspective on immigration more generally, so I think that policy wonky set of considerations around pros and cons just gets lost in the broader push to be anti-immigrant.

I do think that among the policy actions that the Trump administration has taken, the many things that are going to deter high-skilled immigrants from coming here are up there with the cuts to science funding as some of the most damaging to US competitiveness, just in terms of being technologically as sophisticated as this country has been in the past. And my understanding of it is not that there is a clear, well-thought-through strategy that explains why that is. It’s just these different components of the coalition sort of doing their own thing.

Pros and cons of building data centres in Gulf countries

Rob Wiblin: The US has approved the construction of big data centres with Nvidia chips in several Gulf countries — I think Saudi and UAE, possibly Qatar as well. What are the pros and cons of that, in your mind?

Helen Toner: I think the deals that you’re talking about are, as far as I understand, provisional — sort of early-stage announced, without the details being hammered out — so it will really depend on what the specifics of those deals turn out to be.

The big pro is that, if you look at the different inputs to AI, if you think computing power is an important input, then what do you need for that? You need chips, you need land — like permitting, the permission to build — and you need energy. So there’s sort of a natural trade that you could do and say, in the US, we have lots of chips, permitting is a nightmare, and our grid is struggling. But in the Gulf, they have plenty of land, they have plenty of sunlight, they have plenty of oil — so we bring the chips over there, they let us build and they give us lots of energy, great deal. So I think that’s the main pro, is being able to build out more computing capacity than you otherwise could.

I think the cons depend a lot on the specifics of the deals. There might not be that many cons. I’ve heard a comparison between US data centres in the Gulf and US military bases in the Gulf: it’s like this is a US asset, it is US soil, it is fully under US control. Maybe there’s not that many downsides if that’s the case.

But the more that the countries themselves have ownership rights over the chips or usage rights or ability to access the facilities, then there’s sort of two big potential downsides.

One is the connections with China that these countries have. This includes doing joint military exercises or having very tight personal and commercial relationships with Chinese political leaders and business leaders. Meaning, does this help China reverse-engineer chips, have more access to advanced chips in a way we wouldn’t want?

The other big set of downsides is just specifically around the fact that these countries are autocracies. They are not nice governments. I think people tend to know that Saudi Arabia is not a democracy: famous for not letting women drive; famous for hacking apart a journalist, Jamal Khashoggi, in the Saudi consulate, assassinating him in cold blood — which is sort of emblematic of how they think about free speech and free press.

The UAE though, I think people just have a bit of a sense of like, Dubai and Abu Dhabi are kind of nice places to visit. It’s a bit too hot, but they have big skyscrapers and fun indoor skiing or whatever.

But the UAE is an autocratic country. I think the score on the Freedom House democracy index is something like 18 out of 100, which is really, really low.

Political parties are banned. There’s one body that’s half elected and half appointed by the royal family, but it doesn’t actually have formal power anyway. So there’s kind of elections, but they’re sort of fake elections. They do mass trials of dissidents that clearly lack due process and any real rule of law. They persecute the families of dissidents.

The economy runs on immigrant labour, and those people have very few rights — at a minimum they’re non-citizen workers who have very little ability to participate politically; in the worst cases they are essentially forced labour.

So this is really not a country that respects rule of law or that is interested in empowerment of its population. There’s quite strict rules around what the media can say about the royal family. It’s a hereditary autocracy with a royal family that is going to stay in power.

The shape of these deals is still up in the air; it’s not clear exactly who gets what. But if you believe, as some of the leading companies making these deals have said, that access to compute is going to be a huge determinant of national power in the future, and the deals are structured in such a way that the royal families, the autocratic governments here get access to essentially world-class supercomputers, then that’s pretty concerning — because you’re handing over a large amount of power to actors whose interests are not in line with the public generally, or even with the US in terms of a long-term strategic outlook. Their priority is staying in power, essentially.

Is concentration of power inevitable?

Helen Toner: I think there’s a natural tension here among some people who are very concerned about existential risk from AI, really bad outcomes, and AI safety: there’s this sense that it’s actually helpful if there’s only a smaller number of players. Because, one, they can coordinate better — so maybe if racing leads to riskier outcomes, if you just have two top players, they can coordinate more directly than if you have three or four or 10 — and also a smaller number of players is going to be easier for an outside body to regulate, so if you just have a small number of companies, that’s going to be easier to regulate.

So I think there’s often a sense of actually that concentration is valuable. I see the logic there. But the problem is then the “Then what?” question of, if you do manage to avoid some of those worst-case outcomes, and then you have this incredibly powerful technology in the hands of a very small number of people, I think just historically that’s been really bad. It’s really bad when you have small groups that are very powerful, and typically it doesn’t result in good outcomes for the rest of the world and the rest of humanity.

How do we avoid this? I don’t really know. I would love for there to be more work on this. A lot of the thinking that has happened about this has been between people who say the risks are really large, so we have to try for concentration because otherwise we all die; and other people saying that’s stupid, we’re obviously not going to all die, and so therefore we should just diffuse power maximally. And I think trying to get those people to actually engage with each other and say maybe there is —

Rob Wiblin: What if both risks are medium? Then we’re in a real tough spot.

Helen Toner: Or both risks are high, right? Yeah. Sometimes when I talk about this, people think that I’m optimistic, and I’m like, “It’s all good, it’ll be fine, just let there be less power concentration.” But I actually think my take is a more pessimistic take: I don’t think concentrating is the solution, and I don’t think maximum diffusion is probably the solution — so how do we navigate that middle ground? I think it’s really hard.

One answer might be we might get lucky in terms of how the technology develops. It might be the case that actually things develop relatively gradually. There’s time for fast followers to catch up, and there can be relatively broad access to capabilities, and there aren’t these really decisive, huge civilisational downside risks that you need to manage in a concentrated way. So I think we might just get lucky. That’s sort of my best hope.

There’s also tools that I would love to see explored more that would target this. At a very basic level there’s things like AI literacy: how do we get a larger range of people to understand what is going on with this technology, to be able to engage with it, to be able to think about tradeoffs, pros and cons, risks and benefits? How do we think about worker empowerment or the role of workers, including workers at frontier companies, in shaping the development of the technology?

There’s also very basic things like taxing companies. If the only problem is just that it’s a naturally capital-intensive technology, so large actors are going to build it, and we don’t have to worry about the downside risks, then you have very traditional tools like tax and antitrust that can come in and help try to diffuse that concentration of power as well. But that doesn’t solve the safety challenges.

I guess my stance here is that it’s very unlikely that there are these two futures that we imagine, and the future goes down one of those tracks. I think it’s very likely there will be unexpected twists and turns, new things that develop that we didn’t anticipate. Technologies never develop exactly the way that we think they will. So I think our stance here should not just be about which of these two camps is right, but being open to and ready for many different possibilities.

Will we integrate AI into the military soon?

Rob Wiblin: How much appetite is there for rapidly integrating AI into the military?

Helen Toner: I think there is a lot of appetite, but institutionally the ability to procure or build AI systems and then roll them out at scale is tough.

We have another paper at CSET that I wasn’t involved in called Building the Tech Coalition, which is a case study of a successful adoption of AI. And really it’s the exception, not the rule, that they got this system to an operational state where it’s actually being used in practice. The case study looks at the factors that actually made them able to succeed in that case.

One of the key parts there was the military sometimes talks about having bilingual leaders who are competent both in technology and also military operations. And one thing we really identified in that case study was you actually need trilingual leaders, who are competent in technology and military operations and also these acquisition/procurement questions, contracting, getting through all the legal language.

So it’s tough. There’s a lot of barriers. The military is not set up [for it]. The way that it does research and development is not designed for software; the way that it does testing is not designed for non-deterministic AI systems. So I think the appetite is very much there, but I tend to [think] that in practice it’s going to be slow and piecemeal and a slog.

Rob Wiblin: Do you have any read on how worried the military is about AI being backdoored or having secret loyalties or agendas?

Helen Toner: I think they’re most worried about that [in the context of adversaries]. The military is naturally set up to think about adversaries.

Rob Wiblin: Sabotage.

Helen Toner: Right, sabotage, exactly. So I think they’re certainly worried about that in the context of what an adversary, whether it’s China or a different potential adversary, could do. That’s certainly a reason not to use Chinese models, or to be very cautious about even using US models that are trained on the broad internet. There’s already evidence of Russian groups, for example, doing essentially a version of data poisoning, trying to seed online datasets with pro-Russian views. So I think that’s primarily the lens that they’re thinking about it through.

Rob Wiblin: Do you hear any discussion of this question of, if you’re having AI-operated military equipment, should it decide whether to accept orders based on what it thinks the law of war is, or should it just follow orders of whoever its operator is?

I guess each of them has its issues. If your tank is having to make independent judgements about the law of war, about military law, maybe it’s not equipped to do that. And that also creates a lot of vulnerabilities that adversaries could try to use against you. On the other hand, if your equipment just absolutely follows any instructions that it’s given whatsoever, that creates a lot of opportunities for coups, where previously you just wouldn’t have been able to get human collaborators to go along with it. Do you have any thoughts on this?

Helen Toner: I think mostly this is only relevant inasmuch as you’re thinking of AI as very much not being a tool — so you’re thinking of it as having its own agency, making its own decisions. And I think the discussions in military circles are very focused on AI tools.

Rob Wiblin: They’re just not thinking about independent agents yet.

Helen Toner: I think that’s just not really a part of the discussion yet.

Rob Wiblin: Is it because it’s unacceptable or because it’s technologically not feasible yet?

Helen Toner: I think it’s sort of too sci-fi for the military at this point, and also based on where they’re at with adoption, the level of sophistication of the tools that they are looking at.

I will say that when these topics come up, to the extent that AI is operating in a more agentic, independent, autonomous way that is more equivalent to a human operator, there is a whole set of institutional expectations, standards, rules, laws for military personnel that you could in theory port over to an AI. For example, lower-level service members are expected to follow the commands of their commanding officers, but they are not supposed to if the command is illegal. But also, when things they do go poorly, that does then reflect back up on the commander.

So there’s ongoing questions about how does accountability and responsibility for the use of AI systems flow back through the command chain? If something goes wrong, who is held accountable? Which can actually work if the AI is primarily a tool and can potentially also work if it’s operating in a less tool-like way. But yeah, I think the conversations about AI that is not a tool are pretty nascent.

Articles, books, and other media discussed in the show

CSET is hiring! Read more about the position for a frontier AI research fellow or senior fellow and apply by November 10.

Check out the CSET careers page and sign up for its newsletter to learn about future openings.



About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].
