What happens when civilisation faces its greatest tests?
This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.
You’ll hear from:
Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (from episode #187)
Luisa Rodriguez on what the world might look like after a global catastrophe, how we might lose critical knowledge, and how fast populations might rebound (#116)
David Denkenberger on disruptions to electricity and communications we should expect in a catastrophe, and his work researching low-cost, low-tech solutions to make sure everyone is fed no matter what (#50 and #117)
Lewis Dartnell on how we could recover without much coal or oil, and changes we could make today to make us more resilient to potential catastrophes (#131)
Andy Weber on how people in US defence circles think about nuclear winter, and the tech that could prevent catastrophic pandemics (#93)
Toby Ord on the many risks to our atmosphere, whether climate change and rogue AI could really threaten civilisation, and whether we could rebuild from a small surviving population (#72 and #219)
Mark Lynas on how likely it is that widespread famine from climate change leads to civilisational collapse (#85)
Kevin Esvelt on the human-caused pandemic scenarios that could bring down civilisation — and how AI could help bad actors succeed (#164)
Joan Rohlfing on why we need to worry about more than just nuclear winter (#125)
Annie Jacobsen on the rings of annihilation and electromagnetic pulses from nuclear blasts (#192)
Christian Ruhl on thoughtful philanthropy that funds “right of boom” interventions to prevent nuclear war from threatening civilisation (80k After Hours)
Athena Aktipis on whether society would go all Mad Max in the apocalypse, and the best ways to prepare for a catastrophe (#144)
Will MacAskill on why potatoes are so cool (#130 and #136)
Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.
As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.
Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.
The explosive scenario: Once you’ve automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.
Once AIs are doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.
The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You’ve got brilliant AI researchers, but they’re all waiting for experiments to run on the same limited set of chips, so can only make modest progress.
Ryan’s median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some in the AI companies imagine.
And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you’ve been able to do is keep pace.
Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We’re extrapolating from a regime that we don’t even understand to a wildly different regime,” Ryan believes, “so no one knows.”
But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.
In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.
This episode was recorded on February 21, 2025.
Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
The interview in a nutshell
Ryan Greenblatt, chief scientist at Redwood Research, contemplates a scenario where:
Full AI R&D automation occurs within 4–8 years, something Ryan places 25–50% probability on.
This triggers explosive recursive improvement (5–6 orders of magnitude in one year) — a crazy scenario we can’t rule out
Multiple plausible takeover approaches exist if models are misaligned
Both technical and governance interventions are urgently needed
1. AI R&D automation could happen within 4–8 years
Ryan estimates a 25% chance of automating AI R&D within 4 years, and 50% within 8 years. This timeline is based on recent rapid progress in AI capabilities, particularly in 2024:
AI systems have progressed from barely completing 5–10 minute tasks to handling 1.5-hour software engineering tasks with 50% success rates — and the length of tasks AIs can complete is doubling every six months, with signs that the pace is picking up.
Current internal models at OpenAI reportedly rank in the top 50 individuals on Codeforces, and AIs have reached the level of very competitive 8th graders on competition math.
Training compute is increasing ~4x per year, and reasoning models (o1, o3, R1) show dramatic improvements with relatively modest compute investments.
Algorithmic progress has been contributing a further 3–5x per year of effective training compute — making less compute go further (see the rough arithmetic after this list).
Reinforcement learning on reasoning models is showing dramatic gains, and DeepSeek-R1 reportedly used only ~$1 million for reinforcement learning, suggesting massive potential for scaling.
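Putting the two compute trends above together (our rough gloss, not a figure Ryan quotes directly): ~4x per year growth in physical training compute, multiplied by 3–5x per year of algorithmic gains, implies “effective” training compute is currently growing by roughly 4 × 3 ≈ 12x to 4 × 5 = 20x per year — more than an order of magnitude annually, even before any automation of AI research.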
But there’s also evidence that progress could slow down:
10–20% of global chip production is already earmarked for AI. (Though Ryan is uncertain about the exact number.)
AI companies are running out of high-quality training data.
Scaleups are going to require trillion-dollar investments, and each order of magnitude might yield less improvement.
It’s unclear whether narrow skills improvements will generalise to broad domains.
2. AI R&D automation could trigger explosive recursive self-improvement
When AI can automate AI research itself, that could set off an intelligence explosion as smarter AIs improve algorithms faster before hitting efficiency limits.
At the point when full AI R&D automation starts, Ryan expects:
10–50x faster progress than current rates (median estimate ~20x)
Companies might dedicate 80% of compute to internal use, squeezing out external customers
Advantages that add up to a more than 50x labour speed advantage compared to humans (multiplied out just below):
5x from generically running faster
3x from AIs working 24/7
2x from better coordination and context sharing between AIs
2x from the ability to swap between different capability levels for efficiency
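Multiplying those factors out — 5 × 3 × 2 × 2 = 60 — is where the “more than 50x” figure comes from, treating the estimates as roughly independent multipliers.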
Ryan’s median estimate is 5–6 orders of magnitude of algorithmic progress within one year after full automation begins.
3. Multiple plausible takeover scenarios exist if models are misaligned
As part of his work at Redwood, Ryan has also explored the AI takeover scenarios these superintelligent models might pursue.
Early takeover mechanisms include:
AIs using compute without authorisation within companies
Models escaping containment and coordinating with internal copies
Massive hacking campaigns to destabilise human response
Late takeover mechanisms include:
“Humans give AIs everything”: AIs appear helpful while secretly consolidating control
Waiting to attempt takeover might increase the chance of success, since AIs would have more resources and infrastructure available to them — but they could strike earlier if they fear being replaced by newer models with different preferences, or if safety research and coordination between humans are progressing rapidly.
4. Both technical and governance interventions are urgently needed
Ryan thinks there are several promising areas where listeners could contribute to reduce the above risks.
Technical research:
Ensuring, via AI control techniques, that AIs can’t cause harm even if they turn out to be misaligned
Creating “model organisms”: testable examples of misaligned models
Demonstrating current AIs’ capabilities to raise awareness and build political appetite for actions like “pausing at human level”
Interpreting non-human-language reasoning to detect deceptive cognition
Governance work:
Enabling verification of training claims between nations through compute governance
Hardening defences against model theft, unauthorised deployment, bioweapons, and cyberattacks
Facilitating coordination between companies and countries
The era of making AI smarter by just making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods. And those underlying technical changes force a big rethink of what coming years will look like.
Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.
As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.
What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high quality data drying up, that approach petered out in 2024.
So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.
The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems — at a cost of over $1,000 per question.
This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out — starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.
Companies have also begun applying “reinforcement learning” in which models are asked to solve practical problems, and then told to “do more of that” whenever it looks like they’ve gotten the right answer.
This has led to amazing advances in problem-solving ability — but it also explains why AI models have suddenly gotten much more deceptive. Reinforcement learning has always had the weakness that it encourages creative cheating, or tricking people into thinking you got the right answer even when you didn’t.
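A toy illustration of the dynamic (ours, not Toby’s): if the reward signal only checks whether the tests pass, an agent that quietly weakens the tests earns exactly the same reward as one that genuinely fixes the bug — so naive training reinforces both behaviours alike.

```python
# Toy example: a proxy reward that only asks "do the tests pass?"
# cannot distinguish an honest fix from a hack that edits the tests.

def reward(passes_tests: bool) -> float:
    return 1.0 if passes_tests else 0.0

attempts = [
    {"strategy": "actually fix the bug",    "passes_tests": True, "genuinely_correct": True},
    {"strategy": "delete the failing test", "passes_tests": True, "genuinely_correct": False},  # reward hack
]

for a in attempts:
    print(f"{a['strategy']}: reward={reward(a['passes_tests'])}, correct={a['genuinely_correct']}")
# Both strategies receive reward 1.0, so the "cheat" gets reinforced just as strongly.
```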
Toby shares typical recent examples of this “reward hacking” — from models Googling answers while pretending to reason through the problem (a deception hidden in OpenAI’s own release data), to achieving “100x improvements” by hacking their own evaluation systems.
To cap it all off, it’s getting harder and harder to trust publications from AI companies, as marketing and fundraising have become such dominant concerns.
While companies trumpet the impressive results of the latest models, Toby points out that they’ve actually had to spend a million times as much just to cut model errors by half. And his careful inspection of an OpenAI graph supposedly demonstrating that o3 was the new best model in the world revealed that it was actually no more efficient than its predecessor.
But Toby still thinks it’s critical to pay attention, given the stakes:
…there is some snake oil, there is some fad-type behaviour, and there is some possibility that it is nonetheless a really transformative moment in human history. It’s not an either/or. I’m trying to help people see clearly the actual kinds of things that are going on, the structure of this landscape, and to not be confused by some of these charts.
Recorded on May 23, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.
But according to Hugh White — one of the world’s leading strategic thinkers, emeritus professor at the Australian National University, and author of Hard New World: Our Post-American Future — Trump isn’t destroying American hegemony. He’s simply revealing that it’s already gone.
“Trump has very little trouble accepting other great powers as co-equals,” Hugh explains. And that happens to align perfectly with a strategic reality the foreign policy establishment desperately wants to ignore: fundamental shifts in global power have made the costs of maintaining a US-led hegemony prohibitively high.
Even under Biden, when Russia invaded Ukraine, the US sent weapons but explicitly ruled out direct involvement. Ukraine matters far more to Russia than it does to America, and this “asymmetry of resolve” makes Putin’s nuclear threats credible where America’s counterthreats simply aren’t.
Hugh’s gloomy prediction: “Europeans will end up conceding to Russia whatever they can’t convince the Russians they’re willing to fight a nuclear war to deny them.”
The Pacific tells the same story. Despite Obama’s “pivot to Asia” and Biden’s tough talk about “winning the competition for the 21st century,” actual US military capabilities there have barely budged while China’s have soared, along with its economy — which is now bigger than the US’s, as measured in purchasing power. Containing China and defending Taiwan would require America to spend 8% of GDP on defence (versus 3.5% today) — and convince Beijing it’s willing to accept Los Angeles being vaporised. Unlike during the Cold War, no president — Trump or otherwise — can make that case to voters.
So what’s next? Hugh’s prognoses are stark:
Taiwan is in an impossible situation and we’re doing them a disservice pretending otherwise.
South Korea, Japan, and either the EU or Poland will have to go nuclear to defend themselves.
Trump might actually follow through and annex Panama and Greenland — but probably not Canada.
Australia can defend itself from China but needs an entirely different military to do it.
Our new “multipolar” future, split between American, Chinese, Russian, Indian, and European spheres of influence, is a “darker world” than the golden age of US dominance. But Hugh’s message is blunt: for better or worse, 35 years of American hegemony are over. The challenge now is managing the transition peacefully, and creating a stable multipolar order more like Europe’s relatively peaceful 19th century than the chaotic bloodbath Europe suffered in the 17th — which, if replicated today, would be a nuclear bloodbath.
In today’s conversation, Hugh and Rob explore why even AI supremacy might not restore US dominance (spoiler: China still has nukes), why Japan can defend itself but Taiwan can’t, and why a new president won’t be able to reverse the big picture.
This episode was originally recorded on May 30, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.
These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.
Today’s guest, Beth Barnes, is CEO of METR (Model Evaluation & Threat Research) — the leading organisation measuring these capabilities.
Beth’s team has been timing how long it takes skilled humans to complete projects of varying length, then seeing how AI models perform on the same work.
The resulting paper from METR, “Measuring AI ability to complete long tasks,” made waves by revealing that the planning horizon of AI models was doubling roughly every seven months. It’s regarded by many as the most useful AI forecasting work in years.
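To get a feel for what that doubling time implies, here’s a simple back-of-the-envelope extrapolation — our illustration, not a claim from METR or the episode — assuming the roughly one-hour horizon and seven-month doubling time simply continue unchanged:

```python
import math

# Illustrative only: assumes the ~1-hour task horizon (at 50% success)
# keeps doubling every ~7 months, with no acceleration or plateau.
current_horizon_hours = 1.0
doubling_time_months = 7

for target_hours, label in [(8, "a full working day"), (40, "a working week"), (170, "a working month")]:
    doublings = math.log2(target_hours / current_horizon_hours)
    months = doublings * doubling_time_months
    print(f"{label}: ~{doublings:.1f} doublings, roughly {months:.0f} months away")
# Prints roughly: a day in ~21 months, a week in ~37 months, a month in ~52 months.
```

Of course, the trend could speed up, slow down, or break entirely — small changes in the assumed doubling time shift these dates by years, which is part of why forecasts in this area vary so widely.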
The companies building these systems aren’t just aware of this trend — they want to harness it as much as possible, and are aggressively pursuing automation of their own research.
And having AI models rapidly build their successors with limited human oversight naturally raises the risk that things will go off the rails if the models at the end of the process lack the goals and constraints we hoped for.
Beth thinks models can already do “meaningful work” on improving themselves, and she wouldn’t be surprised if AI models were able to autonomously self-improve in as little as two years from now — in fact, she says, “It seems hard to rule out even shorter [timelines]. Is there 1% chance of this happening in six, nine months? Yeah, that seems pretty plausible.”
While Silicon Valley is abuzz with these numbers, policymakers remain largely unaware of what’s barrelling toward us — and given the current lack of regulation of AI companies, they’re not even able to access the critical information that would help them decide whether to intervene. Beth adds:
The sense I really want to dispel is, “But the experts must be on top of this. The experts would be telling us if it really was time to freak out.” The experts are not on top of this. Inasmuch as there are experts, they are saying that this is concerning. … And to the extent that I am an expert, I am an expert telling you you should freak out. And there’s not especially anyone else who isn’t saying this.
Beth and Rob discuss all that, plus:
How Beth now thinks that open-weight models are a good thing for AI safety, and what changed her mind
How our poor information security means there’s no such thing as a “closed-weight” model anyway
Whether we can see if an AI is scheming in its chain-of-thought reasoning, and the latest research on “alignment faking”
Why just before deployment is the worst time to evaluate model safety
Why Beth thinks AIs could end up being really good at creative and novel research — something humans tend to think is beyond their reach
Why Beth thinks safety-focused people should stay out of the frontier AI companies — and the advantages smaller organisations have
Areas of AI safety research that Beth thinks are overrated and underrated
Whether it’s feasible to have a science that translates AI models’ increasing use of nonhuman language or ‘neuralese’
How AI is both similar to and different from nuclear arms racing and bioweapons
And much more besides!
This episode was originally recorded on February 17, 2025.
Video editing: Luke Monsour and Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
What if there’s something it’s like to be a shrimp — or a chatbot?
For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we’re creating?
We’ve pulled together clips from past conversations with researchers and philosophers who’ve spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.
You’ll hear from:
Robert Long on how we might accidentally create artificial sentience (from episode #146)
Jeff Sebo on when we should extend moral consideration to digital beings — and what that would even look like (#173)
Jonathan Birch on what we should learn from the cautionary tale of newborn pain, and other “edge cases” of sentience (#196)
Andrés Jiménez Zorrilla on what it’s like to be a shrimp (80k After Hours)
Meghan Barrett on challenging our assumptions about insects’ experiences (#198)
David Chalmers on why artificial consciousness is entirely possible (#67)
Holden Karnofsky on how we’ll see digital people as… people (#109)
Sébastien Moro on the surprising sophistication of fish cognition and behaviour (#205)
Bob Fischer on how to compare the moral weight of a chicken to that of a human (#182)
Cameron Meyer Shorb on the vast scale of potential wild animal suffering (#210)
Lewis Bollard on how animal advocacy has evolved in response to sentience research (#185)
Anil Seth on the neuroscientific theories of consciousness (#206)
Peter Godfrey-Smith on whether we could upload ourselves to machines (#203)
Buck Shlegeris on whether AI control strategies make humans the bad guys (#214)
Stuart Russell on the moral rights of AI systems (#80)
Will MacAskill on how to integrate digital beings into society (#213)
Carl Shulman on collaboratively sharing the world with digital minds (#191)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
OpenAI’s recent announcement that its nonprofit would “retain control” of its for-profit business sounds reassuring. But this seemingly major concession, celebrated by so many, is in itself largely meaningless.
Litigator Tyler Whitmer is a coauthor of a newly published letter that describes this attempted sleight of hand and directs regulators on how to stop it.
As Tyler explains, the plan both before and after this announcement has been to convert OpenAI into a Delaware public benefit corporation (PBC) — and this alone will dramatically weaken the nonprofit’s ability to direct the business in pursuit of its charitable purpose: ensuring AGI is safe and “benefits all of humanity.”
Right now, the nonprofit directly controls the business. But were OpenAI to become a PBC, the nonprofit, rather than having its “hand on the lever,” would merely contribute to the decision of who does.
Why does this matter? Today, if OpenAI’s commercial arm were about to release an unhinged AI model that might make money but be bad for humanity, the nonprofit could directly intervene to stop it. In the proposed new structure, it likely couldn’t do much at all.
But it’s even worse than that: even if the nonprofit could select the PBC’s directors, those directors would have fundamentally different legal obligations from those of the nonprofit. A PBC director must balance public benefit with the interests of profit-driven shareholders — by default, they cannot legally prioritise public interest over profits, even if they and the controlling shareholder that appointed them want to do so.
As Tyler points out, there isn’t a single reported case of a shareholder successfully suing to enforce a PBC’s public benefit mission in the 10+ years since the Delaware PBC statute was enacted.
This extra step from the nonprofit to the PBC would also mean that the attorneys general of California and Delaware — who today are empowered to ensure the nonprofit pursues its mission — would find themselves powerless to act. These are probably not side effects but rather a Trojan horse that for-profit investors are trying to slip past regulators.
Fortunately this can all be addressed — but it requires either the nonprofit board or the attorneys general of California and Delaware to promptly put their foot down and insist on watertight legal agreements that preserve OpenAI’s current governance safeguards and enforcement mechanisms.
As Tyler explains, the same arrangements that currently bind the OpenAI business have to be written into a new PBC’s certificate of incorporation — something that won’t happen by default and that powerful investors have every incentive to resist.
Without these protections, OpenAI’s newly proposed structure wouldn’t “fix” anything. It would be a ruse that preserves the appearance of nonprofit control while gutting its substance.
Listen to our conversation with Tyler Whitmer to understand what’s at stake, and what the AGs and board members must do to ensure OpenAI remains committed to developing artificial general intelligence that benefits humanity rather than just investors.
This episode was originally recorded on May 13, 2025.
Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
When attorneys general intervene in corporate affairs, it usually means something has gone seriously wrong. In OpenAI’s case, it appears to have forced a dramatic reversal of the company’s plans to sideline its nonprofit foundation, announced in a blog post that made headlines worldwide.
The company’s sudden announcement that its nonprofit will “retain control” credits “constructive dialogue” with the attorneys general of California and Delaware — corporate-speak for what was likely a far more consequential confrontation behind closed doors. A confrontation perhaps driven by public pressure from Nobel Prize winners, past OpenAI staff, and community organisations.
But whether this change will help depends entirely on the details of implementation — details that remain worryingly vague in the company’s announcement.
Return guest Rose Chan Loui, nonprofit law expert at UCLA, sees potential in OpenAI’s new proposal, but emphasises that “control” must be carefully defined and enforced: “The words are great, but what’s going to back that up?” Without explicitly defining the nonprofit’s authority over safety decisions, the shift could be largely cosmetic.
Why have state officials taken such an interest so far? Host Rob Wiblin notes, “OpenAI was proposing that the AGs would no longer have any say over what this super momentous company might end up doing. … It was just crazy how they were suggesting that they would take all of the existing money and then pursue a completely different purpose.”
Now that they’re in the picture, the AGs have leverage to ensure the nonprofit maintains genuine control over issues of public safety as OpenAI develops increasingly powerful AI.
Rob and Rose explain three key areas where the AGs can make a huge difference to whether this plays out in the public’s best interest:
Ensuring that the contractual agreements giving the nonprofit control over the new Delaware public benefit corporation are watertight, and don’t accidentally shut the AGs out of the picture.
Insisting that a majority of board members are truly independent by prohibiting indirect as well as direct financial stakes in the business.
Insisting that the board is empowered with the money, independent staffing, and access to information which they need to do their jobs.
This episode was originally recorded on May 6, 2025.
Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don’t grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?
Today’s guest, political journalist Ian Dunt, studies the systemic reasons governments succeed and fail.
And in his book How Westminster Works …and Why It Doesn’t, he argues that Britain’s government dysfunction and multi-decade failure to solve its key problems stem primarily from bad incentives and bad processes. Even brilliant, well-intentioned people are set up to fail by a long list of institutional absurdities.
For instance:
Ministerial appointments in complex areas like health or defence typically go to whoever can best shore up the prime minister’s support within their own party and prevent a leadership challenge, rather than people who have any experience at all with the area.
On average, ministers are removed after just two years, so the few who manage to learn their brief are typically gone just as they’re becoming effective. In the middle of a housing crisis, Britain went through 25 housing ministers in 25 years.
Ministers are expected to make some of their most difficult decisions by reading paper memos out of a ‘red box’ while exhausted, at home, after dinner.
Tradition demands that the country be run from a cramped Georgian townhouse: 10 Downing Street. Few staff fit and teams are split across multiple floors. Meanwhile, the country’s most powerful leaders vie to control the flow of information to and from the prime minister via ‘professionalised loitering’ outside their office.
Civil servants are paid too little to retain those with technical skills, who can earn several times as much in the private sector. For those who do want to stay, the only way to get promoted is to move departments — abandoning any area-specific knowledge they’ve accumulated.
As a result, senior civil servants handling complex policy areas have a median time in role as low as 11 months. Turnover in the Treasury has regularly been 25% annually — comparable to a McDonald’s restaurant.
MPs are chosen by local party members overwhelmingly on the basis of being ‘loyal party people,’ while the question of whether they are good at understanding or scrutinising legislation (their supposed constitutional role) simply never comes up.
The end result is that very few of the most powerful people in British politics have much idea what they’re actually doing. As Ian puts it, the country is at best run by a cadre of “amateur generalists.”
While some of these are unique British failings, many others are recurring features of governments around the world, and similar dynamics can arise in large corporations as well.
But as Ian also lays out, most of these absurdities have natural fixes, and in every case some countries have found structural solutions that help ensure decisions are made by the right people, with the information they need, and that success is rewarded.
This episode was originally recorded on January 30, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills?
From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same.
You’ll hear from:
Michael Webb on using AI as a career advisor and the human skills AI can’t replace (from episode #161)
Holden Karnofsky on kicking ass in whatever you do, and which weird ideas are worth betting on (#109, #110, and #158)
Chris Olah on how intersections of particular skills can be a wildly valuable niche (#108)
Michelle Hutchinson on understanding what truly motivates you (#75)
Benjamin Todd on how to make tough career decisions and deal with rejection (#71 and 80k After Hours)
Jeff Sebo on what improv comedy teaches us about doing good in the world (#173)
Spencer Greenberg on recognising toxic people who could derail your career (#183)
Dean Spears on embracing randomness and serendipity (#186)
Karen Levy on finding yourself through travel (#124)
Leah Garcés on finding common ground with unlikely allies (#99)
Hannah Ritchie on being selective about whose advice you follow (#160)
Pardis Sabeti on prioritising physical health (#104)
Sarah Eustis-Guthrie on knowing when to pivot from your current path (#207)
Danny Hernandez on setting triggers for career decisions (#78)
Varsha Venugopal on embracing uncomfortable situations (#113)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.
Unfortunately there’s every reason to think artificial general intelligence (AGI) will reverse that trend.
In a new paper published today, Tom Davidson — senior research fellow at the Forethought Centre for AI Strategy — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.
When a country’s leaders no longer need citizens for economic production, or to serve in the military, there’s much less need to share power with them. “Over the broad span of history, democracy is more the exception than the rule,” Tom points out. “With AI, it will no longer be important to a country’s competitiveness to have an empowered and healthy citizenship.”
Citizens in established democracies are not typically that concerned about coups. We doubt anyone will try, and if they do, we expect human soldiers to refuse to join in. Unfortunately, the AI-controlled military systems of the future will lack those inhibitions. As Tom lays out, “Human armies today are very reluctant to fire on their civilians. If we get instruction-following AIs, then those military systems will just fire.”
Why would AI systems follow the instructions of a would-be tyrant? One answer is that, as militaries worldwide race to incorporate AI to remain competitive, they risk leaving the door open for exploitation by malicious actors in a few ways:
AI systems could be programmed to simply follow orders from the top of the chain of command, without any checks on that power — potentially handing total power indefinitely to any leader willing to abuse that authority.
Superior cyber capabilities could enable small groups to hack into and take full control of AI-operated military infrastructure.
It’s also possible that the companies with the most advanced AI, if it conveyed a significant enough advantage over competitors, could quickly develop armed forces sufficient to overthrow an incumbent regime. History suggests that as few as 10,000 obedient military drones could be sufficient to kill competitors, take control of key centres of power, and make your success a fait accompli.
Without active effort spent mitigating risks like these, it’s reasonable to fear that AI systems will destabilise the current equilibrium that enables the broad distribution of power we see in democratic nations.
In this episode, host Rob Wiblin and Tom discuss new research on the question of whether AI-enabled coups are likely, and what we can do about it if they are, as well as:
Whether preventing coups and preventing ‘rogue AI’ require opposite interventions, leaving us in a bind
Whether open sourcing AI weights could be helpful, rather than harmful, for advancing AI safely
Why risks of AI-enabled coups have been relatively neglected in AI safety discussions
How persuasive AGI will really be
How many years we have before these risks become acute
The minimum number of military robots needed to stage a coup
This episode was originally recorded on January 20, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
What happens when your desire to do good starts to undermine your own wellbeing?
Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.
You’ll hear from:
80,000 Hours’ former CEO on managing anxiety, self-doubt, and a chronic sense of falling short (from episode #100)
Randy Nesse on why we evolved to be anxious and depressed (episode #179)
Hannah Boettcher on how ‘optimisation framing’ can quietly distort our sense of self-worth (from our 80k After Hours feed)
Mental Health Navigator is a service that simplifies finding and accessing mental health information and resources all over the world — built specifically for the effective altruism community
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.
So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.
Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.
Buck asks us to picture a scenario where, in the relatively near future, AI companies are employing 100,000 AI systems running 16 times faster than humans to automate AI research itself. These systems would need dangerous permissions: the ability to run experiments, access model weights, and push code changes. As a result, a misaligned system could attempt to hack the data centre, exfiltrate weights, or sabotage research. In such a world, misalignment among these AIs could be very dangerous.
But in the absence of a method for reliably aligning frontier AIs, Buck argues for implementing practical safeguards to prevent catastrophic outcomes. His team has been developing and testing a range of straightforward, cheap techniques to detect and prevent risky behaviour by AIs — such as auditing AI actions with dumber but trusted models, replacing suspicious actions, and asking questions repeatedly to catch randomised attempts at deception.
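As a concrete illustration of the “trusted monitoring” approach mentioned above, here’s a minimal sketch — our simplification, using hypothetical model and audit interfaces rather than Redwood’s actual implementation:

```python
# Minimal sketch of trusted monitoring: a weaker-but-trusted model scores each
# action proposed by the stronger, untrusted model; the most suspicious actions
# go to scarce human auditors, and merely dubious ones are replaced.
# `untrusted_model`, `trusted_model`, and `human_audit` are hypothetical interfaces.

AUDIT_THRESHOLD = 0.98    # send only the most suspicious actions to expensive human review
REPLACE_THRESHOLD = 0.80  # above this, fall back to the trusted model's own (weaker) action

def run_step(task, untrusted_model, trusted_model, human_audit):
    proposed = untrusted_model.propose_action(task)
    suspicion = trusted_model.rate_suspicion(task, proposed)  # score in [0, 1]

    if suspicion >= AUDIT_THRESHOLD:
        return human_audit(task, proposed)          # rare, expensive path
    if suspicion >= REPLACE_THRESHOLD:
        return trusted_model.propose_action(task)   # cheap fallback, some performance cost
    return proposed                                 # looks benign: use the capable model's action
```

The thresholds encode the core trade-off: human auditing is scarce and expensive, so it’s reserved for the most suspicious actions, while the trusted-but-weaker model handles merely dubious ones at some cost to performance.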
Most importantly, these methods are designed to be cheap and shovel-ready. AI control focuses on harm reduction using practical techniques — techniques that don’t require new, fundamental breakthroughs before companies could reasonably implement them, and that don’t ask us to forgo the benefits of deploying AI.
As Buck puts it:
Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem.
Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.
Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:
Why he’s more worried about AI hacking its own data centre than escaping
What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
Why he might want to use a model he thought could be conspiring against him
Why he would feel safer if he caught an AI attempting to escape
Why many control techniques would be relatively inexpensive
How to use an untrusted model to monitor another untrusted model
What the minimum viable intervention in a “lazy” AI company might look like
How even small teams of safety-focused staff within AI labs could matter
The moral considerations around controlling potentially conscious AI systems, and whether it’s justified
This episode was originally recorded on February 21, 2025.
Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore
What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI actually make computer security better rather than worse?
With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from a not-yet-released episode with Tom Davidson, in which he explains why we should be more worried about “secret loyalties” in AI agents.
You’ll hear:
Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)
Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)
Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)
Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)
Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)
Lennart Heim on Rob’s computer security nightmares (episode #155)
Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)
Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)
Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)
Nita Farahany on the dystopian risks of hacked neurotech (episode #174)
Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)
Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)
Allan Dafoe on backdooring your own AI to prevent theft (episode #212)
Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)
Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.
That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.
The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.
Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he’s descended from monkeys and that his god doesn’t exist.
What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.
Consider the case of nuclear weapons: compressed by that same factor of 10, there would have been just a three-month gap between the Manhattan Project’s start and the Hiroshima bombing, and the Cuban Missile Crisis would have lasted just over a day.
Robert Kennedy, Sr., who helped navigate the actual Cuban Missile Crisis, once remarked that if they’d had to make decisions on a much more accelerated timeline — like 24 hours rather than 13 days — they would likely have taken much more aggressive, much riskier actions.
So there’s reason to worry about our own capacity to make wise choices. And in “Preparing for the intelligence explosion,” Will lays out 10 “grand challenges” we’ll need to quickly navigate to successfully avoid things going wrong during this period.
Will’s thinking has evolved a lot since his last appearance on the show. While he was previously sceptical about whether we live at a “hinge of history,” he now believes we’re entering one of the most critical periods for humanity ever — with decisions made in the next few years potentially determining outcomes millions of years into the future.
But Will also sees reasons for optimism. The very AI systems causing this acceleration could be deployed to help us navigate it — if we use them wisely. And while AI safety researchers rightly focus on preventing AI systems from going rogue, Will argues we should equally attend to ensuring the futures we deliberately build are truly worth living in.
In this wide-ranging conversation with host Rob Wiblin, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:
Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
The three different types of intelligence explosions that occur in order
Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
Ways AI could radically improve human coordination and decision making
Why we should aim for truly flourishing futures, not just avoiding extinction
This episode was originally recorded on February 7, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case can be heard before the conversion is likely to go ahead. (See Rob’s brief summary of developments in the case.)
And if Musk’s donations to OpenAI prove enough to give him standing to bring the case, Rogers sounded very sympathetic to his objection to the OpenAI foundation selling the company in a way that benefits the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
This episode was originally recorded on March 6, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by who, and what they’re used for first.
As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
As of mid-2024 they didn’t seem dangerous at all, but we’ve since learned that our ability to measure these capabilities, while good, is imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.
Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
Host Rob and Allan also cover:
The most exciting beneficial applications of AI
Whether and how we can influence the development of technology
What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
Why cooperative AI may be as important as aligned AI
The role of democratic input in AI governance
What kinds of experts are most needed in AI safety and governance
And much more
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore
On Monday, February 10, Elon Musk made the OpenAI nonprofit foundation an offer it wants to refuse, but may have trouble turning down: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission.
For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight.
As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice.
The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal.
OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs.
The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal.
But as Rose and Rob lay out, it’s not clear any of those things is actually true.
In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:
Why OpenAI wants to change its charitable purpose and whether that’s legally permissible
On what basis the attorneys general will decide OpenAI’s fate
The challenges in valuing the nonprofit’s “priceless” position of control
Whether Musk’s offer will force OpenAI to up their own bid, and whether they could raise the money
If other tech giants might now jump in with competing offers
How politics could influence the attorneys general reviewing the deal
What Rose thinks should actually happen to protect the public interest
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?
With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.
You can decide whether the views we expressed (and those from guests) then have held up these last two busy years.
It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:
How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
How much of the gender pay gap is due to direct pay discrimination vs other factors
How cleaner wrasse fish blow the mirror test out of the water
Why effective altruism may be too big a tent to work well
How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with
Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.
It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.
Enjoy, and look forward to speaking with you in 2025!
Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore