80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.
We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.
The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.
Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.
The role
This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).
We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,
80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.
We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.
The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.
Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.
The role
This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).
We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,
80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.
We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.
The operations function oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.
Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.
The role
This role would be great for building career capital in operations, by helping us design and run high-quality events that strengthen our team, culture, and connections in the AI safety space. We’re looking for an Events Associate/Specialist who can take ownership of the day-to-day logistics and execution of our events.
We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,
80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.
We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.
The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.
Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.
To learn more about the other teams hiring during this round (video team, office of the CEO), see the individual job descriptions.
The role
This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).
We plan to hire people at both the associate and specialist levels during this round.
We expect there will be substantial progress in AI in the coming years, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.
Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity.1 There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. We estimated that there were around 400 people worldwide working directly on this in 2022, though we believe that number has grown.2 As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.
Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.
What happens when civilisation faces its greatest tests?
This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.
You’ll hear from:
Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (from episode #187)
Luisa Rodriguez on what the world might look like after a global catastrophe, how we might lose critical knowledge, and how fast populations might rebound (#116)
David Denkenberger on disruptions to electricity and communications we should expect in a catastrophe, and his work researching low-cost, low-tech solutions to make sure everyone is fed no matter what (#50 and #117)
Lewis Dartnell on how we could recover without much coal or oil, and changes we could make today to make us more resilient to potential catastrophes (#131)
Andy Weber on how people in US defence circles think about nuclear winter, and the tech that could prevent catastrophic pandemics (#93)
Toby Ord on the many risks to our atmosphere, whether climate change and rogue AI could really threaten civilisation, and whether we could rebuild from a small surviving population (#72 and #219)
Mark Lynas on how likely it is that widespread famine from climate change leads to civilisational collapse (#85)
Kevin Esvelt on the human-caused pandemic scenarios that could bring down civilisation — and how AI could help bad actors succeed (#164)
Joan Rohlfing on why we need to worry about more than just nuclear winter (#125)
Annie Jacobsen on the rings of annihilation and electromagnetic pulses from nuclear blasts (#192)
Christian Ruhl on thoughtful philanthropy that funds “right of boom” interventions to prevent nuclear war from threatening civilisation (80k After Hours)
Athena Aktipis on whether society would go all Mad Max in the apocalypse, and the best ways to prepare for a catastrophe (#144)
Will MacAskill on why potatoes are so cool (#130 and #136)
Content editing: Katy Moore and Milo McGuire Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore
AI 2027, a research-based scenario and report from the AI Futures Project, combines forecasting and storytelling to explore a possible future where AI radically transforms the world by 2027.
The report goes through the creation of AI agents, job loss, the role of AIs improving other AIs (R&D acceleration loops), security crackdowns, misalignment — and then a choice: slow down or race ahead.
Kokotajlo’s predictions from 2021 (pre-ChatGPT) in What 2026 looks like have proved prescient, and co-author Eli Lifland is among the world’s top forecasters. So even if you don’t end up buying all its claims, the report’s grounding in serious forecaster views, research, and dozens of wargames makes it worth taking seriously.
Why watch our AI 2027 video
Containing expert interviews, our analysis, and discussion of what a sane world would be doing, we think the video will be an enjoyable and informative watch whether you’re familiar with the report or not.
The video:
Takes you through one of the most detailed and influential AI forecast to date and brings you into the story
Explains the key upshots — how AI progress could accelerate dramatically via powerful feedback loops,
Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.
As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.
Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.
The explosive scenario: Once you’ve automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.
With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.
The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You’ve got brilliant AI researchers, but they’re all waiting for experiments to run on the same limited set of chips, so can only make modest progress.
Ryan’s median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some in the AI companies imagine.
And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you’ve been able to do is keep pace.
Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We’re extrapolating from a regime that we don’t even understand to a wildly different regime,” Ryan believes, “so no one knows.”
But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.
In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.
This episode was recorded on February 21, 2025.
Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore
Podcast by Robert Wiblin · Published June 24th, 2025
The era of making AI smarter by just making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods. And those underlying technical changes force a big rethink of what coming years will look like.
Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.
As he explains, until recently anyone can access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.
What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high quality data drying up, that approach petered out in 2024.
So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.
The results are impressive but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica‘s worth of reasoning to solve individual problems — at a cost of over $1,000 per question.
This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out — starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.
Companies have also begun applying “reinforcement learning” in which models are asked to solve practical problems, and then told to “do more of that” whenever it looks like they’ve gotten the right answer.
This has led to amazing advances in problem-solving ability — but it also explains why AI models have suddenly gotten much more deceptive. Reinforcement learning has always had the weakness that it encourages creative cheating, or tricking people into thinking you got the right answer even when you didn’t.
Toby shares typical recent examples of this “reward hacking” — from models Googling answers while pretending to reason through the problem (a deception hidden in OpenAI’s own release data), to achieving “100x improvements” by hacking their own evaluation systems.
To cap it all off, it’s getting harder and harder to trust publications from AI companies, as marketing and fundraising have become such dominant concerns.
While companies trumpet the impressive results of the latest models, Toby points out that they’ve actually had to spend a million times as much just to cut model errors by half. And his careful inspection of an OpenAI graph supposedly demonstrating that o3 was the new best model in the world revealed that it was actually no more efficient than its predecessor.
But Toby still thinks it’s critical to pay attention, given the stakes:
…there is some snake oil, there is some fad-type behaviour, and there is some possibility that it is nonetheless a really transformative moment in human history. It’s not an either/or. I’m trying to help people see clearly the actual kinds of things that are going on, the structure of this landscape, and to not be confused by some of these charts.
Recorded on May 23, 2025.
Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Camera operator: Jeremy Chevillotte Transcriptions and web: Katy Moore
Blog post by Brenton Mayer · Published June 20th, 2025
We’re looking for someone to build and lead 80,000 Hours’ recruiting function from scratch.
We’d like to make 15–25 strong hires each year through a recruiting team. Currently, we make around 10 hires — each of which represents a very large investment in capacity from team leads — which is a major bottleneck on our growth.
Our ideal candidate can solve the problems, hire the team, and build the systems needed to pull this off. So we expect this to be a challenging role which will require substantial relevant experience.
We’re looking for someone who has experience with:
The personnel challenges faced by organisations working in EA or AI — you’ve either worked at an EA organisation, an AI safety organisation, or have otherwise developed a strong understanding of the landscape of hiring challenges these organisations face.
Professional recruiting — this could be as an internally-facing recruiter or as a headhunter. Recruiting for your own team could be enough, especially if you’ve been especially interested in recruitment along the way.
Leading a team — you’ve managed people before and understand what it takes to build and lead a function.
We expect that you’ll initially report to Brenton Mayer (COO) and then transition to reporting to Sashika Coxhead (Head of People Operations) when she returns from maternity leave.
If you meet all three of the criteria mentioned above, we’d really love to hear from you through our EOI form.
80,000 Hours’ aim is to help people find careers that tackle the world’s most pressing problems. To do this, one thing we do is maintain a public list of what we see as the issues where additional people can have the greatest positive impact.
We’ve just made significant updates to our list. Here are the biggest changes:
We’ve broadened our coverage of particularly pressing issues downstream of the possibility that artificial general intelligence (AGI) might be here soon. In particular, we added a profile on AI-enabled power grabs near the top of our list and are adding several writeups of new emerging challenges that advanced AI could create or worsen.
We’ve removed ‘meta’ problems for simplicity and clarity. Our problem profiles list used to feature articles on building effective altruism, broadly improving institutional decision making, and global priorities research — which are all approaches to improving our ability to solve the world’s most pressing problems. Grouping these ‘meta problems’ with object-level problems sometimes causes confusion and makes it hard to compare across cause areas, so we’ve now taken them off the list. But we still think these topics are very important, so the articles are still live on our site, and related articles appear on our list of impactful career paths.
We’ve streamlined the presentation by consolidating related issues and restructuring the page as a more unified ranking rather than separate categories.
About half of people are worried they’ll lose their job to AI. And they’re right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and do accurate medical diagnosis. And over the next five years, it’s set to continue to improve rapidly. Eventually, mass automation and falling wages are a real possibility.
But what’s less appreciated is that while AI drives down the value of skills it can do, it drives up the value of skills it can’t. Wages (on average) will increase before they fall, as automation generates a huge amount of wealth, and the remaining tasks become the bottlenecks to further growth. As I’ll explain, ATMs actually increased employment of bank clerks— until online banking automated the job much more.
Your best strategy is to learn the skills that AI will make more valuable, trying to ride the wave of automation. So what are those skills? Here’s a preview:
Skills most likely to increase in value as AI progresses
These will be especially valuable when combined with knowledge of fields needed for AI including machine learning, cyber & information security, data centre & power plant construction, robotics development and maintenance, and (lesso) fields that could expand a lot given economic growth.
In contrast, the future for these skills seems a lot more uncertain:
Podcast by Robert Wiblin · Published June 12th, 2025
For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.
But according to Hugh White — one of the world’s leading strategic thinkers, emeritus professor at the Australian National University, and author of Hard New World: Our Post American Future — Trump isn’t destroying American hegemony. He’s simply revealing that it’s already gone.
“Trump has very little trouble accepting other great powers as co-equals,” Hugh explains. And that happens to align perfectly with a strategic reality the foreign policy establishment desperately wants to ignore: fundamental shifts in global power have made the costs of maintaining a US-led hegemony prohibitively high.
Even under Biden, when Russia invaded Ukraine, the US sent weapons but explicitly ruled out direct involvement. Ukraine matters far more to Russia than America, and this “asymmetry of resolve” makes Putin’s nuclear threats credible where America’s counterthreats simply aren’t.
Hugh’s gloomy prediction: “Europeans will end up conceding to Russia whatever they can’t convince the Russians they’re willing to fight a nuclear war to deny them.”
The Pacific tells the same story. Despite Obama’s “pivot to Asia” and Biden’s tough talk about “winning the competition for the 21st century,” actual US military capabilities there have barely budged while China’s have soared, along with its economy — which is now bigger than the US’s, as measured in purchasing power. Containing China and defending Taiwan would require America to spend 8% of GDP on defence (versus 3.5% today) — and convince Beijing it’s willing to accept Los Angeles being vaporised. Unlike during the Cold War, no president — Trump or otherwise — can make that case to voters.
So what’s next? Hugh’s prognoses are stark:
Taiwan is in an impossible situation and we’re doing them a disservice pretending otherwise.
South Korea, Japan, and one of the EU or Poland will have to go nuclear to defend themselves.
Trump might actually follow through and annex Panama and Greenland — but probably not Canada.
Australia can defend itself from China but needs an entirely different military to do it.
Our new “multipolar” future, split between American, Chinese, Russian, Indian, and European spheres of influence, is a “darker world” than the golden age of US dominance. But Hugh’s message is blunt: for better or worse, 35 years of American hegemony are over. The challenge now is managing the transition peacefully, and creating a stable multipolar order more like Europe’s relatively peaceful 19th century than the chaotic bloodbath Europe suffered in the 17th — which, if replicated today, would be a nuclear bloodbath.
In today’s conversation, Hugh and Rob explore why even AI supremacy might not restore US dominance (spoiler: China still has nukes), why Japan can defend itself but Taiwan can’t, and why a new president won’t be able to reverse the big picture.
This episode was originally recorded on May 30, 2025.
Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore
Independently carry out complicated tasks for hours, days, or weeks
Interact with the physical world in complex and adaptive ways
Consistently learn from past interactions to improve performance over time
In other words, we don’t yet have AGI — AI systems with general intelligence that can reliably replace humans on a wide range of tasks.
In 2024, OpenAI’s Sam Altman declared, “we are now confident we know how to build AGI.” But how will we get there?
Some people think that we can build AGI by scaling up existing models. But others argue that scaling has only seriously improved AI performance in areas like software engineering, where the tasks are clearly defined and often quickly verifiable.
AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.
These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.
Today’s guest, Beth Barnes, is CEO of METR (Model Evaluation & Threat Research) — the leading organisation measuring these capabilities.
Beth’s team has been timing how long it takes skilled humans to complete projects of varying length, then seeing how AI models perform on the same work.
The resulting paper from METR, “Measuring AI ability to complete long tasks,” made waves by revealing that the planning horizon of AI models was doubling roughly every seven months. It’s regarded by many as the most useful AI forecasting work in years.
The companies building these systems aren’t just aware of this trend — they want to harness it as much as possible, and are aggressively pursuing automation of their own research.
And having AI models rapidly build their successors with limited human oversight naturally raises the risk that things will go off the rails if the models at the end of the process lack the goals and constraints we hoped for.
Beth thinks models can already do “meaningful work” on improving themselves, and she wouldn’t be surprised if AI models were able to autonomously self-improve in as little as two years from now — in fact, she says, “It seems hard to rule out even shorter [timelines]. Is there 1% chance of this happening in six, nine months? Yeah, that seems pretty plausible.”
While Silicon Valley is abuzz with these numbers, policymakers remain largely unaware of what’s barrelling toward us — and given the current lack of regulation of AI companies, they’re not even able to access the critical information that would help them decide whether to intervene. Beth adds:
The sense I really want to dispel is, “But the experts must be on top of this. The experts would be telling us if it really was time to freak out.” The experts are not on top of this. Inasmuch as there are experts, they are saying that this is concerning. … And to the extent that I am an expert, I am an expert telling you you should freak out. And there’s not especially anyone else who isn’t saying this.
Beth and Rob discuss all that, plus:
How Beth now thinks that open-weight models are a good thing for AI safety, and what changed her mind
How our poor information security means there’s no such thing as a “closed-weight” model anyway
Whether we can see if an AI is scheming in its chain-of-thought reasoning, and the latest research on “alignment faking”
Why just before deployment is the worst time to evaluate model safety
Why Beth thinks AIs could end up being really good at creative and novel research — something humans tend to think is beyond their reach
Why Beth thinks safety-focused people should stay out of the frontier AI companies — and the advantages smaller organisations have
Areas of AI safety research that Beth thinks is overrated and underrated
Whether it’s feasible to have a science that translates AI models’ increasing use of nonhuman language or ‘neuralese’
How AI is both similar to and different from nuclear arms racing and bioweapons
And much more besides!
This episode was originally recorded on February 17, 2025.
Video editing: Luke Monsour and Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore
We’re excited to announce that 80,000 Hours has completed its spin-out from Effective Ventures (EV) and is now operating as an independent organisation. We announced this decision here in December 2023 and we’ve now concluded spinning out from our parent organisation. We’re deeply grateful to the Effective Ventures leadership and team for their support, especially during the complex transition process over the past year.
Our new structure
We’ve established two new UK entities, each with their own board:
80,000 Hours Limited — this is a nonprofit entity that houses our website, podcast, job board, one-on-one service, and our operations.
80,000 Hours Foundation — this is a registered charity that will facilitate donations and own the 80k intellectual property.
Our new boards
Board of Directors (80,000 Hours Limited):
Konstantin Sietzy — Deputy Director of Talent and Operations at UK AISI
Alex Lawsen — Senior Program Associate at Open Philanthropy and former 80,000 Hours Advising Manager
Anna Weldon — COO at the Centre for Effective Altruism (CEA) and former EV board member
What if there’s something it’s like to be a shrimp — or a chatbot?
For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we’re creating?
We’ve pulled together clips from past conversations with researchers and philosophers who’ve spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.
You’ll hear from:
Robert Long on how we might accidentally create artificial sentience (from episode #146)
Jeff Sebo on when we should extend extend moral consideration to digital beings — and what that would even look like (#173)
Jonathan Birch on what we should learn from the cautionary tale of newborn pain, and other “edge cases” of sentience (#196)
Andrés Jiménez Zorrilla on what it’s like to be a shrimp (80k After Hours)
Meghan Barrett on challenging our assumptions about insects’ experiences (#198)
David Chalmers on why artificial consciousness is entirely possible (#67)
Holden Karnofsky on how we’ll see digital people as… people (#109)
Sébastien Moro on the surprising sophistication of fish cognition and behaviour (#205)
Bob Fischer on how to compare the moral weight of a chicken to that of a human (#182)
Cameron Meyer Shorb on the vast scale of potential wild animal suffering (#210)
Lewis Bollard on how animal advocacy has evolved in response to sentience research (#185)
Anil Seth on the neuroscientific theories of consciousness (#206)
Peter Godfrey-Smith on whether we could upload ourselves to machines (#203)
Buck Shlegeris on whether AI control strategies make humans the bad guys (#214)
Stuart Russell on the moral rights of AI systems (#80)
Will MacAskill on how to integrate digital beings into society (#213)
Carl Shulman on collaboratively sharing the world with digital minds (#191)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Additional content editing: Katy Moore and Milo McGuire Transcriptions and web: Katy Moore
OpenAI’s recent announcement that its nonprofit would “retain control” of its for-profit business sounds reassuring. But this seemingly major concession, celebrated by so many, is in itself largely meaningless.
Litigator Tyler Whitmer is a coauthor of a newly published letter that describes this attempted sleight of hand and directs regulators on how to stop it.
As Tyler explains, the plan both before and after this announcement has been to convert OpenAI into a Delaware public benefit corporation (PBC) — and this alone will dramatically weaken the nonprofit’s ability to direct the business in pursuit of its charitable purpose: ensuring AGI is safe and “benefits all of humanity.”
Right now, the nonprofit directly controls the business. But were OpenAI to become a PBC, the nonprofit, rather than having its “hand on the lever,” would merely contribute to the decision of who does.
Why does this matter? Today, if OpenAI’s commercial arm were about to release an unhinged AI model that might make money but be bad for humanity, the nonprofit could directly intervene to stop it. In the proposed new structure, it likely couldn’t do much at all.
But it’s even worse than that: even if the nonprofit could select the PBC’s directors, those directors would have fundamentally different legal obligations from those of the nonprofit. A PBC director must balance public benefit with the interests of profit-driven shareholders — by default, they cannot legally prioritise public interest over profits, even if they and the controlling shareholder that appointed them want to do so.
As Tyler points out, there isn’t a single reported case of a shareholder successfully suing to enforce a PBC’s public benefit mission in the 10+ years since the Delaware PBC statute was enacted.
This extra step from the nonprofit to the PBC would also mean that the attorneys general of California and Delaware — who today are empowered to ensure the nonprofit pursues its mission — would find themselves powerless to act. These are probably not side effects but rather a Trojan horse for-profit investors are trying to slip past regulators.
Fortunately this can all be addressed — but it requires either the nonprofit board or the attorneys general of California and Delaware to promptly put their foot down and insist on watertight legal agreements that preserve OpenAI’s current governance safeguards and enforcement mechanisms.
As Tyler explains, the same arrangements that currently bind the OpenAI business have to be written into a new PBC’s certificate of incorporation — something that won’t happen by default and that powerful investors have every incentive to resist.
Without these protections, OpenAI’s new suggested structure wouldn’t “fix” anything. They would be a ruse that preserved the appearance of nonprofit control while gutting its substance.
Listen to our conversation with Tyler Whitmer to understand what’s at stake, and what the AGs and board members must do to ensure OpenAI remains committed to developing artificial general intelligence that benefits humanity rather than just investors.
This episode was originally recorded on May 13, 2025.
Video editing: Simon Monsour and Luke Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore
When attorneys general intervene in corporate affairs, it usually means something has gone seriously wrong. In OpenAI’s case, it appears to have forced a dramatic reversal of the company’s plans to sideline its nonprofit foundation, announced in a blog post that made headlines worldwide.
The company’s sudden announcement that its nonprofit will “retain control” credits “constructive dialogue” with the attorneys general of California and Delaware — corporate-speak for what was likely a far more consequential confrontation behind closed doors. A confrontation perhaps driven by public pressure from Nobel Prize winners, past OpenAI staff, and community organisations.
But whether this change will help depends entirely on the details of implementation — details that remain worryingly vague in the company’s announcement.
Return guest Rose Chan Loui, nonprofit law expert at UCLA, sees potential in OpenAI’s new proposal, but emphasises that “control” must be carefully defined and enforced: “The words are great, but what’s going to back that up?” Without explicitly defining the nonprofit’s authority over safety decisions, the shift could be largely cosmetic.
Why have state officials taken such an interest so far? Host Rob Wiblin notes, “OpenAI was proposing that the AGs would no longer have any say over what this super momentous company might end up doing. … It was just crazy how they were suggesting that they would take all of the existing money and then pursue a completely different purpose.”
Now that they’re in the picture, the AGs have leverage to ensure the nonprofit maintains genuine control over issues of public safety as OpenAI develops increasingly powerful AI.
Rob and Rose explain three key areas where the AGs can make a huge difference to whether this plays out in the public’s best interest:
Ensuring that the contractual agreements giving the nonprofit control over the new Delaware public benefit corporation are watertight, and don’t accidentally shut the AGs out of the picture.
Insisting that a majority of board members are truly independent by prohibiting indirect as well as direct financial stakes in the business.
Insisting that the board is empowered with the money, independent staffing, and access to information which they need to do their jobs.
This episode was originally recorded on May 6, 2025.
Video editing: Simon Monsour and Luke Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Music: Ben Cordell Transcriptions and web: Katy Moore