OpenAI: The nonprofit refuses to die (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit’s trouble was that the entire operation was founded on — and legally pledged to — the purpose of ensuring that “artificial general intelligence benefits all of humanity.” So to get its restructure past regulators, the business entity has had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI’s original restructure plan. In today’s conversation, he lays out all the changes and whether they will ultimately matter.

[Chart: Not For Private Gain]

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:

  • The nonprofit’s charitable purpose — “ensure that artificial general intelligence benefits all of humanity” — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions of dollars.
  • The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
  • OpenAI’s original charter, including the remarkable “stop and assist” commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once developed — Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, “a bad deal compared to what OpenAI should have been.”

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet it’s expected to oversee a company racing to build AGI while managing commercial pressures worth hundreds of billions of dollars.

Tyler calls on OpenAI to prove they’re serious about following the agreement:

  • Hire management for the SSC.
  • Add more independent directors with AI safety expertise.
  • Maximise transparency about mission compliance.

“There’s a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout.”

Host Rob Wiblin and Tyler discuss all that and more in today’s episode.

This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

The interview in a nutshell

Tyler Whitmer, a commercial litigator and founder of Legal Advocates for Safe Science and Technology (LASST), explains how the California and Delaware attorneys general (AGs) rejected OpenAI’s December 2024 restructure proposal. While he argues the public suffered a “poignant loss” by ceding exclusive nonprofit control over AGI, the AGs forced major concessions that are a significant win for safety and oversight compared to the “flagrant misappropriation” that was originally proposed.

1. The AGs enshrined the nonprofit’s safety mission and control

OpenAI’s original proposal would have sidelined the nonprofit, turning it into a minority shareholder focused on generic grantmaking. The AGs rejected this, forcing a new structure that preserves the nonprofit’s power in several key ways:

  • Mission primacy: The nonprofit’s original mission is now enshrined in the new for-profit public benefit corporation (PBC) charter and legally takes precedence over profit motives on safety and security issues.
  • Nonprofit control: The nonprofit retains significant control over the for-profit PBC, rather than being sidelined:
    • It holds a special class of shares (Class N shares) that gives its board the right to appoint and fire the PBC’s board members.
    • It wields direct power over safety decisions through its Safety and Security Committee (SSC).
  • AG oversight: The AGs preserved their own authority to regulate the enterprise.
    • They now have “additional hooks” through a binding Memorandum of Understanding (California) and a Statement of Non-Objection (Delaware).
    • These agreements grant the AGs rights to regular meetings (twice a year with the nonprofit board, four times a year with senior staff) and 21-day notice of any changes that might undermine their monitoring.
  • Financial stake: The nonprofit’s stake is far larger than the ~10-20% originally proposed.
    • It now receives a 26% stake (worth ~$130 billion, implying a total company valuation of roughly $500 billion) plus an undisclosed warrant for more shares if OpenAI’s valuation increases 10x in 15 years.
    • The AGs hired their own independent financial advisors to confirm this was a fair deal for the nonprofit.
  • Charter and philanthropy: The OpenAI Charter, including the “Stop and Assist” commitment, was preserved. The nonprofit’s new $25 billion philanthropic fund will be partially focused on “technical solutions to AI resilience,” a more relevant goal than those originally suggested.

2. The nonprofit’s Safety & Security Committee (SSC) has real teeth — but may lack the will to use them

The new structure’s primary safety check is the SSC, a committee of the nonprofit board.

  • Explicit power: The AG agreements confirm the SSC has the explicit authority to require mitigation measures, up to and including “halting the release of models or AI systems,” even if those systems would otherwise be permitted for release.
  • The “intestinal fortitude” problem: This gives the four-person committee (Zico Kolter, Adam D’Angelo, Paul Nakasone, and Nicole Seligman) enormous power on paper.
  • The risk: Tyler worries that these four volunteer corporate directors will face immense commercial pressure from the PBC and investors like Microsoft, and may lack the “intestinal fortitude” to actually use this power and block a multibillion-dollar product from being deployed.
  • Resource needs: For the SSC to be effective, Tyler argues it must hire its own dedicated, independent staff. The AG agreements allow the nonprofit to get resources, information, and employee access from the PBC via a “Support and Services Agreement” to do its oversight job.

3. The biggest loss: The public’s exclusive claim on AGI is gone

This was the one area where Tyler feels the public lost, as the AGs did not successfully intervene.

  • The old promise: The original 2019 structure implied that once AGI was achieved, it would be governed exclusively by the nonprofit for the benefit of all humanity, and Microsoft’s IP licence would terminate.
  • The new reality: The new structure allows the PBC to commercialise AGI like any other product, distributing its benefits to private investors.
  • Microsoft’s win: Microsoft’s IP rights, which were supposed to terminate at AGI, now extend to 2032, and the company can commercialise AGI independently.
  • Tyler calls this a “dramatic change” and a “poignant loss” that goes against the core founding principle of OpenAI.

4. “Scrappy resistance” worked, and continued vigilance is crucial

Tyler credits public advocacy from groups like his (LASST) for giving the AGs the “wind at their back” and “courage of their convictions” to challenge a “really well-heeled opponent.” The fight now shifts to monitoring this new structure.

  • New legal tools: The settlement creates new, and potentially stronger, avenues for enforcement:
    • The AGs can now sue OpenAI for violating the explicit MOU/Statement of Non-Objection.
    • Because the safety mission is in the PBC charter, a shareholder derivative suit (from a 2%+ bloc of shareholders) arguing the PBC is ignoring safety for profit is now more likely to succeed in Delaware court.
  • What to watch for:
    • Hiring: Will the nonprofit hire a dedicated CEO or staff for the SSC and its $25B philanthropic fund? (Tyler sees this as a crucial positive signal).
    • New board member: The nonprofit must add another director who isn’t on the PBC board. Will this be a true AI safety expert or a “shill”?
    • Transparency: Will OpenAI publicly release the warrant details, the SSC’s specific powers (from a referenced September 2023 document), and be forthcoming in the annual mission reports required by the Delaware AG?

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing podcast@80000hours.org.

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.