#209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

One OpenAI critic describes it as “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

  • Can fire the CEO.
  • Would receive all profits beyond the point at which investors have earned a 100x return on their investment.
  • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because it thinks what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, the for-profit plans to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

To top it off, the OpenAI business has an investment bank estimating how much compensation it thinks it should pay the nonprofit — while the nonprofit, to our knowledge, isn’t getting its own independent valuation.

But as Rose lays out, this kind of nonprofit-to-for-profit conversion is not without precedent, and creating a new $40 billion grantmaking foundation could be the nonprofit’s best available path.

In terms of pursuing its charitable purpose, true control of the for-profit might indeed be “priceless” and not something that it could be compensated for. But after failing to remove Sam Altman last November, the nonprofit has arguably lost practical control of its for-profit child, and negotiating for as many resources as possible — then making a lot of grants to further AI safety — could be its best fall-back option to pursue its mission of benefiting humanity.

And with the California and Delaware attorneys general saying they want to be convinced the transaction is fair and the nonprofit isn’t being ripped off, the board might just get the backup it needs to effectively stand up for itself.

In today’s energetic conversation, Rose and host Rob Wiblin discuss:

  • Why it’s essential the nonprofit gets cash and not just equity in any settlement.
  • How the nonprofit board can best play its cards.
  • How any of this can be regarded as an “arm’s-length transaction” as required by law.
  • Whether it’s truly in the nonprofit’s interest to sell control of OpenAI.
  • How to value the nonprofit’s control of OpenAI and its share of profits.
  • Who could challenge the outcome in court.
  • Cases where this has happened before.
  • The weird rule that lets the board cut off Microsoft’s access to OpenAI’s IP.
  • And plenty more.

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Continue reading →

#208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world

In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.

They cover:

  • How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.
  • The existing empirical evidence for the impact of storytelling.
  • Their competing takes on the merits of thinking carefully about target audiences.
  • Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.
  • Whether humans will stay relevant as creative writers with the rise of powerful AI models.
  • Whether you can do more good with an overtly educational show vs other approaches.
  • Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.
  • The pros and cons of animation as a medium.
  • Career advice for creative writers.
  • Keiran’s idea for a longtermist Christmas movie.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead

In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.

They cover:

  • The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.
  • Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.
  • All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.
  • Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.
  • Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.
  • The misaligned incentives that discourage charities from shutting down ineffective programmes.
  • The trust-based philanthropy movement, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.
  • The pros and cons of exploring and pivoting in careers.
  • What it’s like to participate in the Charity Entrepreneurship Incubation Program, and how listeners can assess if they might be a good fit.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

Parenting insights from Rob and 8 past guests

With kids very much on the team’s mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

After hearing 8 former guests’ insights, Luisa and Rob chat about:

  • Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
  • What have been the biggest surprises for Rob in becoming a parent.
  • Whether the benefits of parenthood can actually be studied, and if we get skewed impressions of how bad parenting is.
  • How Rob’s dealt with work and parenting tradeoffs, and his advice for other would-be parents.
  • Rob’s list of recommended purchases for new or soon-to-be parents.

This bonus episode includes excerpts from:

  • Ezra Klein on parenting yourself as well as your children (from episode #157)
  • Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
  • Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids’ lives (#178)
  • Russ Roberts on empirical research when deciding whether to have kids (#87)
  • Spencer Greenberg on his surveys of parents (#183)
  • Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
  • Bryan Caplan on homeschooling (#172)
  • Nita Farahany on thinking about life and the world differently with kids (#174)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#206 – Anil Seth on the predictive brain and how to study consciousness

In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.

They cover:

  • What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
  • Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
  • Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
  • Whether our theories of human consciousness can be applied to nonhuman animals.
  • Anil’s thoughts on whether machines could ever be conscious.
  • Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
  • And much more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#205 – Sébastien Moro on the most insane things fish can do

In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.

They cover:

  • The insane capabilities of fish in tests of memory, learning, and problem-solving.
  • Examples of fish that can beat primates on cognitive tests and recognise individual human faces.
  • Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.
  • Whether fish can experience emotions, and how this is even studied.
  • The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.
  • How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.
  • Ethical issues raised by evidence that fish may be conscious and experience suffering.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

In today’s episode, Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.

Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:

  • How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
  • Is anyone else competitive with EA in terms of impact per dollar?
  • Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
  • Is EA too big a tent to function?
  • What global problems could EA be ignoring?
  • Should EA be more willing to court controversy?
  • Does EA’s niceness leave it vulnerable to exploitation?
  • What moral philosophy would he have modelled EA on?

Rob and Nate also talk about:

  • Nate’s theory of Sam Bankman-Fried’s psychology.
  • Whether we had to “raise or fold” on COVID.
  • Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
  • “Winners’ tilt.”
  • Whether it’s selfish to slow down AI progress.
  • The ridiculous 13 Keys to the White House.
  • Whether prediction markets are now overrated.
  • Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

Continue reading →

#203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and philosopher of science — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

They cover:

  • Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
  • How the role of culture has been crucial in enabling human technological progress.
  • Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
  • Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
  • Whether we can and should avoid death by uploading human minds.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

They cover:

  • What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
  • Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
  • Why eliminating major age-related diseases might only extend average lifespan by 15 years.
  • The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#201 – Ken Goldberg on why your robot butler isn't here yet

In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.

They cover:

  • Why training robots is harder than training large language models like ChatGPT.
  • The biggest engineering challenges that still remain before robots can be widely useful in the real world.
  • The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.
  • Whether we should be worried about robot labour affecting human employment.
  • Recent breakthroughs in robotics, and what cutting-edge robots can do today.
  • Ken’s work as an artist, where he explores the complex relationship between humans and technology.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#200 – Ezra Karger on what superforecasters and experts think about existential risks

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s 2022 Existential Risk Persuasion Tournament, which aimed to come up with estimates of a range of catastrophic risks.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

#199 – Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policy

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

They cover:

  • What’s actually in SB 1047, and which AI models it would apply to.
  • The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
  • What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
  • Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
  • How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
  • Why California is taking state-level action rather than waiting for federal regulation.
  • How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#198 – Meghan Barrett on upending everything you thought you knew about bugs in 3 hours

In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.

They cover:

  • The scale of potential insect suffering in the wild, on farms, and in labs.
  • Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
  • How size bias might help explain why many people assume insects can’t feel pain.
  • Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
  • Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
  • Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the 11 people who left OpenAI to launch Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out three big virtues to the RSP approach:

  • It allows us to set aside the question of when any of these things will be possible, and focus the conversation on what would be necessary if they are possible — something there is usually much more agreement on.
  • It means we don’t take costly precautions that developers will resent and resist before they are actually called for.
  • As the policies don’t allow models to be deployed until suitable safeguards are in place, they align a firm’s commercial incentives with safety — for example, a profitable product release could be blocked by insufficient investments in computer security or alignment research years earlier.

Rob then pushes Nick on some of the best objections to the RSP mechanisms he’s found, including:

  • It’s hard to trust that profit-motivated companies will stick to their scaling policies long term and not water them down to make their lives easier — particularly as commercial pressure heats up.
  • Even if you’re trying hard to find potential safety concerns, it’s difficult to truly measure what models can and can’t do. And if we fail to pick up a dangerous ability that’s really there under the hood, then perhaps all we’ve done is lull ourselves into a false sense of security.
  • Importantly, in some cases humanity simply hasn’t invented safeguards up to the task of addressing AI capabilities that could show up soon. Maybe that will change before it’s too late — but if not, we’re being written a cheque that will bounce when it comes due.

Nick explains why he thinks some of these worries are overblown, while others are legitimate but just point to the hard work we all need to put in to get a good outcome.

Nick and Rob also discuss whether it’s essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:

  • What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
  • What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
  • What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at [email protected].

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

Continue reading →

#196 – Jonathan Birch on the edge cases of sentience and why they matter

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

They cover:

  • Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
  • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
  • Chilling tales about overconfident policies that probably caused significant suffering for decades.
  • How policymakers can act ethically given real uncertainty.
  • Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds as sentient as the biological versions.
  • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
  • Why Jonathan is so excited about citizens’ assemblies.
  • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

What sorts of things is he talking about? In the area of disease prevention it’s easiest to see: disinfecting indoor air, rapid-turnaround vaccine platforms, and nasal spray vaccines that prevent disease transmission all make us safer against pandemics without generating any apparent new threats of their own. (And they might eliminate the common cold to boot!)

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply here. You don’t need a business idea yet — just the hustle to start a technology company. But you’ll need to act fast and apply by August 2, 2024.

Vitalik explains how he mentally breaks down defensive technologies into four broad categories:

  • Defence against big physical things like tanks.
  • Defence against small physical things like diseases.
  • Defence against unambiguously hostile information like fraud.
  • Defence against ambiguously hostile information like possible misinformation.

The philosophy of defensive acceleration has a strong basis in history. Mountain or island countries that are hard to invade, like Switzerland or Britain, tend to have more individual freedom and higher quality of life than the Mongolian steppes — where “your entire mindset is around kill or be killed, conquer or be conquered”: a mindset Vitalik calls “the breeding ground for dystopian governance.”

Defensive acceleration arguably goes back to ancient China, where the Mohists focused on helping cities build better walls and fortifications, an approach that really did reduce the toll of violent invasion until progress in the offensive technologies of siege warfare allowed those defences to be overcome.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

In today’s episode, host Luisa Rodriguez speaks with Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

They cover:

  • Whether the US and China are in an AI race, and the global implications if they are.
  • The state of the art of AI in China.
  • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
  • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
  • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
  • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

They cover:

  • The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
  • What happens during the window in which the US president would have to decide on nuclear retaliation after hearing news of a possible nuclear attack.
  • The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
  • The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
  • How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#191 (Part 2) – Carl Shulman on government and society after AGI

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement” — without any “robust, well-founded scientific consensus on how to answer them.” But if AI ‘evals’ and interpretability advance to the point that it’s possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or ‘best-guess’ answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.

That’s because when it’s hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it’s an unambiguous violation of honesty norms — but so long as there’s no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.

To start, advance investment in preventing, detecting, and containing pandemics would likely have been at a much higher and more sensible level, because it would have been straightforward to confirm which efforts passed a cost-benefit test for government spending. Politicians refusing to fund such efforts when the wisdom of doing so is an agreed and established fact would seem like malpractice.

Low-level Chinese officials in Wuhan would have been seeking advice from AI advisors instructed to recommend actions that are in the interests of the Chinese government as a whole. As soon as unexplained illnesses started appearing, that advice would be to escalate and quarantine to prevent a possible new pandemic escaping control, rather than stick their heads in the sand as happened in reality. Having been told by AI advisors of the need to warn national leaders, ignoring the problem would be a career-ending move.

From there, these AI advisors could have recommended stopping travel out of Wuhan in November or December 2019, perhaps fully containing the virus, as was achieved with SARS-1 in 2003. Had the virus nevertheless gone global, President Trump would have been getting excellent advice on what would most likely ensure his reelection. Among other things, that would have meant funding Operation Warp Speed far more than it in fact was, as well as accelerating the vaccine approval process, and building extra manufacturing capacity earlier. Vaccines might have reached everyone far faster.

These are just a handful of simple changes from the real course of events we can imagine — in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest here.

In the past we’ve usually found it easier to predict how hard technologies like planes or factories will advance than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values.
  • The difficulty of preventing coups once AI is key to the military and police.
  • What international treaties we need to make this go well.
  • How to make AI superhuman at forecasting the future.
  • Whether AI will be able to help us with intractable philosophical questions.
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
  • Why Carl doesn’t support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we’re closer to ‘crunch time.’
  • Opportunities for listeners to contribute to making the future go well.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →