How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills?
From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same.
You’ll hear from:
Michael Webb on using AI as a career advisor and the human skills AI can’t replace (from episode #161)
Holden Karnofsky on kicking ass in whatever you do, and which weird ideas are worth betting on (#109, #110, and #158)
Chris Olah on how intersections of particular skills can be a wildly valuable niche (#108)
Michelle Hutchinson on understanding what truly motivates you (#75)
Benjamin Todd on how to make tough career decisions and deal with rejection (#71 and 80k After Hours)
Jeff Sebo on what improv comedy teaches us about doing good in the world (#173)
Spencer Greenberg on recognising toxic people who could derail your career (#183)
Dean Spears on embracing randomness and serendipity (#186)
Karen Levy on finding yourself through travel (#124)
Leah Garcés on finding common ground with unlikely allies (#99)
Hannah Ritchie on being selective about whose advice you follow (#160)
Pardis Sabeti on prioritising physical health (#104)
Sarah Eustis-Guthrie on knowing when to pivot from your current path (#207)
Danny Hernandez on setting triggers for career decisions (#78)
Varsha Venugopal on embracing uncomfortable situations (#113)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.
Unfortunately there’s every reason to think artificial general intelligence (AGI) will reverse that trend.
In a new paper published today, Tom Davidson — senior research fellow at the Forethought Centre for AI Strategy — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.
When a country’s leaders no longer need citizens for economic production, or to serve in the military, there’s much less need to share power with them. “Over the broad span of history, democracy is more the exception than the rule,” Tom points out. “With AI, it will no longer be important to a country’s competitiveness to have an empowered and healthy citizenship.”
Citizens in established democracies are not typically that concerned about coups. We doubt anyone will try, and if they do, we expect human soldiers to refuse to join in. Unfortunately, the AI-controlled military systems of the future will lack those inhibitions. As Tom lays out, “Human armies today are very reluctant to fire on their civilians. If we get instruction-following AIs, then those military systems will just fire.”
Why would AI systems follow the instructions of a would-be tyrant? One answer is that, as militaries worldwide race to incorporate AI to remain competitive, they risk leaving the door open for exploitation by malicious actors in a few ways:
AI systems could be programmed to simply follow orders from the top of the chain of command, without any checks on that power — potentially handing total power indefinitely to any leader willing to abuse that authority.
Superior cyber capabilities could enable small groups to hack into and take full control of AI-operated military infrastructure.
It’s also possible that a company whose AI conferred a significant enough advantage over competitors could quickly develop armed forces sufficient to overthrow an incumbent regime. History suggests that as few as 10,000 obedient military drones could be sufficient to kill competitors, take control of key centres of power, and make your success a fait accompli.
Without active effort spent mitigating risks like these, it’s reasonable to fear that AI systems will destabilise the current equilibrium that enables the broad distribution of power we see in democratic nations.
In this episode, host Rob Wiblin and Tom discuss new research on the question of whether AI-enabled coups are likely, and what we can do about it if they are, as well as:
Whether preventing coups and preventing ‘rogue AI’ require opposite interventions, leaving us in a bind
Whether open sourcing AI weights could be helpful, rather than harmful, for advancing AI safely
Why risks of AI-enabled coups have been relatively neglected in AI safety discussions
How persuasive AGI will really be
How many years we have before these risks become acute
The minimum number of military robots needed to stage a coup
This episode was originally recorded on January 20, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
What happens when your desire to do good starts to undermine your own wellbeing?
Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.
You’ll hear from:
80,000 Hours’ former CEO on managing anxiety, self-doubt, and a chronic sense of falling short (from episode #100)
Randy Nesse on why we evolved to be anxious and depressed (episode #179)
Hannah Boettcher on how ‘optimisation framing’ can quietly distort our sense of self-worth (from our 80k After Hours feed)
Mental Health Navigator is a service that simplifies finding and accessing mental health information and resources all over the world — built specifically for the effective altruism community
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.
So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.
Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.
Buck asks us to picture a scenario where, in the relatively near future, AI companies are employing 100,000 AI systems running 16 times faster than humans to automate AI research itself. These systems would need dangerous permissions: the ability to run experiments, access model weights, and push code changes. As a result, a misaligned system could attempt to hack the data centre, exfiltrate weights, or sabotage research. In such a world, misalignment among these AIs could be very dangerous.
But in the absence of a method for reliably aligning frontier AIs, Buck argues for implementing practical safeguards to prevent catastrophic outcomes. His team has been developing and testing a range of straightforward, cheap techniques to detect and prevent risky behaviour by AIs — such as auditing AI actions with dumber but trusted models, replacing suspicious actions, and asking questions repeatedly to catch randomised attempts at deception.
Most importantly, these methods are designed to be cheap and shovel-ready. AI control focuses on harm reduction using practical techniques — techniques that don’t require new, fundamental breakthroughs before companies could reasonably implement them, and that don’t ask us to forgo the benefits of deploying AI.
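To make that concrete, here is a minimal, hypothetical sketch of the kind of “trusted monitoring” setup described above: a more capable but untrusted model proposes an action, a weaker but trusted model scores how suspicious it looks, and the most suspicious actions are audited by a human or replaced. The object and function names and the thresholds are illustrative assumptions, not Redwood Research’s actual tooling.

```python
# Illustrative sketch of trusted monitoring for AI control (all names hypothetical).
# An untrusted, capable model proposes actions; a trusted, weaker model scores them;
# suspicious actions are escalated to a human auditor or replaced with a safer default.

AUDIT_THRESHOLD = 0.98    # send only the most suspicious ~2% of actions to humans
REPLACE_THRESHOLD = 0.80  # above this, fall back to the trusted model's own action

def run_with_control(task, untrusted_model, trusted_model, human_auditor):
    proposed = untrusted_model.propose_action(task)
    suspicion = trusted_model.score_suspicion(task, proposed)  # score in [0, 1]

    if suspicion >= AUDIT_THRESHOLD:
        # Rare and expensive: a human reviews the action before it runs.
        return human_auditor.review(task, proposed)
    if suspicion >= REPLACE_THRESHOLD:
        # Cheap fallback: accept a likely-worse action from a model we trust.
        return trusted_model.propose_action(task)
    return proposed  # looks benign, so execute the untrusted model's action
```

The appeal, as the episode discusses, is that nothing in this kind of pipeline requires a fundamental research breakthrough: it is ordinary engineering layered on top of models that already exist.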
As Buck puts it:
Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem.
Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.
Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:
Why he’s more worried about AI hacking its own data centre than escaping
What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
Why he might want to use a model he thought could be conspiring against him
Why he would feel safer if he caught an AI attempting to escape
Why many control techniques would be relatively inexpensive
How to use an untrusted model to monitor another untrusted model
What the minimum viable intervention in a “lazy” AI company might look like
How even small teams of safety-focused staff within AI labs could matter
The moral considerations around controlling potentially conscious AI systems, and whether it’s justified
This episode was originally recorded on February 21, 2025.
Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore
What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI actually make computer security better rather than worse?
With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from a not-yet-released episode with Tom Davidson, in which he explains why we should be more worried about “secret loyalties” in AI agents.
You’ll hear:
Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)
Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)
Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)
Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)
Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)
Lennart Heim on Rob’s computer security nightmares (episode #155)
Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)
Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)
Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)
Nita Farahany on the dystopian risks of hacked neurotech (episode #174)
Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)
Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)
Allan Dafoe on backdooring your own AI to prevent theft (episode #212)
Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)
Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.
That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.
The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.
Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he’s descended from monkeys and his god doesn’t exist.
What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.
Consider the case of nuclear weapons: in this compressed timeline, there would have been just a three-month gap between the Manhattan Project’s start and the Hiroshima bombing, and the Cuban Missile Crisis would have lasted just over a day.
Robert Kennedy, Sr., who helped navigate the actual Cuban Missile Crisis, once remarked that if they’d had to make decisions on a much more accelerated timeline — like 24 hours rather than 13 days — they would likely have taken much more aggressive, much riskier actions.
So there’s reason to worry about our own capacity to make wise choices. And in “Preparing for the intelligence explosion,” Will lays out 10 “grand challenges” we’ll need to quickly navigate to successfully avoid things going wrong during this period.
Will’s thinking has evolved a lot since his last appearance on the show. While he was previously sceptical about whether we live at a “hinge of history,” he now believes we’re entering one of the most critical periods for humanity ever — with decisions made in the next few years potentially determining outcomes millions of years into the future.
But Will also sees reasons for optimism. The very AI systems causing this acceleration could be deployed to help us navigate it — if we use them wisely. And while AI safety researchers rightly focus on preventing AI systems from going rogue, Will argues we should equally attend to ensuring the futures we deliberately build are truly worth living in.
In this wide-ranging conversation with host Rob Wiblin, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:
Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
The three different types of intelligence explosion, and the order in which they occur
Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
Ways AI could radically improve human coordination and decision making
Why we should aim for truly flourishing futures, not just avoiding extinction
This episode was originally recorded on February 7, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case can be heard before the conversion is likely to be completed. (See Rob’s brief summary of developments in the case.)
And if Musk’s donations to OpenAI prove sufficient to give him standing to bring the case, Rogers sounded very sympathetic to his objection to the OpenAI foundation selling the company in a way that benefits the founders, who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
This episode was originally recorded on March 6, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.
As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
As of mid-2024 they didn’t seem dangerous at all, but we’ve since learned that our ability to measure these capabilities is good but imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.
Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
Host Rob and Allan also cover:
The most exciting beneficial applications of AI
Whether and how we can influence the development of technology
What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
Why cooperative AI may be as important as aligned AI
The role of democratic input in AI governance
What kinds of experts are most needed in AI safety and governance
And much more
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore
On Monday, February 10, Elon Musk made the OpenAI nonprofit foundation an offer it wants to refuse, but may have trouble refusing: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission.
For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight.
As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice.
The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal.
OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs.
The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal.
But as Rose and Rob lay out, it’s not clear any of those things is actually true.
In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:
Why OpenAI wants to change its charitable purpose and whether that’s legally permissible
On what basis the attorneys general will decide OpenAI’s fate
The challenges in valuing the nonprofit’s “priceless” position of control
Whether Musk’s offer will force OpenAI to up their own bid, and whether they could raise the money
If other tech giants might now jump in with competing offers
How politics could influence the attorneys general reviewing the deal
What Rose thinks should actually happen to protect the public interest
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?
With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.
You can decide whether the views we expressed (and those from guests) then have held up these last two busy years. You’ll hear:
It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:
How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
How much of the gender pay gap is due to direct pay discrimination vs other factors
How cleaner wrasse fish blow the mirror test out of the water
Why effective altruism may be too big a tent to work well
How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with
Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.
It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.
Enjoy, and look forward to speaking with you in 2025!
Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore
Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.
The result of this ‘vetocracy’ has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s.
Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.
So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home.
But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built.
Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them.
Currently, if you don’t want apartments going up on your street, your only option is to try to veto the project or impose enough delays that it’s not worth doing. But there’s a better way: if a project costs one person $1 and benefits another person $100, why can’t they share the benefits to win over the ‘losers’? Sam thinks experience around the world in cities like Tel Aviv, Houston, and London shows they can.
Fortunately our construction crisis is so bad there’s a lot of surplus to play with. Sam notes that if you’re able to get permission to build on a piece of farmland in southeast England, that property increases in value 180-fold: “You’re almost literally printing money to get permission to build houses.” So if we can identify the people who are actually harmed by a project and compensate them a sensible amount, we can turn them from opponents into active supporters who will fight to prevent it from getting blocked.
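Spelled out, the arithmetic behind this compensation idea is simple. Here is a toy version using the illustrative $1 and $100 figures from above; the symbols c, b, and t are just labels introduced for this sketch, and the numbers for any real project would differ:

```latex
% Toy version of the compensation arithmetic above (numbers purely illustrative):
% a project imposes cost c on a neighbour and creates benefit b for the builder.
c = \$1, \qquad b = \$100, \qquad \text{total surplus} = b - c = \$99
% Any side payment t with c < t < b leaves both sides strictly better off:
t - c > 0 \ \text{(neighbour's net gain)}, \qquad b - t > 0 \ \text{(builder's net gain)}
```

The same logic scales: when planning permission multiplies a plot’s value 180-fold, there is far more than enough surplus to make genuinely affected neighbours whole and still leave the project enormously profitable.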
Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success:
Spending the additional property tax produced by a new development in the local area, rather than transferring it to a regional or national pot — and even charging new arrivals higher rates for some period of time
Allowing individual streets to vote to permit medium-density townhouses (‘street votes’), or apartment blocks to vote to be replaced by taller apartments
Upzoning a whole city while allowing individual streets to vote to opt out
In this interview, host Rob Wiblin and Sam discuss the above as well as:
How this approach could backfire
How to separate truly harmed parties from ‘slacktivists’ who just want to complain on Instagram
The empirical results where these approaches have been tried
The prospects for any of this happening on a mass scale
How the UK ended up with the worst planning problems in the world
Why avant garde architects might be causing enormous harm
Why we should start up new good institutions alongside existing bad ones and let them run in parallel
Why northern countries can’t rely on solar or wind and need nuclear to avoid high energy prices
Why Ozempic is highly rated but still highly underrated
How the field of ‘progress studies’ has maintained high intellectual standards
And plenty more
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare.
They cover:
How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider.
How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically.
The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings.
Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives.
Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this.
The many career paths in science, policy, and technology that could contribute to improving wild animal welfare.
And much more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
One OpenAI critic describes it as “the theft of at least the millennium and quite possibly all of human history.” Are they right?
Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.
Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?
That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.
As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:
Can fire the CEO.
Would receive all the profits after the point OpenAI makes 100x returns on investment.
Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”
But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).
Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.
So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.
Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?
OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.
To top it off, the OpenAI business has an investment bank estimating how much compensation it thinks it should pay the nonprofit — while the nonprofit, to our knowledge, isn’t getting its own independent valuation.
But as Rose lays out, this nonprofit-to-for-profit switch is not without precedent, and creating a new $40 billion grantmaking foundation could be its best available path.
In terms of pursuing its charitable purpose, true control of the for-profit might indeed be “priceless” and not something that it could be compensated for. But after failing to remove Sam Altman last November, the nonprofit has arguably lost practical control of its for-profit child, and negotiating for as many resources as possible — then making a lot of grants to further AI safety — could be its best fall-back option to pursue its mission of benefiting humanity.
And with the California and Delaware attorneys general saying they want to be convinced the transaction is fair and the nonprofit isn’t being ripped off, the board might just get the backup it needs to effectively stand up for itself.
In today’s energetic conversation, Rose and host Rob Wiblin discuss:
Why it’s essential the nonprofit gets cash and not just equity in any settlement.
How the nonprofit board can best play its cards.
How any of this can be regarded as an “arm’s-length transaction” as required by law.
Whether it’s truly in the nonprofit’s interest to sell control of OpenAI.
How to value the nonprofit’s control of OpenAI and its share of profits.
Who could challenge the outcome in court.
Cases where this has happened before.
The weird rule that lets the board cut off Microsoft’s access to OpenAI’s IP.
And plenty more.
Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore
In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.
They cover:
How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.
The existing empirical evidence for the impact of storytelling.
Their competing takes on the merits of thinking carefully about target audiences.
Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.
Whether humans will stay relevant as creative writers with the rise of powerful AI models.
Whether you can do more good with an overtly educational show vs other approaches.
Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.
The pros and cons of animation as a medium.
Career advice for creative writers.
Keiran’s idea for a longtermist Christmas movie.
And plenty more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.
They cover:
The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.
Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.
All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.
Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.
Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.
The misaligned incentives that discourage charities from shutting down ineffective programmes.
The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.
The pros and cons of exploring and pivoting in careers.
With kids very much on the team’s mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.
After hearing 8 former guests’ insights, Luisa and Rob chat about:
Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
What have been the biggest surprises for Rob in becoming a parent.
Whether the benefits of parenthood can actually be studied, and if we get skewed impressions of how bad parenting is.
How Rob’s dealt with work and parenting tradeoffs, and his advice for other would-be parents.
Ezra Klein on parenting yourself as well as your children (from episode #157)
Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids’ lives (#178)
Russ Roberts on empirical research when deciding whether to have kids (#87)
Spencer Greenberg on his surveys of parents (#183)
Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.
They cover:
The insane capabilities of fish in tests of memory, learning, and problem-solving.
Examples of fish that can beat primates on cognitive tests and recognise individual human faces.
Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.
Whether fish can experience emotions, and how this is even studied.
The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.
How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.
Ethical issues raised by evidence that fish may be conscious and experience suffering.
And plenty more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Nate Silver’s book On the Edge explores a cultural grouping he dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.
But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.
Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.
Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.
Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.
But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:
How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
Is anyone else competitive with EA in terms of impact per dollar?
Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
Is EA too big a tent to function?
What global problems could EA be ignoring?
Should EA be more willing to court controversy?
Does EA’s niceness leave it vulnerable to exploitation?
What moral philosophy would he have modelled EA on?
Rob and Nate also talk about:
Nate’s theory of Sam Bankman-Fried’s psychology.
Whether we had to “raise or fold” on COVID.
Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
“Winners’ tilt.”
Whether it’s selfish to slow down AI progress.
The ridiculous 13 Keys to the White House.
Whether prediction markets are now overrated.
Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore