#195 – Sella Nevo on who’s trying to steal frontier AI models, and what they could do with them

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can help mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

What sorts of things is he talking about? It’s easiest to see in the area of disease prevention: disinfecting indoor air, rapid-turnaround vaccine platforms, and nasal spray vaccines that prevent disease transmission all make us safer against pandemics without generating any apparent new threats of their own. (And they might eliminate the common cold to boot!)

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply here. You don’t need a business idea yet — just the hustle to start a technology company. But you’ll need to act fast and apply by August 2, 2024.

Vitalik explains how he mentally breaks down defensive technologies into four broad categories:

  • Defence against big physical things like tanks.
  • Defence against small physical things like diseases.
  • Defence against unambiguously hostile information like fraud.
  • Defence against ambiguously hostile information like possible misinformation.

The philosophy of defensive acceleration has a strong basis in history. Mountain or island countries that are hard to invade, like Switzerland or Britain, tend to have more individual freedom and higher quality of life than societies of the Mongolian steppes — where “your entire mindset is around kill or be killed, conquer or be conquered”: a mindset Vitalik calls “the breeding ground for dystopian governance.”

Defensive acceleration arguably goes back to ancient China, where the Mohists focused on helping cities build better walls and fortifications, an approach that really did reduce the toll of violent invasion, until progress in offensive technologies of siege warfare allowed them to be overcome.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

In today’s episode, host Luisa Rodriguez speaks with Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

They cover:

  • Whether the US and China are in an AI race, and the global implications if they are.
  • The state of the art of AI in China.
  • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
  • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
  • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
  • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

They cover:

  • The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
  • What happens during the brief window in which the US president would have to decide on nuclear retaliation after hearing news of a possible nuclear attack.
  • The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
  • The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
  • How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#191 – Carl Shulman on government and society after AGI (Part 2)

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement” — without any “robust, well-founded scientific consensus on how to answer them.” But if AI ‘evals’ and interpretability advance to the point that it’s possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or ‘best-guess’ answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.

That’s because when it’s hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it’s an unambiguous violation of honesty norms — but so long as there’s no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.

To start, advance investment in preventing, detecting, and containing pandemics would likely have been at a much higher and more sensible level, because it would have been straightforward to confirm which efforts passed a cost-benefit test for government spending. Politicians refusing to fund such efforts when the wisdom of doing so is an agreed and established fact would seem like malpractice.

Low-level Chinese officials in Wuhan would have been seeking advice from AI advisors instructed to recommend actions that are in the interests of the Chinese government as a whole. As soon as unexplained illnesses started appearing, that advice would be to escalate and quarantine to prevent a possible new pandemic escaping control, rather than stick their heads in the sand as happened in reality. Having been told by AI advisors of the need to warn national leaders, ignoring the problem would be a career-ending move.

From there, these AI advisors could have recommended stopping travel out of Wuhan in November or December 2019, perhaps fully containing the virus, as was achieved with SARS-1 in 2003. Had the virus nevertheless gone global, President Trump would have been getting excellent advice on what would most likely ensure his reelection. Among other things, that would have meant funding Operation Warp Speed far more than it in fact was, as well as accelerating the vaccine approval process, and building extra manufacturing capacity earlier. Vaccines might have reached everyone far faster.

These are just a handful of simple changes from the real course of events we can imagine — in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest here.

In the past we’ve usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts those technologies will create — and the same is likely true of AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values.
  • The difficulty of preventing coups once AI is key to the military and police.
  • What international treaties we need to make this go well.
  • How to make AI superhuman at forecasting the future.
  • Whether AI will be able to help us with intractable philosophical questions.
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
  • Why Carl doesn’t support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we’re closer to ‘crunch time.’
  • Opportunities for listeners to contribute to making the future go well.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#191 – Carl Shulman on the economy and national security after AGI (Part 1)

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
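To sanity-check that figure, here’s a quick back-of-the-envelope calculation, assuming a retail electricity price of roughly $0.12 per kilowatt-hour (the price is our assumption, not a figure from the episode):

\[
20\,\text{W} \times 1\,\text{h} = 0.02\,\text{kWh}, \qquad 0.02\,\text{kWh} \times \$0.12/\text{kWh} \approx \$0.0024 \approx 0.24 \text{ cents per hour.}
\]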

Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It’s a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.

It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is now driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some can do so in days), the whole machine economy doubling every few months could be a conservative estimate.

It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
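To get a feel for how quickly such gaps could open up, here’s a rough illustration that assumes the machine economy doubles every three months (a doubling time chosen purely for the arithmetic; the episode only suggests “every few months” could be conservative):

\[
\text{growth after } t \text{ years} = 2^{4t}: \quad 2^{4} = 16\times \text{ in one year}, \quad 2^{8} = 256\times \text{ in two}, \quad 2^{12} \approx 4{,}000\times \text{ in three.}
\]

On those assumptions, a country that sits out would face rivals roughly 1,000 times its economic size within about two and a half years.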

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.

And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — in this case, the toughest to engineer your way out of is the Earth’s ability to release waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth can radiate into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.

This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?

In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:

  • If we’re heading towards the above, how come economic growth remains slow now and isn’t accelerating?
  • Why have computers and computer chips had so little effect on economic productivity so far?
  • Are self-replicating biological systems a good comparison for self-replicating machine systems?
  • Isn’t this just too crazy and weird to be plausible?
  • What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
  • Might there not be severely declining returns to bigger brains and more training?
  • Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
  • If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#190 – Eric Schwitzgebel on whether the US is conscious

In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

They cover:

  • Why our intuitions seem so unreliable for answering fundamental questions about reality.
  • What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
  • Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
  • Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
  • How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
  • The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
  • Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#189 – Rachel Glennerster on why we still don’t have vaccines that could save millions

In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

They cover:

  • How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
  • How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like the AMC that sped up the development of three vaccines, saving around 700,000 lives in low-income countries.
  • The challenges of making pull mechanisms work in practice, from initial design through to implementation.
  • Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
  • The massive benefits of accelerating vaccine development, in some cases, even if it’s only by a few days or weeks.
  • The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
  • The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
  • “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
  • Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#188 – Matt Clancy on whether science is good

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

They cover:

  • Whether scientific progress is actually net positive for humanity.
  • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
  • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
  • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
  • Why Matt is sceptical that AGI could really cause explosive economic growth.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#187 – Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”

In today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

They cover:

  • Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.
  • What Zach thinks are the best and worst arguments for settling space.
  • Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist).
  • How little we know about how microgravity and radiation affect even adults, much less the children potentially born in a space settlement.
  • A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.
  • Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.
  • How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing.
  • The current state of space law and how it might set us up for international conflict.
  • How space cannibalism legal loopholes might work on the International Space Station.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives

In today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India to save the lives of vulnerable newborn infants.

They cover:

  • The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.
  • The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.
  • The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.
  • How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond.
  • How targeted health interventions stack up against direct cash transfers.
  • Plus, a sneak peek into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.

They cover:

  • The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
  • Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
  • The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
  • The occasional tension between ending factory farming and curbing climate change.
  • How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.
  • How Lewis has updated his opinions and grantmaking as a result of new research on the “moral weights” of different species.
  • Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
  • How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is.

As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. So in today’s episode, host Rob Wiblin asks Zvi for his takes on:

  • US-China negotiations
  • Whether AI progress has stalled
  • The biggest wins and losses for alignment in 2023
  • EU and White House AI regulations
  • Which major AI lab has the best safety strategy
  • The pros and cons of the Pause AI movement
  • Recent breakthroughs in capabilities
  • In what situations it’s morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:

  • The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
  • The “sleeper agent” issue uncovered in a recent Anthropic paper, and what it reveals about how hard alignment actually is.
  • Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
  • Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
  • Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
  • An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

They cover:

  • How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
  • The importance of hype in making valuable things happen.
  • How to recognise warning signs that someone is untrustworthy or likely to hurt you.
  • Whether Registered Reports are successfully solving reproducibility issues in science.
  • The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
  • The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
  • The potential harms of lightgassing, which is the opposite of gaslighting.
  • How Spencer’s team used non-statistical methods to test whether astrology works.
  • Whether there’s any social value in retaliation.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.

They cover:

  • The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
  • Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
  • The results that most surprised Bob.
  • Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
  • Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
  • Confronting our own biases when estimating animal mental capacities and moral worth.
  • The limitations of using neuron counts as a proxy for moral weights.
  • How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

In today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing.

They cover:

  • How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.
  • Why we irrationally accept age-related health decline as inevitable.
  • The engineering mindset Laura takes to solving the problem of ageing.
  • Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one.
  • The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.
  • Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age.
  • Why this decade may be the most important decade ever for making progress on anti-ageing research.
  • The beauty and fascination of biology, which makes it such a compelling field to work in.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#180 – Hugo Mercier on why gullibility and misinformation are overrated

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — placing it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what’s really going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

In this interview, host Rob Wiblin and Hugo discuss:

  • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
  • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
  • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
  • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
  • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
  • Why fake news and conspiracy theories actually have less impact than most people assume.
  • False beliefs that have persisted across cultures and generations — like belief in the efficacy of bloodletting and fears about vaccines — and theories about why they endure.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

  • How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
  • How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
  • The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
  • How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
  • The “smoke detector principle” of why we experience so many false alarms along with true threats.
  • The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
  • Evolutionary theories on why we age and die.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#178 – Emily Oster on what the evidence actually says about pregnancy and parenting

In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.

They cover:

  • Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.
  • Whether it’s fine to continue with antidepressants and coffee during pregnancy.
  • What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
  • Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.
  • How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.
  • The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.
  • Practical advice around managing the tradeoffs between career and family.
  • What to consider when deciding whether and when to have kids.
  • Relationship challenges after having kids, and the protective factors that help.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

Back in December, we released an episode where Rob Wiblin interviewed Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution podcast — on his takes on the pace of development of AGI and the OpenAI leadership drama, based on his experience red teaming an early version of GPT-4 and the conversations with OpenAI staff and board members that followed.

In today’s episode, their conversation continues, with Nathan diving deeper into:

  • What AI now actually can and can’t do — across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
  • Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
  • How we need to learn to talk about AI more productively — particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
  • Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’
  • The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
  • How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
  • Preparing for coming societal impacts and potential disruption from AI.
  • Practical ways that curious listeners can try to stay abreast of everything that’s going on.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →