#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 they didn’t seem dangerous at all. But we’ve since learned that our ability to measure these capabilities, while good, is imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore

Continue reading →

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

On Monday, February 10, Elon Musk made the OpenAI nonprofit foundation an offer it wants to refuse, but might have trouble refusing: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission.

For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight.

As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice.

The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal.

OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs.

The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal.

But as Rose and Rob lay out, it’s not clear any of those things is actually true.

In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:

  • Why OpenAI wants to change its charitable purpose and whether that’s legally permissible
  • On what basis the attorneys general will decide OpenAI’s fate
  • The challenges in valuing the nonprofit’s “priceless” position of control
  • Whether Musk’s offer will force OpenAI to up its own bid, and whether it could raise the money
  • If other tech giants might now jump in with competing offers
  • How politics could influence the attorneys general reviewing the deal
  • What Rose thinks should actually happen to protect the public interest

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

Bonus: AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

You can decide whether the views we and our guests expressed then have held up over these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company’s internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

Continue reading →

Preventing catastrophic pandemics

Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

Continue reading →

How quickly could robots scale up?

This post was written by Benjamin Todd in his personal capacity and originally posted on benjamintodd.substack.com.

Today robots barely have the dexterity of a toddler, but are rapidly improving.

If their algorithms and hardware advance enough to handle many physical human jobs, how quickly could they become a major part of the workforce?

Here are some order-of-magnitude estimates showing it could happen pretty fast.

Robot cost of production

Today’s humanoid robots cost about $100,000, with perhaps 10,000 units produced annually. But manufacturing costs tend to plummet with scale:

For solar energy, every doubling of production was associated with a 20% decline in costs. In other industries, we see estimates ranging from 5-40%, so 20% seems a reasonable middle point.

[Chart: solar power cost over time]

That means a 1,000x increase in production (10 doublings) should decrease costs 10x, to $10,000/unit. That’s around the cost of manufacturing a car.

However, humanoid robots only use about 10% of the materials of a car, so it’s plausible they could eventually become another 10x cheaper, or $1,000 each.

Though it’s also possible the components needed for fine motor control will remain far more difficult to manufacture. Let’s add 2x to account for that.
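
To make that arithmetic concrete, here’s a minimal sketch of the learning-curve calculation (my own illustration of Wright’s law under the post’s stated assumptions; the `unit_cost` function and its 20% default learning rate are illustrative, not code from the original post):

```python
import math

def unit_cost(initial_cost: float, scale_up: float, learning_rate: float = 0.20) -> float:
    """Wright's law: each doubling of cumulative production cuts unit cost by `learning_rate`."""
    doublings = math.log2(scale_up)
    return initial_cost * (1 - learning_rate) ** doublings

# 1,000x more production is ~10 doublings, so roughly a 10x cost decline:
cost_at_scale = unit_cost(100_000, 1_000)   # ~$10,800 per robot
# A further 10x from using ~10% of a car's materials, times a 2x penalty
# for fine motor components that may stay hard to manufacture:
long_run_cost = cost_at_scale / 10 * 2      # ~$2,200 per robot
print(round(cost_at_scale), round(long_run_cost))
```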

Robot operating costs

If a robot costs $10,000 and lasts for three years working 24/7, the hardware costs $0.40 per hour.
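
A quick sanity check of that figure, using the same assumptions as the text:

```python
robot_cost = 10_000                 # dollars of hardware per robot
lifetime_hours = 3 * 365 * 24       # three years of 24/7 operation = 26,280 hours
print(robot_cost / lifetime_hours)  # ~0.38, i.e. roughly $0.40/hour
```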

At $2000 each,

Continue reading →

    It looks like there are some good funding opportunities in AI safety right now

    This post was written by Benjamin Todd in his personal capacity and originally posted on benjamintodd.substack.com.

    The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.

    However, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed as a recommender in the most recent Survival and Flourishing Fund grant round.

    Most philanthropic (vs. government or industry) AI safety funding (50%) comes from one source: Good Ventures, via Open Philanthropy. But they’ve recently stopped funding several categories of work (my own categories, not theirs):

    In addition, they are currently not funding (or not fully funding):

    • Many non-US think tanks, which don’t want to appear influenced by an American organisation (there are now probably more than 20 of these)
    • They do fund technical safety non-profits like FAR AI, though they’re probably underfunding it, in part due to difficulty hiring in this area over the last few years (though they’ve hired recently)
    • Political campaigns,

    Continue reading →

    What happened with AI in 2024?

    The idea this week: despite claims of stagnation, AI research still advanced rapidly in 2024.

    Some people say AI research has plateaued. But a lot of evidence from the last year points in the opposite direction:

    • New capabilities emerged
    • Research indicates existing AI can accelerate science

    And at the same time, important findings about AI safety and risk came out (see below).

    AI advances might still stall. Some leaders in the field have warned that a lack of good data, for example, may impede further capability growth, though others disagree. Regardless, growth clearly hasn’t stopped yet.

    Meanwhile, the aggregate forecast on Metaculus of when we’ll see the first “general” AI system — which would be highly capable across a wide range of tasks — is 2031.

    All of this matters a lot, because AI poses potentially existential risks. We think making sure AI goes well is a top pressing world problem.

    If AI advances fast, this work is not only important but urgent.

    Here are some of the key developments in AI from the last year:

    New AI models and capabilities

    OpenAI announced in late December that its new model o3 achieved a large leap forward in capabilities. It builds on the o1 language model (also released in 2024),

    Continue reading →

      Bonus episode: 2024 highlightapalooza!

      It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

      • How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
      • Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
      • Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
      • How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
      • Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
      • How much of the gender pay gap is due to direct pay discrimination vs other factors
      • How cleaner wrasse fish blow the mirror test out of the water
      • Why effective altruism may be too big a tent to work well
      • How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

      …as well as 27 other top observations and arguments from the past year of the show.

      Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

      It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

      Enjoy, and look forward to speaking with you in 2025!

      Producing and editing: Keiran Harris
      Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Video editing: Simon Monsour
      Transcriptions: Katy Moore

      Continue reading →

      2024 in review: some of our top pieces from this year

      This week, we’re looking back at some of our top content from the year!

      Here are some of our favourite and most important articles, posts, and podcast episodes we published in 2024:

      Articles

      Factory farming — There’s a clear candidate for the biggest moral mistake that humanity is currently making: factory farming. We raise and slaughter 1.6-4.5 trillion animals a year on factory farms, causing tremendous amounts of suffering.

      The moral status of digital minds — Understanding whether AI systems might suffer, be sentient, or otherwise matter morally is potentially one of the most pressing problems in the world.

      Should you work at a frontier AI company? — Working at a frontier AI company is plausibly some people’s highest-impact option, but some roles could be extremely harmful. So it’s critical to be discerning when considering this option — and particularly open to changing course. We’ve previously written about this topic, but explored it in more depth this year while taking account of recent developments, such as prominent departures at OpenAI.

      Risks of stable totalitarianism — Some of the worst atrocities have been committed by totalitarian rulers. In the future, the threat posed by these regimes could be even greater.

      Nuclear weapons safety and security — Nuclear weapons continue to pose an existential threat to humanity, but there are some promising pathways to reducing the risk.

      Other posts

      AI for epistemics — Our president and founder,

      Continue reading →

        #211 – Sam Bowman on why housing still isn't fixed and what would actually work

        Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.

        The result of this ‘vetocracy’ has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s.

        Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.

        So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home.

        But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built.

        Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them.

        Currently, if you don’t want apartments going up on your street, your only option is to try to veto it or impose enough delays that the project’s not worth doing. But there’s a better way: if a project costs one person $1 and benefits another person $100, why can’t they share the benefits to win over the ‘losers’? Sam thinks experience around the world in cities like Tel Aviv, Houston, and London shows they can.

        Fortunately our construction crisis is so bad there’s a lot of surplus to play with. Sam notes that if you’re able to get permission to build on a piece of farmland in southeast England, that property increases in value 180-fold: “You’re almost literally printing money to get permission to build houses.” So if we can identify the people who are actually harmed by a project and compensate them a sensible amount, we can turn them from opponents into active supporters who will fight to prevent it from getting blocked.

        Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success:

        1. Spending the additional property tax produced by a new development in the local area, rather than transferring it to a regional or national pot — and even charging new arrivals higher rates for some period of time
        2. Allowing individual streets to vote to permit medium-density townhouses (‘street votes’), or apartment blocks to vote to be replaced by taller apartments
        3. Upzoning a whole city while allowing individual streets to vote to opt out

        In this interview, host Rob Wiblin and Sam discuss the above as well as:

        • How this approach could backfire
        • How to separate truly harmed parties from ‘slacktivists’ who just want to complain on Instagram
        • The empirical results where these approaches have been tried
        • The prospects for any of this happening on a mass scale
        • How the UK ended up with the worst planning problems in the world
        • Why avant-garde architects might be causing enormous harm
        • Why we should start up new good institutions alongside existing bad ones and let them run in parallel
        • Why northern countries can’t rely on solar or wind and need nuclear to avoid high energy prices
        • Why Ozempic is highly rated but still highly underrated
        • How the field of ‘progress studies’ has maintained high intellectual standards
        • And plenty more

        Video editing: Simon Monsour
        Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
        Transcriptions: Katy Moore

        Continue reading →

        Two key tips for giving season

        It’s giving season! Cue excitement…or dread?

        If you’re anything like me, December is a busy time. You’re wrapping up projects, reviewing the past year’s work, planning for holidays, and buying gifts. (Not to mention, drafting a newsletter!)

        So “giving season” — the time of year when most charitable donations are made — may just feel like one more thing to do. But donating is one of the most important decisions you can make.

        Consider that:

        Of course, the high stakes of giving can make it feel even worse to rush it, and even more daunting.

        I’ve definitely struggled to live up to my ideals when faced with this. I can confirm that New Year’s Eve is not when you want to deal with details like finding a quick online payment method. And I used to be the executive director of Giving What We Can and a grantmaker for the EA Infrastructure Fund — so if you also struggle with this, you’re not alone!

        My bottom line advice is: find ways to make fewer decisions. They’re stressful and time-consuming.

        Below are two key tips that work for me and make my giving season slightly less hectic.

        Continue reading →

          #210 – Cameron Meyer Shorb on dismantling the myth that we can't do anything to help wild animals

          In today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare.

          They cover:

          • How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider.
          • How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically.
          • The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings.
          • Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives.
          • Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this.
          • The many career paths in science, policy, and technology that could contribute to improving wild animal welfare.
          • And much more.

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
          Transcriptions: Katy Moore

          Continue reading →

          #209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

          One OpenAI critic describes it as “the theft of at least the millennium and quite possibly all of human history.” Are they right?

          Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

          Facing off against it stand eight outgunned and outnumbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

          That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

          As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

          • Can fire the CEO.
          • Would receive all the profits after the point OpenAI makes 100x returns on investment.
          • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

          But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

          Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

          So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

          Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

          OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

          To top it off, the OpenAI business has an investment bank estimating how much compensation it thinks it should pay the nonprofit — while the nonprofit, to our knowledge, isn’t getting its own independent valuation.

          But as Rose lays out, this nonprofit-to-for-profit switch is not without precedent, and creating a new $40 billion grantmaking foundation could be its best available path.

          In terms of pursuing its charitable purpose, true control of the for-profit might indeed be “priceless” and not something that it could be compensated for. But after failing to remove Sam Altman last November, the nonprofit has arguably lost practical control of its for-profit child, and negotiating for as many resources as possible — then making a lot of grants to further AI safety — could be its best fall-back option to pursue its mission of benefiting humanity.

          And with the California and Delaware attorneys general saying they want to be convinced the transaction is fair and the nonprofit isn’t being ripped off, the board might just get the backup it needs to effectively stand up for itself.

          In today’s energetic conversation, Rose and host Rob Wiblin discuss:

          • Why it’s essential the nonprofit gets cash and not just equity in any settlement.
          • How the nonprofit board can best play its cards.
          • How any of this can be regarded as an “arm’s-length transaction” as required by law.
          • Whether it’s truly in the nonprofit’s interest to sell control of OpenAI.
          • How to value the nonprofit’s control of OpenAI and its share of profits.
          • Who could challenge the outcome in court.
          • Cases where this has happened before.
          • The weird rule that lets the board cut off Microsoft’s access to OpenAI’s IP.
          • And plenty more.

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Video editing: Simon Monsour
          Transcriptions: Katy Moore

          Continue reading →

          #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world

          In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.

          They cover:

          • How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.
          • The existing empirical evidence for the impact of storytelling.
          • Their competing takes on the merits of thinking carefully about target audiences.
          • Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.
          • Whether humans will stay relevant as creative writers with the rise of powerful AI models.
          • Whether you can do more good with an overtly educational show vs other approaches.
          • Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.
          • The pros and cons of animation as a medium.
          • Career advice for creative writers.
          • Keiran’s idea for a longtermist Christmas movie.
          • And plenty more.

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
          Transcriptions: Katy Moore

          Continue reading →

          #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead

          In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.

          They cover:

          • The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.
          • Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.
          • All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.
          • Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.
          • Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.
          • The misaligned incentives that discourage charities from shutting down ineffective programmes.
          • The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.
          • The pros and cons of exploring and pivoting in careers.
          • What it’s like to participate in the Charity Entrepreneurship Incubation Program, and how listeners can assess if they might be a good fit.
          • And plenty more.

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
          Transcriptions: Katy Moore

          Continue reading →

          Bonus episode: Parenting insights from Rob and 8 past guests

          With kids very much on the team’s mind we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

          After hearing 8 former guests’ insights, Luisa and Rob chat about:

          • Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
          • What have been the biggest surprises for Rob in becoming a parent.
          • Whether the benefits of parenthood can actually be studied, and if we get skewed impressions of how bad parenting is.
          • How Rob’s dealt with work and parenting tradeoffs, and his advice for other would-be parents.
          • Rob’s list of recommended purchases for new or upcoming parents.

          This bonus episode includes excerpts from:

          • Ezra Klein on parenting yourself as well as your children (from episode #157)
          • Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
          • Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids’ lives (#178)
          • Russ Roberts on empirical research when deciding whether to have kids (#87)
          • Spencer Greenberg on his surveys of parents (#183)
          • Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
          • Bryan Caplan on homeschooling (#172)
          • Nita Farahany on thinking about life and the world differently with kids (#174)

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
          Transcriptions: Katy Moore

          Continue reading →

          Why we get burned out — and what helps

          The idea this week: there are no magical fixes for career burnout, but there are concrete steps that can help.

          When I was in the last years of my PhD studying representations of science and technology in fiction, I started feeling tired every day. I was checked out from my research, and I had a nagging sense that I wasn’t as good at it as I used to be.

          I now realise I was experiencing burnout — and these feelings are quite common.

          The World Health Organisation will tell you that burnout is an occupational syndrome that results from chronic workplace stress. It’s characterised by energy depletion, increased negativity and cynicism about your job, and reduced efficacy.

          In my case, I was struggling with a mismatch between my work and what I thought really mattered. I was doing a PhD to become a literature professor, but my research seemed fundamentally disconnected from what I cared about: helping with pressing world problems.

          Once something feels pointless, it’s very difficult to muster the motivation to get it done.

          The silver lining is that now I can use this experience to help others in my role as a career advisor. And here is one piece of advice I often give: if you can, try to find work that aligns with what you think matters. In order to do this, it’s important to first reflect on what problems you care about and how to best tackle them.

          Continue reading →

            #206 – Anil Seth on the predictive brain and how to study consciousness

            In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.

            They cover:

            • What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
            • Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
            • Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
            • Whether our theories of human consciousness can be applied to nonhuman animals.
            • Anil’s thoughts on whether machines could ever be conscious.
            • Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
            • And much more.

            Producer: Keiran Harris
            Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
            Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
            Transcriptions: Katy Moore

            Continue reading →

            What are experts in biosecurity worried about?

            The idea this week: biosecurity experts disagree on many of the field’s most important questions.

            We spoke to more than a dozen biosecurity experts to understand the space better. We let them give their answers anonymously so that they could feel comfortable speaking their minds.

            We don’t agree with everything the experts told us — they don’t even agree with one another! But we think it can be really useful for people who want to learn about or enter this field to understand the ongoing debates and disagreements.

            We already published the first article on their answers about misconceptions in biosecurity, and we’re now sharing three more editions, completing this four-part series:

            1. AI’s impact on biosecurity

            We think one of the world’s most pressing problems is the risk of catastrophic pandemics, and powerful AI could make this risk higher than ever before.

            Experts generally agreed that AI developments pose new risks, but there was some disagreement on how big and immediate the threat is.

            These are some key quotes from the experts on areas of disagreement:

            • “AI may really accelerate biorisk. Unfortunately, I don’t think we have yet figured out great tools to manage that risk.” (Read more)
            • “My hot take is that AI is obviously a big deal, but I’m not sure it’s actually as big a deal in biosecurity as it might be for other areas.”

            Continue reading →

              #205 – Sébastien Moro on the most insane things fish can do

              In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.

              They cover:

              • The insane capabilities of fish in tests of memory, learning, and problem-solving.
              • Examples of fish that can beat primates on cognitive tests and recognise individual human faces.
              • Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.
              • Whether fish can experience emotions, and how this is even studied.
              • The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.
              • How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.
              • Ethical issues raised by evidence that fish may be conscious and experience suffering.
              • And plenty more.

              Producer: Keiran Harris
              Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
              Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
              Transcriptions: Katy Moore

              Continue reading →