#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child who regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We’ve known lead is a health nightmare for at least 50 years, and that knowledge got lead removed from car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast: the total health damage is around that caused by malaria, tuberculosis, and HIV combined.

This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Various estimates suggest the work is absurdly cost-effective. In expectation, LEEP is preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income by anywhere from $300 to $1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even where that’s already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? Once leaded paint is gone, what should LEEP target next?

With host Robert Wiblin, Lucia answers all those questions and more:

  • Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
  • How bad lead poisoning is in rich countries.
  • Why lead is still in aeroplane fuel.
  • How lead was added directly to food in Bangladesh, and how a handful of people got it removed.
  • Why the enormous damage done by lead mostly goes unnoticed.
  • The other major sources of lead exposure aside from paint.
  • Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program.
  • Why Lucia pledges 10% of her income to cost-effective charities.
  • Lucia’s take on why GiveWell didn’t support LEEP earlier on.
  • How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
  • Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

Experience with an emerging power (especially China)

China will likely play an especially influential role in determining the outcome of many of the biggest challenges of the next century. India also seems very likely to be important over the next few decades, and many other non-Western countries — for example, Russia — are also major players on the world stage.

A lack of understanding and coordination between all these countries and the West means we might not tackle those challenges as well as we can (and need to).

So it’s going to be very valuable to have more people gaining real experience with emerging powers, especially China, and then specialising in the intersection of emerging powers and pressing global problems.

Why is experience with an emerging power (especially China) valuable?

China in particular plays a crucial role in many of the major global problems we highlight. For instance:

  • The Chinese government’s spending on artificial intelligence research and development is estimated to be on the same order of magnitude as that of the US government.
  • As the largest trading partner of North Korea, China plays an important role in reducing the chance of conflict, especially nuclear conflict, on the Korean peninsula.
  • China is the largest emitter of CO2, accounting for 30% of the global total.
  • China recently became the largest consumer of factory-farmed meat.
  • China is one of the most important nuclear and military powers.

Continue reading →

Research skills

Norman Borlaug was an agricultural scientist. Through years of research, he developed new, high-yielding, disease-resistant varieties of wheat.

It might not sound like much, but as a result of Borlaug’s research, wheat production in India and Pakistan almost doubled between 1965 and 1970, and formerly famine-stricken countries across the world were suddenly able to produce enough food for their entire populations. These developments have been credited with saving up to a billion people from famine, and in 1970, Borlaug was awarded the Nobel Peace Prize.

Many of the highest-impact people in history, whether well-known or completely obscure, have been researchers.

Why are research skills valuable?

Not everyone can be a Norman Borlaug, and not every discovery gets adopted. Nevertheless, we think research can often be one of the most valuable skill sets to build — if you’re a good fit.

We’ll argue that:

Together, this suggests that research skills could be particularly useful for having an impact.

Continue reading →

Policy and political skills

Suzy Deuster wanted to be a public defender, a career path that could help hundreds receive fair legal representation. But she realised that by shifting her focus to government work, she could improve the justice system for thousands or even millions. Suzy ended up doing just that from her position in the US Executive Office of the President, working on criminal justice reform.

This logic doesn’t just apply to criminal justice. For almost any global issue you’re interested in, roles in powerful institutions like governments often offer unique and high-leverage ways to address some of the most pressing challenges of our time.

Why are policy and political skills valuable?

We’ll argue that:

Together, this suggests that building the skills needed to get things done in large institutions could give you a lot of opportunities to have an impact.

Later, we’ll look at:

Governments (and other powerful institutions) have a huge impact in the world

National governments are hugely powerful.

Continue reading →

Organisation-building

When most people think of careers that “do good,” the first thing they think of is working at a charity.

The thing is, lots of jobs at charities just aren’t that impactful.

Some charities focus on programmes that don’t work, like Scared Straight, which actually caused kids to commit more crimes. Others focus on ways of helping that, while thoughtful and helpful, don’t have much leverage, like knitting individual sweaters for penguins affected by oil spills (this actually happened!) instead of funding large-scale ocean cleanup projects.

While this penguin certainly looks all warm and cosy, we’d guess that knitting each sweater one by one wouldn’t be the best use of an organisation’s time.

But there are also many organisations out there — both for-profit and nonprofit — focused on pressing problems, implementing effective and scalable solutions, run by great teams, and in need of people.

If you can build skills that are useful for helping an organisation like this, it could well be one of the highest-impact things you can do.

In particular, organisations often need generalists able to do the bread and butter of building an organisation — hiring people, management, administration, communications, running software systems, crafting strategy, fundraising, and so on.

We call these ‘organisation-building’ skills. They can be high impact because you can increase the scale and effectiveness of the organisation you’re working at, while also gaining skills that can be applied to a wide range of global problems in the future (and make you generally employable too).

Continue reading →

Preventing catastrophic pandemics

Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

Continue reading →

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Benjamin Todd on the history of 80,000 Hours

In this episode of 80k After Hours — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.

They cover:

  • Ben’s origin story
  • How 80,000 Hours got off the ground
  • Its scrappy early days
  • How 80,000 Hours evolved
  • Team trips to China and Thailand
  • The choice to set up several programmes rather than focus on one
  • The move to California and back
  • Various mistakes they think 80,000 Hours has made along the way
  • Why Ben left the CEO position
  • And the future of 80,000 Hours

Who this episode is for:

  • People who work on or plan to work on promoting important ideas in a way that’s similar to 80,000 Hours
  • People who work at organisations similar to 80,000 Hours
  • People who work at 80,000 Hours

Who this episode isn’t for:

  • People who, if asked if they’d like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can’t make it — I’m washing my hair that night”

Producer: Keiran Harris
Audio mastering: Ryan Kessler and Ben Cordell

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Continue reading →

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

They cover:

  • The non-negligible chance that AI systems will be sentient by 2030
  • What AI systems might want and need, and how that might affect our moral concepts
  • What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
  • What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
  • What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
  • The repugnant conclusion and the rebugnant conclusion
  • The experience of trying to build the field of AI welfare
  • What improv comedy can teach us about doing good in the world
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It’s common to think that ‘staying informed’ and checking the headlines every day is just what responsible adults do.

But in today’s episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

  • That it overwhelmingly provides us with information we can’t usefully act on.
  • That it’s very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
  • That it obscures the big picture, falling into the trap of thinking ‘something important happens every day.’
  • That it’s highly addictive, for many people chewing up 10% or more of their waking hours.
  • That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
  • And plenty more.

Bryan and Rob conclude that if you want to understand the world, you’re better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:

  • Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there’s a meaningful chance of it going terribly.
  • Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
  • How to allocate resources in space.
  • Bryan’s experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

A note of appreciation for your efforts to help others

The idea this week: it’s incredible how dedicated many of you are to helping others.

One of my favourite parts of working on the one-on-one advising team is getting to see up close the important work so many people are doing. It’s incredibly inspiring to learn about the thoughtful, dedicated steps you’re taking to have an impact. In our conversations, we get to directly express appreciation for each person’s efforts. But we only get to do that for a fraction of readers, and only occasionally.

So I wanted to take this chance to say thank you to all of you working so hard and intentionally to help others. There are countless ways to make a difference — different problems needing solutions and different approaches to tackle them. I can’t speak to nearly all of those here. But I do want to highlight a few examples of work I know many of you are doing that I find deeply admirable.

  • To those working long hours at a challenging job in order to donate a significant portion of your salary to effective organisations — thank you. It’s hard to stay motivated when the work itself doesn’t feel valuable. It’s hard to make time outside a full-time job to thoughtfully decide where your money can do the most good. And it can be tough being surrounded by people with different values who get to directly enjoy the fruits of their labour rather than using it to reduce suffering.

Continue reading →

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

They cover:

  • The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
  • The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
  • The time the Soviets had a major anthrax leak, and then hid it for over a decade
  • The 1977 influenza pandemic caused by a vaccine trial gone wrong in China
  • The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
  • Ways we could get more reliable oversight and accountability for these labs
  • And the investigative work Alison’s most proud of

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

New opportunities are opening up in AI governance

The news this week: major new initiatives show governments are taking AI risks seriously — but there’s still a long way to go.

From DC to London and beyond, leaders are waking up to AI. They see potential dangers from the technology on the horizon.

Take the White House. This week, President Joe Biden announced a sweeping new executive order to respond to the risks potentially posed by advanced AI systems, including risks to national security.

The new order includes the following:

  • A requirement for AI labs working on the most powerful models to share information about safety tests and training plans
  • Direction to the National Institute of Standards and Technology to create standards for red teaming and assessing the safety of powerful new AI models
  • Efforts to reduce the risk of AI-related biological threats and to mitigate cybersecurity vulnerabilities
  • Provisions on fraud, privacy, equity, civil rights, workers’ rights, and international coordination

Vice President Kamala Harris also announced the creation of the United States AI Safety Institute this week, which will help evaluate and mitigate dangerous capabilities of AI models.

And the US government is making a big push to hire more AI professionals. They’ve extended the deadline for applying to the Presidential Innovation Fellowship in light of this push.

Continue reading →

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

They cover:

  • How bad air pollution is for our health and life expectancy
  • The different kinds of harm that particulate pollution causes
  • The strength of the evidence that it damages our brain function and reduces our productivity
  • Whether it was a mistake to switch our attention to climate change and away from air pollution
  • Whether most listeners to this show should have an air purifier running in their house right now
  • Where air pollution in India is worst and why, and whether it’s going up or down
  • Where most air pollution comes from
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated
  • Why indoor air pollution packs an enormous punch
  • The politics of air pollution in India
  • How India ended up spending a lot of money on outdoor air purifiers
  • The challenges faced by foreign philanthropists in India
  • Why Santosh has made the grants he has so far
  • And plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — cofounder of GiveDirectly — on the case for giving unconditional cash to the world’s poorest households.

They cover:

  • The empirical evidence on whether giving cash directly can drive meaningful economic growth
  • How the impacts of GiveDirectly compare to USAID employment programmes
  • GiveDirectly vs GiveWell’s top-recommended charities
  • How long-term guaranteed income affects people’s risk-taking and investments
  • Whether recipients prefer getting lump sums or monthly instalments
  • How GiveDirectly tackles cases of fraud and theft
  • The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia
  • The political viability of UBI
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#168 – Ian Morris on whether deep history says we’re heading for an intelligence explosion

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress
  • Whether we should think about today’s technology from an evolutionary perspective
  • Whether war will make a resurgence
  • Why we can’t end up living like The Jetsons
  • Whether stagnation or cyclical futures are realistic
  • What it means that over the very long term the rate of economic growth has increased
  • Whether violence between humans and powerful AI systems is likely
  • The most likely reasons for Rob and Ian to be really wrong about all of this
  • How professional historians react to this sort of talk
  • The future of Ian’s work
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.

They cover:

  • The basic case for alternative proteins, and why they’re so hard to make
  • Why fermentation is a surprisingly promising technology for creating delicious alternative proteins
  • The main scientific challenges that need to be solved to make fermentation even more useful
  • The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable
  • How GFI Europe is helping with some of these challenges
  • How people can use their careers to contribute to replacing factory farming with alternative proteins
  • The best part of Seren’s job
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

Continue reading →

#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

They cover:

  • How AI could strengthen government capacity, and how that’s a double-edged sword
  • How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren’t there
  • To what extent policymakers take different threats from AI seriously
  • Whether the US and China are in an AI arms race or not
  • Whether it’s OK to transform the world without much of the world agreeing to it
  • The tyranny of small differences in AI policy
  • Disagreements between different schools of thought in AI policy, and proposals that could unite them
  • How the US AI Bill of Rights could be improved
  • Whether AI will transform the labour market, and whether it will become a partisan political issue
  • The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
  • What listeners might be able to do to help with this whole mess
  • Panpsychism
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

In today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.

They cover:

  • The epic new book Anders is working on, and whether he’ll ever finish it
  • Whether there’s a best possible world or we can just keep improving forever
  • What wars might look like if the galaxy is mostly settled
  • The impediments to AI or humans making it to other stars
  • How the universe will end a million trillion years in the future
  • Whether it’s useful to wonder about whether we’re living in a simulation
  • The grabby aliens theory
  • Whether civilisations get more likely to fail the older they get
  • The best way to generate energy that could ever exist
  • Black hole bombs
  • Whether superintelligence is necessary to get a lot of value
  • The likelihood that life from elsewhere has already visited Earth
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

We’ve made mistakes in our careers — here’s what we learned

The idea this week: you can learn a lot from mistakes.

In that spirit, we’re sharing six stories of mistakes that staff at 80,000 Hours think they’ve made in their careers.

And if you’re interested in hearing more, we strongly recommend a recent episode of our podcast, 80k After Hours, about 10 mistakes people make when pursuing a high-impact career.

1. Not asking for help

A mistake I have frequently made, and still sometimes do, is not asking for help with applications. I usually feel awkward about others reading my letters or essays or practising interview questions, and I also don’t want to waste my friends’ time.

But whenever I end up asking for help, it improves my applications significantly, and people are usually happy to help. (I enjoy giving feedback on applications as well!)

Anemone Franz, advisor

2. Ruling out an option too quickly

I first became concerned about risks from artificial intelligence in 2014, when I read Superintelligence. The book convinced me these risks were serious. And more importantly, I couldn’t find persuasive counterarguments at the time.

But because I didn’t have a background in technical fields — I thought of myself as a writer — I concluded there was little I could contribute to the field and mostly worked on other problems.

Now I think this was a mistake.

Continue reading →