AI safety technical research

Progress in AI — while it could be hugely beneficial — comes with significant risks. Risks that we’ve argued could be existential.

But these risks can be tackled.

With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.

This article explains how you can help.

Continue reading →

#196 – Jonathan Birch on the edge cases of sentience and why they matter

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

They cover:

  • Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
  • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that certainty is completely unjustified.
  • Chilling tales about overconfident policies that probably caused significant suffering for decades.
  • How policymakers can act ethically given real uncertainty.
  • Whether simulating the brain of the roundworm C. elegans or the fruit fly Drosophila would create minds as sentient as the biological originals.
  • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
  • Why Jonathan is so excited about citizens’ assemblies.
  • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Should you work at a frontier AI company?

We think AI is likely to have transformative effects over the coming decades, and that reducing the chances of an AI-related catastrophe is one of the world’s most pressing problems.

So it’s natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.

As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies — meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well — and others may yet enter the industry.

Why might it be high impact to work for a frontier AI company?
Some roles at these companies might be among the best for reducing risks

We suggest working at frontier AI companies in several of our career reviews because a lot of important safety, governance, and security work happens within them.

In these reviews, we highlight:

Continue reading →

Why Orwell would hate AI

The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.

That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.

I explore this possibility in my new article for 80,000 Hours on the risk of stable totalitarianism.

This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.

However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility of sociopolitical change, including liberalisation, after a dictator’s death.

But that could all be upended if technological advancements help dictators overcome these challenges.

In the new article, I address:


Continue reading →

    Open position: Advisor

    The role

    80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

    We’re keen to hire another advisor to talk to talented and altruistic people in order to help them find high-impact careers.

    It’s a great sign you’d enjoy being an 80,000 Hours advisor if you’ve enjoyed managing, mentoring, or teaching. We’ve found that experience with coaching is not necessary — backgrounds in a range of fields like medicine, research, management consulting, and more have helped our advisors become strong candidates for the role.

    For example, Laura González-Salmerón joined us after working as an investment manager, Abigail Hoskin completed her PhD in Psychology, and Matt Reardon was previously a corporate lawyer. But it’s also particularly useful for us to have a broad range of experience on the team, so we’re excited to hear from people with all kinds of backgrounds.

    The core of this role is having one-on-one conversations with people to help them plan their careers. We have a tight-knit, fast-paced team, though, so people take on a variety of responsibilities. These include, for example, building networks and expertise in our priority paths, analysing data to improve our services, and writing posts for the 80,000 Hours website or the EA Forum.

    What we’re looking for

    We’re looking for someone who has:

    • A strong interest in effective altruism and longtermism
    • Strong analytical skills,

    Continue reading →

      #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

      In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

      They cover:

      • Real-world examples of sophisticated security breaches, and what we can learn from them.
      • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
      • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
      • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
      • New security measures that Sella hopes can mitigate the growing risks.
      • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
      • And plenty more.

      Producer and editor: Keiran Harris
      Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
      Additional content editing: Katy Moore and Luisa Rodriguez
      Transcriptions: Katy Moore

      Continue reading →

      Preventing an AI-related catastrophe

      I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

      Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

      Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

      Continue reading →

      #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

      Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

      Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

      Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

      But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

      The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

      What sorts of things is he talking about? Disease prevention is the easiest area to see it in: disinfecting indoor air, rapid-turnaround vaccine platforms, and nasal spray vaccines that prevent disease transmission all make us safer against pandemics without generating any apparent new threats of their own. (And they might eliminate the common cold to boot!)

      Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply here. You don’t need a business idea yet — just the hustle to start a technology company. But you’ll need to act fast and apply by August 2, 2024.

      Vitalik explains how he mentally breaks down defensive technologies into four broad categories:

      • Defence against big physical things like tanks.
      • Defence against small physical things like diseases.
      • Defence against unambiguously hostile information like fraud.
      • Defence against ambiguously hostile information like possible misinformation.

      The philosophy of defensive acceleration has a strong basis in history. Mountain or island countries that are hard to invade, like Switzerland or Britain, tend to have more individual freedom and higher quality of life than places like the Mongolian steppes — where “your entire mindset is around kill or be killed, conquer or be conquered”: a mindset Vitalik calls “the breeding ground for dystopian governance.”

      Defensive acceleration arguably goes back to ancient China, where the Mohists focused on helping cities build better walls and fortifications, an approach that really did reduce the toll of violent invasion, until progress in offensive technologies of siege warfare allowed them to be overcome.

      In addition to all of that, host Rob Wiblin and Vitalik discuss:

      • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
      • Vitalik’s updated p(doom).
      • Whether the social impact of blockchain and crypto has been a disappointment.
      • Whether humans can merge with AI, and if that’s even desirable.
      • The most valuable defensive technologies to accelerate.
      • How to trustlessly identify what everyone will agree is misinformation.
      • Whether AGI is offence-dominant or defence-dominant.
      • Vitalik’s updated take on effective altruism.
      • Plenty more.

      Producer and editor: Keiran Harris
      Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
      Transcriptions: Katy Moore

      Continue reading →

      Open position: Head of Video

      Why this role?

      80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

      Users can engage with our research in many forms: as longform articles published on our site, as a paperback book received via our book giveaway, as a podcast, or in smaller chunks via our newsletter. But we have relatively little support available in video format.

      Time spent on the internet is increasingly spent watching video, and for many people in our target audience, video is the main way that they both find entertainment and learn about topics that matter to them.

      We think that one of the best ways we could increase our impact going forward is to have a mature and robust pipeline for producing videos on topics that will help our audience find more impactful careers.

      Back in 2017, we started a podcast; today, our podcast episodes reach more than 100,000 listeners, and are commonly cited by listeners as one of the best ways they know of to learn about the world’s most pressing problems. Our hope is that a video programme could be similarly successful — or reach an even larger scale.

      We’ve also produced two ten-minute videos already that we were pleased with and got mostly positive feedback on. Using targeted digital advertising, we found we could generate an hour of engagement with the videos for just $0.40,

      Continue reading →

        #193 – Sihao Huang on the risk that US–China AI competition leads to war

        In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

        They cover:

        • Whether the US and China are in an AI race, and the global implications if they are.
        • The state of the art of AI in China.
        • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
        • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
        • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
        • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
        • And plenty more.

        Producer and editor: Keiran Harris
        Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        Open position: Marketer

        Why this role?

        80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

        Since the launch of our marketing programme in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

        Even so, it seems like there’s considerable room to grow further — we’re not nearly at the ceiling of what we think we can achieve. So, we’re looking for a new marketer to help us bring the marketing team to its full potential.

        We anticipate that the right person in this role could help us massively increase our readership, and lead to hundreds or thousands of additional people pursuing high-impact careers.

        As some indication of what success in the role might look like, over the next couple of years your team might have:

        • Cost-effectively deployed $5 million reaching people from our target audience.
        • Worked with some of the largest and most well-regarded YouTube channels (for instance, we have run sponsorships with Veritasium, Kurzgesagt, and Wendover Productions).
        • Designed digital ad campaigns that reached hundreds of millions of people.
        • Driven hundreds of thousands of additional newsletter subscriptions,

        Continue reading →

          Open position: Head of Marketing

          Why this role?

          80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

          Since the launch of our marketing programme in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

          Even so, it seems like there’s considerable room to grow further — we’re not nearly at the ceiling of what we think we can achieve. So, we’re looking for a new team lead to help us bring the marketing team to its full potential.

          We anticipate that the right person in this role could help us massively increase our readership, and lead to hundreds or thousands of additional people pursuing high-impact careers.

          As some indication of what success in the role might look like, over the next couple of years your team might have:

          • Cost-effectively deployed $5 million reaching people from our target audience.
          • Worked with some of the largest and most well-regarded YouTube channels (for instance, we have run sponsorships with Veritasium, Kurzgesagt, and Wendover Productions).
          • Designed digital ad campaigns that reached hundreds of millions of people.
          • Driven hundreds of thousands of additional newsletter subscriptions,

          Continue reading →

            Mental health and your career: our top resources

            The idea this week: people pursuing altruistic careers often struggle with imposter syndrome, anxiety, and moral perfectionism. And we’ve spent a lot of time trying to understand what helps.

            More than 20% of working US adults said their work harmed their mental health in 2023, according to a survey from the American Psychological Association.

            Jobs can put a strain on anyone. And if you aim — like many of our readers do — to help others with your career, your work may feel extra demanding.

            Work that you feel really matters can be much more interesting and fulfilling. But it can also sometimes be a double-edged sword — after all, your success doesn’t only matter for you but also for those you’re trying to help.

            So this week, we want to share a roundup of some of our top content on mental health:

            1. An interview with our previous CEO on having a successful career with depression, anxiety, and imposter syndrome — this is one of our most popular interviews ever. It gives a remarkably honest and insightful account of what struggles with mental health can feel like from the inside, how they can derail a career, and how you can get back on track. It also provides lots of practical tips for how you can navigate these issues, and tries to offer a corrective to common advice that doesn’t work for everyone.

            Continue reading →

            #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

            In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

            They cover:

            • The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
            • What happens during the short window in which the US president would have to decide whether to retaliate after hearing news of a possible nuclear attack.
            • The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
            • The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
            • How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
            • And plenty more.

            Producer and editor: Keiran Harris
            Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
            Additional content editing: Katy Moore and Luisa Rodriguez
            Transcriptions: Katy Moore

            Continue reading →

            #191 (Part 2) – Carl Shulman on government and society after AGI

            This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

            If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

            It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

            As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement” — without any “robust, well-founded scientific consensus on how to answer them.” But if AI ‘evals’ and interpretability advance to the point that it’s possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or ‘best-guess’ answers to far more cases.

            If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.

            That’s because when it’s hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it’s an unambiguous violation of honesty norms — but so long as there’s no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

            Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

            To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.

            To start, advance investment in preventing, detecting, and containing pandemics would likely have been at a much higher and more sensible level, because it would have been straightforward to confirm which efforts passed a cost-benefit test for government spending. Politicians refusing to fund such efforts when the wisdom of doing so is an agreed and established fact would seem like malpractice.

            Low-level Chinese officials in Wuhan would have been seeking advice from AI advisors instructed to recommend actions that are in the interests of the Chinese government as a whole. As soon as unexplained illnesses started appearing, that advice would be to escalate and quarantine to prevent a possible new pandemic escaping control, rather than stick their heads in the sand as happened in reality. Having been told by AI advisors of the need to warn national leaders, ignoring the problem would be a career-ending move.

            From there, these AI advisors could have recommended stopping travel out of Wuhan in November or December 2019, perhaps fully containing the virus, as was achieved with SARS-1 in 2003. Had the virus nevertheless gone global, President Trump would have been getting excellent advice on what would most likely ensure his reelection. Among other things, that would have meant funding Operation Warp Speed far more than it in fact was, as well as accelerating the vaccine approval process, and building extra manufacturing capacity earlier. Vaccines might have reached everyone far faster.

            These are just a handful of simple changes from the real course of events we can imagine — in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest here.

            In the past we’ve usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts those technologies will create — and the same is likely happening for AI.

            Carl Shulman and host Rob Wiblin discuss the above, as well as:

            • The risk of society using AI to lock in its values.
            • The difficulty of preventing coups once AI is key to the military and police.
            • What international treaties we need to make this go well.
            • How to make AI superhuman at forecasting the future.
            • Whether AI will be able to help us with intractable philosophical questions.
            • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
            • Why Carl doesn’t support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we’re closer to ‘crunch time.’
            • Opportunities for listeners to contribute to making the future go well.

            Producer and editor: Keiran Harris
            Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
            Transcriptions: Katy Moore

            Continue reading →

            #191 (Part 1) – Carl Shulman on the economy and national security after AGI

            This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

            The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
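
            As a quick sanity check on that figure, here is a back-of-envelope sketch. The ~$0.15/kWh electricity price is an assumption for illustration, not a figure from the episode:

            ```python
            # Back-of-envelope: cost of running a 20-watt "brain" for one hour.
            # The ~$0.15/kWh retail electricity price is an assumption, not from the episode.
            watts = 20
            price_per_kwh = 0.15  # USD per kilowatt-hour (assumed)

            kwh_per_hour = watts / 1000                   # 0.02 kWh each hour
            cost_per_hour = kwh_per_hour * price_per_kwh  # ~$0.003
            print(f"{kwh_per_hour} kWh/hour -> ${cost_per_hour:.4f}/hour")
            # 0.02 kWh/hour -> $0.0030/hour, i.e. roughly a third of a cent.
            ```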

            Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.

            Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

            It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

            It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

            It’s a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.

            It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is now driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some can do so in days), the whole economy doubling every few months could be a conservative estimate.

            It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
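
            To make those multiples concrete, here is a minimal sketch comparing an economy that doubles every few months with one growing at a conventional rate. The three-month doubling time is an illustrative assumption:

            ```python
            # Illustrative comparison: a machine economy doubling every 3 months (assumed)
            # versus a conventional economy growing ~3% per year.
            def size_after(years: float, doubling_time_years: float) -> float:
                return 2 ** (years / doubling_time_years)

            for years in (1, 2, 5):
                fast = size_after(years, doubling_time_years=0.25)
                slow = 1.03 ** years
                print(f"after {years} year(s): {fast:,.0f}x vs {slow:.2f}x")
            # After 1 year the fast economy is 16x its starting size; after 5 years, ~1,000,000x.
            ```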

            As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.

            And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — in this case, the toughest to engineer your way out of is the Earth’s ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.
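
            For a sense of scale, a crude sketch of when that limit starts to bite. Both power figures below are rough public estimates I’m assuming here, not numbers from the episode:

            ```python
            # Rough scale check: how much could power use grow before waste heat becomes
            # comparable to the solar power Earth already absorbs and re-radiates?
            human_power_w = 2e13        # ~20 TW: rough current world power consumption (assumed)
            absorbed_solar_w = 1.2e17   # ~120,000 TW absorbed by Earth (rough estimate)

            headroom = absorbed_solar_w / human_power_w
            print(f"~{headroom:,.0f}x growth before waste heat rivals solar input")
            # ~6,000x sounds like a lot, but it is only about 12 or 13 doublings of power use.
            ```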

            This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

            These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?

            In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:

            • If we’re heading towards the above, how come economic growth remains slow now rather than accelerating?
            • Why have computers and computer chips had so little effect on economic productivity so far?
            • Are self-replicating biological systems a good comparison for self-replicating machine systems?
            • Isn’t this just too crazy and weird to be plausible?
            • What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
            • Might there not be severely declining returns to bigger brains and more training?
            • Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
            • If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?

            Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

            Producer and editor: Keiran Harris
            Audio engineering lead: Ben Cordell
            Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
            Transcriptions: Katy Moore

            Continue reading →

            Does your vote matter? What the research says

            The idea this week: the cynical case against voting and getting involved in politics doesn’t hold up.

            Does your vote matter? Around half of the world’s population is expected to see national elections this year, and voters in places like Taiwan, India, and Mexico have already gone to the polls. The UK and France both recently scheduled elections.

            And of course, the 2024 US national election campaigns are off and running, with control of the House of Representatives, the Senate, and the White House in contention — as well as many state houses, governorships, and other important offices.

            Sometimes people think that their vote doesn’t matter because they’re just a drop in the ocean.

            But my colleague Rob has explored the research on this topic, and he concluded that voting can actually be a surprisingly impactful way to spend your time. So it’s not just your civic duty — it can also be a big opportunity to influence the world for the better.

            That’s because, while the chance your vote will change the outcome of an election is small, it can still matter a lot given the massive impact governments can have.

            To take a simple model: if US government discretionary spending is $6.4 trillion over four years, and you have a 1 in 10 million chance of changing the outcome of the national election, then in expectation your vote influences $640,000 of government spending.
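
            A minimal sketch of that expected-value arithmetic, using only the numbers given above:

            ```python
            # Expected-value model of a single vote, using the figures from the excerpt.
            discretionary_spending = 6.4e12   # $6.4 trillion over a four-year term
            p_decisive = 1 / 10_000_000       # stated chance one vote flips the election

            expected_influence = discretionary_spending * p_decisive
            print(f"${expected_influence:,.0f} of spending influenced in expectation")  # $640,000
            ```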

            Continue reading →

              Dive into our most in-depth research on careers

              The idea this week: your career choices may be much more important than you think — and we have a lot of resources to help you think them through.

              Your career is one of your biggest opportunities to make a difference in the world and also have a rewarding and interesting life.

              That’s why we wrote our career guide — to help people create a career plan that’s aimed at having a positive impact and a fulfilling career.

              But there’s a lot of ground to cover, so we couldn’t do it all in a single book.

              That’s why we wrote our advanced series. It covers our most in-depth research on questions like:

              • What does it mean to “make a difference”?
              • What is “longtermism,” and why does it matter?
              • Is it ever OK to take a harmful job?
              • Can we balance doing what we love with having a positive impact?
              • What role should finding your personal strengths play in your career?
              • How should you coordinate with others when trying to do good?
              • How long should you explore different career options?
              • And a whole lot more!

              We hope the articles in our advanced series help you tackle these questions and accelerate you along your path to an impactful career.

              See the whole series here or just browse selected topics below.

              Continue reading →

                #190 – Eric Schwitzgebel on whether the US is conscious

                In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

                They cover:

                • Why our intuitions seem so unreliable for answering fundamental questions about reality.
                • What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
                • Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
                • Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
                • How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
                • The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
                • Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
                • And much more.

                Producer and editor: Keiran Harris
                Audio engineering lead: Ben Cordell
                Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
                Additional content editing: Katy Moore and Luisa Rodriguez
                Transcriptions: Katy Moore

                Continue reading →

                #189 – Rachel Glennerster on how "market shaping" could help solve climate change, pandemics, and other global problems

                In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

                They cover:

                • How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
                • How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples, like how one AMC sped up the development of three vaccines that saved around 700,000 lives in low-income countries.
                • The challenges of pull mechanisms, from design to implementation.
                • Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
                • The massive benefits of accelerating vaccine development, even if only by a few days or weeks in some cases.
                • The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
                • The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
                • “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
                • Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
                • And much more.

                Producer and editor: Keiran Harris
                Audio engineering lead: Ben Cordell
                Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
                Additional content editing: Katy Moore and Luisa Rodriguez
                Transcriptions: Katy Moore

                Continue reading →