We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
It’s become clearer that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
We’re seeing a particularly large increase in the number of roles available in AI governance and policy.
In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.
They cover:
What’s actually in SB 1047, and which AI models it would apply to.
The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
Why California is taking state-level action rather than waiting for federal regulation.
How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.
They cover:
The scale of potential insect suffering in the wild, on farms, and in labs.
Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
How size bias might help explain why many people assume insects can’t feel pain.
Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
And much more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?
That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the 11 people who left OpenAI to launch Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.
As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.
As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.
Nick points out three big virtues to the RSP approach:
It allows us to set aside the question of when any of these things will be possible, and focus the conversation on what would be necessary if they are possible — something there is usually much more agreement on.
It means we don’t take costly precautions that developers will resent and resist before they are actually called for.
As the policies don’t allow models to be deployed until suitable safeguards are in place, they align a firm’s commercial incentives with safety — for example, a profitable product release could be blocked by insufficient investments in computer security or alignment research years earlier.
Rob then pushes Nick on some of the best objections to the RSP mechanisms he’s found, including:
It’s hard to trust that profit-motivated companies will stick to their scaling policies long term and not water them down to make their lives easier — particularly as commercial pressure heats up.
Even if you’re trying hard to find potential safety concerns, it’s difficult to truly measure what models can and can’t do. And if we fail to pick up a dangerous ability that’s really there under the hood, then perhaps all we’ve done is lull ourselves into a false sense of security.
Importantly, in some cases humanity simply hasn’t invented safeguards up to the task of addressing AI capabilities that could show up soon. Maybe that will change before it’s too late — but if not, we’re being written a cheque that will bounce when it comes due.
Nick explains why he thinks some of these worries are overblown, while others are legitimate but just point to the hard work we all need to put in to get a good outcome.
Nick and Rob also discuss whether it’s essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.
In addition to all of that, Nick and Rob talk about:
What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.
And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at [email protected].
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
Career review by Cody Fenwick · Last updated August 2024 · First published June 2023
As advances in AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems grew. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.
There’s still a lot of uncertainty about which AI governance strategies would be best. Many have proposed policies and strategies aimed at reducing the largest risks, which we discuss below.
But there’s no roadmap here. There’s plenty of room for debate about what’s needed, and we may not have found the best ideas yet in this space. In any case, there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and policy.
Why this could be a high-impact career path
Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.
And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.
We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems.
Blog post by Cody Fenwick · Published August 16th, 2024
The idea this week: mpox and a bird flu virus are testing our pandemic readiness.
Would we be ready for another pandemic?
It became clear in 2020 that the world hadn’t done enough to prepare for the rapid, global spread of a particularly deadly virus. Four years on, our resilience faces new tests.
H5N1 — a strain of bird flu — has been spreading among animals in the United States and elsewhere, with a small number of infections reported in humans.
Here’s what we know about each:
Mpox
Mpox drew international attention in 2022 when it started spreading globally, including in the US and the UK. During that outbreak, around 95,000 cases and about 180 deaths were reported. That wave largely subsided in much of the world, in part due to targeted vaccination campaigns, but the spread of another strain of the virus has sharply accelerated in Central Africa.
The strain driving the current outbreak may be significantly more deadly. Around 22,000 suspected mpox infections and more than 1,200 deaths have been reported in the Democratic Republic of the Congo (DRC) since January 2023.
Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that certainty is completely unjustified.
Chilling tales about overconfident policies that probably caused significant suffering for decades.
How policymakers can act ethically given real uncertainty.
Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds as sentient as their biological counterparts.
How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
Why Jonathan is so excited about citizens’ assemblies.
Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
So it’s natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.
As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies — meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well — and others may enter the industry from here.
Why might it be high impact to work for a frontier AI company? Some roles at these companies might be among the best for reducing risks.
We suggest working at frontier AI companies in several of our career reviews because a lot of important safety, governance, and security work is done in them.
Blog post by Stephen Clare · Published August 6th, 2024
The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.
That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.
This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.
However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility of sociopolitical change, including liberalisation, after a dictator’s death.
But that could all be upended if technological advancements help dictators overcome these challenges.
We’re keen to hire another advisor to talk to talented and altruistic people in order to help them find high-impact careers.
If you’ve enjoyed managing, mentoring, or teaching, that’s a great sign you’d enjoy being an 80,000 Hours advisor. We’ve found that prior coaching experience is not necessary — backgrounds in a range of fields like medicine, research, management consulting, and more have helped our advisors become strong candidates for the role.
For example, Laura González-Salmerón joined us after working as an investment manager, Abigail Hoskin completed her PhD in Psychology, and Matt Reardon was previously a corporate lawyer. But it’s also particularly useful for us to have a broad range of experience on the team, so we’re excited to hear from people with all kinds of backgrounds.
The core of this role is having one-on-one conversations with people to help them plan their careers. We have a tight-knit, fast-paced team, though, so people take on a variety of responsibilities. These include, for example, building networks and expertise in our priority paths, analysing data to improve our services, and writing posts for the 80,000 Hours website or the EA Forum.
What we’re looking for
We’re looking for someone who has:
A strong interest in effective altruism and longtermism
In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.
They cover:
Real-world examples of sophisticated security breaches, and what we can learn from them.
Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
New security measures that Sella hopes can mitigate the growing risks.
Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.
Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.
Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.
Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.
Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.
Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.
But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.
The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.
What sorts of things is he talking about? The case is easiest to see in disease prevention: disinfecting indoor air, rapid-turnaround vaccine platforms, and nasal spray vaccines that prevent disease transmission all make us safer against pandemics without generating any apparent new threats of their own. (And they might eliminate the common cold to boot!)
Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply here. You don’t need a business idea yet — just the hustle to start a technology company. But you’ll need to act fast and apply by August 2, 2024.
Vitalik explains how he mentally breaks down defensive technologies into four broad categories:
Defence against big physical things like tanks.
Defence against small physical things like diseases.
Defence against unambiguously hostile information like fraud.
Defence against ambiguously hostile information like possible misinformation.
The philosophy of defensive acceleration has a strong basis in history. Mountain or island countries that are hard to invade, like Switzerland or Britain, tend to have more individual freedom and a higher quality of life than societies of the Mongolian steppes — where “your entire mindset is around kill or be killed, conquer or be conquered”: a mindset Vitalik calls “the breeding ground for dystopian governance.”
Defensive acceleration arguably goes back to ancient China, where the Mohists focused on helping cities build better walls and fortifications, an approach that really did reduce the toll of violent invasion, until progress in offensive technologies of siege warfare allowed them to be overcome.
In addition to all of that, host Rob Wiblin and Vitalik discuss:
AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
Vitalik’s updated p(doom).
Whether the social impact of blockchain and crypto has been a disappointment.
Whether humans can merge with AI, and if that’s even desirable.
The most valuable defensive technologies to accelerate.
How to trustlessly identify what everyone will agree is misinformation.
Whether AGI is offence-dominant or defence-dominant.
Vitalik’s updated take on effective altruism.
Plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Users can engage with our research in many forms: as longform articles published on our site, as a paperback book received via our book giveaway, as a podcast, or in smaller chunks via our newsletter. But we have relatively little support available in video format.
Time spent on the internet is increasingly spent watching video, and for many people in our target audience, video is the main way that they both find entertainment and learn about topics that matter to them.
We think that one of the best ways we could increase our impact going forward is to have a mature and robust pipeline for producing videos on topics that will help our audience find more impactful careers.
Back in 2017, we started a podcast; today, our podcast episodes reach more than 100,000 listeners, and are commonly cited by listeners as one of the best ways they know of to learn about the world’s most pressing problems. Our hope is that a video programme could be similarly successful — or reach an even larger scale.
We’ve also produced two ten-minute videos already that we were pleased with and got mostly positive feedback on. Using targeted digital advertising, we found we could generate an hour of engagement with the videos for just $0.40.
In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.
They cover:
Whether the US and China are in an AI race, and the global implications if they are.
The state of the art of AI in China.
China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Since the launch of our marketing programme in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
Even so, it seems like there’s considerable room to grow further — we’re not nearly at the ceiling of what we think we can achieve. So, we’re looking for a new marketer to help us bring the marketing team to its full potential.
We anticipate that the right person in this role could help us massively increase our readership, and lead to hundreds or thousands of additional people pursuing high-impact careers.
As some indication of what success in the role might look like, over the next couple of years your team might have:
Cost-effectively deployed $5 million reaching people from our target audience.
Worked with some of the largest and most well-regarded YouTube channels (for instance, we have run sponsorships with Veritasium, Kurzgesagt, and Wendover Productions).
Designed digital ad campaigns that reached hundreds of millions of people.
Driven hundreds of thousands of additional newsletter subscriptions.
Since the launch of our marketing programme in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
Even so, it seems like there’s considerable room to grow further — we’re not nearly at the ceiling of what we think we can achieve. So, we’re looking for a new team lead to help us bring the marketing team to its full potential.
Blog post by Cody Fenwick · Published July 16th, 2024
The idea this week: people pursuing altruistic careers often struggle with imposter syndrome, anxiety, and moral perfectionism. And we’ve spent a lot of time trying to understand what helps.
More than 20% of working US adults said their work harmed their mental health in 2023, according to a survey from the American Psychological Association.
Jobs can put a strain on anyone. And if you aim — like many of our readers do — to help others with your career, your work may feel extra demanding.
Work that you feel really matters can be much more interesting and fulfilling. But it can also sometimes be a double-edged sword — after all, your success doesn’t only matter for you but also for those you’re trying to help.
So this week, we want to share a roundup of some of our top content on mental health:
An interview with our previous CEO on having a successful career with depression, anxiety, and imposter syndrome — this is one of our most popular interviews ever. It gives a remarkably honest and insightful account of what struggles with mental health can feel like from the inside, how they can derail a career, and how you can get back on track. It also provides lots of practical tips for how you can navigate these issues, and tries to offer a corrective to common advice that doesn’t work for everyone.
In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.
They cover:
The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack.
The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore