Blog post by Alex Lawsen · Published June 15th, 2022
There is (sometimes) such a thing as a free lunch
You live in a world where most people, most of the time, think of things as categorical, rather than continuous. People either agree with you or they don’t. Food is healthy or unhealthy. Your career is ‘good for the world,’ or it’s neutral, or maybe even it’s bad — but it’s only the category that matters, not the size of the benefit or harm. Ideas are wrong, or they are right. Predictions end up confirmed or falsified.
In my view, one of the central ideas of effective altruism is the realisation that ‘doing good’ is not such a binary. That as well as it mattering that we help others at all, it matters how much we help. That helping more is better than helping less, and helping a lot more is a lot better.
For me, this is also a useful framing for thinking rationally. Here, rather than ‘goodness,’ the continuous quantity is truth. The central realisation is that ideas are not simply true or false; they are all flawed attempts to model reality, and just how flawed is up for grabs. If we’re wrong, our response should not be to give up, but to try to be less wrong.
When you realise something is continuous that most people are treating as binary, you've often found a free lunch: an opportunity to do better that almost everyone else is overlooking.
If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.
This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.
Today's guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic, focusing on computer and information security. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models' small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.
The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.
If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.
If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.
As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.
If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.
We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.
In today’s conversation, Rob and Nova cover:
How good or bad information security is today
The most secure computer systems that exist today
How to design an AI training compute centre for maximum efficiency
Whether ‘formal verification’ can help us design trustworthy systems
How wide the practical gap is between AI capabilities and AI safety
How to disincentivise hackers
What listeners can do to strengthen their own security practices
Jobs at Anthropic
And a few more things as well
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell and Beppe Rådvik · Transcriptions: Katy Moore
“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”
If you were a contestant on such a TV show, you’d love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.
But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around.
As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it’s just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”
So it might sound good to tell people to build solar panels — they're a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology used to make computer microchips — so actually making solar panels would be incredibly difficult.
Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.
A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.
The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.
This is clearly not a trivial task. Lewis’s own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.
And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?
If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.
As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.
They cover:
The biggest impediments to bouncing back
The reality of humans trying to actually do this
The most valuable pro-resilience adjustments we can make today
How to recover without much coal or oil
How to feed the Earth in disasters
And the most exciting recent findings in astrobiology
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
Andrew Snyder-Beattie is a programme officer at Open Philanthropy, a foundation which has more than $1 billion available to fund big pandemic-prevention projects. He and Ethan Alley (co-CEO at Alvea) recently wrote an exciting list of projects they'd like to see get founded, including:
An international early detection centre
Actually good PPE
Rapid and broad-spectrum antivirals and vaccines
A bioweapons watchdog
Self-sterilising buildings
Refuges
One of these ideas is already happening. Alvea aims to produce a cheap, flexible vaccine platform using a new type of vaccine (DNA vaccines), starting with an Omicron-specific shot. In two months, they hired 35 people and started preclinical trials.
The above are technical solutions, which make it possible for a relatively small number of people to make a significant difference to the problem. But policy change is also an important angle.
Guarding Against Pandemics was an effort to lobby the US government for $30 billion in funding for pandemic prevention. Unfortunately the relevant bill didn’t pass, but the sum at stake made it clearly worth trying.
In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.
They cover:
Their early predictions for the war
The performance of the Russian military
The risk of use of nuclear weapons
The most interesting remaining topics on Russia and Ukraine
General lessons we can take from the war
The evolution of the forecasting space
What Robert and Clay were reading back in February
Forecasters vs. subject matter experts
Ways to get involved with the forecasting community
Impressive past predictions
And more
Who this episode is for:
People interested in forecasting
People interested in the war in Ukraine
People who prefer to know how likely they are to die in a nuclear war
Who this episode isn’t for:
People who’d hate it if a friend said they were 65% likely to come out for drinks
People who’d prefer if their death from nuclear war was a total surprise
Get this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you’re all bunched up on a few tables in a basement office.
But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You’re the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.
You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have.
This is roughly the situation faced by today’s guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.
Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.
While surely a huge success, it brings with it risks that he’s never had to consider before:
Will and his colleagues might spend a lot of money trying to get more things done more quickly — but actually just waste it.
Being seen as profligate could strike onlookers as selfish and disreputable.
Folks might start pretending to agree with their agenda just to get grants.
People working on nearby issues that are less flush with funding may end up resentful.
People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.
But all these 'risks of commission' have to be weighed against the 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious.
People looking askance at you for paying high salaries to attract the staff you want is unpleasant.
But failing to prevent the next pandemic because you didn’t have the necessary medical experts on your grantmaking team is worse than unpleasant — it’s a true disaster. Yet few will complain, because they’ll never know what might have been if you’d only set frugality aside.
Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today’s episode, Rob and Will discuss the above as well as:
Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
Why are so many nonfiction books full of factual errors?
How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
What does Will disagree with his colleagues on?
Should we focus on existential risks more or less the same way, whether we care about future generations or not?
Are potatoes one of the most important technologies ever developed?
And plenty more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
Climate change is going to significantly and negatively impact the world. Its impacts on the poorest people in our society and our planet’s biodiversity are cause for particular concern. Looking at the worst possible scenarios, it could be an important factor that increases existential threats from other sources, like great power conflicts, nuclear war, or pandemics. But because the worst potential consequences seem to run through those other sources, and these other risks seem larger and more neglected, we think most readers can have a greater impact in expectation working directly on one of these other risks.
We think your personal carbon footprint is much less important than what you do for work, and that some ways of making a difference on climate change are likely to be much more effective than others. In particular, you could use your career to help develop technology or advocate for policy that would reduce our current emissions, or research technology that could remove carbon from the atmosphere in the future.
This post gives an overview of how I’m thinking about the “funding in EA” issue, building on many conversations. Although I’m involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk which has a related theme, though with different emphases. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.
Main points
EA is in a very different funding situation than it was when it was founded. This is both an enormous responsibility and an incredible opportunity.
It means the norms and culture that made sense at EA’s founding will have to adapt. It’s good that there’s now a serious conversation about this.
There are two ways we could fail to respond correctly:
By commission: we damage, unnecessarily, the aspects of EA culture that make it valuable; we support harmful projects; or we just spend most of our money in a way that's below the bar.
By omission: we aren’t ambitious enough, and fail to make full use of the opportunities we now have available to us. Failure by omission is much less salient than failure by commission, but it’s no less real, and may be more likely.
Though it’s hard, we need to inhabit both modes of mind at once.
Why might becoming an expert in data collection for AI alignment be high impact?
We think it’s crucial that we work to positively shape the development of AI, including through technical research on how to ensure that any potentially transformative AI we develop does what we want it to do (known as the alignment problem). If we don’t find ways to align AI with our values and goals — or worse, don’t find ways to prevent AI from actively harming us or otherwise working against our values — the development of AI could pose an existential threat to humanity.
We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth’s surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019?
That’s one of many questions I put to today’s guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad.
In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far.
And as James explains, while COVID-19 may have an 'R' (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens of people, allowing cases to explode quickly.
The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding.
Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there’s enthusiasm to find new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria.
The RTS,S vaccine is the first-ever vaccine that attacks a protozoan, as opposed to a virus or bacterium. Under development for decades, it's a great scientific achievement. But James points out that even after three doses, it's still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half.
On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. Of course these genes would normally be eliminated by natural selection, but by using a 'gene drive,' you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread and ultimately eliminate the mosquitoes that carry malaria, largely ridding the world of the disease at low cost.
Because a single country embracing this method would have global effects, James cautions that it’s important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we’ve made a mistake. Groups like Target Malaria are working on exactly these two issues.
James also emphasises that, for better or worse, gene drives may not make any difference to the overall number of mosquitoes, since there are thousands of similar mosquito species out there, most of which don't carry malaria.
In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including:
How malaria spreads and the symptoms it causes
The use of insecticides and poison baits
How big a problem insecticide resistance is
How malaria was eliminated in North America and Europe
Whether funding is a key bottleneck right now
The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world
And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ryan Kessler · Transcriptions: Katy Moore
Blog post by Benjamin Todd · Published May 5th, 2022
Hi readers!
We've decided that Howie will become CEO and I will become President of 80,000 Hours.
After ten years in the role, I'd become less excited about overseeing several aspects of the organisation's ongoing operations. We asked the board to investigate, and they recommended Howie Lempel as the best person to take the org to its next level of scale.
In the President role I hope I’ll be able to focus on my most valuable contributions – providing advice on org strategy & the website, writing, and helping with outreach – and won’t have set responsibilities.
I also have a growing list of other projects in effective altruism that I’m excited to explore.
Howie and I expect the transition to be smooth – in part because Howie is already doing several parts of the role as Chief of Staff. We intend for Howie to officially become CEO this week, and to complete the transfer in about a month.
I’m excited to explore this new role and for 80,000 Hours to continue growing and getting the next generation working on the world’s most pressing problems.
Ben
Note from Howie:
Hi everyone,
I’m really looking forward to taking on this new role and leading 80,000 Hours as we continue to grow.
I’m going to send an initial update on our plans as part of our post-Q2 email update.
In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.
Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.
The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today’s episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what they think they’ve learned.
Chris’s first point is that while organised violence may feel like it’s all around us, it’s actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.
In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn’t — so they can see what a healthy society looks like and what’s missing in the places where war does take hold.
Chris argues that social scientists have generated five cogent models of when war can be ‘rational’ for both sides of a conflict:
Unchecked interests — such as national leaders who bear few of the costs of launching a war.
Intangible incentives — such as an intrinsic desire for revenge.
Uncertainty — such as both sides underestimating each other’s resolve to fight.
Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
Misperceptions — such as our inability to see the world through other people’s eyes.
In today’s interview, we walk through how each of the five explanations work and what specific wars or actions they might explain.
In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).
The interview also covers:
What Chris and Rob got wrong about the war in Ukraine
What causes might not fit into these five categories
The role of people’s choice to escalate or deescalate a conflict
How great power wars or nuclear wars are different, and what can be done to prevent them
How much representative government helps to prevent war
And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
I’ve felt like an imposter since my first year of university.
I was accepted to the university that I believed was well out of my league — my ‘stretch’ school. I’d gotten good grades in high school, but I’d never seen myself as especially smart: I wasn’t selected for gifted programmes in elementary school like some of my friends were, and my standardised test scores were in the bottom half of those attending my university.
I was pretty confident I got into the university because of some fluke in the system (my top hypothesis was that I was admitted as part of an affirmative action initiative) — and that belief stayed with me (and was amplified) during the decade that followed.
Throughout that decade, there was evidence at various points that I really was good at my work, but I could always come up with an explanation for why that evidence was unreliable.
For example, as an undergraduate, I was the only first-year student in my biology department to get a research internship at the Mayo Clinic — one of the most prestigious biomedical institutions in the US. But I felt I only got the internship because I’d met the right person at the right time, and tricked them into thinking I was smarter than I was by saying smart-sounding things.
Likewise, during my final year of university, I was given an award for being the top performer in my sociology department.
Blog post by Arden Koehler · Published April 19th, 2022
About the 80,000 Hours web team
80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.
We’ve had over 8 million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Community Survey.
Our articles are read by thousands, and are among the most important ways we help people shift their careers towards higher-impact options.
The role
As a writer, you would:
Research, outline, and write new articles for the 80,000 Hours website — e.g. new career reviews.
Rewrite or update older articles with new information and resources — e.g. about rapidly evolving global problems.
Generate ideas for new pieces.
Talk to experts and readers to help prioritise our new articles and updates.
Generally help grow the impact of the site.
Some of the types of pieces you could work on include:
This podcast highlighted Sam Bankman-Fried as a positive example of someone ambitiously pursuing a high-impact career. To say the least, we no longer endorse that. See our statement for why.
The show’s host, Rob Wiblin, has also released some personal comments on this episode and the FTX bankruptcy on The 80,000 Hours Podcast feed, which you can listen to here.
If you were offered a 100% chance of $1 million to keep for yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You'd be devastated if you lost, and barely happier if you won.
But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome — and so swing for the fences.
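To make the arithmetic concrete, here's a minimal sketch of the expected value comparison above (the numbers are just those from the thought experiment, and the code is purely illustrative):

```python
# Expected value = probability multiplied by the (dollar) value of the outcome.
p_win = 0.10  # the 10% chance in the risky option

# Money to keep for yourself
personal_safe = 1_000_000              # $1 million guaranteed
personal_risky = p_win * 15_000_000    # ≈ $1.5 million in expectation

# Money to donate
donation_safe = 1_000_000_000              # $1 billion guaranteed
donation_risky = p_win * 15_000_000_000    # ≈ $1.5 billion in expectation

# The risky option has the higher expected value in both cases. But extra personal
# millions matter far less to you than the first million, so playing it safe is sensible;
# for donations, the argument goes, each extra dollar does roughly as much good,
# so you should simply take whichever bet has the higher expected value.
print(f"personal: ${personal_safe:,.0f} vs ${personal_risky:,.0f} expected")
print(f"donation: ${donation_safe:,.0f} vs ${donation_risky:,.0f} expected")
```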
This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.
Added 30 November 2022: What I meant to refer to as totally rational in the above paragraph is thinking about the ‘expected value’ of one’s actions, not maximizing expected dollar returns as if you were entirely ‘risk-neutral’. See clarifications on what I (Rob Wiblin) think about risk-aversion here.
Despite that, Sam still drives a Corolla and sleeps on a beanbag, because the only reason he started FTX was to make money to give it away. In 2020, when he was 5% as rich as he is now, he was nonetheless the second biggest individual donor to Joe Biden’s general election campaign.
In today’s conversation, Sam outlines how at every stage in FTX’s development, he and his team were able to choose the high-risk path to maximise expected value — precisely because they weren’t out to earn money for themselves.
This year his philanthropy has kicked into high gear with the launch of the FTX Future Fund, which has the initial ambition of giving away hundreds of millions a year and hopes to soon escalate to over a billion a year.
The Fund is run by previous guest of the show Nick Beckstead, and embodies the same risk-loving attitude Sam has learned from entrepreneurship and trading on financial markets. Unlike most foundations, the Future Fund:
Is open to supporting young people trying to get their first big break
Makes applying for a grant surprisingly straightforward
Is willing to make bets on projects it completely expects to fail, just because they have positive expected value.
Their website lists both areas of interest and more concrete project ideas they are looking to support. The hope is these will inspire entrepreneurs to come forward, seize the mantle, and be the champions who actually make these things happen. Some of the project proposals are pretty natural, such as:
Create an 'epistemic appeals system' — a sort of for-hire fact checking organisation that builds credibility through a longstanding reputation for impartiality, transparency, and reliability
While these ideas may seem pretty random, they all stem from a particular underlying moral and empirical vision that the Future Fund has laid out.
In this conversation, we speak with Sam about the hopes he and the Fund have for how the long-term future of humanity might go incredibly well, the fears they hold about how it could go incredibly badly, and what levers they might be able to pull to slightly nudge us towards the former.
Listeners who want to launch an ambitious project to improve humanity’s future should not only listen to the episode, but also look at the full list of the kind of things Sam and his colleagues are hoping to fund, see if they’re inspired, and if so, apply to get the ball rolling.
On top of that we also cover:
How Sam feels now about giving $5 million to Biden’s general election campaign
His fears and hopes for artificial intelligence
Whether or not blockchain technology actually has useful real-world applications
What lessons Sam learned from some serious early setbacks
Why he fears the effective altruism community is too conservative
Why Sam is as authentic now as he was before he was a celebrity
And much more.
Note: Sam has donated to 80,000 Hours in the past
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
November 17 2022, 1pm GMT: This podcast highlighted Sam Bankman-Fried as a positive example of someone ambitiously pursuing a high-impact career. To say the least, we no longer endorse that. See our statement for why.
Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don’t, because it doesn’t.
Incredible though it might seem, according to today’s guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children’s lives play out once they’re adults.
Of course, kids do resemble their parents. But just as we probably can’t say it was attentive parenting that gave me my mother’s nose, perhaps we can’t say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can’t really distinguish the impact of one from the other.
But nature does offer us up a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that amount, and you’ve got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise.
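As a rough illustration of that logic (a minimal sketch with made-up correlation values, not figures from the episode), the standard twin comparison works like this:

```python
# Falconer's classic twin comparison, with purely illustrative (made-up) correlations.
r_identical = 0.60  # similarity of some adult outcome among identical twins (~100% shared genes)
r_fraternal = 0.35  # similarity among fraternal twins (~50% shared genes)

# The extra similarity of identical twins reflects the extra 50% of shared genes,
# so doubling the gap estimates the full effect of genetic inheritance.
heritability = 2 * (r_identical - r_fraternal)    # 0.50

# Similarity not explained by genes is attributed to the shared (family) environment...
shared_environment = r_identical - heritability   # 0.10

# ...and whatever variation remains is non-shared environment plus noise.
unexplained = 1 - r_identical                     # 0.40

print(heritability, round(shared_environment, 2), round(unexplained, 2))
```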
The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it’s differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment,” such as the friends one makes at school, take second place. Parenting style does matter for something, but it comes in a clear third.
You might think that these studies are accidentally recruiting parents who are all unusually competent, by including only the kind of people who respond to letters asking them to participate in a university study of twin behaviour. But in fact that effect is small, because many countries and hospitals have enrolled twins in this research almost by default, and academics can check on some kinds of outcomes using tax, death, and court records, which include almost everyone.
Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting.
First, for some adult outcomes, parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity).
Second, these are adult outcomes — parents can and do influence you quite a lot, so long as you’re young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot.
Third, this research only studies variation in parenting behaviour that was common among the families studied. The studies are just mute on anything that wasn’t actually done by many parents in their sample.
And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes. So a large enough change in one’s entire environment, say from Haiti to the United States, does matter, even if moving between families within the United States has modest effects.
Despite all that, the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview host Rob Wiblin interrogates whether Bryan can really be right, or whether the research he’s drawing on has taken a wrong turn somewhere.
And that’s just one topic we cover, some of the others being:
People’s biggest misconceptions about the labour market
Arguments against high levels of immigration
Whether most people actually vote based on self-interest
Whether philosophy should stick to common sense or depart from it radically
How to weigh personal autonomy against the possible benefits of government regulation
Bryan’s track record of winning 23 out of 23 bets about how the future would play out
And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era.
Russia’s invasion of Ukraine has changed all that.
According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a 'global catastrophic nuclear event' never fell as low as people like to think, and for some time has been on its way back up.
At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent.
If new funding sources are not identified to replace donors that are withdrawing (like the MacArthur Foundation), the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable.
While global poverty is on the decline and life expectancy increasing, the chance of a catastrophic nuclear event is probably trending in the wrong direction.
Joan points out that the New START treaty, which dramatically limits the number of warheads the US and Russia can deploy at one time, narrowly survived in 2021 due to the election of Joe Biden. But it will again require renewal in 2026, which may or may not happen, depending on whether the relationship between the two great powers can be repaired over the next four years.
Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises.
China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s and their relationship continues to oscillate from hostile to civil and back.
At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked.
In the interview, Joan discusses several steps that can be taken in the immediate term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of what she sees as vulnerable delivery systems, such as land-based silos.
In the bigger picture, NTI seeks to keep hope alive that a better system than deterrence through mutually assured destruction remains possible. The threat of retaliation does indeed make nuclear wars unlikely, but it necessarily means the system fails in an incredibly destructive way: with the death of hundreds of millions if not billions.
In the long run, even a tiny 1 in 500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century.
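A minimal sketch of the arithmetic behind that figure, assuming a constant, independent risk each year:

```python
# Compounding a small annual risk of nuclear war over a century.
annual_risk = 1 / 500   # 0.2% chance of a global catastrophic nuclear event per year
years = 100

# Probability of getting through all 100 years without catastrophe,
# assuming the annual risk is constant and independent year to year.
p_no_catastrophe = (1 - annual_risk) ** years

p_catastrophe = 1 - p_no_catastrophe
print(f"{p_catastrophe:.1%}")  # ≈ 18.1%
```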
Joan concedes that MAD was probably the best available system for preventing the use of nuclear weapons in 1950. But we’ve had 70 years of advances in technology since then that have opened up new possibilities, such as far more reliable surveillance than could have been dreamed up by Truman and Stalin. But MAD has been the conventional wisdom for so long that almost nobody is working on alternative paradigms.
In this conversation we cover all that, as well as:
How arms control treaties have evolved over the last few decades
Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy
Places listeners could work at or donate to
The Biden Nuclear Posture Review
How easily humanity might recover from a nuclear exchange
Implications for the use of nuclear energy
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
If someone said a global health and development programme was sustainable, participatory, and holistic, you’d have to guess that they were saying something positive. But according to today’s guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they’re at risk of being seriously overrated and applied where they don’t belong.
Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.
First, what do people mean by ‘sustainability’? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.
I buy my groceries from a supermarket, and I’m not under the illusion that one day I’ll be able to stop paying and still get everything I need for free. And there’s nothing wrong with this way of getting life’s necessities being ‘unsustainable’ — so long as I want groceries, I’ll keep paying for them.
Given that someone needs to keep paying, Karen tells us that in practice, ‘sustainability’ is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. While the concept of ‘sustainability’ sounds great, to say “We’re going to pass the cost of this programme on to a government funded by very poor people’s taxes” sounds at best ambiguous.
‘Participatory’ also sounds nice, and inasmuch as it means leaders are accountable to the people they’re trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves.
While that might be suitable in some situations, it’s hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.
Finally, making a programme ‘holistic’ could be smart, but as Karen lays out, it also has some major downsides. For one, it means you’re doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it’s hard to tell whether you’re making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful ‘holistic school health’ programme that, if continued, was going to cost 3.5 times the entire school’s budget.
Smallpox eradication was one of humanity's greatest health achievements, and its focus on one thing to the exclusion of all else made it the complete opposite of a holistic programme.
In today’s in-depth conversation, Karen Levy and I chat about the above, as well as:
Why it pays to figure out how you’ll interpret the results of an experiment ahead of time
The trouble with misaligned incentives within the development industry
Projects that don’t deliver value for money and should be scaled down
Whether governments typically pay for a project once outside funding is withdrawn
How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
Logistical challenges in reaching huge numbers of people with essential services
How Karen has enjoyed living in Kenya for several decades
Lessons from Karen’s many-decades career
The goals of Karen’s new project: Fit for Purpose
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell and Ryan Kessler · Transcriptions: Katy Moore
Russia’s invasion of Ukraine is devastating the lives of Ukrainians, and so long as it continues there’s a risk that the conflict could escalate to include other countries or the use of nuclear weapons. It’s essential that NATO, the US, and the EU play their cards right to ideally end the violence, maintain Ukrainian sovereignty, and discourage any similar invasions in the future.
But how? To pull together the most valuable information on how to react to this crisis, we spoke with Samuel Charap — a senior political scientist at the RAND Corporation, one of the US’s foremost experts on Russia’s relationship with former Soviet states, and co-author of Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.
Samuel believes that Putin views the alignment of Ukraine with NATO as an existential threat to Russia — a perhaps unreasonable view, but a sincere one nevertheless. Ukraine has been drifting further into Western Europe’s orbit and improving its defensive military capabilities, so Putin has concluded that if Russia wants to put a stop to that, there will never be a better time to act in the future.
Despite early successes holding off the Russian military, Samuel is sceptical that time is on the Ukrainian side. Though it won’t be able to create a puppet government Ukrainians view as legitimate, if committed to the task, Russia will likely gradually grind down Ukrainian resistance and take formal control of the country. If the war is to end before much of Ukraine is reduced to rubble, it will likely have to be through negotiation, rather than Russian defeat.
Many hope for Putin to be ousted from office, but Samuel cautions that he has enormous control of the Russian government and the media Russians consume, making that very unlikely in the near term. Furthermore, someone who successfully booted Putin from office is just as likely to be even more of an intransigent hardliner as they are to be a dove. In the meantime, loose talk of assassinating Putin could drive him to further reckless aggression.
The US policy response has so far been largely good, successfully balancing the need to punish Russia to dissuade large nations from bullying small ones in the future, while preventing NATO from being drawn into the war directly — which would pose a horrifying risk of escalation to a full nuclear exchange. The pressure from the general public to ‘do something’ might eventually cause national leaders to confront Russia more directly, but so far they are sensibly showing no interest in doing so.
However, use of nuclear weapons remains a low but worrying possibility. That could happen in various ways, such as:
NATO shoots down Russian planes to enforce a no-fly zone — a problematic idea in Samuel’s opinion.
An unintentional cycle of mutual escalation between Russia and NATO, perhaps starting with cyber attacks, or Russian bombs accidentally landing in NATO countries that neighbour Ukraine.
Putin ends up with his back against the wall and believes he can no longer win the war or defend Russia without using tactical nuclear weapons.
Putin decides to invade a country other than Ukraine.
Samuel is also worried that Russia may deploy chemical and biological weapons and blame it on the Ukrainians.
In Samuel’s opinion, the recent focus on the delivery of fighter jets to Ukraine is risky and not the key defence priority in any case. Instead, Ukraine could use more ground-to-air missiles to shoot Russian planes out of the sky.
Before war broke out, it’s possible Russia could have been satisfied if Ukraine followed through on the Minsk agreements and committed not to join NATO. Or it might not have, if Putin was committed to war, come what may. In any case, most Ukrainians found those terms intolerable.
At this point, the situation is even worse, and it's hard to see how an enduring ceasefire could be agreed upon. On top of the above, Russia is also demanding recognition that Crimea is part of Russia, and acceptance of the independence of the so-called Donetsk and Luhansk People's Republics. These conditions — especially the second — are entirely unacceptable to the Ukrainians. Hence the war continues, and could grind on for months until one side is sufficiently beaten down to compromise on their core demands.
Rob and Samuel discuss all of the above and also:
What are the implications if Sweden and/or Finland decide to join NATO?
What should NATO do now, and did it make any mistakes in the past?
What’s the most likely situation for us to be looking at in three months’ time?
Can Ukraine effectively win the war?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell · Transcriptions: Katy Moore
80,000 Hours provides research and support to help students and graduates switch into careers that effectively tackle the world’s most pressing problems.
Over one million people visit our website each year, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
The Internal Systems team
The Internal Systems team is here to build the organisation and systems that support 80,000 Hours to achieve its mission.
We oversee 80,000 Hours’ office, finances, and impact evaluation, as well as much of our fundraising, org-wide metrics, tech systems, HR, and recruiting.
Currently, we have two full-time staff (Brenton Mayer and Sashika Coxhead), some part-time staff, and receive support from CEA (our fiscal sponsor).
Role
This role would be excellent experience for someone who wants to build career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. taking on more management, and perhaps eventually being a Head of Operations or COO).
Your responsibilities will likely include:
Creating an outstanding office environment. You’ll hire and manage the team that oversees our beautiful central London office. Your team will be responsible for all the systems that keep the office running smoothly,