Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. We don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are struggling with anxiety or depression serious enough to disrupt their daily lives — but nowhere near 20% of people in their 20s have severe heart disease, cancer, or a comparable failure in a key organ of the body other than the brain.
From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.
So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?
Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.
In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:
How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
The “smoke detector principle” of why we experience so many false alarms along with true threats.
The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
Evolutionary theories on why we age and die.
And much more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.
They cover:
Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.
Whether it’s fine to continue with antidepressants and coffee during pregnancy.
What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.
How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.
The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.
Practical advice around managing the tradeoffs between career and family.
What to consider when deciding whether and when to have kids.
Relationship challenges after having kids, and the protective factors that help.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Blog post by Robert Wiblin · Published January 31st, 2024
We’re excited to announce that the boards of Effective Ventures US and Effective Ventures UK have approved our selection committee’s choice of Niel Bowerman as the new CEO of 80,000 Hours.
I (Rob Wiblin) was joined on the selection committee by Will MacAskill, Hilary Greaves, Simran Dhaliwal, and Max Daniel.
We want to thank Brenton Mayer, who has served as 80,000 Hours interim CEO since late 2022, for his dedication and thoughtful management. Brenton expressed enthusiasm about the committee’s choice, and he expects to take on the role of chief operations officer, where he will continue to work closely with Niel to keep 80,000 Hours running smoothly.
By the end of its deliberations, the selection committee agreed that Niel was the best candidate to be 80,000 Hours’ long-term CEO. We think Niel’s drive and attitude will help him significantly improve the organisation and shift its strategy to keep up with events in the world. We were particularly impressed by his ability to use evidence to inform difficult strategic decisions and lay out a clear vision for the organisation.
Niel was very forthcoming and candid with the committee about his weaknesses. His focus on getting frank feedback and using it to drive a self-improvement cycle really impressed the selection committee.
In today’s episode, their conversation continues, with Nathan diving deeper into:
What AI now actually can and can’t do — across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
How we need to learn to talk about AI more productively — particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, a divide that may be counterproductive for everyone.
Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’
The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
Preparing for coming societal impacts and potential disruption from AI.
Practical ways that curious listeners can try to stay abreast of everything that’s going on.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
Blog post by Niel Bowerman · Published January 23rd, 2024
The idea this week: developing skills and habits takes time, effort, and the right techniques.
At the start of a new year, we often reflect on how to improve and develop better habits. People often want to exercise more or become better at self-study. I, for one, wanted to consistently get to work earlier.
But actually making progress requires more than just wanting it — it takes a systematic approach. Doing this is a key part of succeeding at your current job, improving your career trajectory, and even just being more fulfilled generally. (Read more in our article on all the evidence-based advice we found on how to be more successful in any job.)
You want to take something that’s a problem in your life and find a solution that becomes second nature.
For example, for some people, getting to work at an early hour is just part of their normal routine — they barely have to think about it. But if that’s not the case for you – like it wasn’t for me – you’ll need to make a conscious change, and work on it until it becomes second nature.
But lots of things block us from forming these new habits and skills.
The key is closing the loop — get feedback about your problem, analyse why you haven’t adopted the habit yet, make a change, test it out, and repeat:
Blog post by Lauren Kuhns · Published January 8th, 2024
As we kick off 2024, we’re taking a moment to look back at our 2023 content.
We published a lot of pieces aimed at helping our readers have more impactful careers, including a completely updated career guide, our revamped advanced series, around 35 podcast episodes, dozens of blog posts, and a bunch of updates to our career reviews and problem profiles.
We’d like to highlight some of the new content that stands out to us:
Standout blog posts
How to cope with rejection in your career — Luisa Rodriguez, one of the hosts of The 80,000 Hours podcast, wrote this powerful personal piece about her experience with career rejection, the unexpected benefits of getting rejected, and helpful tips for dealing with it that have worked for her. If you have ever struggled with rejection, I think this piece might help you feel less alone.
My thoughts on parenting and having an impactful career — Michelle Hutchinson, the director of the one-on-one programme at 80,000 Hours, wrote this thoughtful reflection on her decision to become a parent, the effects of parenthood on her career and social impact, and the challenges and benefits of being a parent in a community of people trying to have an impactful career.
Some thoughts on moderation in doing good — in this post, 80,000 Hours founder Ben Todd addressed why moderation may be underrated by people trying to have a big social impact and how to avoid the pitfalls of extremism.
80,000 Hours runs a programme where subscribers to our newsletter can order a free, paperback copy of a book to be sent to them in the mail. Readers choose between getting a copy of our career guide, Toby Ord’s The Precipice, and Will MacAskill’s Doing Good Better.
This giveaway has been open to all newsletter subscribers since early 2022. The number of orders we get depends on the number of new subscribers that day, but in general, we get around 150 orders a day.
Over the past week, however, we received an overwhelming number of orders. The offer of the free book appears to have been promoted by some very popular posts on Instagram, which generated an unprecedented amount of interest for us.
While we’re really grateful that these people were interested in what we have to offer, we couldn’t handle the massive uptick in demand. We’re a nonprofit funded by donations, and everything we provide is free. We had budgeted to run the book giveaway projecting the demand would be in line with what it’s been for the past two years. Instead, we had more than 20,000 orders in just a few days — which we anticipated would run through around six months of the book giveaway’s budget.
We’ve now paused taking new orders, and we’re unsure when we’ll be able to re-open them.
Also, because of this large spike in demand, we had to tell many people who subscribed to our newsletter hoping to get a physical book that we’re not able to complete their order.
Blog post by Robert Wiblin · Published December 31st, 2023
Happy new year! We’re celebrating with a special podcast holiday release: our favourite highlights from each episode of the show that came out in 2023.
That’s 32 of our favourite ideas packed into one episode that’s so bursting with substance it might be more than the human mind can safely handle.
We are excited to share that 80,000 Hours has officially decided to spin out as a project from our parent organisations and establish an independent legal structure.
80,000 Hours is a project of the Effective Ventures group — the umbrella term for Effective Ventures Foundation and Effective Ventures Foundation USA, Inc., which are two separate legal entities that work together. It also includes the projects Giving What We Can, the Centre for Effective Altruism, and others.
We’re incredibly grateful to the Effective Ventures leadership and team and the other orgs for all their support, particularly in the last year. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.
And we deeply appreciate Effective Ventures’ support in our spin-out. They recently announced that all of the other organisations under their umbrella will likewise become their own legal entities; we’re excited to continue to work alongside them to improve the world.
Back in May, we investigated whether it was the right time to spin out of our parent organisations. We’ve considered this option at various points in the last three years.
There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects within Effective Ventures have grown, we concluded we can now best pursue our mission and goals independently. Effective Ventures leadership approved the plan.
Becoming our own legal entity will allow us to:
Match our governing structure to our function and purpose
Design operations systems that best meet our staff’s needs
Reduce interdependence with other entities that raises financial,
OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do this safely?
That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast. Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.
Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.
Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.
When he started on the GPT-4 red team, the model would do anything from diagnosing a skin condition to planning a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.
In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.
But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.
Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan could see at the time. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.
On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.
At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.
By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.
In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:
Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
Which AI applications we should be urgently rolling out, with less worry about safety.
Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
Whether AI capabilities are advancing faster than safety efforts and controls.
The costs and benefits of releasing powerful models like GPT-4.
Nathan’s view on the game theory of AI arms races and China.
Whether it’s worth taking some risk with AI for huge potential upside.
The need for more “AI scouts” to understand and communicate AI progress.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
Blog post by Cody Fenwick · Published December 15th, 2023
The idea this giving season: figuring out where to donate is tricky, but a few key tips can help.
There are lots of pressing problems in the world, and even more possible solutions. We mostly focus on careers, but donating to effective organisations tackling these problems — if you can — is another great way to help.
But how can you figure out where it’s best to donate?
Our article on choosing where to donate lays out how you can make this choice. First, you have to decide whether:
You want to defer to someone you think is trustworthy, shares your values, and has already evaluated charities. Just following their recommendations can save you work. (We discuss some options below.)
You want to do your own research instead, which might allow you to find unusually high-impact options matched to your specific values, plus improve your knowledge of effective giving.
You can also enter a donor lottery — learn more about them here.
If you decide to do your own research, you can use our article to figure out how much time you should spend. For example, we think young people might especially benefit from doing research since they’ll learn lessons about charity evaluation that they can apply for a long time in the future.
If you do your own research, we recommend you:
Decide which global problems you think are most pressing right now.
We’d guess Bohlin’s impact wasn’t quite that large. For one thing, seat belts already existed: a Y-shaped three-point seat belt, patented in 1951, already avoided the risks of internal injuries from simple lap belts. Bohlin’s innovation was doing this with just one strap, making it simple and convenient to use. For another thing, it seems likely that someone else would have come up with Bohlin’s design eventually.
Nevertheless, a simple estimate suggests that Bohlin saved hundreds of lives at the very least — incredible for such a simple piece of engineering.
Thanks to Jessica Wen and Sean Lawrence at High Impact Engineers for their help with this article. Much of the content is based on their website.
Also, having skills in this area means you’ll likely be highly paid, offering excellent options to earn to give.
Moreover, basic programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career.
What does a career using software and tech skills involve?
A career using these skills typically involves three steps:
Learn to code with a university course or self-study and then find positions where you can get great mentorship. (Read more about how to get started.)
Optionally, specialise in a particular area, for example, by building skills in machine learning or information security.
Apply your skills to helping solve a pressing global problem.
Skill by Benjamin Hilton · Last updated December 2023 · First published September 2023
What specialist knowledge is valuable?
Many highly specific areas of knowledge seem applicable to solving the world’s most pressing problems, especially risks posed by biotechnology and artificial intelligence.
In particular we’d highlight:
Subfields of biology relevant to pandemic prevention. Working on many of the possible technical solutions to reduce the risk of pandemics will require expertise in parts of biology. We’d particularly highlight synthetic biology, mathematical biology, virology, immunology, pharmacology, and vaccinology. This expertise can also be helpful for pursuing a biorisk-focused policy career. (Read more about careers to prevent catastrophic pandemics.)
AI hardware. Specialised hardware is a crucial input to the development of frontier AI systems. As a result, we expect expertise in AI hardware to become increasingly important to the governance of AI systems. (Read more about becoming an expert in AI hardware).
Many of the highest-impact people in history have been communicators and advocates of one kind or another.
Take Rosa Parks, who in 1955 refused to give up her seat to a white man on a bus, sparking a protest which led to a Supreme Court ruling that segregated buses were unconstitutional. Parks was a seamstress in her day job, but in her spare time she was involved with the civil rights movement. When Parks sat down on that bus, she wasn’t acting completely spontaneously: just a few months before she’d been attending workshops on effective communication and civil disobedience, and the resulting boycott was carefully planned by Parks and the local NAACP. After she was arrested, they used widely distributed fliers to launch a total boycott of buses in a city with 40,000 African Americans, while simultaneously pushing forward with legal action. This led to major progress for civil rights.
There are many ways to communicate ideas. One is social advocacy, like Rosa Parks. Another is more like being an individual public intellectual, who can either specialise in a mass audience (like Carl Sagan), or a particular niche (like Paul Farmer, a medical anthropologist who wrote about global health). Or you can learn skills in marketing and public relations and then work as part of a team or organisation to spread important ideas.
Why are communication skills valuable?
In the 20th century, smallpox killed around 400 million people — far more than died in all the century’s wars and political famines.
Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child who regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.
We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.
Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.
This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.
Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.
Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even where it’s already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?
With host Robert Wiblin, Lucia answers all those questions and more:
Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
How bad lead poisoning is in rich countries.
Why lead is still in aeroplane fuel.
How lead got put straight in food in Bangladesh, and a handful of people got it removed.
Why the enormous damage done by lead mostly goes unnoticed.
The other major sources of lead exposure aside from paint.
Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
China will likely play an especially influential role in determining the outcome of many of the biggest challenges of the next century. India also seems very likely to be important over the next few decades, and many other non-western countries — for example, Russia — are also major players on the world stage.
A lack of understanding and coordination between all these countries and the West means we might not tackle those challenges as well as we can (and need to).
So it’s going to be very valuable to have more people gaining real experience with emerging powers, especially China, and then specialising in the intersection of emerging powers and pressing global problems.
Why is experience with an emerging power (especially China) valuable?
The Chinese government’s spending on artificial intelligence research and development is estimated to be on the same order of magnitude as that of the US government.
As the largest trading partner of North Korea, China plays an important role in reducing the chance of conflict, especially nuclear conflict, on the Korean peninsula.
China is the largest emitter of CO2, accounting for 30% of the global total.
Skill by Benjamin Hilton · Last updated December 2023 · First published September 2023
Norman Borlaug was an agricultural scientist. Through years of research, he developed new, high-yielding, disease-resistant varieties of wheat.
It might not sound like much, but as a result of Borlaug’s research, wheat production in India and Pakistan almost doubled between 1965 and 1970, and formerly famine-stricken countries across the world were suddenly able to produce enough food for their entire populations. These developments have been credited with saving up to a billion people from famine, and in 1970, Borlaug was awarded the Nobel Peace Prize.
Not everyone can be a Norman Borlaug, and not every discovery gets adopted. Nevertheless, we think research can often be one of the most valuable skill sets to build — if you’re a good fit.
Suzy Deuster wanted to be a public defender, a career path that could help hundreds receive fair legal representation. But she realised that by shifting her focus to government work, she could improve the justice system for thousands or even millions. Suzy ended up doing just that from her position in the US Executive Office of the President, working on criminal justice reform.
This logic doesn’t just apply to criminal justice. For almost any global issue you’re interested in, roles in powerful institutions like governments often offer unique and high-leverage ways to address some of the most pressing challenges of our time.
Together, this suggests that building the skills needed to get things done in large institutions could give you a lot of opportunities to have an impact.
Skill by Benjamin Todd · Last updated December 2023 · First published September 2023
When most people think of careers that “do good,” the first thing they think of is working at a charity.
The thing is, lots of jobs at charities just aren’t that impactful.
Some charities focus on programmes that don’t work, like Scared Straight, which actually caused kids to commit more crimes. Others focus on ways of helping that, while thoughtful and helpful, don’t have much leverage, like knitting individual sweaters for penguins affected by oil spills (this actually happened!) instead of funding large-scale ocean cleanup projects.
While this penguin certainly looks all warm and cosy, we’d guess that knitting each sweater one-by-one wouldn’t be the best use of an organisation’s time.
But there are also many organisations out there — both for-profit and nonprofit — focused on pressing problems, implementing effective and scalable solutions, run by great teams, and in need of people.
If you can build skills that are useful for helping an organisation like this, it could well be one of the highest-impact things you can do.
In particular, organisations often need generalists able to do the bread and butter of building an organisation — hiring people, management, administration, communications, running software systems, crafting strategy, fundraising, and so on.
We call these ‘organisation-building’ skills. They can be high impact because you can increase the scale and effectiveness of the organisation you’re working at, while also gaining skills that can be applied to a wide range of global problems in the future (and make you generally employable too).