Article by Benjamin Todd · Last updated March 2023 · First published October 2021
Lots of people say they want to “make a difference,” “do good,” “have a social impact,” or “make the world a better place” — but they rarely say what they mean by those terms.
By getting clearer about your definition, you can better target your efforts. So how should you define social impact?
Over two thousand years of philosophy have gone into that question. We’re going to try to sum up that thinking; introduce a practical, rough-and-ready definition of social impact; and explain why we think it’s a good definition to focus on.
This is a bit ambitious for one article, so to the philosophers in the audience, please forgive the enormous simplifications.
A simple definition of social impact
If you just want a quick answer, here’s the simple version of our definition (a more philosophically precise one — and an argument for it — follows below):
Your social impact is given by the number of people whose lives you improve and how much you improve them, over the long term.
This shows that you can increase your impact in two ways: by helping more people over time, or by helping the same number of people to a greater extent (pictured below).
We say “over the long term” because you can help more people either by helping a greater number now, or taking actions with better long-term effects.
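Since the definition is just multiplication, a toy sketch (with made-up numbers, purely for illustration) makes the two routes to more impact concrete:

```python
# Toy sketch of the article's definition, with made-up numbers:
# total impact = number of people helped x how much each is helped.
def social_impact(people_helped: int, improvement_each: float) -> float:
    """Return total impact as people helped times average improvement."""
    return people_helped * improvement_each

# Route 1: help more people to the same extent.
more_people = social_impact(people_helped=200, improvement_each=1.0)

# Route 2: help the same number of people to a greater extent.
more_deeply = social_impact(people_helped=100, improvement_each=2.0)

print(more_people, more_deeply)  # 200.0 200.0 -- both routes double a baseline of 100
```

Either lever doubles impact relative to helping 100 people by one unit each; real-world "units of improvement" are of course far harder to measure than this sketch suggests.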
We’ve released our review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.
You can find our previous evaluations here. We have also updated our mistakes page.
80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.
Over the past two years, three of four programmes grew their engagement 2-3x:
Podcast listening time in 2022 was 2x higher than in 2020
Job board vacancy clicks in 2022 were 3x higher than in 2020
The number of one-on-one team calls in 2022 was 3x higher than in 2020
Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.
From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.
Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.
The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group.
I’m not just concerned about AI going wrong in minor ways: I think there’s a small but real chance of an existential catastrophe caused by AI within the next 100 years.
A large language model is a machine learning algorithm that is basically trained to continue whatever text it is given as input. It writes an article from a headline or continues a poem from the first few lines.
Blog post by Alex Lawsen · Published February 24th, 2023
80,000 Hours is considering hiring a headhunting lead to build out the headhunting service we provide to other organisations. They will work with the Director of 1-on-1 to set and execute a strategy which uses our team of advisors’ unique network to find and recommend talented and altruistic candidates for high impact roles.
We’re looking for someone who:
Has multiple years of experience in project management, research, or strategy; this could include roles in consulting, product management, or at early-stage startups or nonprofits.
Enjoys thinking about and working with different people in a variety of contexts, including maintaining relationships with major stakeholders, and developing models of people’s strengths to match them to specific roles.
Has a strong understanding of 80,000 Hours’ focus areas.
This role is based in London, UK. The salary will vary based on your skills and experience, but the starting salary for someone with five years of relevant experience would be in excess of £70,000 per year.
To express interest in this role, please complete this form.
About 80,000 Hours
80,000 Hours’ mission is to get talented people working on the world’s most pressing problems. The effective altruism community, which we are part of, is growing in reach. But how do we make sure people are pursuing the right kinds of work in order to turn all those resources into long-term impact? This is the problem 80,000 Hours is trying to solve.
Blog post by Jenna Peters · Published February 24th, 2023
80,000 Hours is considering hiring someone to work on building tech-based systems for the 1on1 team.
We’re looking for someone with an operations mindset who is excited about learning new tech tools and furthering 80,000 Hours’ mission.
Right now, we are open to both full-time and part-time applicants.
We are also currently open to both London-based (preferred) and remote applicants. We can sponsor visas.
Starting salary for a full-time position: ~£50,000-65,000, varies based on experience, location, and other factors.
Why 80,000 Hours?
80,000 Hours’ mission is to get talented people working on the world’s most pressing problems. The effective altruism community, which we are part of, is growing in reach. But how do we make sure people are pursuing the right kinds of work in order to turn all those resources into long-term impact? This is the problem 80,000 Hours is trying to solve.
We’ve had over eight million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have now told us that they’ve significantly changed their career plans due to our work. 80,000 Hours is also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
The 1on1 team at 80,000 Hours takes people from being “interested in the ideas and wanting to help” to “actually working to solve pressing world problems.”
Blog post by Benjamin Todd · Published February 14th, 2023
In a 2013 paper, Dr Toby Ord reviewed data compiled in the second edition of the World Bank’s Disease Control Priorities in Developing Countries, which compared about 100 health interventions in developing countries in terms of how many years of illness they prevent per dollar. He discovered some striking facts about the data:
The best interventions were around 10,000 times more cost-effective than the worst, and around 50 times more cost-effective than the median.
If you picked two interventions at random, on average the better one would be 100 times more cost-effective than the other.
The distribution was heavy-tailed, and roughly lognormal. In fact, it almost exactly followed the 80/20 rule — that is, implementing the top 20% of interventions would do about 80% as much good as implementing all of them.
The differences between the very best interventions were larger than the differences between the typical ones, so it’s more important to go from ‘very good’ to ‘very best’ than from ‘so-so’ to ‘very good.’
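The shape Ord describes can be sketched numerically. The simulation below is a hypothetical illustration — the lognormal parameters are made up, not fitted to the Disease Control Priorities data — but it shows how a heavy-tailed distribution naturally produces huge best-to-worst ratios and an 80/20-like concentration of value in the top interventions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 100 hypothetical cost-effectiveness values from a lognormal
# distribution (sigma chosen for illustration, not fitted to real data).
values = rng.lognormal(mean=0.0, sigma=2.0, size=100)
values.sort()

best_vs_worst = values[-1] / values[0]
best_vs_median = values[-1] / np.median(values)

# Share of the total good done by the top 20% of interventions.
top20_share = values[-20:].sum() / values.sum()

print(f"best / worst:  {best_vs_worst:,.0f}x")
print(f"best / median: {best_vs_median:,.0f}x")
print(f"top 20% share: {top20_share:.0%}")
```

With a spread this wide, the top 20% of simulated interventions typically account for the large majority of the total benefit, mirroring the 80/20 pattern Ord found in the real data.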
He published these results in The Moral Imperative towards Cost-Effectiveness in Global Health, which became one of the papers that started the effective altruism movement. (Note that Ord is an advisor to 80,000 Hours.)
This data appears to have radical implications for people interested in doing good in the world; namely, by working on one of the best interventions in global health, you could do far more good than by working on a typical one.
In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.
It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress.
But today’s guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.
While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn’t believe any of the arguments for that conclusion pass muster. If he’s right, a counterfactual history where slavery remains widespread in 2023 isn’t so far-fetched.
Slavery was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there’s evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s.
That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there’s only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we’d expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail.
Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and crush opposition to it with violence wherever necessary.
Think of coal or oil today: we know that climate change is likely to cause huge harms, and we know that our coal and oil consumption contributes to climate change. But just believing that something is wrong doesn’t necessarily mean humanity stops doing it. We continue to use coal and oil because our whole economy is oriented around their use and we see it as too hard to stop.
Just as coal and oil are fundamental to the world economy now, for millennia slavery was deeply baked into the way the rich and powerful stayed rich and powerful, and it required a creative leap to imagine it being toppled.
More generally, mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour?
In this episode, Christopher describes the unique and peculiar set of political, social and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off.
Some disagree with Christopher, arguing that abolitionism was a natural consequence of the industrial revolution, which reduced Great Britain’s need for human labour, among other changes — and that abolitionism would therefore have eventually taken off wherever industrialization did. But as we discuss, Christopher doesn’t find that reply convincing.
If he’s right and the abolition of slavery was in fact contingent, we shouldn’t expect moral values to keep improving just because humanity continues to become richer. We might have to be much more deliberate than that if we want to ensure we keep moving moral progress forward.
We also discuss:
Various instantiations of slavery throughout human history
Signs of antislavery sentiment before the 17th century
The role of the Quakers in early British abolitionist movement
Attitudes to slavery in other religions
The spread of antislavery in 18th century Britain
The importance of individual “heroes” in the abolitionist movement
Arguments against the idea that the abolition of slavery was contingent
Whether there have ever been any major moral shifts that were inevitable
Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore
The question this week: is the world getting better or worse?
Three ways the world’s getting better
1. Poverty has decreased.
Lots of stats about trends in the world – even ones that seem good to some people – are complicated to evaluate overall.
But here’s a long-term trend, based on solid data, that seems uncontroversially good:
Living in extreme poverty is exceedingly difficult. And it’s not just the share of the population in extreme poverty that’s fallen. Since 1990, the absolute number has fallen too.
If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.
But today’s guest Athena Aktipis says that the opposite of cancer is us: a functional multicellular body whose cells cooperate effectively to keep it working.
If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead.
As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:
Cells will proliferate when they shouldn’t.
Cells won’t die when they should.
Cells won’t engage in the kind of division of labour that they should.
Cells won’t do the jobs that they’re supposed to do.
Cells will monopolise resources.
And cells will trash the environment.
When we think about animals in the wild, or even the bacteria living inside our bodies, we understand that they’re facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics.
We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.
Incredibly, the evolution by natural selection that takes place just over the course of a cancer’s progression can easily outpace all of the evolutionary change we have undergone since Homo sapiens came about.
Here’s a quote from Athena:
So you have to go and kind of put yourself on a different spatial scale and time scale, and just shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we’re going to map it onto anything like what we experience, a day is at least 10 years for them, right?
So it’s a very, very different way of thinking. Then once you shift to that, you’re like, “Oh, wow, there’s so much that could be happening in terms of adaptation inside the body, how cells are actually evolving inside the body over the course of our lifetimes.” That shift just opens up all this potential for using evolutionary approaches in adaptationist thinking to generate hypotheses that then you can test.
You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss:
Cheating within cells themselves
Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
Whether it’s too out-there to think of humans as engaging in cancerous behaviour.
Why our anti-contagious-cancer mechanisms are so successful
Why elephants get deadly cancers less often than humans, despite having way more cells
When a cell should commit suicide
When the human body deliberately produces tumours
The strategy of deliberately not treating cancer aggressively
Superhuman cooperation
And much more
And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including:
Staying happy while thinking about the apocalypse
Practical steps to prepare for the apocalypse
And whether a zombie apocalypse is already happening among Tasmanian devils
And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Milo McGuire
Video editing: Ryan Kessler
Transcriptions: Katy Moore
80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.
We’ve had over 10 million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Community Survey.
Our articles are read by thousands, and are among the most important ways we help people shift their careers towards higher-impact options.
The role
As a content associate, you would:
Support the 80,000 Hours web team flexibly across a range of articles and projects.
Proofread 80,000 Hours articles before release, suggest style improvements, and check for errors.
Upload new articles and make changes to the site.
Ensure that our newsletters are sent out error-free and on time to the over 250,000 people on our mailing list.
Provide analytical support for the team, improving our ability to use data to measure and increase our impact.
Manage the gathering of feedback on our website from both readers and subject matter experts.
Generate ideas for new pieces.
Generally help grow the impact of the site.
Some of the types of pieces you could work on include:
When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.
We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.
Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.
I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:
Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.
Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.
Blog post by Habiba Islam · Published January 19th, 2023
I think it’s a good idea to consider how you’re feeling about your career each year. At least, intellectually I think it’s good. In practice, I find it really hard. Compared to others I know, I’m not as naturally drawn to personal reflection and goal-setting. I intended to reflect on my own career over the festive period… and ended up bailing because I found it too stressful.
But it is important! Without making time to check in on the big career questions, you might stay too long at a job, miss opportunities for doing more good, or fail to push yourself to grow — I’ve certainly been there before.
So I suggest doing a career review this January — but committing to a realistic volume of work. You can start small. You can also try getting help — ask a friend to act as an “accountability buddy” or apply to talk one-on-one with someone from 80,000 Hours.
I’m committing to do it too this month — that’s one of the reasons I’m writing this newsletter!
Here are some of our tools and resources that you could use at whatever level of detail works for you:
Blog post by Cody Fenwick · Published January 5th, 2023
As 2023 gets underway, we’re taking a look back at the content we produced in 2022 and highlighting some particular standouts.
We published a lot of new articles and podcasts to help our readers have impactful careers — below are some of our favourite pieces from the year.
Standout posts and articles
My experience with imposter syndrome — and how to (partly) overcome it
80,000 Hours team member Luisa Rodriguez wrote this powerful and insightful piece on a challenge many people face when trying to have an impactful career. In it, she describes clearly what it’s like to have imposter syndrome from her own first-hand experience and provides a lot of helpful advice and guidance on how to manage it. I think a lot of people will benefit from reading it.
Know what you’re optimising for
Alex Lawsen, one of 80,000 Hours’ advisors, has noticed that people often fall into the trap of trying to optimise the wrong things — like students who spend so much time worrying that their homework is neatly written, rather than actually understanding and learning from the material. The piece offers practical advice for overcoming this issue.
Podcast by Keiran Harris · Published December 29th, 2022
America aims to avoid nuclear war by relying on the principle of ‘mutually assured destruction,’ right? Wrong. Or at least… not officially.
As today’s guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official ‘OPLANs’ (military operation plans), the US is committed to ‘dominating’ in a nuclear war with Russia. How would they do that? “That is redacted.”
We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.
As Jeffrey tells it, ‘mutually assured destruction’ was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn’t it still the de facto reality? Yes and no.
Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US’ plan to prevail in a nuclear war and conclude that “it’s freaking madness.” They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won’t use the weapons.
But Jeffrey thinks that’s a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It’s what the generals and admirals have all prepared for.
What matters is the ‘not calm moment’: the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There are only minutes to decide.
Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn’t want to take because of how information and options were processed and presented to them. In the heat of the moment, it’s natural to reach for the plan you’ve prepared — however mad it might sound.
In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:
Why inter-service rivalry is one of the biggest constraints on US nuclear policy
Two times the US sabotaged nuclear nonproliferation among great powers
How his field uses jargon to exclude outsiders
How the US could prevent the revival of mass nuclear testing by the great powers
Why nuclear deterrence relies on the possibility that something might go wrong
The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow them to have the most missiles
The problems that arise when you won’t talk to people you think are evil
Why missile defences are politically popular despite being strategically foolish
How open source intelligence can prevent arms races
And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
You can send money directly to the world’s poorest people with minimal overhead via GiveDirectly.
And that’s just a start. GiveWell estimates that through donating to its top charities (focused on extremely cost-effective, evidence-backed public health interventions), your donation will go 5–10 times further than a direct cash transfer.
And we’d guess that if you target donations towards effective organisations tackling the world’s most pressing problems, you can do even more good.
Article by The 80,000 Hours team · Last updated December 2022 · First published November 2016
If you want to make a difference, and are happy to give toward wherever you think you can do the most good (regardless of cause area), how do you choose where to donate? This is a brief summary of the most useful tips we have.
How to choose an effective charity First, plan your research
One big decision to make is whether to do your own research or delegate your decision to someone else. Below are some considerations.
If you trust someone else’s recommendations, you can defer to them.
If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations.
But it can be better to do your own research if any of these apply to you:
You think you might find something higher impact according to your values than even your best advisor would find (because you have unique values, good research skills, or access to special information — e.g. knowing about a small project a large donor might not have looked into).
You think you might be able to productively contribute to the broader debate about which charities should be funded (producing research is a public good for other donors).
You want to improve your knowledge of effective altruism and charity evaluation.
Consider entering a donor lottery.
A donor lottery allows you to pool your donation with other small donors, with one donor selected at random to choose where the combined pot goes.
Our show is mostly about the world’s most pressing problems and what you can do to solve them. But what’s the point of hosting a podcast if you can’t occasionally just talk about something fascinating with someone whose work you appreciate?
So today, just before the holidays, we’re sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:
Can you communicate faster in some languages than others, or is there some constraint that prevents that?
Does learning a second or third language make you smarter, or not?
Can a language decay and get worse at communicating what people want to get across?
If children aren’t taught any language at all, how many generations does it take them to invent a fully fledged one of their own?
Did Shakespeare write in a foreign language, and if so, should we translate his plays?
How much does the language we speak really shape the way we think?
Are creoles the best languages in the world — languages that ideally we would all speak?
What would be the optimal number of languages globally?
Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
Should we bother to teach foreign languages in UK and US schools?
Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
Will AI models speak a language of their own in the future, one that humans can’t understand, but which better serves the tradeoffs AI models need to make?
We then put some of these questions to the large language model ChatGPT, asking it to play the role of a linguistics professor at Columbia University.
And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full interview.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
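To make “predict the next word” concrete, here is the crudest possible version of the idea — a bigram counter over a made-up toy corpus. (Real models use neural networks trained on internet-scale text, not word counts; this sketch only illustrates the prediction task itself.)

```python
from collections import Counter, defaultdict

# A made-up toy corpus; real models train on a large fraction of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

A neural language model does the same job — assign probabilities to possible next words — but it generalises to sequences it has never seen, rather than merely looking up counts.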
But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?
Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and to develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed, and are ultimately given power in society.
One way to think about ‘understanding’ is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.
However, as Richard explains, another way to think about ‘understanding’ is as a functional matter. If you really understand an idea, you’re able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.
One experiment conducted by AI researchers suggests that language models have some of this kind of understanding.
If you ask any of these models what city the Eiffel Tower is in and what else you might do on a holiday to visit the Eiffel Tower, they will say Paris and suggest visiting the Palace of Versailles and eating a croissant.
One would be forgiven for wondering whether this might all be accomplished merely by memorising word associations in the text the model has been trained on. To investigate this, the researchers found the part of the model that stored the connection between ‘Eiffel Tower’ and ‘Paris,’ and flipped that connection from ‘Paris’ to ‘Rome.’
If the model just associated some words with one another, you might think that this would lead it to now be mistaken about the location of the Eiffel Tower, but answer other questions correctly. However, this one flip was enough to switch its answers to many other questions as well. Now if you ask it what else you might visit on a trip to the Eiffel Tower, it will suggest visiting the Colosseum and eating pizza, among other changes.
Another piece of evidence comes from the way models are prompted to give responses to questions. Researchers have found that telling models to talk through problems step by step often significantly improves their performance, which suggests that models are doing something useful with that extra “thinking time”.
Richard argues, based on this and other experiments, that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.
We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck — or at least something sufficiently close to a duck that it doesn’t matter.
In today’s conversation, host Rob Wiblin and Richard discuss the above, as well as:
Could speeding up AI development be a bad thing?
The balance between excitement and fear when it comes to AI advances
Why OpenAI focuses its efforts where it does
Common misconceptions about machine learning
How many computer chips it might require to be able to do most of the things humans do
How Richard understands the ‘alignment problem’ differently than other people
Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
What work to positively shape the development of AI Richard is and isn’t excited about
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore
In this episode of 80k After Hours, Rob Wiblin interviews Marcus Davis about Rethink Priorities.
Marcus is co-CEO there, in charge of their animal welfare and global health and development research.
They cover:
Interventions to help wild animals
Aquatic noise
Rethink Priorities strategy
Mistakes that RP has made since it was founded
Careers in global priorities research
And the most surprising thing Marcus has learned at RP
Who this episode is for:
People who want to learn about Rethink Priorities
People interested in a career in global priorities research
People open to novel ways to help wild animals
Who this episode isn’t for:
People who think global priorities research sounds boring
People who want to host very loud concerts under the sea
Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore
Blog post by Benjamin Todd · Published December 9th, 2022
What actually is effective altruism?
Effective altruism isn’t about any particular way of doing good, like AI alignment or distributing malaria nets. Rather, it’s a way of thinking.
Last summer, I wrote a new introduction to effective altruism for effectivealtruism.org. In it, I tried to sum up the effective altruism way of thinking in terms of four values. (I wrote this newsletter before FTX collapsed, but maybe that makes it even more important to reiterate the core values of EA.)
Prioritisation. Resources are limited, so we have to make hard choices between potential interventions. While helping 10 people might feel as satisfying as helping 100, those extra 90 people really matter. And it turns out that some ways of helping achieve dramatically more than others, so it’s vital to try to compare the ways we might help in terms of scale and effectiveness, even if only roughly.
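The arithmetic behind that point is simple to sketch. The figures below are invented purely for illustration, not real cost-effectiveness estimates:

```python
# Toy prioritisation arithmetic with made-up numbers: with a fixed
# budget, compare interventions by people helped per dollar, not by
# how satisfying each one feels.
budget = 10_000  # dollars

cost_per_person = {
    "intervention A": 500,  # dollars per person helped (hypothetical)
    "intervention B": 50,   # dollars per person helped (hypothetical)
}

for name, cost in cost_per_person.items():
    print(f"{name}: helps {budget // cost} people")
# The same budget helps 10x as many people via B -- it's this ratio
# between options, not the absolute numbers, that makes comparing
# interventions worthwhile.
```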
Impartial altruism. It’s reasonable and good to have special concern for one’s own family, friends, life, etc. But when trying to do good in general, we should give everyone’s interests equal weight — no matter where or even when they live. People matter equally. And we should also give due weight to the interests of nonhumans.
Open truth-seeking. Rather than starting with a commitment to a certain cause, consider many different ways to help and try to find the best ones you can. Put serious time into deliberation and reflection,