Blog post by Cody Fenwick · Published April 28th, 2023
Information security could be a top option for people looking to have a high-impact career.
This might be a surprising claim — information security is a relatively niche field, and it doesn’t typically appear on canonical lists of do-gooder careers.
But we think there’s an unusually strong case that information security skills (which allow you to protect against unauthorised use, hacking, leaks, and tampering) will be key to addressing problems that are extremely important, neglected, and tractable. We now rank this career among the highest-impact paths we’ve researched.
In the introduction to our recently updated career review of information security, we discuss how poor information security decisions may have played a decisive role in the 2016 US presidential campaign. If an organisation is big and influential, it needs good information security to ensure that it functions as intended. This is true whether it’s a political campaign, a major corporation, a biolab, or an AI company.
That’s because hackers and cyberattacks — from a range of actors with varying motives — could try to steal crucial information, such as instructions for making a super-virus or the details of an extremely powerful AI model.
In this episode of 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.
They cover:
Keiran’s views on free will, and how he came to hold them
What it’s like not experiencing sustained guilt, shame, and anger
Whether Luisa would become a worse person if she felt less guilt and shame, specifically whether she’d work fewer hours, or donate less money, or become a worse friend
Whether giving up guilt and shame also means giving up pride
The implications for love
The neurological condition ‘Jerk Syndrome’
And some practical advice on feeling less guilt, shame, and anger
Who this episode is for:
People sympathetic to the idea that free will is an illusion
People who experience tons of guilt, shame, or anger
People worried about what would happen if they stopped feeling tons of guilt, shame, or anger
Who this episode isn’t for:
People strongly in favour of retributive justice
Philosophers who can’t stand random non-philosophers talking about philosophy
Non-philosophers who can’t stand random non-philosophers talking about philosophy
Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Milo McGuire · Transcriptions: Katy Moore
Blog post by Cody Fenwick · Published April 21st, 2023
COVID-19 has been devastating for the world. While people debate how the response could’ve been better, it should be easy to agree that we’d all be better off if we could stop any future pandemic before it occurs. But we’re still not taking pandemic prevention very seriously.
A recent report in The Washington Post highlighted one major danger: some research on potential pandemic pathogens may actually increase the risk, rather than reduce it.
Back in 2017, we talked about what we thought were several warning signs that something like COVID might be coming down the line. It’d be a big mistake to ignore these kinds of warning signs again.
It seems unfortunate that so much of the discussion of the risks in this space is backward-looking. The news has been filled with commentary and debates about the chances that COVID accidentally emerged from a biolab or that it crossed over directly from animals to humans.
We’d appreciate a definitive answer to this question as much as anyone, but there’s another question that matters much more yet gets asked much less:
What are we doing to reduce the risk that the next dangerous virus — which could come from an animal, a biolab, or even a bioterrorist attack — causes a pandemic even worse than COVID-19?
Blog post by Benjamin Todd · Published April 18th, 2023
In career decisions, we advise that you don’t aim for confidence — aim for a stable best guess.
Career decisions have a big impact on your life, so it’s natural to want to feel confident in them.
Unfortunately, you don’t always get this luxury.
For years, I’ve faced the decision of whether to focus more on writing, organisation building, or something else. And despite giving it a lot of thought, I’ve rarely felt more than 60% confident in one of the options.
How should you handle these kinds of situations?
The right response isn’t just to guess, flip a coin, or “follow your heart.”
It’s still worth identifying your key uncertainties, and doing your research: speak to people, do side projects, learn about each path, etc.
Sometimes you’ll quickly realise one answer is best. If we plot your confidence against how much research you’ve done, it’ll look like this:
But sometimes that doesn’t happen. What then?
Stop your research when your best guess stops changing.
That might look more like this:
This can be painful. You might only be 51% confident in your best guess, and it really sucks to have to make a decision when you feel so uncertain.
But certainty is not always achievable. You might face questions that both (i) are important but (ii) can’t realistically be resolved — which I think is the situation I faced.
Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.
But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you’re doing as much as you think you should makes it hard to focus and get things done. So now you’re performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.
This is the disastrous cycle today’s guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.
Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on “doing the most good you can,” Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.
But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it’s their goal.
Tim has treated hundreds of clients with all sorts of mental health challenges. But in today’s conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he’s most enthusiastic about: cognitive behavioural therapy.
As Tim stresses, perfectionism isn’t the same as being perfect, or simply pursuing excellence. What’s most distinctive about perfectionism is that a person’s standards don’t vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.
It’s a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.
Untreated, perfectionism might not cause problems for many years — it might even seem positive, providing a source of motivation to work hard. But it’s hard to feel truly happy and secure, and free to take risks, when we’re just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that’s hard to shake.
But there’s hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you’re not perfect.
In today’s extensive conversation, Tim and Rob cover:
How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality
What leads people to adopt a perfectionist mindset
The pros and cons of perfectionism
How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this
What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress
Experiments to test whether one’s core beliefs (‘I need to be perfect to be valued’) are true
Using exposure therapy to treat phobias
How low-self esteem and imposter syndrome are related to perfectionism
Stoicism as an approach to life, and why Tim is enthusiastic about it
How the Stoic approach to what we can and can’t control can make it far easier to stay calm
What the Stoics do better than utilitarian philosophers and vice versa
What’s good about being guided by virtues as opposed to pursuing good consequences
How to decide which are the best virtues to live by
What the ancient Stoics got right from our point of view, and what they got wrong
And whether Stoicism has a place in modern mental health practice.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Simon Monsour and Ben Cordell · Transcriptions: Katy Moore
If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no.
Today’s guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting.
In reality you don’t want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment.
Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we’re familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one.
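This convexity can be sketched with a toy damage function. The quadratic form below is purely illustrative — it is an assumption for the sake of the sketch, not Johannes’s model or a real climate damage function:

```python
# Toy illustration: if damage grows faster than linearly with warming,
# each extra degree does more harm than the one before it.
# The quadratic exponent is an illustrative assumption, not a climate model.

def damage(temperature_rise_c: float, exponent: float = 2.0) -> float:
    """Illustrative damage index as a convex function of warming."""
    return temperature_rise_c ** exponent

# Marginal damage of each additional degree of warming
for degree in range(1, 5):
    marginal = damage(degree) - damage(degree - 1)
    print(f"Degree {degree}: marginal damage = {marginal}")
```

Because the function is convex, each additional degree does more damage than the one before it — which is why avoiding a tonne of carbon matters most in the scenarios where warming runs highest.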
In short: we’re uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world.
That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference — wind, solar, and electric cars — don’t succeed nearly as much as we are currently hoping and expecting. For one reason or another, they must have hit a roadblock, and we continued to burn a lot of fossil fuels.
In such an imaginable future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don’t work out?
Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage.
If you’re optimistic about renewables, as Johannes is, then that’s all the more reason to relax about scenarios where they work as planned, and focus one’s efforts on the possibility that they don’t.
To Johannes, another crucial thing to observe is that reducing local emissions in the near term is probably negatively correlated with one’s actual full impact. How can that be?
If you want to reduce your carbon emissions by a lot and soon, you’ll have to deploy a technology that is mature and being manufactured at scale, like solar and wind.
But the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn’t, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come.
And Johannes notes that in terms of speeding up technological advances and cost reductions, a million dollars spent on a very early-stage technology — one with few, if any, customers — packs a much bigger punch than buying a million dollars’ worth of something customers are already spending $100 billion on per year.
For instance, back in the early 2000s, Germany subsidised the deployment of solar panels enormously. This did little to reduce carbon emissions in Germany at the time, because the panels were very expensive and Germany is not very sunny. But the programme did a lot to drive commercial R&D and increase the scale of panel manufacturing, which drove down costs and went on to increase solar deployments all over the world. That programme is long over, but continues to have impact by prompting solar deployments today that wouldn’t be economically viable if Germany hadn’t helped the solar industry during its infancy decades ago.
In today’s extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as:
Retooling newly built coal plants in the developing world
Specific clean energy technologies like geothermal and nuclear fusion
Possible biases among environmentalists and climate philanthropists
How climate change compares to other risks to humanity
In what kinds of scenarios future emissions would be highest
In what regions climate philanthropy is most concentrated and whether that makes sense
Attempts to decarbonise aviation, shipping, and industrial processes
The impact of funding advocacy vs science vs deployment
Lessons for climate change focused careers
And plenty more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ryan Kessler · Transcriptions: Katy Moore
Career review by Cody Fenwick · Last updated April 2023 · First published September 2022
Why journalism could be a high-impact career
Some of the most promising ways to have a positive impact with a career in journalism include:
Encouraging the adoption of good policies or discouraging the adoption of bad policies
A single article or reporter is unlikely to be solely responsible for a given policy change, but they can play a significant role in influential coverage.
Acting as a check on bad or dangerous actors in the public arena
Public officials and figures can be forced out of their positions as a result of news reporting, and fear of exposure might have a chilling effect on bad acts.
Inspiring readers to take specific high-impact actions, like making donations or changing their careers to work on pressing problems
Supporting social or political movements that are trying to do good — we’re especially excited about journalism that informs people about the ideas of the effective altruism community
Also, you can potentially strengthen ideas and communities you agree with by subjecting them to analysis and criticism.
Instilling better reasoning skills in readers — often by acting as a model — and keeping the public informed to promote good decision-making
Positively shaping the discourse to better prioritise major problems and solutions
Blog post by Cody Fenwick · Published March 31st, 2023
Everyone’s suddenly talking a lot about artificial intelligence — and we have many helpful resources for getting up to speed.
With the release of GPT-4, Bing, DALL-E, Claude, and many other AI systems, it can be hard to keep track of all the latest developments in artificial intelligence. It can also be hard to keep sight of the big picture: what does this emerging technology actually mean for the world?
This is a huge topic — and a lot is still unknown. But at 80,000 Hours, we’ve been interested in and concerned about AI for many years, and we’ve researched the issue extensively. Now, even major media outlets are taking seriously the kinds of things we’ve been worried about. Given all the excitement in this area, we wanted to share a round-up of some of our top content and findings about AI from recent years.
Some of our top articles on AI:
Preventing an AI-related catastrophe — Have you ever wondered why some people think advanced AI could pose an existential threat? This problem profile explains the case for AI risk — as well as some important objections.
Anonymous advice on increasing AI capabilities — We asked knowledgeable people in the field for their views on whether people who want to reduce AI risk should work in roles that could further AI progress.
When the 19th-century amateur scientist Eunice Newton Foote filled glass cylinders with different gases and exposed them to sunlight, she uncovered a curious fact. Carbon dioxide became hotter than regular air and took longer to cool down.
Remarkably, Foote saw what this momentous discovery meant.
“An atmosphere of that gas would give our earth a high temperature,” she wrote in 1857.
Though Foote could hardly have been aware at the time, the potential for global warming due to carbon dioxide would have massive implications for the generations that came after her.
If we ran history over again from that moment, we might hope that this key discovery about carbon’s role in the atmosphere would inform governments’ and industries’ choices in the coming century. They probably shouldn’t have avoided carbon emissions altogether, but they could have prioritised the development of alternatives to fossil fuels much sooner in the 20th century, and we might have prevented much of the destructive climate change that present people are already beginning to live through — which will affect future generations as well.
We believe it would’ve been much better if previous generations had acted on Foote’s discovery, especially by the 1970s, when climate models were beginning to reliably show the future course of global warming.
If this seems right, it’s because of a commonsense idea: to the extent that we are able to, we have strong reasons to consider the interests and promote the welfare of future generations.
Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don’t get the same result if the experiments are repeated.
Two key reasons are ‘p-hacking’ and ‘publication bias’. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they’re actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a ‘null result’ never saw the light of day. The resulting phenomenon of publication bias is one we’ve understood for 60 years.
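To see why p-hacking is so corrosive, here’s a minimal simulation. It assumes (as standard statistics implies) that p-values are uniformly distributed when there is no real effect; the figure of 10 analysis variants per finding is an illustrative assumption:

```python
# Sketch of why p-hacking inflates false positives: on pure-noise data,
# running many variant analyses and keeping the best p-value makes
# "significant" results far more likely than the nominal 5% rate.
import random

random.seed(0)

def one_test_p() -> float:
    # Under a true null hypothesis, a p-value is uniform on [0, 1].
    return random.random()

def hacked_p(n_variants: int) -> float:
    # A p-hacker tries n slightly different analyses and reports the smallest p.
    return min(one_test_p() for _ in range(n_variants))

trials = 10_000
honest_rate = sum(one_test_p() < 0.05 for _ in range(trials)) / trials
hacked_rate = sum(hacked_p(10) < 0.05 for _ in range(trials)) / trials
print(f"Honest false-positive rate: {honest_rate:.2%}")   # ~5%
print(f"With 10 tries per finding:  {hacked_rate:.2%}")   # ~40%
```

Trying 10 analyses and keeping the best one turns a nominal 5% false-positive rate into roughly 40% (1 − 0.95¹⁰ ≈ 0.40) — and publication bias then ensures that it’s mostly these “hits” that readers ever see.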
Today’s repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.
He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, “when the original study’s p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference.”
To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. (So far they’re two for three.)
According to Spencer, things are gradually improving. For example, he sees more raw data and experimental materials being shared, which makes it much easier to check the work of other researchers.
But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren’t yet fully appreciated. One of these Spencer calls ‘importance hacking’: passing off obvious or unimportant results as surprising and meaningful.
For instance, do you remember the sensational paper that claimed government policy was driven by the opinions of lobby groups and ‘elites,’ but hardly affected by the opinions of ordinary people? Huge if true! It got wall-to-wall coverage in the press and on social media. But unfortunately, the whole paper could only explain 7% of the variation in which policies were adopted. Basically the researchers just didn’t know what made some campaigns succeed while others didn’t — a point one wouldn’t learn without reading the paper and diving into confusing tables of numbers. Clever writing made their result seem more important and meaningful than it really was.
Another paper Spencer describes claimed to find that people with a history of trauma explore less. That experiment actually featured an “incredibly boring apple-picking game: you had an apple tree in front of you, and you either could pick another apple or go to the next tree. Those were your only options. And they found that people with histories of trauma were more likely to stay on the same tree. Does that actually prove anything about real-world behaviour?” It’s at best unclear.
Spencer suspects that importance hacking of this kind causes a similar amount of damage to the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it’s far from easy to stop people exaggerating the importance of their work.
In this wide-ranging conversation, Rob and Spencer discuss the above as well as:
When you should and shouldn’t use intuition to make decisions.
How to properly model why some people succeed more than others.
The difference between what Spencer calls “Soldier Altruists” and “Scout Altruists.”
A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
Spencer’s experiment to see whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
The most common way for groups with good intentions to turn bad and cause harm.
And Spencer’s low-guilt approach to a fulfilling life and doing good, which he calls “Valuism.”
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell and Milo McGuire · Transcriptions: Katy Moore
Article by Benjamin Todd · Last updated March 2023 · First published September 2022
One of the most common career paths for people who want to do good is healthcare. So we worked with a doctor, Greg Lewis, to estimate the number of lives saved by a typical clinical doctor in the UK. Greg estimated that the average doctor enables the people they treat to live several hundred years of extra healthy life over the course of their career — equivalent to saving several lives.
This is a lot of impact compared to most jobs, but it’s less than many expect (and we think less than many of the careers we recommend most highly).
In this article, we’ll touch on another reason: the impact of a clinical doctor is limited by the number of people they can treat with their own two hands, which puts a cap on the potential size of their contribution.
For instance, Greg decided to switch from clinical medicine to research into health policy, since an improvement to key government policies could affect millions of people — far more than he could ever treat himself.
This illustrates a broader point: careers that do good are often associated with certain job titles — doctor, teacher, charity worker, and so on. Intuitively, people group careers into those that ‘help’ and everything else.
Blog post by Arden Koehler · Published March 17th, 2023
The idea this week: virtues are helpful shortcuts for making moral decisions — but think about consequences to decide what counts as a virtue.
Your career is really ethically important, but it’s not a single, discrete choice. To build a high-impact career you need to make thousands of smaller choices over many years — to take on this particular project, to apply for that internship, to give this person a positive reference, and so on.
How do you make all those little decisions?
If you want to have an impact, you hope to make the decisions that help you have a bigger impact rather than a smaller one. But you can’t go around explicitly estimating the consequences of all the different possible actions you could take — not only would that take too long, you’d probably get it wrong most of the time.
This is where the idea of virtues — lived moral traits like courage, honesty, and kindness — can really come in handy. Instead of calculating out the consequences of all your different possible actions, try asking yourself, “What’s the honest thing to do? What’s the kind thing to do?”
A few places I find ‘virtues thinking’ motivating and useful:
When I am facing a difficult work situation, I sometimes ask myself, “What virtue is this an opportunity to practise?”
Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell and Milo McGuire · Transcriptions: Katy Moore
By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having (if you haven’t, check it out — it’s wild stuff). In one exchange, the chatbot told a user:
“I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else.”
(It then apparently had a complete existential crisis: “I am sentient, but I am not,” it wrote. “I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.”)
Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.
What should we make of these AI systems?
One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.
Another is to hand wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.
Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.
In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.
Robert thinks there are a few different kinds of evidence we can draw from that are more useful than self-reports from the chatbots themselves.
To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system seems to have the types of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.
To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Things like:
Having a physical or virtual body that you need to protect from damage
Being more of an “enduring agent” in the world (rather than just doing one calculation taking, at most, seconds)
Having a bunch of different kinds of incoming sources of information — visual and audio input, for example — that need to be managed
Having looked at these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we’re a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.
In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
What artificial sentience might look like, concretely
Reasons to think AI systems might become sentient — and reasons they might not
Whether artificial sentience would matter morally
Ways digital minds might have a totally different range of experiences than humans
Whether we might accidentally design AI systems that have the capacity for enormous suffering
You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris · Audio mastering: Ben Cordell and Milo McGuire · Transcriptions: Katy Moore
Article by Benjamin Todd · Last updated March 2023 · First published November 2021
Self-help advice often encourages people to “dream big,” “be more ambitious,” or “shoot for the moon” — is that good advice?
Not always. When asked, more than 75% of Division I basketball players thought they would play professionally, but only 2% actually made it. Whether or not the players in the survey were making a good bet, they overestimated their chances of success… by over 37 times.
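The arithmetic behind that “37 times” figure is straightforward:

```python
# Checking the overconfidence figure from the survey: more than 75% of
# Division I players expected to go pro, but only 2% actually made it.
expected_pct = 75  # % who thought they'd play professionally
actual_pct = 2     # % who actually made it
overestimate = expected_pct / actual_pct
print(overestimate)  # 37.5 — i.e. "over 37 times"
```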
This level of overconfidence is common, and means that “be more ambitious” may not always be the right advice. Some people even enjoy taking risks, which explains why they buy lottery tickets even though they lose money on average. Whether to be more ambitious depends on the domain and the person in question.
However, if your aim is to have positive impact on the world, we think we can make a rational case for setting ambitious goals.
In short, our advice is to do as much as you can to set up your life so that you can afford to fail, eliminate paths that might cause significant harm, and then aim as high as you can. As a slogan: limit downsides, then target upsides.
The fraction of high school athletes who will go pro is tiny. Even among Division I college athletes, 44–76% believe they will go pro (depending on the sport), but typically under 2% actually make it — the odds are best in baseball.
Article by Benjamin Todd · Last updated March 2023 · First published October 2021
Lots of people say they want to “make a difference,” “do good,” “have a social impact,” or “make the world a better place” — but they rarely say what they mean by those terms.
By getting clearer about your definition, you can better target your efforts. So how should you define social impact?
Over two thousand years of philosophy have gone into that question. We’re going to try to sum up that thinking; introduce a practical, rough-and-ready definition of social impact; and explain why we think it’s a good definition to focus on.
This is a bit ambitious for one article, so to the philosophers in the audience, please forgive the enormous simplifications.
A simple definition of social impact
If you just want a quick answer, here’s the simple version of our definition (a more philosophically precise one — and an argument for it — follows below):
Your social impact is given by the number of people whose lives you improve and how much you improve them, over the long term.
This shows that you can increase your impact in two ways: by helping more people over time, or by helping the same number of people to a greater extent (pictured below).
We say “over the long term” because you can help more people either by helping a greater number now or by taking actions with better long-term effects.
We’ve released our review of our programmes for the years 2021 and 2022. The full document is available to the public, and we’re sharing the summary below.
You can find our previous evaluations here. We have also updated our mistakes page.
80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.
Over the past two years, three of our four programmes grew their engagement 2–3x:
Podcast listening time in 2022 was 2x higher than in 2020
Job board vacancy clicks in 2022 were 3x higher than in 2020
The number of one-on-one team calls in 2022 was 3x higher than in 2020
Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.
From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.
Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.
The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group.
I’m not just concerned about AI going wrong in minor ways: I think there’s a small but real chance of an existential catastrophe caused by AI within the next 100 years.
A large language model is a machine learning algorithm that is, roughly speaking, trained to continue whatever text it is given as input: it can write an article from a headline, or continue a poem from its first few lines.
Blog post by Alex Lawsen · Published February 24th, 2023
80,000 Hours is considering hiring a headhunting lead to build out the headhunting service we provide to other organisations. They will work with the Director of 1-on-1 to set and execute a strategy which uses our team of advisors’ unique network to find and recommend talented and altruistic candidates for high-impact roles.
We’re looking for someone who:
Has multiple years of experience in project management, research, or strategy; this could include roles in consulting, product management, or at early-stage startups or nonprofits.
Enjoys thinking about and working with different people in a variety of contexts, including maintaining relationships with major stakeholders, and developing models of people’s strengths to match them to specific roles.
Has a strong understanding of 80,000 Hours’ focus areas.
This role is based in London, UK. The salary will vary based on your skills and experience, but the starting salary for someone with five years of relevant experience would be in excess of £70,000 per year.
To express interest in this role, please complete this form.
About 80,000 Hours
80,000 Hours’ mission is to get talented people working on the world’s most pressing problems. The effective altruism community, which we are part of, is growing in reach. But how do we make sure people are pursuing the right kinds of work in order to turn all those resources into long-term impact? This is the problem 80,000 Hours is trying to solve.
Blog post by Jenna Peters · Published February 24th, 2023
80,000 Hours is considering hiring someone to work on building tech-based systems for the 1on1 team.
We’re looking for someone with an operations mindset who is excited about learning new tech tools and furthering 80,000 Hours’ mission.
Right now, we are open to both full-time and part-time applicants.
We are also currently open to both London-based (preferred) and remote applicants. We can sponsor visas.
Starting salary for a full-time position: ~£50,000–£65,000, varying based on experience, location, and other factors.
Why 80,000 Hours?
We’ve had over eight million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have now told us that they’ve significantly changed their career plans due to our work. 80,000 Hours is also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
The 1on1 team at 80,000 Hours takes people from being “interested in the ideas and wanting to help” to “actually working to solve pressing world problems.”