Going undercover to expose animal cruelty, get rabbit cages banned and reduce meat consumption

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?

That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nuñez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. AE has been named one of the most effective animal protection organisations in the world by Animal Charity Evaluators for the last three consecutive years.

In addition to undercover investigations, AE has designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage – in a room designed to kill them – and can’t just look away. How big an impact is this having on users?

In this interview, in which I’m joined by my colleague Natalie Cargill, Sharon Nuñez and Jose Valle also tackle:

  • How do they track their goals and metrics week to week?
  • How much does an undercover investigation cost?
  • Why don’t people donate more to help factory-farmed animals, given that they’re the vast majority of animals directly harmed by humans?
  • How risky is it to attempt to build a career in animal advocacy?
  • What led to a change in their focus from bullfighting in Spain to animal farming?
  • How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?

Continue reading →

What are the most important talent gaps in the effective altruism community?

What are the highest-impact opportunities in the effective altruism community right now? We surveyed leaders at 17 key organisations to learn more about what skills they need and how they would trade off receiving donations against hiring good staff. It’s a more extensive and up-to-date version of the survey we did last year.

Below is a summary of the key numbers, a link to a presentation with all the results, a discussion of what these numbers mean, and at the bottom an appendix on how the survey was conducted and analysed.

We also report on two additional surveys about the key bottlenecks in the community, and the amount of donations expected to these organisations.

Key figures

Willingness to pay to bring forward hires

We asked how much organisations would have to be compensated in donations for their last ‘junior hire’ or ‘senior hire’ to disappear and not do valuable work for a three-year period:

Most needed skills

  • Decisions on who to hire most often turned on ‘good overall judgement about probabilities, what to do and what matters’, ‘general mental ability’, and ‘fit with the team’ (over and above being into EA).

Funding vs talent constraints

  • On a 0–4 scale, EA organisations rated themselves as 2.5 ‘talent constrained’ and 1.2 ‘funding constrained’, suggesting hiring remains the more significant limiting factor, though funding still limits some organisations.

Continue reading →

We can use science to end poverty faster. But how much do governments listen to it?

In both rich and poor countries, government policy is often based on no evidence at all, and many programs don’t work. This has particularly harsh effects on the global poor – in some countries, governments spend only $100 per citizen per year, so they can’t afford to waste a single dollar.

Enter MIT’s Poverty Action Lab (J-PAL). Since 2003 they’ve conducted experiments to figure out which policies actually help recipients, and then worked to get them implemented by governments and non-profits.

Claire Walsh leads J-PAL’s Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking.

We discussed (her views only, not J-PAL’s):

  • How can they get evidence backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence?
  • Is evidence-based policy an evidence-based strategy itself?
  • Which policies does she think would have a particularly large impact on human welfare relative to their cost?
  • How did she come to lead one of J-PAL’s departments at 29?
  • How do you evaluate the effectiveness of energy and environment programs (Walsh’s area of expertise), and what are the standout approaches in that area?
  • 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived,

Continue reading →

Dr Cameron fought Ebola for the White House. Now she works to stop something even worse.

“When you’re in the middle of a crisis and you have to ask for money, you’re already too late.”

That’s Dr Beth Cameron, and she’s someone who should know. Beth runs Global Biological Policy and Programs at the Nuclear Threat Initiative.

She has years of experience preparing for and fighting the diseases of our nightmares, on the White House Ebola Taskforce, in the National Security Council staff, and as the senior advisor to the Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs.

Unfortunately, the nations of the world aren’t prepared for a crisis – and like children crowded into daycare, there’s a real danger that something nasty will come along and make us all sick at once.

During previous pandemics, countries have dragged their feet over who will pay to contain them, or struggled to move people and supplies to where they needed to be. Unfortunately, there’s no reason to think that the same wouldn’t happen again today. And at the same time, advances in biotechnology may make it possible for terrorists to bring back smallpox – or create something even worse.

In this interview we look at the current state of play in disease control, what needs to change, and how you can work towards a job where you can help make those changes yourself. Topics covered include:

  • The best strategies for containing pandemics.

Continue reading →

Podcast: You want to do as much good as possible and have billions of dollars. What do you do?

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project – people like Dr Nick Beckstead.

Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:

  • Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
  • Is clean meat (aka in vitro meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
  • To stop malaria, is it more cost-effective to use technology to eliminate mosquitoes, or to distribute bed nets?
  • What are the greatest risks to human civilisation continuing?
  • Should people who want to improve the future work for changes that will be very useful in a specific scenario,

Continue reading →

The space colonisation and nanotech focussed Silicon Valley community of the 70s and 80s

One tricky thing about lengthy podcasts is that you cover a dozen issues, but when you give the episode a title you only get to tell people about one. With Christine Peterson’s interview I went with a computer security angle, which turned out not to be that viral a topic. But people who listened to the episode kept telling me how much they loved it. So I’m going to try publishing the interview in pieces, each focussed on a single theme we covered.

Christine Peterson co-founded the Foresight Institute in the 90s. In the lightly edited transcript below we talk about a community she was part of in her youth, whose idealistic ambition bears some similarity to effective altruism today. We also cover a controversy from that time about whether nanotechnology would change the world or was impossible. Finally we think about what lessons we can learn from that whole era.

If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts (RSS, SoundCloud, iTunes, Stitcher).

The community that dreamed of space settlement and atomic factories

Robert Wiblin: Tell us a bit about how you ended up where you are today.

Christine Peterson: Wow. When I was growing up,

Continue reading →

Our computers are fundamentally insecure. Here’s why that could lead to global catastrophe.

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.

In this episode of the 80,000 Hours Podcast Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up. We also discuss how she came up with the term ‘open source software’ (and how she had to get someone else to propose it).

Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.

We dive into:

  • Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?
  • Whether the poor security of computer systems poses a catastrophic risk for the world.
  • Could all our essential services be taken down at once? And if so, what can be done about it? Christine makes a radical proposal for solving the problem.
  • Will AIs designed for wide-scale automated hacking make computers more or less secure?
  • Would it be good to radically extend human lifespan?

Continue reading →

Ending factory farming as soon as possible

Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Despite the enormous scale of suffering this causes, the issue is largely neglected, with only about $50 million spent each year tackling the problem globally.
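
To put that neglectedness in perspective, here’s a rough back-of-the-envelope division (a sketch only: the $50 million figure comes from the paragraph above, while the 50 billion animal count is an assumed round number consistent with ‘tens of billions’):

```python
# Rough neglectedness arithmetic using the figures quoted above.
# ASSUMPTION: ~50 billion farmed land animals per year (a round number
# consistent with "tens of billions"); $50 million in annual spending.
annual_spending = 50e6      # dollars spent tackling factory farming globally
animals_per_year = 50e9     # assumed number of animals raised in factory farms

print(f"${annual_spending / animals_per_year:.4f} spent per animal per year")  # ~$0.0010
```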

Over the last two years Lewis Bollard – Program Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible.

This has resulted in $30 million in grants, making the Open Philanthropy Project one of the largest funders in the area.

Our conversation covers almost every approach being taken, which ones work, how individuals can best contribute through their careers, as well as:

  • How young people can set themselves up to contribute to scientific research into meat alternatives
  • How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off
  • Why Lewis is skeptical of vegan advocacy
  • Open Phil’s grants to improve animal welfare in China, India and South America
  • Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides
  • Why Lewis doubts that much can be done to tackle factory farming through legal advocacy or electoral politics
  • Which species of farm animal are best to focus on first
  • Whether fish and crustaceans are conscious,

Continue reading →

Is it time for a new scientific revolution? Julia Galef on how to make humans smarter, why Twitter isn’t all bad, and where effective altruism is going wrong

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong.

Julia Galef – a well-known writer and researcher focused on improving human judgment, especially about high stakes questions – believes that if we could develop new techniques to resolve disagreements, predict the future and make sound decisions together, we could again dramatically improve the world. We brought her in to talk about her ideas.

Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.

This interview complements a new detailed review of whether and how to follow Julia’s career path.

We ended up speaking about a wide range of topics, including:

  • Her research on how people can have productive intellectual disagreements.
  • Why she once planned on becoming an urban designer.
  • Why she doubts people are more rational than 200 years ago.
  • What the effective altruism community is doing wrong.
  • What makes her a fan of Twitter (while I think it’s dystopian).
  • Whether more people should write books.
  • Whether it’s a good idea to run a podcast, and how she grew her audience.
  • Why saying you don’t believe X often won’t convince people you don’t.

Continue reading →

Podcast: We aren’t that worried about the next pandemic. Here’s why we should be – and specifically what we can do to stop it.

What natural disaster is most likely to kill more than 10 million human beings in the next 20 years?

Terrorism? Famine? An asteroid?

Actually it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population between 1918 and 1920. If a pandemic of that scale happened again today, 200 million would die.

Looking back further, the Black Death killed 30 to 60% of Europe’s population, a death rate that today would correspond to two to four billion people globally.
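
To make the scaling behind those figures concrete, here’s a minimal sketch of the arithmetic, assuming a present-day world population of roughly 7.6 billion (the population figure is my assumption; the article’s round numbers suggest a similar back-of-the-envelope calculation):

```python
# Back-of-the-envelope scaling of historical pandemic death rates to today.
# ASSUMPTION: world population of ~7.6 billion (roughly the 2017 figure).
WORLD_POPULATION = 7.6e9

def deaths_if_repeated_today(fraction_killed):
    """Deaths if a past pandemic's mortality fraction recurred at today's population."""
    return fraction_killed * WORLD_POPULATION

# Spanish flu: ~3% of the world's population died, 1918-1920.
print(f"{deaths_if_repeated_today(0.03):,.0f}")     # ~228,000,000 (roughly 200 million)

# Black Death: 30-60% mortality, applied globally as in the text above.
low, high = deaths_if_repeated_today(0.30), deaths_if_repeated_today(0.60)
print(f"{low:,.0f} to {high:,.0f}")                  # ~2.3 to 4.6 billion
```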

The world is woefully unprepared to deal with new diseases. Many countries have weak or non-existent health services. Diseases can spread worldwide in days due to air travel. And international efforts to limit the spread of new diseases are slow, if they happen at all.

Even more worryingly, scientific advances are making it easier to create diseases much worse than anything nature could throw at us – whether by accident or deliberately.

In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity.

Continue reading →

Podcast: How to train for a job developing AI at OpenAI or DeepMind

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman, among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

  • OpenAI’s latest plans and research progress.
  • His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend – something OpenAI has to work to avoid.
  • How listeners can best go about pursuing a career in machine learning and AI development themselves.

We suggest subscribing, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can subscribe by searching ‘80,000 Hours’ wherever you get your podcasts (RSS, SoundCloud, iTunes, Stitcher).

The audio, summary, extra resources and full transcript are below.

Overview of the discussion

1m33s – What OpenAI is doing, Dario’s research and why AI is so important
15m50s –

Continue reading →

Podcast: The world desperately needs AI strategists. Here’s how to become one.

If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely to avoid being beaten to the post by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?

Questions like this are the domain of AI policy experts.

We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.

It complements our article outlining the importance of positively shaping artificial intelligence and a podcast with Dr Dario Amodei of OpenAI on more technical artificial intelligence safety work which builds on this one. If you are considering a career in artificial intelligence safety, they’re all essential reading.

I interviewed Miles to ask remaining questions I had after he finished his career guide. We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far.

The audio, summary and full transcript are below.

We suggest subscribing, so you can listen at leisure on your phone,

Continue reading →

Most people report believing it’s incredibly cheap to save lives in the developing world

One way that people can have a social impact with their career is to donate money to effective charities. We mention this path in our career guide, suggesting that people donate to evidence-backed charities such as the Against Malaria Foundation, which is estimated by GiveWell to save the lives of children in the developing world for around $7,500.
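
For readers who want the implied arithmetic spelled out, here’s a minimal sketch (the ~$7,500 figure is the GiveWell estimate quoted above; the example salary and 10% donation rate are purely illustrative assumptions):

```python
# Illustrative cost-effectiveness arithmetic based on the figure quoted above.
COST_PER_LIFE_SAVED = 7_500  # GiveWell's approximate estimate for AMF, in dollars

def expected_lives_saved(donation_dollars):
    """Expected lives saved from a donation, at the quoted cost-effectiveness."""
    return donation_dollars / COST_PER_LIFE_SAVED

# ASSUMPTION: a hypothetical donor giving 10% of a $75,000 salary each year.
print(expected_lives_saved(0.10 * 75_000))  # 1.0 -- roughly one life saved per year
```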

Alyssa Vance told me that many people may see this figure as highly ineffective relative to their optimistic expectations about how much it costs to improve the lives of people in poverty. I thought the reverse would be true – folks would be skeptical that charities in the developing world were effective at all. Fortunately Amazon Mechanical Turk makes it straightforward to survey public opinion at a low cost, so there was no need for us to sit around speculating. I suggested a survey on this question to someone in the effective altruism community with a lot of experience using Mechanical Turk – Spencer Greenberg of Clearer Thinking – and he went ahead and conducted one in just a few hours.

You can work through the survey people took yourself here and we’ve put the data and some details about the method in a footnote. The results clearly vindicated Alyssa:

It turns out that most Americans believe a child can be prevented from dying of preventable diseases for very little –

Continue reading →

How accurately does anyone know the global distribution of income?

World income distribution
How much should you believe the numbers in figures like this?

People in the effective altruism community often refer to the global income distribution to make various points:

  • The richest people in the world are many times richer than the poor.
  • People earning professional salaries in countries like the US are usually in the top 5% of global earnings and fairly often in the top 1%. This gives them a disproportionate ability to improve the world.
  • Many people in the world live in serious absolute poverty, surviving on as little as one hundredth the income of the upper-middle class in the US.

Measuring the global income distribution is very difficult and experts who attempt to do so end up with different results. However, these core points are supported by every attempt to measure the global income distribution that we’ve seen so far.
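
As an aside on how claims like ‘top 5% of global earners’ are typically derived, here’s a minimal sketch: interpolate a percentile from tabulated points of the global income distribution. The data points below are hypothetical placeholders, not figures from any of the datasets discussed in this post:

```python
# Sketch of percentile lookup by linear interpolation over (income, percentile) points.
# The table below is HYPOTHETICAL placeholder data, for illustration only;
# real analyses use household survey datasets like those Milanovic compiled.
import bisect

# (annual income in PPP-adjusted dollars, share of world population earning less)
distribution = [(500, 0.20), (1_500, 0.50), (5_000, 0.80), (15_000, 0.95), (50_000, 0.99)]

def global_income_percentile(income):
    """Linearly interpolate the fraction of people earning less than `income`."""
    incomes = [x for x, _ in distribution]
    shares = [p for _, p in distribution]
    if income <= incomes[0]:
        return shares[0]
    if income >= incomes[-1]:
        return shares[-1]
    i = bisect.bisect_left(incomes, income)
    x0, x1 = incomes[i - 1], incomes[i]
    y0, y1 = shares[i - 1], shares[i]
    return y0 + (y1 - y0) * (income - x0) / (x1 - x0)

print(global_income_percentile(40_000))  # ~0.98 with these placeholder numbers
```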

The rest of this post will discuss the global income distribution data we’ve referred to, the uncertainty inherent in that data, and why we believe our bottom lines hold up anyway.

Will MacAskill had a striking illustration of global individual income distribution in his book Doing Good Better, which has ended up in many other articles online, including our own career guide:

The data in this graph was put together back in 2012 using an approach suggested by Branko Milanovic,

Continue reading →

5 reasons not to go into education

First published June 2015. Updated February 2017.

When we first speak to people interested in doing good with their careers, they often say they want to get involved in education in the US or the UK. This could mean donating to a school, doing education policy work, or becoming a teacher.

However, we haven’t prioritised careers in education at 80,000 Hours. We don’t dispute that education is a highly important problem – a more educated population could enable us to solve many other global challenges, as well as yield major economic benefits. The problem is that it doesn’t seem to be particularly easy to solve, nor particularly neglected (important elements of our problem framework). So it looks harder to have a large impact in education compared to many other areas. In the rest of this post, we’ll give five reasons why.

The following isn’t the result of in-depth research; it’s just meant to explain why we’ve deprioritised education so far. Our views could easily change. Note that in this post we’re not discussing education in the developing world.

1. It’s harder to help people in the US or UK

Everyone in the US or UK is rich by global standards: the poorest 5% of Americans are richer than the richest 5% of Indians (and that’s adjusted for the difference in purchasing power, see an explanation and the full data).

Continue reading →

80,000 Hours has a funding gap

Over the past three years, we’ve grown almost 36-fold, more than tripling each year. This is measured in terms of our key metric – the number of impact-adjusted significant plan changes each month. At the same time, our budget has only increased 27% per year.
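
As a quick sanity check on those figures, here’s the implied annualised arithmetic (a rough sketch using only the 36-fold and 27% numbers quoted above; it isn’t taken from 80,000 Hours’ own accounts):

```python
# Implied annual growth rates from the figures quoted in the paragraph above.
growth_factor = 36       # ~36-fold growth in impact-adjusted plan changes over three years
years = 3

annual_growth = growth_factor ** (1 / years)
print(f"Plan changes grew roughly {annual_growth:.1f}x per year")   # ~3.3x, i.e. more than tripling

budget_growth = 1.27 ** years
print(f"Budget grew roughly {budget_growth:.1f}x over the period")  # ~2.0x at 27% per year
```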

Given this success, we think it’s time to take 80,000 Hours to the next level of funding.

Over the next few weeks, we’ll be preparing our full annual review and fundraising documents, but here’s a preview.

Overall, the 2017 target is to triple, measured in terms of impact-adjusted significant plan changes per month (which will mean over 3,000 over the year). We’ll do this by continuing to improve the advice, and starting to scale up marketing, with the aim of becoming the default source of career advice for talented, socially-motivated graduates.

Concretely, here are some priorities we could pursue:

  • Dramatically improve the career reviews and problem profiles, so we have in-depth profiles of all the best options. This will help our existing users make better changes, and bring in more traffic.
  • Upgrading – develop mentors and specialist content for the most high-potential users, such as those who want to work on AI risk, policy, EA orgs and so on. We now have a large base of engaged users (1300+ through the workshop, 80,000+ on newsletter), so there’s a lot of follow-up we could do to get more valuable plan changes from them.

Continue reading →

Should you work at GiveWell? Reflections from a recent employee.

The following are some reflections on what it’s like to work at GiveWell written by one of our readers. We’re posting their thoughts because we’ve written about GiveWell as a high-impact career in the past, and are keen to share more information about it. The opinions below, however, may not reflect our views.

I worked at GiveWell from August 2014 to May 2016. This piece is a reflection on my time there, on things I think GiveWell does well as an employer, on things I think it could do better, and why I decided to leave.

I envision two functions for this piece: (1) as an exercise to help me process my time at GiveWell, and (2) as a resource for people considering working at GiveWell. When I was considering taking a job at GiveWell, I found Nick Beckstead’s reflection on his internship at GiveWell to be very helpful. Outside of Nick’s piece, there isn’t very much substantive information available about working at GiveWell. Many people consider employment at GiveWell; I hope some of those people find this reflection to be useful.

Some background

I learned about GiveWell in Spring 2014, after reading Peter Singer’s Famine, Affluence, and Morality in a college ethics class and encountering related topics on the internet. By the time I took the ethics class, I knew that I did not want to go to graduate school immediately after my undergraduate, but I was very taken by academic ethics and wanted to continue serious thinking about the topic.

Continue reading →

The rent is too damn high – should you work on reforming land use regulations?

We’ve released a new ‘problem profile’ on reform of how land is used in cities.

Local laws often prohibit the construction of dense new housing, which drives up prices, especially in a few large high-wage urban areas. The increased prices transfer wealth from renters to landowners and push people away from centres of economic activity, which reduces their ability to get a job or earn higher wages, likely by a very large amount.

One opportunity to tackle the problem, which nobody has yet taken, is to start a non-profit or lobbying body to advocate for more housing construction in key urban areas and states. Another option would be to try to shift zoning decisions from local to state governments, where they are less likely to be determined by narrow local interests, especially existing land-owners who benefit from higher property prices.

In the profile we cover:

  • The main reasons for and against thinking that working on land use reform is among the best uses of your time.
  • How to use your career to make housing in prospering cities more accessible to ordinary people.

Read our full profile on land use reform.

Continue reading →

New report: Is climate change the biggest problem in the world?

We’ve released a new ‘problem profile’ on the risks posed by extreme climate change.

There is a small but non-negligible chance that unmitigated greenhouse emissions will lead to very large increases in global temperatures, which would likely have catastrophic consequences for life on Earth.

Though the chance of catastrophic outcomes is relatively low, the degree of harm that would result from large temperature increases is very high, meaning that the expected value of working on this problem may also be very high.
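
To make the expected-value reasoning explicit, here’s a minimal worked example; both numbers below are hypothetical placeholders chosen only to show the structure of the argument, not estimates from the profile:

```python
# Hypothetical illustration of the expected-value argument above.
# Neither number is an estimate from the profile -- they only show the structure.
p_catastrophe = 0.01           # small but non-negligible probability (placeholder)
deaths_if_it_happens = 1e9     # very large harm if it occurs (placeholder)

expected_deaths = p_catastrophe * deaths_if_it_happens
print(f"{expected_deaths:,.0f} expected deaths")  # 10,000,000 -- large despite the low probability
```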

Options for working on this problem include academic research into the extreme risks of climate change or whether they might be mitigated by geoengineering. One can also advocate for reduced greenhouse emissions through careers in politics, think-tanks or journalism, and work on developing lower emissions technology as an engineer or scientist.

In the profile we cover:

  • The main reasons for and against thinking that the ‘tail risks’ of climate change are a highly pressing problem to work on.
  • How climate change scores on our assessment rubric for ranking the biggest problems in the world.
  • How to use your career to lower the risk posed by climate change.

Read our full profile on the most extreme risks from climate change.

Continue reading →

How and why to use your career to make artificial intelligence safer

We’ve released a new ‘problem profile’ on the risks posed by artificial intelligence.

Many experts believe that there is a significant chance we’ll create artificially intelligent machines with abilities surpassing those of humans – superintelligence – sometime during this century. These advances could lead to extremely positive developments, but could also pose risks due to catastrophic accidents or misuse. The people working on this problem aim to maximise the chance of a positive outcome, while reducing the chance of catastrophe.

Work on the risks posed by superintelligent machines seems mostly neglected, with total funding for this research well under $10 million a year.

The main opportunity to deal with the problem is to conduct research in philosophy, computer science and mathematics aimed at keeping an AI’s actions and goals in alignment with human intentions, even if it were much more intelligent than us.

In the profile we cover:

  • The main reasons for and against thinking that the future risks posed by artificial intelligence are a highly pressing problem to work on.
  • How to use your career to reduce the risks posed by artificial intelligence.

Read our full profile on the risks posed by artificial intelligence.

Continue reading →