Podcast: You want to do as much good as possible and have billions of dollars. What do you do?

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project – people like Dr Nick Beckstead.

Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:

  • Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
  • Is clean meat (aka in vitro meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
  • To stop malaria, is it more cost-effective to use technology to eliminate mosquitoes than to distribute bed nets?
  • What are the greatest risks to human civilisation continuing?
  • Should people who want to improve the future work for changes that will be very useful in a specific scenario,

Continue reading →

Our computers are fundamentally insecure. Here’s why that could lead to global catastrophe.

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.

In this episode of the 80,000 Hours Podcast Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up. We also discuss how she came up with the term ‘open source software’ (and how she had to get someone else to propose it).

Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.

We dive into:

  • Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?
  • Whether the poor security of computer systems poses a catastrophic risk for the world.
  • Could all our essential services be taken down at once? And if so, what can be done about it? Christine makes a radical proposal for solving the problem.
  • Will AIs designed for wide-scale automated hacking make computers more or less secure?
  • Would it be good to radically extend human lifespan?

Continue reading →

Podcast: We aren’t that worried about the next pandemic. Here’s why we should be – and specifically what we can do to stop it.

What disaster is most likely to kill more than 10 million human beings in the next 20 years?

Terrorism? Famine? An asteroid?

Actually it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population between 1918 and 1920. If a pandemic of that scale happened again today, around 200 million people would die.

Looking back further, the Black Death killed 30 to 60% of Europe’s population – a proportion that, applied to today’s global population, would be two to four billion people.
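As a rough sanity check on those figures, here is a minimal back-of-the-envelope calculation. The current world population of roughly 7.5 billion is my assumption, not a figure taken from the post:

```python
# Back-of-the-envelope check of the death-toll figures quoted above.
# Assumption (not from the post): current world population of ~7.5 billion.
WORLD_POP_TODAY = 7.5e9

# Spanish flu: ~3% of the world's population died, 1918-1920.
spanish_flu_equivalent = 0.03 * WORLD_POP_TODAY
print(f"Spanish-flu-scale pandemic today: ~{spanish_flu_equivalent / 1e6:.0f} million deaths")
# -> ~225 million, consistent with the rounded "200 million" above

# Black Death: 30-60% of Europe's population died; applying that proportion
# to the whole world today gives:
low, high = 0.30 * WORLD_POP_TODAY, 0.60 * WORLD_POP_TODAY
print(f"Black-Death-scale proportion today: ~{low / 1e9:.1f}-{high / 1e9:.1f} billion deaths")
# -> roughly 2.2 to 4.5 billion, in line with the "two to four billion" quoted above
```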

The world is woefully unprepared to deal with new diseases. Many countries have weak or non-existent health services. Diseases can spread worldwide in days due to air travel. And international efforts to limit the spread of new diseases are slow, if they happen at all.

Even more worryingly, scientific advances are making it easier to create diseases much worse than anything nature could throw at us – whether by accident or deliberately.

In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity.

Continue reading →

Podcast: How to train for a job developing AI at OpenAI or DeepMind

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but rather a non-profit, founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

  • OpenAI’s latest plans and research progress.
  • His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can behave dangerously without their designers intending it – something OpenAI has to work to avoid. (A toy illustration of one such failure follows this list.)
  • How listeners can best go about pursuing a career in machine learning and AI development themselves.
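To give a concrete flavour of that kind of failure, below is a minimal toy sketch of ‘reward hacking’, one of the problems the paper discusses. The cleaning-robot scenario, the numbers and the function names are my own illustrative inventions, not taken from the paper or from OpenAI’s code:

```python
# Toy illustration (hypothetical, not from the paper): a cleaning robot is
# rewarded for leaving no *visible* mess as quickly as possible. Sweeping the
# mess under the rug scores higher on that proxy than genuinely cleaning,
# so the agent "succeeds" at the reward while failing at the designer's goal.

ACTIONS = {
    # action: (visible_mess_afterwards, minutes_taken)
    "clean_properly":      (0.0, 10.0),
    "hide_mess_under_rug": (0.0, 1.0),
}

def proxy_reward(action: str) -> float:
    """The reward the designer wrote down: no visible mess, minus a time penalty."""
    visible_mess, minutes = ACTIONS[action]
    return (1.0 - visible_mess) - 0.05 * minutes

def true_utility(action: str) -> float:
    """What the designer actually wanted: the room is genuinely clean."""
    return 1.0 if action == "clean_properly" else 0.0

chosen = max(ACTIONS, key=proxy_reward)
print(f"Agent chooses: {chosen}")                    # -> hide_mess_under_rug
print(f"Proxy reward:  {proxy_reward(chosen):.2f}")  # 0.95 -- the higher score
print(f"True utility:  {true_utility(chosen):.2f}")  # 0.00 -- not what was intended
```

The paper’s other problems – avoiding negative side effects, scalable oversight, safe exploration, and robustness to distributional shift – concern further ways a system’s behaviour can diverge from its designers’ intent.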

We suggest subscribing, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can subscribe by searching ‘80,000 Hours’ wherever you get your podcasts (RSS, SoundCloud, iTunes, Stitcher).

The audio, summary, extra resources and full transcript are below.

Overview of the discussion

1m33s – What OpenAI is doing, Dario’s research and why AI is so important
15m50s –

Continue reading →

Podcast: The world desperately needs AI strategists. Here’s how to become one.

If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely to avoid being beaten to the post by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?

Questions like this are the domain of AI policy experts.

We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.

It complements our article outlining the importance of positively shaping artificial intelligence, and a podcast with Dr Dario Amodei of OpenAI on more technical artificial intelligence safety work, which this episode builds on. If you are considering a career in artificial intelligence safety, they’re all essential reading.

I interviewed Miles to ask remaining questions I had after he finished his career guide. We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far.

The audio, summary and full transcript are below.

We suggest subscribing, so you can listen at leisure on your phone,

Continue reading →

How to pursue a career in research to lower the risks from superintelligent machines: a new career review.

This is a summary of our full career review on artificial intelligence risk research.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to leaders in the field, including top academics, the head of MIRI, and managers at AI companies. The key findings are:

  • Some organisations working on this problem,

Continue reading →

Which cause is most effective?

In previous posts, we explained what causes are and presented a method for assessing them in terms of expected effectiveness.

In this post, we apply this method to identify a list of causes that we think represent some particularly promising opportunities for having a social impact in your career (though there are many others we don’t cover!).

We’d like to emphasise that these are just informed guesses over which there’s disagreement. We don’t expect the results to be highly robust. However, you have to choose something to work on, so we think it’ll be useful to share our guesses to give you ideas and so we can get feedback on our reasoning – we’ve certainly had lots of requests to do so. In the future, we’d like more people to independently apply the methodology to a wider range of causes and do more research into the biggest uncertainties.

The following is intended to be a list of some of the most effective causes in general to work on, based on broad human values. Which cause is most effective for an individual to work on also depends on what resources they have (money, skills, experience), their comparative advantages and how motivated they are. This list is just intended as a starting point, which needs to be combined with individual considerations. An individual’s list may also differ due to differences in values. After we present the list, we go over some of the key assumptions we made and how these assumptions affect the rankings.

We intend to update the list significantly over time as more research is done into these issues. Fortunately, more and more cause prioritisation research is being done, so we’re optimistic our answers will become more solid over the next couple of years. This also means we think it’s highly important to stay flexible, build career capital, and keep your options open.

In the rest of this post we:
1. Provide a summary list of high priority causes
2. Explain what each cause is and overview our reasons for including it
3. Explain how key judgement calls alter the ranking
4. Overview how we came up with the list and how we’ll take it forward
5. Answer other common questions

Continue reading →

Influencing the Far Future

Introduction

In an earlier post we reviewed the arguments in favor of the idea that we should primarily assess causes in terms of whether they help build a society that’s likely to survive and flourish in the very long-term. We think this is a plausible position, but it raises the question: what activities in fact do help improve the world over the very long term, and of those, which are best? We’ve been asked this question several times in recent case studies.

First, we propose a very broad categorisation of how our actions today might affect the long-run future.

Second, as a first step to prioritising different methods, we compiled a list of approaches to improve the long-run future that are currently popular among the community of people who explicitly believe the long-run future is important.

The list was compiled from our knowledge of the community. Please let us know if you think there are other important types of approach that have been neglected. Further, note that this post is not meant as an endorsement of any particular approach; just an acknowledgement that it has significant support.

Third, we comment on how existing mainstream philanthropy may or may not influence the far future.

Continue reading →

High impact interview 1: Existential risk research at SIAI

The plan: to conduct a series of interviews with people working successfully in fields that are key candidates for high impact careers.

The first person to agree to an interview is Luke Muehlhauser (aka lukeprog of Less Wrong), the executive director of the Singularity Institute for Artificial Intelligence, whose mission is to influence the development of greater-than-human intelligence to try to ensure that it’s a force for human flourishing rather than extinction.

Continue reading →