Podcast: How to train for a job developing AI at OpenAI or DeepMind

OpenAI’s Universe, a software platform for training AIs to play computer games.

Just two years ago OpenAI didn’t exist. It’s now one of the most elite machine learning research groups in the world, trying to build an AI that’s smarter than humans, with $1 billion at its disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but a non-profit, founded by Elon Musk and Sam Altman among others to ensure the benefits of AI are distributed broadly across society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

  • OpenAI’s latest plans and research progress.
  • His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can behave dangerously without their designers intending it – something OpenAI has to work to avoid.
  • How listeners can best go about pursuing a career in machine learning and AI development themselves.

To listen on your phone, just subscribe to the ‘80,000 Hours Podcast’ (RSS) wherever you listen to podcasts.

The audio, summary, extra resources and full transcript are below.

Table of contents

1m33s – What OpenAI is doing, Dario’s research and why AI is so important
15m50s – How AI could be dangerous
24m20s – Would smarter than human AI solve most of the world’s problems?

Continue reading →

Podcast: The world desperately needs AI strategists. Here’s how to become one.

If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely to avoid being beaten to the post by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?

Questions like these are the domain of AI policy experts.

We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.

It complements our article outlining the importance of positively shaping artificial intelligence. If you are considering a career in artificial intelligence safety, both are essential reading.

I interviewed Miles to dig deeper into his advice. We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far.

The audio, summary and full transcript are below.

To listen on your phone, just subscribe to the ‘80,000 Hours Podcast’ (RSS) wherever you listen to podcasts. That way you can listen to it sped up and get alerts about future episodes.

Full transcript

Robert Wiblin: Hi,

Continue reading →

New report: Is climate change the biggest problem in the world?

We’ve released a new ‘problem profile’ on the risks posed by extreme climate change.

There is a small but non-negligible chance that unmitigated greenhouse gas emissions will lead to very large increases in global temperatures, which would likely have catastrophic consequences for life on Earth.

Though the chance of catastrophic outcomes is relatively low, the degree of harm that would result from large temperature increases is very high, meaning that the expected value of working on this problem may also be very high.
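As a rough illustration of this expected value reasoning (the probability and damage figures below are hypothetical numbers chosen for clarity, not estimates from the profile):

```latex
% Hypothetical figures for illustration only -- not estimates from the profile.
% Expected harm = probability of an outcome x severity of that outcome.
\[
  \mathrm{E}[\text{harm}] = p \times D
\]
% A 1% chance of a catastrophe rated 10,000 on some damage scale carries more
% expected harm than a 50% chance of moderate damage rated 100:
\[
  0.01 \times 10{,}000 = 100
  \qquad > \qquad
  0.5 \times 100 = 50
\]
```

The same arithmetic is why low-probability, high-severity ‘tail risks’ can still justify substantial effort to reduce them.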

Options for working on this problem include academic research into the extreme risks of climate change and whether they might be mitigated by geoengineering. One can also advocate for reduced greenhouse gas emissions through careers in politics, think-tanks or journalism, or work on developing lower-emission technology as an engineer or scientist.

In the profile we cover:

  • The main reasons for and against thinking that the ‘tail risks’ of climate change are a highly pressing problem to work on.
  • How climate change scores on our assessment rubric for ranking the biggest problems in the world.
  • How to use your career to lower the risk posed by climate change.

Read our full profile on the most extreme risks from climate change.

Continue reading →

How and why to use your career to make artificial intelligence safer

We’ve released a new ‘problem profile’ on the risks posed by artificial intelligence.

Many experts believe that there is a significant chance we’ll create artificially intelligent machines with abilities surpassing those of humans – superintelligence – sometime during this century. These advances could lead to extremely positive developments, but could also pose risks due to catastrophic accidents or misuse. The people working on this problem aim to maximise the chance of a positive outcome, while reducing the chance of catastrophe.

Work on the risks posed by superintelligent machines seems mostly neglected, with total funding for this research well under $10 million a year.

The main opportunity to deal with the problem is to conduct research in philosophy, computer science and mathematics aimed at keeping an AI’s actions and goals aligned with human intentions, even if it becomes much more intelligent than we are.

In the profile we cover:

  • The main reasons for and against thinking that the future risks posed by artificial intelligence are a highly pressing problem to work on.
  • How to use your career to reduce the risks posed by artificial intelligence.

Read our full profile on the risks posed by artificial intelligence.

Continue reading →

Why and how to use your career to work on biosecurity

We’ve released a new profile on biosecurity.

Natural pandemics and new scientifically engineered pathogens could kill millions or even billions of people. Moreover, future progress in synthetic biology is likely to increase the risk and severity of pandemics from engineered pathogens.

But there are promising paths to reducing these risks through regulating potentially dangerous research, improving early detection systems and developing better international emergency response plans.

In the profile we cover:

  • The main reasons for and against thinking that biosecurity is a highly pressing problem.
  • How to use your career to work on reducing the risks from pandemics.

Read our profile on biosecurity.

Continue reading →

Is now the time to do something about AI?


The Open Philanthropy Project recently released a review of research on when human-level artificial intelligence will be achieved. The main conclusion of the report was that we’re really uncertain. But the author (Luke Muehlhauser, an expert in the area) also gave his 70% confidence interval: 10-120 years.

That’s a lot of uncertainty.

And that’s really worrying. This confidence interval suggests the author puts significant probability on human-level artificial intelligence (HLAI) arriving within 20 years. A survey of the 100 most-cited AI scientists similarly found a 10% chance that HLAI will be created within ten years (this was the median estimate; on the mean estimate, the 10% chance fell within the next 20 years).

This is like being told there’s a 10% chance aliens will arrive on the earth within the next 20 years.

Making sure this transition goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom’s book, Superintelligence, and this popular introduction by Wait But Why).

We issued a note about AI risk just over a year ago when Bostrom’s book was released. Since then, the field has heated up dramatically.

In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment into companies focused on building human-level AI. A new AI company seems to arrive every week.

Continue reading →

Even if we can’t lower catastrophic risks now, we should do something now so we can do more later


Does that fit with your schedule, Mr President?

A line of argument I frequently encounter is that it is too early to do anything about ‘global catastrophic risks’ today (these are also sometimes called ‘existential risks’).

For context, see our page on assessing the biggest problems in the world, evaluation of opportunities to lower catastrophic risks and our review of becoming an AI safety researcher.

This line of argument doesn’t apply so much to preventing the use of nuclear weapons, mitigating climate change, or containing disease pandemics – the potential to act on these today is about the same as it will be in the future.

But what about new technologies that don’t exist yet: artificial intelligence, synthetic biology, atomically precise manufacturing, and others we haven’t thought about yet? There’s a case that we should wait until they are closer to actually being developed – at that point we will have a much better idea of:

  • what form those technologies will take, if any at all;
  • what can be done to make them less risky;
  • who we need to talk to in order to make that happen.

Superficially this argument seems very reasonable. Each hour of work probably does get more valuable the closer you are to a ‘critical juncture in history’.

Continue reading →

How to pursue a career in research to lower the risks from superintelligent machines: a new career review.


This is a summary of our full career review on artificial intelligence risk research.

If you’ve read the profile and think you want to contribute to artificial intelligence risk research, fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to leaders in the field, including top academics, the head of MIRI and managers at AI companies. The key findings are:

  • Some organisations working on this problem,

Continue reading →

The four big challenges

The 80,000 Hours community is involved with many different causes – from scientific research to social justice – but there are four big (rather ambitious!) causes that have, so far, gathered the most support.

These are the four big challenges our community has set itself. They are all huge, but they also seem especially solvable, or especially neglected, and this means working within them offers the opportunity to make a huge difference over the coming decades…

Continue reading →

Get paid to do existential risk reduction research


The Centre for the Study of Existential Risk (CSER) is hiring postdoctoral researchers. Existential risk reduction is a high-priority area according to the analysis of the Global Priorities Project and GiveWell. Moreover, CSER report that they have had a successful year in grant-writing and fundraising, so the availability of research talent could become a significant constraint over the coming months. Here is Sean’s announcement:

The Centre for the Study of Existential Risk (University of Cambridge; http://cser.org) is recruiting for postdoctoral researchers to work on the study of extreme risks arising from technological advances. We have several specific projects we are recruiting for: responsible innovation in transformative technologies; horizon-scanning and foresight; ethics and evaluation of extreme technological risks; and policy and governance challenges associated with emerging technologies.

However, we also have the flexibility to hire one or more postdoctoral researchers to work on additional projects relevant to CSER’s broad aims, which include impacts and safety in artificial intelligence and synthetic biology, biosecurity, extreme tail climate change, geoengineering, and catastrophic biodiversity loss. We welcome proposals from a range of fields. The study of technological x-risk is a young interdisciplinary subfield, still taking shape. We’re looking for brilliant and committed people, to help us design it. Deadline: April 24th. Details here, with more information on our website.

Continue reading →

Want to do something about the risks of artificial intelligence?

Nick Bostrom’s recent book, “Superintelligence”, has been a great success, gaining favorable reviews in the Financial Times and the Economist, as well as support from Elon Musk, the founder of Tesla and SpaceX.

The field of research into the risks of artificial intelligence is also taking off, with the recent founding of Cambridge University’s Centre for the Study of Existential Risk and the Future of Life Institute (supported by Morgan Freeman!); continued strong growth at MIRI; and GiveWell’s recently declared interest in the area.

If you’ve read the book, and are interested in how you can contribute to this cause, we’d like to hear from you. There are pressing needs developing in the field for researchers, project managers, and funding. We can help you work out where you can best contribute, and introduce you to the right people.

If you’re interested, please email ben at 80000hours.org, or apply for our coaching.

Continue reading →

Interview with leading HIV vaccine researcher – Prof. Sir Andrew McMichael

Introduction

Andrew McMichael

Continuing our investigation into medical research careers, we interviewed Prof. Andrew McMichael. Andrew is Director of the Weatherall Institute of Molecular Medicine in Oxford, and focuses on two areas of special interest to us: HIV and flu vaccines.

Key points made

  • Andrew would recommend starting in medicine for the increased security, better earnings, broader perspective and greater set of opportunities at the end. The main cost is that it takes about 5 years longer.
  • In the medicine career track, you qualify as a doctor in 5-6 years, then you work as a junior doctor for 3-5 years, while starting a PhD. During this time, you start to move towards a promising speciality, where you build your career.
  • In the biology career track, you get a good undergraduate degree, then do a PhD. It’s very important to join a top lab and publish early in your career. Then you can start to move towards an interesting area.
  • After you finish your PhD is a good time to reassess. It’s a competitive career, and if you’re not headed towards the top, be prepared to do something else. Public health is a common backup option, which can make a significant contribution. If you’ve studied medicine, you can do that. People sometimes get stranded mid-career, and that can be tough.
  • An outstanding post-doc applicant has a great reference from their PhD supervisor, is good at statistics/maths/programming, and has published in a top journal.
  • If you qualify in medicine in the UK, you can earn as much as ordinary doctors while doing your research, though you’ll miss out on private practice. In the US, you’ll earn less.
  • Some exciting areas right now include stem cell research, neuroscience, psychiatry and the HIV vaccine.
  • To increase your impact, work on good quality basic science, but keep an eye out for applications.
  • Programming, mathematics and statistics are all valuable skills. Other skills shortages develop from the introduction of new technologies.
  • Good researchers can normally get funded, and Andrew would probably prefer a good researcher to a half-million-pound grant, though he wasn’t sure.
  • He doesn’t think that bad methodology or publication bias is a significant problem in basic science, though it might be in clinical trials.

Continue reading →

Which cause is most effective?

In previous posts, we explained what causes are and presented a method for assessing them in terms of expected effectiveness.

In this post, we apply this method to identify a list of causes that we think represent some particularly promising opportunities for having a social impact in your career (though there are many others we don’t cover!).

We’d like to emphasise that these are just informed guesses over which there’s disagreement. We don’t expect the results to be highly robust. However, you have to choose something to work on, so we think it’ll be useful to share our guesses to give you ideas and so we can get feedback on our reasoning – we’ve certainly had lots of requests to do so. In the future, we’d like more people to independently apply the methodology to a wider range of causes and do more research into the biggest uncertainties.

The following is intended to be a list of some of the most effective causes in general to work on, based on broad human values. Which cause is most effective for an individual to work on also depends on what resources they have (money, skills, experience), their comparative advantages and how motivated they are. This list is just intended as a starting point, which needs to be combined with individual considerations. An individual’s list may also differ due to differences in values. After we present the list, we go over some of the key assumptions we made and how these assumptions affect the rankings.

We intend to update the list significantly over time as more research is done into these issues. Fortunately, more and more cause prioritisation research is being done, so we’re optimistic our answers will become more solid over the next couple of years. This also means we think it’s highly important to stay flexible, build career capital, and keep your options open.

In the rest of this post we:
1. Provide a summary list of high priority causes
2. Explain what each cause is and outline our reasons for including it
3. Explain how key judgement calls alter the ranking
4. Outline how we came up with the list and how we’ll take it forward
5. Answer other common questions

Continue reading →

Case study: Working in the financial sector to promote a flourishing long-term future

Introduction

This post is a write-up of an in-depth case study exploring one person’s decision about where to work in the financial sector, from the perspective of helping the long-run future.

Key recommendations made

  • If you particularly care about long-run impacts, these are some of the interventions that have been pursued.
  • We rate cause prioritisation research and advocacy as high priority (to be explained in an upcoming post).
  • If you’re pursuing prioritisation research within finance and don’t want to pursue earning to give, then we recommend generally aiming to build career capital, building a community of people who support prioritisation, and promoting areas of social finance that seek to assess the social value of different projects. Though note that this is a judgement call.

What we learned

  • We prepared this list of ways that people are trying to improve the far future.
  • The direct impact of doing ‘impact investing’ (attempting to invest in socially beneficial companies) doesn’t seem high relative to donations to cost-effective charities, but the industry might be improvable, could produce useful research and could move more resources into altruistic causes (as we’ll explain in an upcoming report).
  • Impact investing seems like a reasonable area for someone looking to build career capital and promote prioritisation, though we don’t have much confidence in this.

Continue reading →

What should you do with a very large amount of money?

A philanthropist who will remain anonymous recently asked Nick Beckstead, a trustee of 80,000 Hours, what he would do with a very large amount of money.

Nick, with support from Carl Shulman (a research advisor to 80,000 Hours), wrote a detailed answer: A long-run perspective on strategic cause selection and philanthropy.

If you’re looking to spend or influence large budgets with the aim of improving the world (or happen to be extremely wealthy!) we recommend taking a look. It also contains brief arguments in favor of five causes.

Continue reading →

Influencing the Far Future


Introduction

In an earlier post we reviewed the arguments in favor of the idea that we should primarily assess causes in terms of whether they help build a society that’s likely to survive and flourish over the very long term. We think this is a plausible position, but it raises the question: what activities in fact do help improve the world over the very long term, and of those, which are best? We’ve been asked this question several times in recent case studies.

First, we propose a very broad categorisation of how our actions today might affect the long-run future.

Second, as a first step to prioritising different methods, we compiled a list of approaches to improve the long-run future that are currently popular among the community of people who explicitly believe the long-run future is important.

The list was compiled from our knowledge of the community. Please let us know if you think there are other important types of approach that have been neglected. Further, note that this post is not meant as an endorsement of any particular approach; just an acknowledgement that it has significant support.

Third, we comment on how existing mainstream philanthropy may or may not influence the far future.

Continue reading →

How Important are Future Generations?

At 80,000 Hours, we think it’s really important to find the causes in which you can make the most difference. One important consideration in evaluating causes is how much we should care about their impact on future generations. Important new research by Nick Beckstead, a trustee of CEA (our parent charity), argues that the impact on the long-term direction of future civilization is likely to be the most important consideration in working out the importance of a cause.


Continue reading →

High impact interview 1: Existential risk research at SIAI

The plan: to conduct a series of interviews with people working successfully in fields that are key candidates for high-impact careers.

The first person to agree to an interview is Luke Muehlhauser (aka lukeprog of Less Wrong), the executive director of the Singularity Institute for Artificial Intelligence, whose mission is to influence the development of greater-than-human intelligence to try to ensure that it’s a force for human flourishing rather than extinction.

Continue reading →