#33 – Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded?

According to our last guest, Bryan Caplan, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin, he has designs on his own immortality – including an insurance plan that will cover the cost of cryogenically freezing himself after he dies – and thinks the technology to achieve it might be around the corner.

Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever – among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford.

His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

  • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
  • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
  • If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians?
  • What long-shot drugs can people take in their 70s to stave off death?
  • Can science extend human (waking) life by cutting our need to sleep?
  • How bad would it be if a solar flare took down the electricity grid? Could it happen?
  • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
  • Will lifelike robots make us more inclined to dehumanise one another?

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#32 – Bryan Caplan on whether the Case Against Education holds up, totalitarianism, & open borders

Bryan Caplan’s claim in The Case Against Education is striking: education doesn’t teach people much, we use little of what we learn, and college is mostly about trying to seem smarter than other people – so the government should slash education funding.

It’s a dismaying – almost profane – idea, and one most are inclined to dismiss out of hand. But having read the book, I have to admit that Bryan can point to a surprising amount of evidence in his favour.

After all, imagine this dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which is the bigger benefit of going to Princeton – learning, or convincing people you’re smart? It’s not so easy to say.

For this interview, I searched for the best counterarguments I could find and challenged Bryan on what seem like the book’s weakest or most controversial claims.

Wouldn’t defunding education be especially bad for capable but low income students? Shouldn’t we just make incremental rather than radical changes to policy? If you reduced funding for education, wouldn’t that just lower prices, and not actually change the number of years people study? Is it really true that students who drop out in their final year of college earn about the same as people who never go to college at all?

And while we’re at it, don’t Bryan and I actually use what we learned at college every day? What about studies that show that extra years of education boost IQ scores? And surely the early years of primary school, when you learn reading and arithmetic, are useful even if college isn’t.

I then get his advice on who should study, what they should study, and where they should study, if he’s right that college is mostly about separating yourself from the pack.

We then venture into some of Bryan’s other unorthodox views – like that immigration restrictions are a human rights violation, or that we should worry about the risk of global totalitarianism.

Bryan is a Professor of Economics at George Mason University and blogger at EconLog. He’s the author of three books: The Case Against Education: Why The Education System is a Waste of Time and Money, Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think, and The Myth of the Rational Voter: Why Democracies Choose Bad Policies.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

In this lengthy interview, Rob and Bryan cover:

  • How worried should we be about China’s new citizen ranking system as a means of authoritarian rule?
  • How will advances in surveillance technology impact a government’s ability to rule absolutely?
  • Does more global coordination make us safer, or more at risk?
  • Should the push for open borders be a major cause area for effective altruism?
  • Are immigration restrictions a human rights violation?
  • Why aren’t libertarian-minded people more focused on modern slavery?
  • Should altruists work on criminal justice reform or reducing land use regulations?
  • What’s the greatest art form: opera, or Nicki Minaj?
  • What are the main implications of Bryan’s thesis for society?
  • Is elementary school more valuable than university?
  • What does Bryan think are the best arguments against his view?
  • The specific effects of defunding education on low income students
  • Is it possible that we wouldn’t want success in education to correlate with worker productivity?
  • Do years of education affect political affiliation?
  • How do people really improve themselves and their circumstances?
  • Who should and who shouldn’t do a masters or PhD?
  • The value of teaching foreign languages in school
  • Are there some skills people can develop that have wide applicability?
  • Are those who use their training every day just exceptions?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#31 – Allan Dafoe on defusing the political and economic risks posed by existing AI capabilities

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ – a general intellect that is much smarter than the best humans, in practically every field.

But according to Allan Dafoe – Senior Research Fellow in the International Politics of AI at Oxford University – even if we stopped at today’s AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

  • Mass labor displacement, unemployment, and inequality;
  • The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
  • Imagery intelligence and other mechanisms for revealing the locations of the ballistic missile submarines that countries rely on for the ability to respond to a nuclear attack;
  • Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
  • Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Director of the Center for the Governance of AI, at the Future of Humanity Institute within Oxford University. His goals have been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation. His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.

I ask Allan:

  • What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
  • Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
  • How can AI be well-governed?
  • How should we think about the idea of arms races between companies or countries?
  • What would you say to people skeptical about the importance of this topic?
  • How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
  • What are the most urgent questions to deal with in this field?
  • What can people do if they want to get into the field?
  • Is there anything unusual that people can look for in themselves to tell if they’re a good fit to do this kind of research?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#30 – Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development – including 15,024 estimates from 635 papers across 20 types of intervention – to help answer this question.

Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of an education program find that it improves test scores by 0.5 standard deviations, the next result is as likely to be negative or greater than 1 standard deviation as it is to fall between 0 and 1 standard deviations.
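
To see what ‘almost 100%’ means in practice, here is a minimal Monte Carlo sketch in Python. It is not Vivalt’s model or data – the heterogeneity and noise figures are assumptions chosen for illustration – but it shows how the next result can routinely land about as far from the running average as the average effect is large.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions, not Vivalt's estimates:
    mean_effect = 0.5  # average true effect across contexts (std. deviations)
    tau = 0.7          # cross-context heterogeneity of the true effect
    se = 0.15          # sampling error of an individual study

    deviations = []
    for _ in range(10_000):
        true_effects = rng.normal(mean_effect, tau, size=10)
        estimates = true_effects + rng.normal(0, se, size=10)
        running_average = estimates[:-1].mean()  # "similar studies so far"
        deviations.append(abs(estimates[-1] - running_average))

    # Typical gap between the next result and the running average,
    # relative to the size of the average effect itself:
    print(np.median(deviations) / mean_effect)  # roughly 1.0 here, i.e. ~100%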

She also observed that results from smaller studies conducted by NGOs – often pilot studies – would often look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting.

Is ‘evidence-based development’ writing a cheque its methodology can’t cash?

Should we invest more in collecting evidence to try to get reliable results?

Or, as some critics say, is interest in impact evaluation distracting us from more important issues, like national economic reforms that can’t be tested in randomised controlled trials?

We discuss these questions as well as Eva’s other research, including Y Combinator’s basic income study where she is a principal investigator.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Questions include:

  • What is the YC basic income study looking at, and what motivates it?
  • How do we get people to accept clean meat?
  • How much can we generalize from impact evaluations?
  • How much can we generalize from studies in development economics?
  • Should we be running more or fewer studies?
  • Do most social programs work or not?
  • The academic incentives around data aggregation
  • How much can impact evaluations inform policy decisions?
  • How often do people change their minds?
  • Do policy makers update too much or too little in the real world?
  • How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
  • How often should we believe positive results?
  • What’s the state of development economics?
  • Eva’s thoughts on our article on social interventions
  • How much can we really learn from being empirical?
  • How much should we really value RCTs?
  • Is an Economics PhD overrated or underrated?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#29 – Anders Sandberg on three new resolutions for the Fermi Paradox and how to easily colonise the universe

Update April 2019: The key theory Dr Sandberg puts forward for why aliens may delay their activities has been strongly disputed in a new paper, which claims it is based on an incorrect understanding of the physics of computation.

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do per unit of energy. The universe is getting exponentially colder as it expands, and as it cools, one joule of energy becomes worth more and more computation. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So if a civilization wanted to maximize its ability to perform computations, its best option might be to lie in wait for trillions of years.
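
The physics behind this claim is the Landauer limit: erasing a bit of information costs at least k·T·ln 2 joules, so the number of irreversible computations a joule can buy scales as 1/T. Here is a back-of-the-envelope sketch in Python – the far-future temperature is an assumed figure, chosen only to illustrate the scale of the gain:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def bit_erasures_per_joule(T):
        """Landauer limit: each erased bit costs at least k_B * T * ln(2) joules."""
        return 1.0 / (k_B * T * math.log(2))

    T_now = 2.7         # present cosmic microwave background temperature, K
    T_future = 2.7e-30  # assumed far-future temperature (illustrative)

    print(f"today: {bit_erasures_per_joule(T_now):.1e} bit erasures per joule")
    print(f"gain: {bit_erasures_per_joule(T_future) / bit_erasures_per_joule(T_now):.0e}x")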

Why would a civilization want to maximise the number of computations they can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations to the Fermi paradox, as well as questions like:

  • Should we want optimists or pessimists working on our most important problems?
  • How should we reason about low probability, high impact risks?
  • Would a galactic civilization want to stop the stars from burning?
  • What would be the best strategy for exploring and colonising the universe?
  • How can you stay coordinated when you’re spread across different galaxies?
  • What should humanity decide to do with its future?

If you enjoy this episode, make sure to check out part two where we talk to Anders about dictators living forever, the annual risk of nuclear war, solar flares, and more.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#28 – Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & what if AI progresses fast

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper, Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues that we can’t expect them to make the correct adjustments. If they have an accident that kills 5 people, they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

Once an insurer has assessed how much damage a particular project could cause, and with what likelihood, the researcher would need to take out insurance against that predicted risk in order to proceed. In return, the insurer promises to pay out – potentially tens of billions of dollars – if things go really badly.

This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose on themselves.
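
To make the pricing logic concrete, here is a toy sketch in Python with entirely hypothetical numbers; a real insurer would add loadings for profit, model uncertainty and correlated losses. The point is that the premium turns a diffuse catastrophic risk into a visible, up-front cost the researcher must justify:

    def fair_premium(annual_accident_probability: float, damages: float) -> float:
        """Actuarially fair annual premium: the insurer's expected payout."""
        return annual_accident_probability * damages

    # Hypothetical project: a one-in-a-million annual chance of an
    # accident causing $10 billion in damages.
    premium = fair_premium(1e-6, 10_000_000_000)
    print(f"${premium:,.0f} per year")  # $10,000 per year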

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

  • Are academics wrong to value personal interest in a topic over its importance?
  • What fraction of research has very large potential negative consequences?
  • Why do we have such different reactions to situations where the risks are known and unknown?
  • The downsides of waiting for tenure to do the work you think is most important.
  • What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
  • How should people balance the trade-offs between having a successful career and doing the most important work?
  • Are there any blind alleys we’ve gone down when thinking about AI safety?
  • Why did Owen give to an organisation whose research agenda he is skeptical of?

Continue reading →

#27 – Dr Tom Inglesby on how to prevent global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr. Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help.)

But there are positive signs as well – the center Inglesby leads recently received a $16 million grant from Open Philanthropy to further their work preventing global catastrophes. It also runs the Emerging Leaders in Biosecurity Fellowship to train the next generation of biosecurity experts for the US government. Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security, and also discuss:

  • Should more people in medicine work on security?
  • What are the top jobs for people who want to improve health security and how do they work towards getting them?
  • What people can do to protect funding for the Global Health Security Agenda.
  • Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
  • Should we be allocating more attention and resources to global catastrophic risk scenarios?
  • Why are senior figures reluctant to prioritize one project or area at the expense of another?
  • What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
  • Are the main risks and solutions understood, so that it’s just a matter of implementation? Or is the principal task still to identify and understand them?
  • How is the current US government performing in these areas?
  • Which agencies are empowered to think about low probability high magnitude events?
  • Are there any scientific breakthroughs that carry particular risk of harm?
  • How do we approach safety in terms of rogue groups looking to inflict harm? How is that different from preventing accidents?
  • If a terrorist group were pursuing biological weapons, how much would the CIA or other organizations then become involved in the process?
  • What are the biggest unsolved questions in health security?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#26 – Marie Gibbons on how exactly clean meat is created & the advances needed to get it in every supermarket

First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells are in this comfortable state, they’ll proliferate. One cell becomes two, two becomes four, four becomes eight, and so on. Continue until you have enough cells to make a burger, a nugget, a sausage, or a piece of bacon, then concentrate them until they bind into solid meat.
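
The exponential arithmetic is what makes this workable. A rough sketch in Python, with assumed figures – cell mass, biopsy yield and doubling time all vary a lot in practice:

    import math

    burger_mass_g = 100.0  # target: a 100 g burger
    cell_mass_g = 1e-9     # assumed ~1 nanogram per cell
    biopsy_cells = 1e6     # assumed usable cells from a small biopsy

    cells_needed = burger_mass_g / cell_mass_g          # 1e11 cells
    doublings = math.log2(cells_needed / biopsy_cells)  # ~17 doublings

    print(f"{cells_needed:.0e} cells -> {doublings:.0f} doublings")
    # At an assumed 24 hours per doubling, that's under three weeks in culture.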

It’s all surprisingly straightforward in principle according to Marie Gibbons, a research fellow with The Good Food Institute, who has been researching how to improve this process at Harvard Medical School. We might even see clean meat sold commercially within a year.

The real technical challenge is developing large bioreactors and cheap solutions so that we can make huge volumes and drive down costs.

This interview covers the science and technology involved at each stage of clean meat production, the challenges and opportunities that face cutting-edge researchers like Marie, and how you could become one of them.

Marie’s research focuses on turkey cells. But as she explains, with clean meat the possibilities extend well beyond those of traditional meat. Chicken, cow, pig, but also panda – and even dinosaurs could be on the menus of the future.

Today’s episode is hosted by Natalie Cargill, a barrister in London with a background in animal advocacy. Natalie and Marie also discuss:

  • Why Marie switched from being a vet to developing clean meat
  • For people who want to dedicate themselves to animal welfare, how does working in clean meat fare compared to other career options? How can people get jobs in the area?
  • How did this become an established field?
  • How important is the choice of animal species and cell type in this process?
  • What are the biggest problems with current production methods?
  • Is this kind of research best done in an academic setting, a commercial setting, or a balance between the two?
  • How easy will it be to get consumer acceptance?
  • How valuable would extra funding be for cellular agriculture?
  • Can we use genetic modification to speed up the process?
  • Is it reasonable to be sceptical of the possibility of clean meat becoming financially competitive with traditional meat any time in the near future?

The 80,000 Hours podcast is produced by Keiran Harris.

Image credits: Featured image – the range of bioreactors available from Sartorius. Social share image – figure 2 from GFI’s Mapping Emerging Industries: Opportunities in Clean Meat.

Continue reading →

#25 – Robin Hanson on why we have to lie to ourselves about why we do what we do

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public, they were kept abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away just days later.

Why did the doctors go this far?

Prof Robin Hanson – Associate Professor of Economics at George Mason University – suspects that on top of any medical beliefs the doctors had a hidden motive: it needed to be clear, to the King and the public, that the physicians cared enormously about saving His Royal Majesty. Only extreme measures could make it undeniable that they had done everything they could.

If you believe Hanson, the same desire to prove we care about our family and friends explains much of what’s perverse about our medical system today.

And not only what’s perverse about medicine – Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our political expression is about choosing wise policies.

So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.

Robin is a polymath economist, and a font of surprising and novel ideas in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, The Elephant in the Brain: Hidden Motives in Everyday Life. We also dive into:

  • What was it like being part of a competitor group to the ‘World Wide Web’, but being beaten to the post?
  • If people aren’t going to school to learn, what’s education for?
  • What split brain patients show about our capacity for self-justification
  • Why we choose the friends we do
  • What’s puzzling about our attitude to medicine?
  • How would it look if people were focused on doing as much good as possible?
  • Are we better off donating now, when we’re older, or even after our deaths?
  • How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?
  • What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?
  • Should people make peace with their hidden motives, or remain ignorant of them?
  • How might we change policy if we fully understood these hidden motivations?
  • Is this view of human nature depressing?
  • Could we let betting markets run much of the government?
  • Why don’t big ideas for institutional reform get adopted?
  • Does history show we’re capable of predicting when new technologies will arise, or what their social impact will be?
  • What are the problems with thinking about the future in an abstract way?
  • Why has Robin shifted from mainly writing papers, to writing blog posts, to writing books?
  • Why are people working in policy reluctant to accept conclusions from psychology?
  • How did being publicly denounced by senators help Robin’s career?
  • Is contrarianism good or bad?
  • The relationship between the quality of an argument and its popularity
  • What would Robin like to see effective altruism do differently?
  • What has Robin changed his mind about over the last 5 years?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#24 – Stefan Schubert on why it's a bad idea to break the rules, even if it's for a good cause

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects – to save time, avoid making mistakes, and ensure others can predict our behaviour.

In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ – including where it’s going right and where it’s going wrong.

Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford.

We also discuss:

  • Should we trust our own judgement more than others’?
  • How hard is it to improve political discourse?
  • What should we make of well-respected academics writing articles that seem to be completely misinformed?
  • How is effective altruism (EA) changing? What might it be doing wrong?
  • How has Stefan’s view of EA changed?
  • Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?
  • How much should we cooperate with those with whom we have disagreements?
  • What good reasons are there to be inconsiderate?
  • Should effective altruism potentially focus on a narrower range of problems?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#23 – How to actually become an AI alignment researcher, according to Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired.

That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team.

Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good objective functions in cases where we can’t easily specify the outcome we actually want.

How might you know you’re a good fit for this kind of research?

Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people.

We also discuss:

  • Where do Jan’s views differ from those expressed by Dario Amodei in episode 3?
  • Why is AGI alignment one of the world’s most pressing problems?
  • Common misconceptions about artificial intelligence
  • What are some of the specific things DeepMind is researching?
  • The ways in which today’s AI systems can fail
  • What are the best techniques available today for teaching an AI the right objective function?
  • What’s it like to have some of the world’s greatest minds as coworkers?
  • Who should do empirical research and who should do theoretical research
  • What’s the DeepMind application process like?
  • The importance of researchers being comfortable with the unknown.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

Yes, a career in commercial law has earning potential. We still don’t recommend it.

Going into law isn’t going out of style. Law ranks among the top five career options for students1 and is one of the most popular degree courses at undergraduate level.2 What explains its persistent appeal? While people go into law for a number of reasons,3 many are motivated to make a difference through public interest and pro bono work.4

Law is also one of the highest paying professions, however, so working directly on social justice issues isn’t the only way you can do good as a lawyer. If you enjoy commercial work and can secure a place at a high-paying firm, you can also have an impact by donating some of your earnings to charity. We call this earning to give.

If you target your donations to highly effective charities, this could be just as high-impact as public interest law. Newly qualified lawyers at top-ranked firms can expect to earn upwards of £70,000. Donating 10% of your take-home pay5 would be enough to save somebody’s life by buying anti-malaria bednets.6 If you are one of the approximately 5% who make partner, you could earn over £1m each year – enough to fund a whole team of researchers, advocates or non-profit entrepreneurs.
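
A rough version of that arithmetic, treating the take-home figure and the cost-per-life estimate as assumptions rather than settled numbers:

    salary = 70_000              # newly qualified salary at a top firm, GBP
    take_home = 48_000           # assumed take-home after UK tax, GBP
    donation = 0.10 * take_home  # a 10% pledge: GBP 4,800 per year

    cost_per_life = 3_000  # assumed cost to save a life with bednets, GBP
                           # (the order of magnitude of GiveWell-style estimates)

    print(f"GBP {donation:,.0f}/year -> ~{donation / cost_per_life:.1f} lives/year")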

In this profile, we explore the pros and cons of law for earning to give. We focus on high-end commercial law – where the money is – and hope to discuss public interest law in a separate review. It’s based on the legal training and experience of the primary author of this profile, Natalie Cargill, as well as conversations with lawyers from a range of practice areas. We’ve also drawn on academic literature, surveys by the Law Society, and publicly-available salary data.

Continue reading →

Blog: A new recommended career path for effective altruists: China specialist

Last summer, China unveiled a plan to become the world leader in artificial intelligence, aiming to create a $150 billion industry by 2030.

“We must take initiative to firmly grasp this new stage of development for artificial intelligence and create a new competitive edge,” the country’s State Council said. The move symbolised the technological thrust of “the great rejuvenation of the Chinese nation” promoted by President Xi Jinping.

And it’s not just AI. China is becoming increasingly important to solving other global problems prioritised by the effective altruism community, including biosecurity, factory farming and nuclear security. But few in the community know much about the country, and coordination between Chinese and Western organisations seems like it could be improved a great deal.

This suggests that a high-impact career path could be to develop expertise at the intersection of China, effective altruism and pressing global issues. Once you’ve attained this expertise, you can use it to carry out research into global priorities or AI strategy; work in governments setting relevant areas of China–West policy; advise Western groups on how to work together with their Chinese counterparts; or pursue other projects that we’ll sketch below…

Continue reading →

#22 – Leah Utyasheva on the nonprofit that figured out how to massively cut suicide rates

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide.

There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning.

Research suggests that most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don’t die the first time around will try again.

Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%.
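
Put together, these numbers show why restricting access to the means itself can be so powerful, even if some attempts are displaced to other methods. A back-of-the-envelope sketch in Python – the substitute fatality rate is an assumption for illustration:

    global_suicides = 800_000
    pesticide_share = 0.20
    pesticide_fatality = 0.55   # midpoint of the 40-70% range above
    substitute_fatality = 0.05  # assumed fatality of a less lethal means

    pesticide_deaths = global_suicides * pesticide_share  # 160,000 per year
    attempts = pesticide_deaths / pesticide_fatality      # ~291,000 per year

    # Pessimistically assume every attempt still happens by other means
    # (many impulsive attempts would simply not happen at all):
    deaths_after = attempts * substitute_fatality
    print(f"~{pesticide_deaths - deaths_after:,.0f} deaths averted per year")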

Having such dangerous chemicals near people’s homes is therefore an enormous public health issue not only for the direct victims, but also the partners and children they leave behind.

Fortunately researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates.

In 2016, Leah co-founded the first organisation focused on this problem – The Centre for Pesticide Suicide Prevention – which recently received an incubation grant from GiveWell. She’s a human rights expert and law reform specialist, and has participated in drafting legal aid, human rights, gender equality, and anti-discrimination legislation in various countries across Europe and Canada.

In this episode, Leah and I discuss:

  • How do you prevent pesticide suicide and what’s the evidence it works?
  • How do you know that most people attempting suicide don’t want to die?
  • What types of events are causing people to have the crises that lead to attempted suicide?
  • How much money does it cost to save a life in this way?
  • How do you estimate the probability of getting law reform passed in a particular country?
  • Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations?
  • How pursuing policy change compares to helping people one by one
  • The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders
  • What are the benefits of starting your own nonprofit versus joining an existing org and persuading them of the merits of the cause?
  • Would Leah in general recommend starting a new charity? Is it more exciting than it is scary?
  • Is it important to have an academic leading this kind of work?
  • How did The Centre for Pesticide Suicide Prevention get seed funding?
  • How does the value of saving a life from suicide compare to saving someone from malaria?
  • Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe
  • What are the biggest downsides of human rights work?

Keiran Harris helped produce today’s episode.

Continue reading →

#21 – Holden Karnofsky on times philanthropy transformed the world & Open Phil's plan to do the same

The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of two donors willing to take risky bets on new ideas.

Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of Open Philanthropy, which gives away over $100 million per year – and he’s hungry for big wins.

As he learned, in the 1940s, poverty reduction overseas was not a big priority for many. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world – thereby massively increasing their food production.

Similarly in the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus had the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katharine McCormick.

In both cases, it was philanthropists rather than governments that led the way.

The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves – but to seize that opportunity they have to hire the best researchers, think long-term and be willing to fail most of the time.

Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then Open Philanthropy, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism.

We’ve recorded this episode now because Open Philanthropy is hiring for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, 3 specialist researchers on potential risks from advanced artificial intelligence, as well as a Director of Operations, an Operations Associate and a General Counsel.

But the conversation goes well beyond specifics about these jobs. We also discuss:

  • How did they pick the problems they focus on, and how will they change over time?
  • What would Holden do differently if he were starting Open Phil again today?
  • What can we learn from the history of philanthropy?
  • What makes a good Program Officer.
  • The importance of not letting hype get ahead of the science in an emerging field.
  • The importance of honest feedback for philanthropists, and the difficulty getting it.
  • How do they decide what’s above the bar to fund, and when it’s better to hold onto the money?
  • How philanthropic funding can most influence politics.
  • What Holden would say to a new billionaire who wanted to give away most of their wealth.
  • Why Open Phil is building a research field around the safe development of artificial intelligence
  • Why they invested in OpenAI.
  • Academia’s faulty approach to answering practical questions.
  • What kind of people do and don’t thrive in Open Phil’s culture.
  • What potential utopias do people most want, according to opinion polls?

Keiran Harris helped produce today’s episode.

Continue reading →

#20 – Bruce Friedrich on inventing outstanding meat substitutes to end speciesism & factory farming

Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South they weren’t profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation.

Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people’s daily lives — it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters.

That’s why GFI is “working with scientists, investors, and entrepreneurs” to create plant-based meat, dairy and eggs as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities.

In this interview I’m joined by my colleague Natalie Cargill, and we ask Bruce about:

  • What’s the best meat replacement product out there right now?
  • How effective is meat substitute research for people who want to reduce animal suffering as much as possible?
  • When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat?
  • What are the challenges of producing something structurally identical to meat?
  • Can clean meat be healthier than conventional meat?
  • Do plant-based alternatives have a better shot at success than clean meat?
  • Is there a concern that, even if the product is perfect, people still won’t eat it? Why might that happen?
  • What’s it like being a vegan in a family made up largely of hunters and meat-eaters?
  • What kind of pushback should be expected from the meat industry?

Keiran Harris helped produce today’s episode.

Continue reading →

#19 – Samantha Pitts-Kiefer on working next to the White House trying to prevent nuclear war

Rogue elements within a state’s security forces enrich dozens of kilograms of uranium. It’s then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington, D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a point midway between the White House and the Capitol Building. The driver casually steps out of the vehicle and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them.

It’s likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House.

Samantha serves as Senior Director of The Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites – some of which lack the necessary security.

When you combine the massive death toll with the accompanying social panic and economic disruption – a nuclear 9/11 would be unthinkably bad. And yet, Samantha reminds us, we must confront the possibility.

Clearly, this is far from the only nuclear nightmare. We also discuss:

  • In the case of nuclear war, what fraction of the world’s population would die?
  • What is the biggest nuclear threat?
  • How concerned should we be about North Korea?
  • How often has the world experienced nuclear near misses?
  • How might a conflict between India and Pakistan escalate to the nuclear level?
  • How quickly must a president make a decision in the event of a suspected first strike?
  • Are global sources of nuclear material safely secured?
  • What role does cyber security have in preventing nuclear disasters?
  • How can we improve relations between nuclear armed states?
  • What do you think about the campaign for complete nuclear disarmament?
  • If you could tell the US government to do three things, what are the key priorities today?
  • Is it practical to get members of congress to pay attention to nuclear risks?
  • Could modernisation of nuclear weapons actually make the world safer?

Keiran Harris helped produce today’s episode.

Continue reading →

#18 – Ofir Reich on using data science to end poverty & the spurious action-inaction distinction

Ofir Reich spent 6 years doing math in the military, before spending another 2 in tech startups – but then made a sharp turn to become a data scientist focussed on helping the global poor.

At UC Berkeley’s Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that’s the hope – he’s also working on ways to test whether those interventions actually work.

Why dedicate his life to helping the global poor?

Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer?

After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved. On top of his life philosophy we also discuss:

  • The benefits of working in a top academic environment
  • How best to start a career in global development
  • Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory?
  • How the delivery standards of nonprofits compare to top universities
  • Why he doesn’t enjoy living in the San Francisco bay area
  • How can we fix the problem of most published research being false?
  • How good a career path is data science?
  • How important is experience in development versus technical skills?
  • How he learned much of what he needed to know in the army
  • How concerned should effective altruists be about burnout?

Keiran Harris helped produce today’s episode.

Continue reading →

#17 – Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents for universal democracy and international cooperation. He also thought that women have no place in civil society, that illegitimate children should receive fewer legal protections, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how one should reason and act if you don’t know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Keiran Harris helped produce today’s episode.

Continue reading →

Annual review December 2017


Summary

This year, we focused on “upgrading” – getting engaged readers into our top priority career paths.

We do this by writing articles on why and how to enter the priority paths, providing one-on-one advice to help the most engaged readers narrow down, and introductions to help them enter.

Some of our main successes this year include:

  1. We developed and refined this upgrading process, having been focused on introductory content last year. We made lots of improvements to coaching, and released 48 pieces of content.
  2. We used the process to grow the number of rated-10 plan changes 2.6-fold compared to 2016, from 19 to 50. We primarily placed people in AI technical safety, other AI roles, effective altruism nonprofits, earning to give and biorisk.

  3. We started tracking rated-100 and rated-1000 plan changes. We recorded 10 rated-100 and one rated-1000 plan change, so with this change, total new impact-adjusted significant plan changes (IASPC v2) doubled compared to 2016, from roughly 1200 to 2400 (see the quick check after this list). That means we’ve grown the annual rate of plan changes 23-fold since 2013. (If we ignore the rated-100+ category, then IASPCv1 grew 31% from 2016 to 2017, and 12-fold since 2013.)

  4. This meant that despite rising costs, cost per IASPC was flat. We updated our historical and marginal cost-effectiveness estimates, and think we’ve likely been highly cost-effective, though we have a lot of uncertainty.

  5. We maintained a good financial position,
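
The headline multiples in items 2 and 3 follow directly from the quoted counts; here is the quick check referred to above:

    rated10_2016, rated10_2017 = 19, 50
    print(f"rated-10 growth: {rated10_2017 / rated10_2016:.1f}x")  # 2.6x

    iaspc_2016, iaspc_2017 = 1200, 2400
    print(f"IASPC v2 growth: {iaspc_2017 / iaspc_2016:.1f}x")      # 2.0x (doubled)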

Continue reading →