The careers and policies that can prevent global catastrophic biological risks, according to world-leading health security expert Dr Inglesby

The system doesn’t seem like it’s actually crucial but these are the people thanklessly doing the work to try to prevent people from getting infected at a population level … when epidemics happen in a city, hospitals pick up the phone and say “What is it we should be doing? How should we be operating now that an epidemic’s underway?”

Tom Inglesby

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola from spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr. Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help.)

But there are positive signs as well – the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the Emerging Leaders in Biosecurity Fellowship to train the next generation of biosecurity experts for the US government. Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security, and also discuss:

  • Should more people in medicine work on security?
  • What are the top jobs for people who want to improve health security and how do they work towards getting them?
  • What people can do to protect funding for the Global Health Security Agenda.
  • Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
  • Should we be allocating more attention and resources to global catastrophic risk scenarios?
  • Why are senior figures reluctant to prioritize one project or area at the expense of another?
  • What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
  • Are the main risks and solutions understood, and is it just a matter of implementation? Or is the principal task to identify and understand them?
  • How is the current US government performing in these areas?
  • Which agencies are empowered to think about low probability high magnitude events?
  • Are there any scientific breakthroughs that carry particular risk of harm?
  • How do we approach safety in terms of rogue groups looking to inflict harm? How is that different from preventing accidents?
  • If a terrorist group were pursuing biological weapons, how much would the CIA or other organizations then become involved in the process?
  • What are the biggest unsolved questions in health security?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

How exactly clean meat is created & the advances needed to get it into every supermarket, according to food scientist Marie Gibbons

The beauty of exponential cell growth is that you start with one, and then the next day, you get two. … Obviously, it’s never going to be 100% efficient, but we have a lot of power in that exponential growth. If we use it appropriately, we can produce just unlimited amounts of meat.

Marie Gibbons

First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells are in this comfortable state, they’ll proliferate. One cell becomes two, two becomes four, four becomes eight, and so on. Continue until you have enough cells to make a burger, a nugget, a sausage, or a piece of bacon, then concentrate them until they bind into solid meat.
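
To make the doubling arithmetic concrete, here’s a minimal sketch in Python of how quickly exponential growth takes you from one cell to burger-scale cell counts. The cell mass and patty weight are rough illustrative assumptions, not figures from the interview.

```python
import math

# Illustrative assumptions (not figures from the interview):
# a mammalian muscle cell weighs on the order of a few nanograms,
# and a burger patty is roughly 100 grams of tissue.
CELL_MASS_GRAMS = 3e-9      # assumed mass of a single cell, in grams
BURGER_MASS_GRAMS = 100.0   # assumed mass of one patty, in grams

cells_needed = BURGER_MASS_GRAMS / CELL_MASS_GRAMS

# Starting from one cell, each doubling multiplies the count by two,
# so we need about log2(cells_needed) doublings (ignoring losses).
doublings = math.ceil(math.log2(cells_needed))

print(f"Cells needed for one patty: ~{cells_needed:.1e}")
print(f"Doublings from a single cell: {doublings}")
# At roughly one doubling per day, that's about a month of growth
# before real-world inefficiencies are taken into account.
```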

It’s all surprisingly straightforward in principle, according to Marie Gibbons, a research fellow with The Good Food Institute, who has been researching how to improve this process at Harvard Medical School. We might even see clean meat sold commercially within a year.

The real technical challenge is developing large bioreactors and cheap solutions so that we can make huge volumes and drive down costs.

This interview covers the science and technology involved at each stage of clean meat production, the challenges and opportunities that face cutting-edge researchers like Marie, and how you could become one of them.

Marie’s research focuses on turkey cells. But as she explains, with clean meat the possibilities extend well beyond those of traditional meat. Chicken, cow, and pig, but also panda – and even dinosaur – could be on the menus of the future.

Today’s episode is hosted by Natalie Cargill, a barrister in London with a background in animal advocacy. Natalie and Marie also discuss:

  • Why Marie switched from being a vet to developing clean meat
  • For people who want to dedicate themselves to animal welfare, how does working in clean meat fare compared to other career options? How can people get jobs in the area?
  • How did this become an established field?
  • How important is the choice of animal species and cell type in this process?
  • What are the biggest problems with current production methods?
  • Is this kind of research best done in an academic setting, a commercial setting, or a balance between the two?
  • How easy will it be to get consumer acceptance?
  • How valuable would extra funding be for cellular agriculture?
  • Can we use genetic modification to speed up the process?
  • Is it reasonable to be sceptical of the possibility of clean meat becoming financially competitive with traditional meat any time in the near future?

The 80,000 Hours podcast is produced by Keiran Harris.

Image credits: Featured image – the range of bioreactors available from Sartorius. Social share image – figure 2 from GFI’s Mapping Emerging Industries: Opportunities in Clean Meat.

Continue reading →

Why we have to lie to ourselves about why we do what we do, according to Prof Robin Hanson

In fact, your conscious mind is more plausibly a press secretary. You’re not the president or the king or the CEO. You aren’t in charge. You aren’t actually making the decision, the conscious part of your mind at least. You are there to make up a good explanation for what’s going on so that you can avoid the accusation that you’re violating norms.

Robin Hanson

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public they were kept abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week.

Why did the doctors go this far?

Prof Robin Hanson – Associate Professor of Economics at George Mason University – suspects that on top of any medical beliefs the doctors had a hidden motive: it needed to be clear, to the King and the public, that the physicians cared enormously about saving His Royal Majesty. Only extreme measures could make it undeniable that they had done everything they could.

If you believe Hanson, the same desire to prove we care about our family and friends explains much of what’s perverse about our medical system today.

And not only what’s perverse about medicine – Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our political expression is about choosing wise policies.

So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.

Robin is a polymath economist, and a font of surprising and novel ideas in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, The Elephant in the Brain: Hidden Motives in Everyday Life. We also dive into:

  • What was it like being part of a competitor group to the ‘World Wide Web’, but being beaten to the post?
  • If people aren’t going to school to learn, what’s education for?
  • What split brain patients show about our capacity for self-justification
  • Why we choose the friends we do
  • What’s puzzling about our attitude to medicine?
  • How would it look if people were focused on doing as much good as possible?
  • Are we better off donating now, when we’re older, or even after our deaths?
  • How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?
  • What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?
  • Should people make peace with their hidden motives, or remain ignorant of them?
  • How might we change policy if we fully understood these hidden motivations?
  • Is this view of human nature depressing?
  • Could we let betting markets run much of the government?
  • Why don’t big ideas for institutional reform get adopted?
  • Does history show we’re capable of predicting when new technologies will arise, or what their social impact will be?
  • What are the problems with thinking about the future in an abstract way?
  • Why has Robin shifted from mainly writing papers, to writing blog posts, to writing books?
  • Why are people working in policy reluctant to accept conclusions from psychology?
  • How did being publicly denounced by senators help Robin’s career?
  • Is contrarianism good or bad?
  • The relationship between the quality of an argument and its popularity
  • What would Robin like to see effective altruism do differently?
  • What has Robin changed his mind about over the last 5 years?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

Why it’s a bad idea to break the rules, even if it’s for a good cause

…social norms have to be evaluated on the basis of their outcomes, like everything else. And that might prompt people to think that they should break norms and rules fairly frequently. But we wanted to push against that…

Stefan Schubert

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects: it saves time, helps us avoid mistakes, and lets others predict our behaviour.

In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ – including where it’s going right and where it’s going wrong.

Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford.

We also discuss:

  • Should we trust our own judgement more than others’?
  • How hard is it to improve political discourse?
  • What should we make of well-respected academics writing articles that seem to be completely misinformed?
  • How is effective altruism (EA) changing? What might it be doing wrong?
  • How has Stefan’s view of EA changed?
  • Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?
  • How much should we cooperate with those with whom we have disagreements?
  • What good reasons are there to be inconsiderate?
  • Should effective altruism potentially focus on a narrower range of problems?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

A machine learning alignment researcher on how to become a machine learning alignment researcher

The reason why I would recommend people get a machine learning PhD, if they’re in a position to do so, is that this is where we are currently the most talent constrained. So, at DeepMind, and for the technical AI safety team, we’d love to hire more people who have a machine learning PhD or equivalent experience, and just get them to work on AI safety.

Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired.

That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team.

Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good objective functions in cases where we can’t easily specify the outcome we actually want.

How might you know you’re a good fit for this kind of research?

Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people.

We also discuss:

  • Where do Jan’s views differ from those expressed by Dario Amodei in episode 3?
  • Why is AGI alignment one of the world’s most pressing problems?
  • Common misconceptions about artificial intelligence
  • What are some of the specific things DeepMind is researching?
  • The ways in which today’s AI systems can fail
  • What are the best techniques available today for teaching an AI the right objective function?
  • What’s it like to have some of the world’s greatest minds as coworkers?
  • Who should do empirical research and who should do theoretical research
  • What’s the DeepMind application process like?
  • The importance of researchers being comfortable with the unknown.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

Yes, a career in commercial law has earning potential. We still don’t recommend it.

Going into law isn’t going out of style. Law ranks among the top five career options for students1 and is one of the most popular degree courses at undergraduate level.2 What explains its persistent appeal? While people go into law for a number of reasons,3 many are motivated to make a difference through public interest and pro bono work.4

Law is also one of the highest paying professions, however, so working directly on social justice issues isn’t the only way you can do good as a lawyer. If you enjoy commercial work and can secure a place at a high-paying firm, you can also have an impact by donating some of your earnings to charity. We call this earning to give.

If you target your donations to highly effective charities, this could be just as high-impact as public interest law. Newly qualified lawyers at top-ranked firms can expect to earn upwards of £70,000. Donating 10% of this take-home pay5 would be enough to save somebody’s life by buying anti-malaria bednets.6 If you are one of the approximately 5% who make partner, you could earn over £1m each year – enough to fund a whole team of researchers, advocates or non-profit entrepreneurs.
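
As a rough sanity check on that claim, here’s a back-of-the-envelope sketch in Python. The take-home pay and cost-per-life figures are illustrative assumptions in the spirit of GiveWell-style estimates, not the numbers from this profile’s footnotes.

```python
# Purely illustrative assumptions – none of these figures come from the
# profile's footnoted sources:
TAKE_HOME = 48_000           # rough UK take-home on a £70,000 salary (assumed)
DONATION_RATE = 0.10         # donating 10% of take-home pay

COST_PER_LIFE_LOW = 2_500    # assumed low-end cost to save a life via bednets (GBP)
COST_PER_LIFE_HIGH = 5_500   # assumed high-end cost (GBP)

donation = TAKE_HOME * DONATION_RATE
lives_low = donation / COST_PER_LIFE_HIGH
lives_high = donation / COST_PER_LIFE_LOW

print(f"Annual donation: £{donation:,.0f}")
print(f"Estimated lives saved per year: {lives_low:.1f} to {lives_high:.1f}")
# Under these assumptions, a 10% pledge funds roughly one life saved per year.
```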

In this profile, we explore the pros and cons of law for earning to give. We focus on high-end commercial law – where the money is – and hope to discuss public interest law in a separate review. It’s based on the legal training and experience of the primary author of this profile, Natalie Cargill, as well as conversations with lawyers from a range of practice areas. We’ve also drawn on academic literature, surveys by the Law Society, and publicly-available salary data.

Continue reading →

A new recommended career path for effective altruists: China specialist

Last summer, China unveiled a plan to become the world leader in artificial intelligence, aiming to create a $150 billion industry by 2030.

“We must take initiative to firmly grasp this new stage of development for artificial intelligence and create a new competitive edge,” the country’s State Council said. The move symbolised the technological thrust of “the great rejuvenation of the Chinese nation” promoted by President Xi Jinping.

And it’s not just AI. China is becoming increasingly important to solving other global problems prioritised by the effective altruism community, including biosecurity, factory farming and nuclear security. But few in the community know much about the country, and coordination between Chinese and Western organisations seems like it could be improved a great deal.

This suggests that a high-impact career path could be to develop expertise at the intersection of China, effective altruism and pressing global issues. Once you’ve attained this expertise, you can use it to carry out research into global priorities or AI strategy, work in governments setting relevant areas of China-West policy, advise Western groups on how to work together with their Chinese counterparts, or pursue other projects that we’ll sketch below…

Continue reading →

The non-profit that figured out how to massively cut suicide rates in Sri Lanka, and their plan to do the same around the world

…the suicide rate in Sri Lanka has dropped significantly. So, from 57 deaths per 100,000 population in ’95, it has dropped now to 17. This is a 70% reduction. So, it’s a very significant success, in fact the greatest decrease in suicide rate ever seen.

Dr Leah Utyasheva

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide.

There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning.

Research suggests that most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don’t die the first time around will try again.

Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%.

Having such dangerous chemicals near people’s homes is therefore an enormous public health issue not only for the direct victims, but also the partners and children they leave behind.

Fortunately researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates.

In 2016, Leah co-founded the first organisation focused on this problem – The Centre for Pesticide Suicide Prevention – which recently received an incubation grant from GiveWell. She’s a human rights expert and law reform specialist, and has participated in drafting legal aid, human rights, gender equality, and anti-discrimination legislation in various countries across Europe and Canada.

In this episode, Leah and I discuss:

  • How do you prevent pesticide suicide and what’s the evidence it works?
  • How do you know that most people attempting suicide don’t want to die?
  • What types of events are causing people to have the crises that lead to attempted suicide?
  • How much money does it cost to save a life in this way?
  • How do you estimate the probability of getting law reform passed in a particular country?
  • Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations?
  • How pursuing policy change compares to helping people one by one
  • The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders
  • What are the benefits of starting your own non-profit versus joining an existing org and persuading them of the merits of the cause?
  • Would Leah in general recommend starting a new charity? Is it more exciting than it is scary?
  • Is it important to have an academic leading this kind of work?
  • How did The Centre for Pesticide Suicide Prevention get seed funding?
  • How does the value of saving a life from suicide compare to saving someone from malaria?
  • Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe
  • What are the biggest downsides of human rights work?

Keiran Harris helped produce today’s episode.

Continue reading →

The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.

…there’s this popular observation that once you become a philanthropist, you never again tell a bad joke… everyone wants to be on your good side. And I think that can be a very toxic environment…

Holden Karnofsky

The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of two donors willing to take risky bets on new ideas.

Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of the Open Philanthropy Project, which gives away over $100 million per year – and he’s hungry for big wins.

As he learned, in the 1940s, poverty reduction overseas was not a big priority for many. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world – thereby massively increasing their food production.

Similarly in the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katherine McCormick.

In both cases, it was philanthropists rather than governments that led the way.

The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves – but to seize that opportunity they have to hire the best researchers, think long-term and be willing to fail most of the time.

Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then the Open Philanthropy Project, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism.

We’ve recorded this episode now because the Open Philanthropy Project is hiring for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, three specialist researchers on potential risks from advanced artificial intelligence, as well as a Director of Operations, an Operations Associate and a General Counsel.

But the conversation goes well beyond specifics about these jobs. We also discuss:

  • How did they pick the problems they focus on, and how will they change over time?
  • What would Holden do differently if he were starting Open Phil again today?
  • What can we learn from the history of philanthropy?
  • What makes a good Program Officer.
  • The importance of not letting hype get ahead of the science in an emerging field.
  • The importance of honest feedback for philanthropists, and the difficulty getting it.
  • How do they decide what’s above the bar to fund, and when it’s better to hold onto the money?
  • How philanthropic funding can most influence politics.
  • What Holden would say to a new billionaire who wanted to give away most of their wealth.
  • Why Open Phil is building a research field around the safe development of artificial intelligence
  • Why they invested in OpenAI.
  • Academia’s faulty approach to answering practical questions.
  • What kind of people do and don’t thrive in Open Phil’s culture.
  • What potential utopias do people most want, according to opinion polls?

Keiran Harris helped produce today’s episode.

Continue reading →

Bruce Friedrich makes the case that inventing outstanding meat replacements is the most effective way to help animals

Well, you can perfectly replicate it. You can do better. … If you are going to go the conventional meat-making way, you are constrained by the biology of the animal. If you want to use plant-based meat … you can do taste tests and find things that people like even more…

Bruce Friedrich

Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South they weren’t profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation.

Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people’s daily lives, it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters.

That’s why GFI is “working with scientists, investors, and entrepreneurs” to create plant-based meat, dairy and eggs as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities.

In this interview I’m joined by my colleague Natalie Cargill, and we ask Bruce about:

  • What’s the best meat replacement product out there right now?
  • How effective is meat substitute research for people who want to reduce animal suffering as much as possible?
  • When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat?
  • What are the challenges of producing something structurally identical to meat?
  • Can clean meat be healthier than conventional meat?
  • Do plant-based alternatives have a better shot at success than clean meat?
  • Is there a concern that, even if the product is perfect, people still won’t eat it? Why might that happen?
  • What’s it like being a vegan in a family made up largely of hunters and meat-eaters?
  • What kind of pushback should be expected from the meat industry?

Keiran Harris helped produce today’s episode.

Continue reading →

“It’s my job to worry about any way nukes could get used”

Now we have more countries with nuclear weapons, we have major potential flashpoints. We also, even though we are not in the Cold War … you do have the possibility of some sort of miscalculation or accident…

Samantha Pitts-Kiefer

Rogue elements within a state’s security forces enrich dozens of kilograms of uranium. It’s then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a point midway between the White House and the Capitol Building. The driver casually steps out of the vehicle and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them.

It’s likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House.

Samantha serves as Senior Director of The Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites – some of which lack the necessary security.

When you combine the massive death toll with the accompanying social panic and economic disruption – a nuclear 9/11 would be unthinkably bad. And yet, Samantha reminds us, we must confront the possibility.

Clearly, this is far from the only nuclear nightmare. We also discuss:

  • In the case of nuclear war, what fraction of the world’s population would die?
  • What is the biggest nuclear threat?
  • How concerned should we be about North Korea?
  • How often has the world experienced nuclear near misses?
  • How might a conflict between India and Pakistan escalate to the nuclear level?
  • How quickly must a president make a decision in the event of a suspected first strike?
  • Are global sources of nuclear material safely secured?
  • What role does cyber security have in preventing nuclear disasters?
  • How can we improve relations between nuclear armed states?
  • What do you think about the campaign for complete nuclear disarmament?
  • If you could tell the US government to do three things, what are the key priorities today?
  • Is it practical to get members of congress to pay attention to nuclear risks?
  • Could modernisation of nuclear weapons actually make the world safer?

Keiran Harris helped produce today’s episode.

Continue reading →

Ofir Reich on using data science to end poverty and the spurious action/inaction distinction

Ofir Reich spent 6 years doing math in the military, before spending another 2 in tech startups – but then made a sharp turn to become a data scientist focussed on helping the global poor.

At UC Berkeley’s Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that’s the hope – he’s also working on ways to test whether those interventions actually work.

Why dedicate his life to helping the global poor?

Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer?

After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved. On top of his life philosophy we also discuss:

  • The benefits of working in a top academic environment
  • How best to start a career in global development
  • Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory?
  • How the delivery standards of nonprofits compare to top universities
  • Why he doesn’t enjoy living in the San Francisco bay area
  • How can we fix the problem of most published research being false?
  • How good a career path is data science?
  • How important is experience in development versus technical skills?
  • How he learned much of what he needed to know in the army
  • How concerned should effective altruists be about burnout?

Keiran Harris helped produce today’s episode.

Continue reading →

Our descendants will probably see us as moral monsters. What should we do about that?

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents for universal democracy and international cooperation. He also thought that women have no place in civil society, that illegitimate children should receive fewer legal protections, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how one should reason and act if you don’t know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Keiran Harris helped produce today’s episode.

Continue reading →

Annual review December 2017

Summary

This year, we focused on “upgrading” – getting engaged readers into our top priority career paths.

We do this by writing articles on why and how to enter the priority paths, providing one-on-one advice to help the most engaged readers narrow down their options, and making introductions to help them enter.

Some of our main successes this year include:

  1. We developed and refined this upgrading process, having been focused on introductory content last year. We made lots of improvements to coaching, and released 48 pieces of content.
  2. We used the process to grow the number of rated-10 plan changes 2.6-fold compared to 2016, from 19 to 50. We primarily placed people in AI technical safety, other AI roles, effective altruism non-profits, earning to give and biorisk.
  3. We started tracking rated-100 and rated-1000 plan changes. We recorded 10 rated-100 and one rated-1000 plan change, so with this change, total new impact-adjusted significant plan changes (IASPC v2) doubled compared to 2016, from roughly 1200 to 2400. That means we’ve grown the annual rate of plan changes 23-fold since 2013. (If we ignore the rated-100+ category, then IASPCv1 grew 31% from 2016 to 2017, and 12-fold since 2013.)
  4. This meant that despite rising costs, cost per IASPC was flat. We updated our historical and marginal cost-effectiveness estimates, and think we’ve likely been highly cost-effective, though we have a lot of uncertainty.
  5. We maintained a good financial position,

Continue reading →

Guide to effective holiday giving in 2017

It’s that wonderful time of year again – the time I have to rush out a blog post about effective holiday giving before heading off for the Christmas break.

Here’s our article on how to find the best charity to give to.

In short we now recommend giving to the Effective Altruism Funds – this allows you to delegate the decision to world experts who research full time where your donations can do the most good. It’s fast, and it’s really hard to do better yourself.

Alternatively, if you’d like to try something new, check out donor lotteries. They’re a great innovation for small and medium-sized donors, though they take a minute to fully understand.

If you want to do your own research, my holiday giving guide from last year is still a good starting point, as are the recent posts by the researchers at GiveWell and the Open Philanthropy Project on where they’re giving.

A possible new year’s resolution

Thinking longer term, this is the time of year that many people take the Giving What We Can pledge to donate 10% of their income to the most impactful organisations they can find. Last year 318 people did so over the holidays, and Giving What We Can is running a pledge drive again this year.

Donating 10% is one of the more straightforward ways you can have more social impact.

Continue reading →

Michelle hopes to shape the world by shaping the ideas of intellectuals. Will global priorities research succeed?

In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English speaking world. How did this happen?

In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact building a strong intellectual base within universities can have.

Dr Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.

Its research agenda includes questions like:

  • How do we compare the good done by focussing on really different types of causes?
  • How does saving lives actually affect the world relative to other things we could do?
  • What are the biggest wins governments should be focussed on getting?

Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health.

We discuss:

  • What is global priorities research and why does it matter?
  • How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?
  • Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?
  • How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?
  • Is it harder to fundraise for a new institute, or hire the right people?
  • Have other social movements benefitted from having a prominent academic arm?
  • How can people prepare themselves to get research roles at a place like GPI?
  • Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead?
  • What are the odds of the Institute’s work having an effect on the real world?

If you’re interested in donating to or working at GPI, you can email Michelle at [email protected]

Continue reading →

Prof Tetlock on predicting catastrophes, why keep your politics secret, and when experts know more than you

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose forecasts we can trust, whose we can’t and why – and developed methods that allow all of us to be better at predicting the future.

After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015.

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.

Today he’s working to develop the best forecasting process ever by combining the best of human and machine intelligence in the Hybrid Forecasting Competition, which you can start participating in now to sharpen your own judgement.

In this interview we describe his key findings and then push to the edge of what’s known about how to foresee the unforeseeable:

  • Should people who want to be right just adopt the views of experts rather than apply their own judgement?
  • Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
  • Should I keep my political views secret, so it will be easier to change them later?
  • How can listeners contribute to his latest cutting-edge research?
  • What do we know about our accuracy at predicting low-probability high-impact disasters?
  • Does his research provide an intellectual basis for populist political movements?
  • Was the Iraq War caused by bad politics, or bad intelligence methods?
  • What can we learn about forecasting from the 2016 election?
  • Can experience help people avoid overconfidence and underconfidence?
  • When does an AI easily beat human judgement?
  • Could more accurate forecasting methods make the world more dangerous?
  • How much does demographic diversity line up with cognitive diversity?
  • What are the odds we’ll go to war with China?
  • Should we let prediction tournaments run most of the government?

Continue reading →

Why you should consider applying for grad school right now

Application deadlines for US PhD programs are coming up over the next month. We think many of our readers who are considering grad school at some point in the next few years should apply this year.

We’re writing this informal list of pros and cons now because a number of people we’ve coached recently have been more reluctant to apply for grad school than we think they should have been.

Why should they take the option seriously?

  • You have to plan far ahead of time. If you apply now you will only begin the program late next year. Even if you don’t feel ready to start a PhD today, you should consider whether you will be in a year’s time. If you aren’t sure, applying keeps that option open. We’ve spoken to many people who were considering grad school but thought they would work for a few years first, only to have their situation change and grad school come to seem like a much better option. Early in your career, your mind can change more often than you expect.
  • An increasing number of the paths we recommend, especially in research and policy, are much easier to pursue with a PhD. For example, if you want to work on improving our ability to control pandemics, the best options appear to be research (most likely in academia but perhaps also in the private sector), or policy reform (in think tanks, government agencies, congressional offices, or elsewhere). Some of the best roles are only open to people with PhDs.

Continue reading →

Going undercover to expose animal cruelty, get rabbit cages banned and reduce meat consumption

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?

That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nunez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. They’ve been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last 3 consecutive years.

In addition to undercover investigations, AE has also designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage – in a room designed to kill them – and can’t just look away. How big an impact is this having on users?

In this interview I’m joined by my colleague Natalie Cargill, and Sharon and Jose also tackle:

  • How do they track their goals and metrics week to week?
  • How much does an undercover investigation cost?
  • Why don’t people donate more to factory farmed animals, given that they’re the vast majority of animals harmed directly by humans?
  • How risky is it to attempt to build a career in animal advocacy?
  • What led to a change in their focus from bullfighting in Spain to animal farming?
  • How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?
  • Has their very rapid growth been difficult to handle?
  • What should our listeners study or do if they want to work in this area?
  • How can we get across the message that horrific cases are a feature – not a bug – of factory farming?
  • Do the owners or workers of factory farms ever express shame at what they do?

If you’re interested in this episode you’ll also want to hear our comprehensive review of ways to help animals with Lewis Bollard.

Continue reading →

What are the most important talent gaps in the effective altruism community?

What are the highest-impact opportunities in the effective altruism community right now? We surveyed leaders at 17 key organisations to learn more about what skills they need and how they would trade off receiving donations against hiring good staff. It’s a more extensive and up-to-date version of the survey we did last year.

Below is a summary of the key numbers, a link to a presentation with all the results, a discussion of what these numbers mean, and at the bottom an appendix on how the survey was conducted and analysed.

We also report on two additional surveys about the key bottlenecks in the community, and the amount of donations expected to these organisations.

Key figures

Willingness to pay to bring forward hires

We asked how much organisations would need to be compensated in donations for their last ‘junior hire’ or ‘senior hire’ to disappear and not do valuable work for a three-year period:

Most needed skills

  • Decisions on who to hire most often turned on ‘Good overall judgement about probabilities, what to do and what matters’, ‘General mental ability’ and ‘Fit with the team (over and above being into EA)’.

Funding vs talent constraints

  • On a 0-4 scale EA organisations viewed themselves as 2.5 ‘talent constrained’ and 1.2 ‘funding constrained’, suggesting hiring remains the more significant limiting factor, though funding still does limit some.

Continue reading →