#38 – Yew-Kwang Ng on anticipating EA decades ago & how to make a much happier world

Will people who think carefully about how to maximize welfare eventually converge on the same views?

The effective altruism community has spent the past 10 years debating how best to increase happiness and reduce suffering, and gradually narrowed in on the world’s poorest people, all sentient animals, and future generations.

Yew-Kwang Ng, Professor of Economics at Nanyang Technological University in Singapore, has been working on this exact question, entirely independently, since the 1970s. Many of his early conclusions are now conventional wisdom within effective altruism – though other views he holds remain controversial or little-known.

For instance, he thinks we ought to explore increasing pleasure via direct brain stimulation, and that genetic engineering may be an important tool for increasing happiness in the future.

His work has suggested that the welfare of most wild animals is negative on balance, and he hopes this is a problem humanity will work to solve in the future. Yet he thinks that greatly improved conditions for farm animals could eventually justify eating meat.

And he has spent most of his life forcefully advocating for the view that happiness, broadly construed, is the only intrinsically valuable thing.

If it’s true that careful researchers will converge as Prof Ng believes, these ideas may prove as prescient as his other, now widely accepted, opinions.

See below for our summary and appreciation of Kwang’s top publications and insights throughout a lifetime of research.

Born in Japanese-occupied Malaya during WW2, Kwang has led an exceptional life. While in high school he was drawn to physics, mathematics, and philosophy, yet he chose to study economics because of his dream: to establish communism in an independent Malaysia.

But events in the Soviet Union and the Chinese ‘cultural revolution’, in addition to his burgeoning knowledge and academic appreciation of economics, would change his views about the practicability of communism. He would soon complete his journey from young revolutionary to academic economist, and eventually become a columnist writing in support of Deng Xiaoping’s Chinese economic reforms in the 80s.

He got his PhD at Sydney University in 1971, and has since published over 250 peer-reviewed papers – covering economics, biology, politics, mathematics, philosophy, psychology, and sociology, with a particular focus on ‘welfare economics’.

In 2007, he was made a Distinguished Fellow of the Economic Society of Australia, the highest award the society bestows.

In this episode we discuss how he developed some of his most unusual ideas and his fascinating life story, including:

  • Why Kwang believes that ‘Happiness Is Absolute, Universal, Ultimate, Unidimensional, Cardinally Measurable and Interpersonally Comparable’
  • What are the most pressing questions in economics?
  • Did Kwang have to worry about censorship from the Chinese government when promoting market economics, or concern for animal welfare?
  • Welfare economics and where Kwang thinks it went wrong
  • The need to move towards a morality based on happiness
  • What are the key implications of Kwang’s views for how a government ought to set its priorities?
  • Could promoting these views accidentally give support to oppressive governments?
  • Why does Kwang think the economics profession as a whole doesn’t agree with him on many things?
  • Why he thinks we should spend much more to prevent climate change, and whether other economists are convinced by his arguments
  • Kwang’s proposed field: welfare biology.
  • Does evolution tend to create happy or unhappy creatures?
  • Novel ways to substantially increase human happiness
  • What would Kwang say to listeners who might want to build on his research in the future?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#37 – GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.

What’s the value of preventing the death of a 5-year-old child, compared to a 20-year-old, or an 80-year-old?

The global health community has generally regarded the value as proportional to the number of health-adjusted life-years the person has remaining – but GiveWell, one of the world’s foremost charity evaluators, no longer uses that approach. They found that, contrary to the ‘years-remaining’ method, many of their staff actually value preventing the death of an adult more than preventing the death of a young child. But there’s plenty of disagreement, with the team’s estimates spanning a four-fold range.

As James Snowden – a research consultant at GiveWell – explains in this episode, there’s no way around making these controversial judgement calls based on limited information. If you try to ignore a question like this, you just implicitly take an unreflective stance on it instead. And for each charity they investigate there are one or two dozen of these highly uncertain parameters that need to be estimated.

GiveWell has been working to find the best way to make these decisions since its inception in 2007. Lives hang in the balance, so they want their staff to say what they really believe and bring whatever private knowledge they have to the table, rather than just defer to their managers, or an imaginary consensus.

Their strategy is to have a massive spreadsheet that lists dozens of things they need to know, and to ask every staff member to give a figure and justification. Then once a year, the GiveWell team gets together to identify what they really disagree about and think through what evidence it would take to change their minds.

Often the people who have the greatest familiarity with a particular intervention are the ones who drive the decision, as others choose to defer to them. But the group can also end up with very different answers, based on different prior beliefs about moral issues and how the world works. In that case, they use the median of everyone’s best guesses to make their key decisions.

In making his estimate of the relative badness of dying at different ages, James specifically considered two factors: how many years of life do you lose, and how much interest do you have in those future years? Currently, James believes that the worst time for a person to die is around 8 years of age.
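
Here’s a purely illustrative sketch of how two factors like these might be combined. The life expectancy and the ‘interest in future years’ curve below are invented, and deliberately chosen so the result peaks in childhood – this is not James’s or GiveWell’s actual model, just a toy version of the reasoning.

```python
# Hypothetical two-factor model for the badness of death at different ages.
# The life expectancy and interest-weight curve are illustrative assumptions.

LIFE_EXPECTANCY = 70  # assumed, in years

def years_lost(age):
    """Factor 1: how many years of life are lost by dying at this age."""
    return max(LIFE_EXPECTANCY - age, 0)

def interest_in_future(age):
    """Factor 2 (hypothetical): how developed the person's interest in their
    own future is, ramping up from 0 at birth to 1 by around age 8."""
    return min(age / 8, 1.0)

def badness_of_death(age):
    return years_lost(age) * interest_in_future(age)

if __name__ == "__main__":
    for age in [0, 5, 8, 20, 40, 80]:
        print(f"age {age:>2}: badness = {badness_of_death(age):.1f}")
    # With these invented weights, the worst age to die comes out around 8:
    # young enough to lose many years, old enough to have strong interests.
```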

We discuss his experiences with doing such calculations, as well as various other topics:

  • Why GiveWell’s recommendations have changed more than it looks.
  • What are the biggest research priorities for GiveWell at the moment?
  • How do you take into account the long-term knock-on effects from interventions?
  • If GiveWell’s advice were going to end up being very different in a couple years’ time, how might that happen?
  • Are there any charities that James thinks are really cost-effective which GiveWell hasn’t funded yet?
  • How does domestic government spending in the developing world compare to effective charities?
  • What are the main challenges with policy related interventions?
  • What are the main uncertainties around interventions to reduce pesticide suicide? Are there any other mental health interventions you’re looking at?
  • How much time do you spend trying to discover novel interventions?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#36 – Tanya Singh on ending the operations management bottleneck in effective altruism

Almost nobody is able to do groundbreaking physics research themselves, and by the time his brilliance was appreciated, Einstein was hardly limited by funding. But what if you could find a way to unlock the secrets of the universe like Einstein nonetheless?

Today’s guest, Tanya Singh, sees herself as doing something like that every day. She’s Executive Assistant to one of her intellectual heroes who she believes is making a huge contribution to improving the world: Professor Bostrom at Oxford University’s Future of Humanity Institute (FHI).

She couldn’t get more work out of Bostrom with extra donations, as his salary is already easily covered. But with her superior abilities as an Executive Assistant, Tanya frees up hours of his time every week, essentially ‘buying’ more Bostrom in a way nobody else can. She also help manage FHI more generally, in so doing freeing up more than an hour of staff time for each hour she works. This gives her the leverage to do more good than other people or other positions.

In our previous episode, Tara Mac Aulay objected to viewing operations work as predominantly a way of freeing up other people’s time:

“A good ops person doesn’t just allow you to scale linearly, but also can help figure out bottlenecks and solve problems such that the organization is able to do qualitatively different work, rather than just increase the total quantity”, Tara said.

Tara’s right that buying time for people at the top of their field is just one path to impact, though it’s one Tanya says she finds highly motivating. Other paths include enabling complex projects that would otherwise be impossible, allowing you to hire and grow much faster, and preventing disasters that could bring down a whole organisation – all things that Tanya does at FHI as well.

In today’s episode we discuss all of those approaches, as we dive deeper into the broad class of roles we refer to as ‘operations management’. We discuss the arguments we made in ‘Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

  • Does one really need to hire people aligned with an org’s mission to work in ops?
  • The most notable operations successes in the 20th Century.
  • What’s it like being the only operations person in an org?
  • The role of a COO as compared to a CEO, and the options for career progression.
  • How do good operations teams allow orgs to scale quickly?
  • How much do operations staff get to set their org’s strategy?
  • Which personal weaknesses aren’t a huge problem in operations?
  • How do you automate processes? Why don’t most people do this?
  • Cultural differences between Britain and India, where Tanya grew up.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#35 – Tara Mac Aulay on the audacity to fix the world without asking permission

How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’.

At 15 she took her first job – an entry-level position at a chain restaurant. Rather than accept her place, Tara took it upon herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs by 30% she was quickly promoted, and at 16 was sent in to overhaul dozens of failing stores in a final effort to save them from closure.

That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator.

In this episode – available in audio and summary or transcript below – Tara demonstrates how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious.

People with an operations mindset spot failures others can’t see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face.

But as Tara’s experience shows, they need to figure out what actually motivates the authorities who often try to block their reforms.

We explore how people with this skill set can do as much good as possible, what 80,000 Hours got wrong in our article ‘Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

  • Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform.
  • How a student can save a hospital millions with a simple spreadsheet model.
  • The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better.
  • What most people misunderstand about operations, and how to tell if you have what it takes.
  • And finally, operations jobs people should consider applying for, such as those open now at the Centre for Effective Altruism.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#34 – We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.

In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10-year invitation to Federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it.

The truly strange thing is that Edwards was clearly the good guy in the race. How is that possible?

His opponent was former Ku Klux Klan Grand Wizard David Duke.

How could Louisiana end up having to choose between a criminal and a Nazi sympathiser?

It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either.

Unfortunately, in Louisiana every candidate from every party competes in the first round, and the top two then go on to a second – a so-called ‘jungle primary’. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round.

Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading “Vote for the Crook. It’s Important.”

We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote.

He advocates an alternative voting method called approval voting, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils.
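
To make the mechanics concrete, here’s a minimal sketch of how approval ballots are tallied, using invented numbers loosely inspired by the 1991 race. The ballots and totals are purely hypothetical; the point is just that the winner is whoever is approved on the most ballots, so a broadly acceptable candidate can’t be squeezed out by vote splitting.

```python
from collections import Counter

# Hypothetical approval ballots: each voter marks every candidate they find
# acceptable, not just a single favourite. Numbers are invented for illustration.
ballots = (
    [["Edwards", "Roemer"]] * 34   # anti-Duke voters who also approve Roemer
    + [["Duke", "Roemer"]] * 32    # anti-Edwards voters who also approve Roemer
    + [["Roemer"]] * 20            # Roemer-only voters
    + [["Edwards"]] * 8
    + [["Duke"]] * 6
)

# Tally: count every approval each candidate receives across all ballots.
tally = Counter(candidate for ballot in ballots for candidate in ballot)

for candidate, approvals in tally.most_common():
    print(f"{candidate}: {approvals}")
# In this invented scenario Roemer wins comfortably (86 approvals vs 42 and 38),
# even though he is few voters' only choice – the outcome vote splitting denied him.
```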

While it might not seem sexy, this single change could transform politics. Approval voting is adored by voting researchers, who regard it as the best simple voting system available. (For whether your individual vote matters, see our article on the importance of voting.)

Which do they regard as unquestionably the worst? First-past-the-post – precisely the disastrous system used and exported around the world by the US and UK.

Aaron has a practical plan to spread approval voting across the US using ballot initiatives – and it just might be our best shot at making politics a bit less unreasonable.

The Center for Election Science is a U.S. nonprofit which aims to fix broken government by helping the world adopt smarter election systems. They recently received a $600,000 grant from Open Philanthropy to scale up their efforts.

Get this episode now by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or check out the transcript below.

In this comprehensive conversation Aaron and I discuss:

  • Why hasn’t everyone just picked the best voting system already? Why is this a tough issue?
  • How common is it for voting systems to produce suboptimal outcomes, or even disastrous ones?
  • What is approval voting? What are its biggest downsides?
  • The positives and negatives of different voting methods used globally
  • The difficulties of getting alternative voting methods implemented
  • Do voting theorists mostly agree on the best voting method?
  • Are any unequal voting methods – where those considered more politically informed get a disproportionate say – viable options?
  • Does a lack of general political knowledge from an electorate mean we need to keep voting methods simple?
  • How does voting reform stack up on the 80,000 Hours metrics of scale, neglectedness and solvability?
  • Is there anywhere where these reforms have been tested so we can see the expected outcomes?
  • Do we see better governance in countries that have better voting systems?
  • What about the argument that we don’t want the electorate to have more influence (because of their at times crazy views)?
  • How much does a voting method influence a political landscape? How would a change in voting method affect the two party system?
  • How did the voting system affect the 2016 US presidential election?
  • Is there a concern that changing to approval voting would lead to more extremist candidates getting elected?
  • What’s the practical plan to get voting reform widely implemented? What’s the biggest challenge to implementation?
  • Would it make sense to target areas of the world that are currently experiencing a period of political instability?
  • Should we try to convince people to use alternative voting methods in their everyday lives (when going to the movies, or choosing a restaurant)?
  • What staff does CES need? What would they do with extra funding? What do board members do for a nonprofit?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#33 – Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded?

According to our last guest, Bryan Caplan, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin, he has his eye on his own immortality – including an insurance plan that will cover the cost of cryogenically freezing himself after he dies – and thinks the technology to achieve it might be around the corner.

Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever – among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford.

His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

  • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
  • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
  • If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians?
  • What long-shot drugs can people take in their 70s to stave off death?
  • Can science extend human (waking) life by cutting our need to sleep?
  • How bad would it be if a solar flare took down the electricity grid? Could it happen?
  • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
  • Will lifelike robots make us more inclined to dehumanise one another?

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#32 – Bryan Caplan on whether the Case Against Education holds up, totalitarianism, & open borders

Bryan Caplan’s claim in The Case Against Education is striking: education doesn’t teach people much, we use little of what we learn, and college is mostly about trying to seem smarter than other people – so the government should slash education funding.

It’s a dismaying – almost profane – idea, and one most are inclined to dismiss out of hand. But having read the book, I have to admit that Bryan can point to a surprising amount of evidence in his favour.

After all, imagine this dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which is the bigger benefit of going to Princeton – learning, or convincing people you’re smart? It’s not so easy to say.

For this interview, I searched for the best counterarguments I could find and challenged Bryan on what seem like the book’s weakest or most controversial claims.

Wouldn’t defunding education be especially bad for capable but low income students? Shouldn’t we just make incremental rather than radical changes to policy? If you reduced funding for education, wouldn’t that just lower prices, and not actually change the number of years people study? Is it really true that students who drop out in their final year of college earn about the same as people who never go to college at all?

And while we’re at it, don’t Bryan and I actually use what we learned at college every day? What about studies that show that extra years of education boost IQ scores? And surely the early years of primary school, when you learn reading and arithmetic, are useful even if college isn’t.

I then get his advice on who should study, what they should study, and where they should study, if he’s right that college is mostly about separating yourself from the pack.

We then venture into some of Bryan’s other unorthodox views – like that immigration restrictions are a human rights violation, or that we should worry about the risk of global totalitarianism.

Bryan is a Professor of Economics at George Mason University and blogger at EconLog. He’s the author of three books: The Case Against Education: Why The Education System is a Waste of Time and Money, Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think, and The Myth of the Rational Voter: Why Democracies Choose Bad Policies.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

In this lengthy interview, Rob and Bryan cover:

  • How worried should we be about China’s new citizen ranking system as a means of authoritarian rule?
  • How will advances in surveillance technology impact a government’s ability to rule absolutely?
  • Does more global coordination make us safer, or more at risk?
  • Should the push for open borders be a major cause area for effective altruism?
  • Are immigration restrictions a human rights violation?
  • Why aren’t libertarian-minded people more focused on modern slavery?
  • Should altruists work on criminal justice reform or reducing land use regulations?
  • What’s the greatest art form: opera, or Nicki Minaj?
  • What are the main implications of Bryan’s thesis for society?
  • Is elementary school more valuable than university?
  • What does Bryan think are the best arguments against his view?
  • The specific effects of defunding education on low income students
  • Is it possible that we wouldn’t want success in education to correlate with worker productivity?
  • Do years of education affect political affiliation?
  • How do people really improve themselves and their circumstances?
  • Who should and who shouldn’t do a masters or PhD?
  • The value of teaching foreign languages in school
  • Are there some skills people can develop that have wide applicability?
  • Are those that use their training every day just exceptions?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#31 – Allan Dafoe on defusing the political and economic risks posed by existing AI capabilities

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ – a general intellect that is much smarter than the best humans, in practically every field.

But according to Allan Dafoe – Senior Research Fellow in the International Politics of AI at Oxford University – even if we stopped at today’s AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

  • Mass labor displacement, unemployment, and inequality;
  • The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
  • Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;
  • Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
  • Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Director of the Center for the Governance of AI, at the Future of Humanity Institute within Oxford University. His goals have been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation. His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.

I ask Allan:

  • What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
  • Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
  • How can AI be well-governed?
  • How should we think about the idea of arms races between companies or countries?
  • What would you say to people skeptical about the importance of this topic?
  • How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
  • What are the most urgent questions to deal with in this field?
  • What can people do if they want to get into the field?
  • Is there anything unusual that people can look for in themselves to tell if they’re a good fit to do this kind of research?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#30 – Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development – including 15,024 estimates from 635 papers across 20 types of intervention – to help answer this question.

Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if the existing studies of an education program find that, on average, it improves test scores by 0.5 standard deviations, the next result is as likely to be negative or greater than 1 standard deviation as it is to fall between 0 and 1 standard deviations.
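
To see what a ‘differs by almost 100%’ figure means in practice, here’s a toy sketch of that kind of calculation with made-up effect sizes. Eva’s actual analysis uses a far larger database and more careful metrics; this just shows the idea of comparing each new result against the mean of the studies that came before it.

```python
import statistics

# Made-up effect sizes (in standard deviations) for one hypothetical
# education intervention, in the order the studies were published.
effects = [0.5, 0.6, -0.1, 1.2, 0.3]

# For each new result: how far does it sit from the mean of the earlier
# studies, expressed as a share of that prior mean?
for i in range(1, len(effects)):
    prior_mean = statistics.mean(effects[:i])
    deviation = abs(effects[i] - prior_mean) / abs(prior_mean) * 100
    print(f"study {i + 1}: prior mean {prior_mean:+.2f} SD, "
          f"new result {effects[i]:+.2f} SD, deviation {deviation:.0f}%")
```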

She also observed that results from smaller studies conducted by NGOs – often pilot studies – would often look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting.

Is ‘evidence-based development’ writing a cheque its methodology can’t cash?

Should we invest more in collecting evidence to try to get reliable results?

Or, as some critics say, is interest in impact evaluation distracting us from more important issues, like national economic reforms that can’t be tested in randomised controlled trials?

We discuss these questions as well as Eva’s other research, including Y Combinator’s basic income study where she is a principal investigator.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Questions include:

  • What is the YC basic income study looking at, and what motivates it?
  • How do we get people to accept clean meat?
  • How much can we generalize from impact evaluations?
  • How much can we generalize from studies in development economics?
  • Should we be running more or fewer studies?
  • Do most social programs work or not?
  • The academic incentives around data aggregation
  • How much can impact evaluations inform policy decisions?
  • How often do people change their minds?
  • Do policy makers update too much or too little in the real world?
  • How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
  • How often should we believe positive results?
  • What’s the state of development economics?
  • Eva’s thoughts on our article on social interventions
  • How much can we really learn from being empirical?
  • How much should we really value RCTs?
  • Is an Economics PhD overrated or underrated?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#29 – Anders Sandberg on three new resolutions for the Fermi Paradox and how to easily colonise the universe

Update April 2019: The key theory Dr Sandberg puts forward for why aliens may delay their activities has been strongly disputed in a new paper, which claims it is based on an incorrect understanding of the physics of computation.

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do with the same energy. The universe is getting exponentially colder as it expands, so as it cools, each Joule of energy becomes worth more and more computation. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So if a civilization wanted to maximize its ability to perform computations, its best option might be to lie in wait for trillions of years.
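
The physics behind that claim can be sketched with Landauer’s principle: erasing one bit of information requires a minimum energy proportional to temperature, so a fixed energy budget buys more bit operations the colder it gets. The far-future temperature below is only an illustrative order of magnitude, and as the update above notes, this line of reasoning has since been disputed.

```latex
% Landauer bound: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2
\qquad\Longrightarrow\qquad
\text{bit erasures per Joule} = \frac{1}{k_B T \ln 2}

% Illustrative gain from waiting for the universe to cool from T_now to T_future
\frac{N_{\text{future}}}{N_{\text{now}}}
  = \frac{T_{\text{now}}}{T_{\text{future}}}
  \approx \frac{3\ \mathrm{K}}{10^{-30}\ \mathrm{K}}
  \approx 10^{30}
```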

Why would a civilization want to maximise the number of computations it can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long in subjective time. Waiting would allow it to generate vastly more science, art, pleasure, or almost anything else it is likely to care about.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations to the Fermi paradox, as well as questions like:

  • Should we want optimists or pessimists working on our most important problems?
  • How should we reason about low probability, high impact risks?
  • Would a galactic civilization want to stop the stars from burning?
  • What would be the best strategy for exploring and colonising the universe?
  • How can you stay coordinated when you’re spread across different galaxies?
  • What should humanity decide to do with its future?

If you enjoy this episode, make sure to check out part two where we talk to Anders about dictators living forever, the annual risk of nuclear war, solar flares, and more.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#28 – Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & what if AI progresses fast

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people – they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

Once an insurer assesses how much damage a particular project is expected to cause, and with what likelihood, the researcher would need to take out insurance against the predicted risk in order to proceed. In return, the insurer promises that they’ll pay out – potentially tens of billions of dollars – if things go really badly.

This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose on themselves.
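
As a stylised illustration of how such a premium might be set – all numbers below are hypothetical, not estimates from Owen’s paper – the insurer would charge at least the expected loss, i.e. the probability of a catastrophic accident multiplied by the damages it would have to pay out, plus a margin for its own risk and overheads.

```python
# Hypothetical numbers purely for illustration.
p_catastrophe_per_year = 1e-4      # assumed annual probability of a major accident
payout_if_catastrophe = 50e9       # assumed damages the insurer must cover, in dollars
loading_factor = 1.5               # margin for the insurer's risk aversion and costs

expected_annual_loss = p_catastrophe_per_year * payout_if_catastrophe
annual_premium = expected_annual_loss * loading_factor

print(f"expected annual loss: ${expected_annual_loss:,.0f}")   # $5,000,000
print(f"annual premium:       ${annual_premium:,.0f}")         # $7,500,000
# Even a tiny annual probability of a huge payout produces a premium large
# enough to make the researcher weigh the costs and benefits seriously.
```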

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

  • Are academics wrong to value personal interest in a topic over its importance?
  • What fraction of research has very large potential negative consequences?
  • Why do we have such different reactions to situations where the risks are known and unknown?
  • The downsides of waiting for tenure to do the work you think is most important.
  • What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
  • How should people balance the trade-offs between having a successful career and doing the most important work?
  • Are there any blind alleys we’ve gone down when thinking about AI safety?
  • Why did Owen give to an organisation whose research agenda he is skeptical of?

Continue reading →

#27 – Dr Tom Inglesby on how to prevent global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr. Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help.)

But there are positive signs as well – the center Inglesby leads recently received a $16 million grant from Open Philanthropy to further their work preventing global catastrophes. It also runs the Emerging Leaders in Biosecurity Fellowship to train the next generation of biosecurity experts for the US government. Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security, and also discuss:

  • Should more people in medicine work on security?
  • What are the top jobs for people who want to improve health security and how do they work towards getting them?
  • What people can do to protect funding for the Global Health Security Agenda.
  • Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
  • Should we be allocating more attention and resources to global catastrophic risk scenarios?
  • Why are senior figures reluctant to prioritize one project or area at the expense of another?
  • What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
  • Are the main risks and solutions understood, and it’s just a matter of implementation? Or is the principal task to identify and understand them?
  • How is the current US government performing in these areas?
  • Which agencies are empowered to think about low probability high magnitude events?
  • Are there any scientific breakthroughs that carry particular risk of harm?
  • How do we approach safety in terms of rogue groups looking to inflict harm? How is that different from preventing accidents?
  • If a terrorist group were pursuing biological weapons, how much would the CIA or other organizations then become involved in the process?
  • What are the biggest unsolved questions in health security?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#26 – Marie Gibbons on how exactly clean meat is created & the advances needed to get it in every supermarket

First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells are in this comfortable state, they’ll proliferate. One cell becomes two, two becomes four, four becomes eight, and so on. Continue until you have enough cells to make a burger, a nugget, a sausage, or a piece of bacon, then concentrate them until they bind into solid meat.
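
For a sense of how many doublings that ‘and so on’ involves, here’s a back-of-the-envelope sketch. The starting cell count, cells per burger, and doubling time are rough assumptions for illustration, not figures from Marie.

```python
import math

starting_cells = 1_000    # assumed cells recovered from a small biopsy
cells_per_burger = 8e9    # rough assumption for a ~100 g patty

# Each doubling multiplies the population by 2, so the number of doublings
# needed is the base-2 log of the required growth factor.
doublings = math.ceil(math.log2(cells_per_burger / starting_cells))
print(f"~{doublings} doublings needed")  # ~23

# At an assumed doubling time of 24 hours, that's roughly three weeks of growth.
print(f"~{doublings} days at one doubling per day")
```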

It’s all surprisingly straightforward in principle according to Marie Gibbons, a research fellow with The Good Food Institute, who has been researching how to improve this process at Harvard Medical School. We might even see clean meat sold commercially within a year.

The real technical challenge is developing large bioreactors and cheap solutions so that we can make huge volumes and drive down costs.

This interview covers the science and technology involved at each stage of clean meat production, the challenges and opportunities that face cutting-edge researchers like Marie, and how you could become one of them.

Marie’s research focuses on turkey cells. But as she explains, with clean meat the possibilities extend well beyond those of traditional meat. Chicken, cow, pig, but also panda – and even dinosaurs could be on the menus of the future.

Today’s episode is hosted by Natalie Cargill, a barrister in London with a background in animal advocacy. Natalie and Marie also discuss:

  • Why Marie switched from being a vet to developing clean meat
  • For people who want to dedicate themselves to animal welfare, how does working in clean meat fare compared to other career options? How can people get jobs in the area?
  • How did this become an established field?
  • How important is the choice of animal species and cell type in this process?
  • What are the biggest problems with current production methods?
  • Is this kind of research best done in an academic setting, a commercial setting, or a balance between the two?
  • How easy will it be to get consumer acceptance?
  • How valuable would extra funding be for cellular agriculture?
  • Can we use genetic modification to speed up the process?
  • Is it reasonable to be sceptical of the possibility of clean meat becoming financially competitive with traditional meat any time in the near future?

The 80,000 Hours podcast is produced by Keiran Harris.

Image credits: Featured image – the range of bioreactors available from Sartorius. Social share image – figure 2 from GFI’s Mapping Emerging Industries: Opportunities in Clean Meat.

Continue reading →

#25 – Robin Hanson on why we have to lie to ourselves about why we do what we do

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public they were kept abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week.

Why did the doctors go this far?

Prof Robin Hanson – Associate Professor of Economics at George Mason University – suspects that on top of any medical beliefs the doctors had a hidden motive: it needed to be clear, to the King and the public, that the physicians cared enormously about saving His Royal Majesty. Only extreme measures could make it undeniable that they had done everything they could.

If you believe Hanson, the same desire to prove we care about our family and friends explains much of what’s perverse about our medical system today.

And not only what’s perverse about medicine – Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our political expression is about choosing wise policies.

So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.

Robin is a polymath economist, and a font of surprising and novel ideas in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, The Elephant in the Brain: Hidden Motives in Everyday Life. We also dive into:

  • What was it like being part of a competitor group to the ‘World Wide Web’, but being beaten to the post?
  • If people aren’t going to school to learn, what’s education for?
  • What split brain patients show about our capacity for self-justification
  • Why we choose the friends we do
  • What’s puzzling about our attitude to medicine?
  • How would it look if people were focused on doing as much good as possible?
  • Are we better off donating now, when we’re older, or even after our deaths?
  • How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?
  • What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?
  • Should people make peace with their hidden motives, or remain ignorant of them?
  • How might we change policy if we fully understood these hidden motivations?
  • Is this view of human nature depressing?
  • Could we let betting markets run much of the government?
  • Why don’t big ideas for institutional reform get adopted?
  • Does history show we’re capable of predicting when new technologies will arise, or what their social impact will be?
  • What are the problems with thinking about the future in an abstract way?
  • Why has Robin shifted from mainly writing papers, to writing blog posts, to writing books?
  • Why are people working in policy reluctant to accept conclusions from psychology?
  • How did being publicly denounced by senators help Robin’s career?
  • Is contrarianism good or bad?
  • The relationship between the quality of an argument and its popularity
  • What would Robin like to see effective altruism do differently?
  • What has Robin changed his mind about over the last 5 years?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#24 – Stefan Schubert on why it's a bad idea to break the rules, even if it's for a good cause

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects – to save time, avoid making mistakes, and ensure others can predict our behaviour.

In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ – including where it’s going right and where it’s going wrong.

Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford.

We also discuss:

  • Should we trust our own judgement more than others’?
  • How hard is it to improve political discourse?
  • What should we make of well-respected academics writing articles that seem to be completely misinformed?
  • How is effective altruism (EA) changing? What might it be doing wrong?
  • How has Stefan’s view of EA changed?
  • Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?
  • How much should we cooperate with those with whom we have disagreements?
  • What good reasons are there to be inconsiderate?
  • Should effective altruism potentially focus on a narrower range of problems?

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

#23 – How to actually become an AI alignment researcher, according to Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired.

That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team.

Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good objective functions in cases where we can’t easily specify the outcome we actually want.

How might you know you’re a good fit for this kind of research?

Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people.

We also discuss:

  • Where do Jan’s views differ from those expressed by Dario Amodei in episode 3?
  • Why is AGI alignment one of the world’s most pressing problems?
  • Common misconceptions about artificial intelligence
  • What are some of the specific things DeepMind is researching?
  • The ways in which today’s AI systems can fail
  • What are the best techniques available today for teaching an AI the right objective function?
  • What’s it like to have some of the world’s greatest minds as coworkers?
  • Who should do empirical research and who should do theoretical research
  • What’s the DeepMind application process like?
  • The importance of researchers being comfortable with the unknown.

The 80,000 Hours podcast is produced by Keiran Harris.

Continue reading →

Yes, a career in commercial law has earning potential. We still don’t recommend it.

Going into law isn’t going out of style. Law ranks among the top five career options for students1 and is one of the most popular degree courses at undergraduate level.2 What explains its persistent appeal? While people go into law for a number of reasons,3 many are motivated to make a difference through public interest and pro bono work.4

Law is also one of the highest paying professions, however, so working directly on social justice issues isn’t the only way you can do good as a lawyer. If you enjoy commercial work and can secure a place at a high-paying firm, you can also have an impact by donating some of your earnings to charity. We call this earning to give.

If you target your donations to highly effective charities, this could be just as high-impact as public interest law. Newly qualified lawyers at top-ranked firms can expect to earn upwards of £70,000. Donating 10% of this take-home pay5 would be enough to save somebody’s life by buying anti-malaria bednets.6 If you are one of the approximately 5% who makes partner, you could earn over £1m each year – enough to fund a whole team of researchers, advocates or non-profit entrepreneurs.
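
As a rough worked example of that claim – the tax and cost-per-life figures below are illustrative assumptions, not the figures behind footnotes 5 and 6:

```python
# Illustrative assumptions only – UK tax details and charity cost-effectiveness
# estimates change over time; see the footnoted sources for the real figures.
gross_salary = 70_000            # £, newly qualified at a top firm
take_home = 48_000               # £, rough figure after income tax and National Insurance
donation = 0.10 * take_home      # £4,800 per year

assumed_cost_to_save_a_life = 3_500   # £, hypothetical bednet-charity estimate

lives_saved_per_year = donation / assumed_cost_to_save_a_life
print(f"£{donation:,.0f} donated ≈ {lives_saved_per_year:.1f} lives saved per year")
```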

In this profile, we explore the pros and cons of law for earning to give. We focus on high-end commercial law – where the money is – and hope to discuss public interest law in a separate review. It’s based on the legal training and experience of the primary author of this profile, Natalie Cargill, as well as conversations with lawyers from a range of practice areas. We’ve also drawn on academic literature, surveys by the Law Society, and publicly-available salary data.

Continue reading →

Blog: A new recommended career path for effective altruists: China specialist

Last summer, China unveiled a plan to become the world leader in artificial intelligence, aiming to create a $150 billion industry by 2030.

“We must take initiative to firmly grasp this new stage of development for artificial intelligence and create a new competitive edge,” the country’s State Council said. The move symbolised the technological thrust of “the great rejuvenation of the Chinese nation” promoted by President Xi Jinping.

And it’s not just AI. China is becoming increasingly important to solving other global problems prioritised by the effective altruism community, including biosecurity, factory farming and nuclear security. But few in the community know much about the country, and coordination between Chinese and Western organisations seems like it could be improved a great deal.

This suggests that a high-impact career path could be to develop expertise in the intersection between China, effective altruism and pressing global issues. Once you’ve attained this expertise, you can use it to carry out research into global priorities or AI strategy; work in governments setting relevant areas of China–West policy; advise Western groups on how to work with their Chinese counterparts; or pursue other projects that we’ll sketch below…

Continue reading →

#22 – Leah Utyasheva on the nonprofit that figured out how to massively cut suicide rates

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide.

There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning.

Research suggests most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don’t die the first time around will try again.

Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%.

Having such dangerous chemicals near people’s homes is therefore an enormous public health issue not only for the direct victims, but also the partners and children they leave behind.

Fortunately researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates.

In 2016, Leah co-founded the first organisation focused on this problem – The Centre for Pesticide Suicide Prevention – which recently received an incubation grant from GiveWell. She’s a human rights expert and law reform specialist, and has participated in drafting legal aid, human rights, gender equality, and anti-discrimination legislation in various countries across Europe and Canada.

In this episode, Leah and I discuss:

  • How do you prevent pesticide suicide and what’s the evidence it works?
  • How do you know that most people attempting suicide don’t want to die?
  • What types of events are causing people to have the crises that lead to attempted suicide?
  • How much money does it cost to save a life in this way?
  • How do you estimate the probability of getting law reform passed in a particular country?
  • Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations?
  • The comparison of getting policy change rather than helping person-by-person
  • The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders
  • What are the benefits of starting your own nonprofit versus joining an existing org and persuading them of the merits of the cause?
  • Would Leah in general recommend starting a new charity? Is it more exciting than it is scary?
  • Is it important to have an academic leading this kind of work?
  • How did The Centre for Pesticide Suicide Prevention get seed funding?
  • How does the value of saving a life from suicide compare to saving someone from malaria?
  • Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe
  • What are the biggest downsides of human rights work?

Keiran Harris helped produce today’s episode.

Continue reading →