#111 – Mushtaq Khan on how mainstream economics gets corruption and good governance all wrong

If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines.

The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has repeatedly failed in its attempts to stop this theft.

They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here?

According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil.

In today’s episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world.

Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organised, which interest groups can pull in favours with the government, and the constant push and pull between the country’s rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us.

The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law-abiding while others are not, and why some government programmes improve public welfare while others just enrich the well-connected.

Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse — the new anti-corruption laws are mostly just used to persecute anyone who challenges the country’s rulers.

As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law-abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact, in the countries they’re trying to change, everyone is breaking some rule or other, or participating in so-called ‘corruption’, because it’s the only way to get things done and always has been.

Mushtaq’s rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they’re participating in, they almost always win out.

To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organisations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers.

Trying to impose a new way of doing things from the top down wasn’t how Europe modernised, and it won’t work elsewhere either.

In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption.

In this extensive interview Rob and Mushtaq cover this and much more, including:

  • How does one test theories like this?
  • Why are companies in some poor countries so much less productive than their peers in rich countries?
  • Have rich countries just legalized the corruption in their societies?
  • What are the big live debates in institutional economics?
  • Should poor countries protect their industries from foreign competition?
  • Where has industrial policy worked, and why?
  • How can listeners use these theories to predict which policies will work in their own countries?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#110 – Holden Karnofsky on building aptitudes and kicking ass

Holden Karnofsky helped create two of the most influential organisations in the effective philanthropy world. So when he outlines a perspective on career advice different from the one we present at 80,000 Hours, we take it seriously.

Holden disagrees with us on a few specifics, but it’s more than that: he prefers a different vibe when making career choices, especially early in one’s career.

While he might ultimately recommend similar jobs to those we recommend at 80,000 Hours, the reasons are often different.

At 80,000 Hours we often talk about ‘paths’ to working on what we currently think of as the most pressing problems in the world. That’s partially because people seem to prefer the most concrete advice possible.

But Holden thinks a problem with that kind of advice is that it’s hard to take actions based on it if your job options don’t match well with your plan, and it’s hard to get a reliable signal about whether you’re making the right choices.

How can you know you’ve chosen the right cause? How can you know the future job you’re aiming for will still be helpful to that cause? And what if you can’t get a job in this area at all?

Holden prefers to focus on ‘aptitudes’ that you can build in all sorts of different roles and cause areas, which can later be applied more directly.

Even if the current role or path doesn’t work out, or your career goes in wacky directions you’d never anticipated (like so many successful careers do), or you change your whole worldview — you’ll still have access to this aptitude.

So instead of trying to become a project manager at an effective altruism organisation, maybe you should just become great at project management. Instead of trying to become a researcher at a top AI lab, maybe you should just become great at digesting hard problems.

Who knows where these skills will end up being useful down the road?

Holden doesn’t think you should spend much time worrying about whether you’re having an impact in the first few years of your career — instead you should just focus on learning to kick ass at something, knowing that most of your impact is going to come decades into your career.

He thinks as long as you’ve gotten good at something, there will usually be a lot of ways that you can contribute to solving the biggest problems.

But that still leaves you needing to figure out which aptitude to focus on.

Holden suggests a couple of rules of thumb:

  1. Do what you’ll succeed at
  2. Take your intuitions and feelings seriously

80,000 Hours does recommend thinking about these types of things under the banner of career capital, but Holden’s version puts the development of these skills at the centre of your plan.

But Holden’s most important point, perhaps, is this:

Be very careful about following career advice at all.

He points out that a career is such a personal thing that it’s very easy for the advice-giver to be oblivious to important factors having to do with your personality and unique situation.

He thinks it’s pretty hard for anyone to really have justified empirical beliefs about career choice, and that you should be very hesitant to make a radically different decision than you would have otherwise based on what some person (or website!) tells you to do.

Instead, he hopes conversations like these serve as a way of prompting discussion and raising points that you can apply your own personal judgment to.

That’s why in the end he thinks people should look at their career decisions through his aptitude lens, the ‘80,000 Hours lens’, and ideally several other frameworks as well. Because any one perspective risks missing something important.

Holden and Rob also cover:

  • When not to do the thing you’re excited about
  • Ways to be helpful to longtermism outside of careers
  • ‘Money pits’ — cost-effective things that could absorb a lot of funding
  • Why finding a new cause area might be overrated
  • COVID and the biorisk portfolio
  • Whether the world has gotten better over thousands of years
  • Historical events that deserve more attention
  • Upcoming topics on Cold Takes
  • What Holden’s gotten wrong recently
  • And much more

Continue reading →

#109 – Holden Karnofsky on the most important century

Will the future of humanity be wild, or boring? It’s natural to think that if we’re trying to be sober and measured, and predict what will really happen rather than spin an exciting story, it’s more likely than not to be sort of… dull.

But there’s also good reason to think that that is simply impossible. The idea that there’s a boring future that’s internally coherent is an illusion that comes from not inspecting those scenarios too closely.

At least that is what Holden Karnofsky — founder of charity evaluator GiveWell and foundation Open Philanthropy — argues in his new article series titled ‘The Most Important Century’. He hopes to lay out part of the worldview that’s driving the strategy and grantmaking of Open Philanthropy’s longtermist team, and encourage more people to join his efforts to positively shape humanity’s future.

The bind is this. For the first 99% of human history the global economy (initially mostly food production) grew very slowly: under 0.1% a year. But since the industrial revolution around 1800, growth has exploded to over 2% a year.

To us in 2020 that sounds perfectly sensible and the natural order of things. But Holden points out that in fact it’s not only unprecedented, it also can’t continue for long.

The power of compounding means that sustaining 2% growth for just 10,000 years (5% as long as humanity has already existed) would require us to turn every individual atom in the galaxy into an economy as large as the Earth’s today. Not super likely.
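The arithmetic behind that claim can be sanity-checked in a few lines. A minimal sketch (the atom count is a commonly cited order-of-magnitude estimate, not a figure from the episode):

```python
import math

# 2% annual growth compounded over 10,000 years.
growth_factor = 1.02 ** 10_000

# Commonly cited order-of-magnitude estimate for atoms in the Milky Way
# (an assumption for illustration, not a number from the episode).
atoms_in_galaxy = 1e68

# How many Earth-sized economies each atom would need to host.
economies_per_atom = growth_factor / atoms_in_galaxy

print(f"growth factor: ~10^{math.log10(growth_factor):.0f}")
print(f"Earth-sized economies per atom: ~10^{math.log10(economies_per_atom):.0f}")
```

The growth factor comes out around 10^86, so even one Earth-sized economy per atom would fall short by many orders of magnitude — if anything, the claim in the episode is conservative.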

So what are the options? First, maybe growth will slow and then stop. In that case we today live in the single minuscule slice in the history of life during which the world rapidly changed due to constant technological advances, before intelligent civilization permanently stagnated or even collapsed. What a wild time to be alive!

Alternatively, maybe growth will continue for thousands of years. In that case we are at the very beginning of what would necessarily have to become a stable galaxy-spanning civilization, harnessing the energy of entire stars among other feats of engineering. We would then stand among the first tiny sliver of all the quadrillions of intelligent beings who ever exist. What a wild time to be alive!

Isn’t there another option where the future feels less remarkable and our current moment not so special?

While the full version of the argument above has a number of caveats, the short answer is ‘not really’. We might be in a computer simulation and our galactic potential all an illusion, though that’s hardly any less weird. And maybe the most exciting events won’t happen for generations yet. But on a cosmic scale we’d still be living around the universe’s most remarkable time:

Graphic

Holden himself was very reluctant to buy into the idea that today’s civilization is in a strange and privileged position, but has ultimately concluded “all possible views about humanity’s future are wild”.

In the full series Holden goes on to elaborate on technologies that might contribute to making this the most important era in history, including computer systems that automate research into science and technology, the ability to create ‘digital people’ on computers, or transformative artificial intelligence itself.

All of these offer the potential for huge upsides and huge downsides, and Holden is at pains to say we should neither rejoice nor despair at the circumstances we find ourselves in. Rather, they require sober forethought about how we want the future to play out, and how we might as a species be able to steer things in that direction.

If this sort of stuff sounds nuts to you, Holden gets it — he spent the first part of his career focused on straightforward ways of helping people in poor countries. Of course this sounds weird.

But he thinks that, if you keep pushing yourself to do even more good, it’s reasonable to go from:

“I care about all people — even if they live on the other side of the world”, to “I care about all people — even if they haven’t been born yet”, to “I care about all people — even if they’re digital”.

In the conversation Holden and Rob cover each part of the ‘Most Important Century’ series, including:

  • The case that we live in an incredibly important time
  • How achievable-seeming technology — in particular, mind uploading — could lead to unprecedented productivity, control of the environment, and more
  • Why economic growth can’t continue at today’s rates for all that much longer
  • Forecasting transformative AI
  • And the implications of living in the most important century

Continue reading →

#108 – Chris Olah on working at top AI labs without an undergrad degree

Chris Olah has had a fascinating and unconventional career path.

Most people who want to pursue a research career feel they need a degree to get taken seriously. But Chris not only doesn’t have a PhD; he doesn’t even have an undergraduate degree. After dropping out of university to help defend an acquaintance who was facing bogus criminal charges, Chris started independently working on machine learning research, and eventually got an internship at Google Brain, a leading AI research group.

In this interview — a follow-up to our episode on his technical work — we discuss what, if anything, can be learned from his unusual career path. Should more people pass on university and just throw themselves at solving a problem they care about? Or would it be foolhardy for others to try to copy a unique case like Chris’?

We also cover some of Chris’ personal passions over the years, including his attempts to reduce what he calls ‘research debt’ by starting a new academic journal called Distill, focused just on explaining existing results unusually clearly.

As Chris explains, as fields develop they accumulate huge bodies of knowledge that researchers are meant to be familiar with before they start contributing themselves. But the weight of that existing knowledge — and the need to keep up with what everyone else is doing — can become crushing. It can take someone until their 30s or later to earn their stripes, and sometimes a field will split in two just to make it possible for anyone to stay on top of it.

If that were unavoidable it would be one thing, but Chris thinks we’re nowhere near communicating existing knowledge as well as we could. Incrementally improving an explanation of a technical idea might take a single author weeks to do, but could go on to save a day for thousands, tens of thousands, or hundreds of thousands of students, if it becomes the best option available.

Despite that, academics have little incentive to produce outstanding explanations of complex ideas that can speed up the education of everyone coming up in their field. And some even see the process of deciphering bad explanations as a desirable rite of passage all should pass through, just as they did.

So Chris tried his hand at chipping away at this problem — but concluded the nature of the problem wasn’t quite what he originally thought. In this conversation we talk about that, as well as:

  • Why highly thoughtful cold emails can be surprisingly effective, but average cold emails do little
  • Strategies for growing as a researcher
  • Thinking about research as a market
  • How Chris thinks about writing outstanding explanations
  • The concept of ‘micromarriages’ and ‘microbestfriendships’
  • And much more.

Continue reading →

#107 – Chris Olah on what the hell is going on inside neural networks

Big machine learning models can identify plant species better than any human, write passable essays, beat you at a game of StarCraft 2, figure out how a photo of Tobey Maguire and the word ‘spider’ are related, solve the 60-year-old ‘protein folding problem’, diagnose some diseases, play romantic matchmaker, write solid computer code, and offer questionable legal advice.

Humanity made these amazing and ever-improving tools. So how do our creations work? In short: we don’t know.

Today’s guest, Chris Olah, finds this both absurd and unacceptable. Over the last ten years he has been a leader in the effort to unravel what’s really going on inside these black boxes. As part of that effort he helped create the famous DeepDream visualisations at Google Brain, reverse engineered the CLIP image classifier at OpenAI, and is now continuing his work at Anthropic, a new $100 million research company that tries to “co-develop the latest safety techniques alongside scaling of large ML models”.

Despite having a huge fan base thanks to his tweets and lay explanations of ML, today’s episode is the first long interview Chris has ever given. It features his personal take on what we’ve learned so far about what ML algorithms are doing, and what’s next for this research agenda at Anthropic.

His decade of work has borne substantial fruit, producing an approach for looking inside the mess of connections in a neural network and backing out what functional role each piece is serving. Among other things, Chris and team found that every visual classifier seems to converge on a number of simple common elements in their early layers — elements so fundamental they may exist in our own visual cortex in some form.

They also found networks developing ‘multimodal neurons’ that would trigger in response to the presence of high-level concepts like ‘romance’, across both images and text, mimicking the famous ‘Halle Berry neuron’ from human neuroscience.

While reverse engineering how a mind works would make any top-ten list of the most valuable knowledge to pursue for its own sake, Chris’s work is also of urgent practical importance. Machine learning models are already being deployed in medicine, business, the military, and the justice system, in ever more powerful roles. The competitive pressure to put them into action as soon as they can turn a profit is great, and only getting greater.

But if we don’t know what these machines are doing, we can’t be confident they’ll continue to work the way we want as circumstances change. Before we hand an algorithm the proverbial nuclear codes, we should demand more assurance than “well, it’s always worked fine so far”.

But by peering inside neural networks and figuring out how to ‘read their minds’ we can potentially foresee future failures and prevent them before they happen. Artificial neural networks may even be a better way to study how our own minds work, given that, unlike a human brain, we can see everything that’s happening inside them — and having been posed similar challenges, there’s every reason to think evolution and ‘gradient descent’ often converge on similar solutions.

Among other things, Rob and Chris cover:

  • Why Chris thinks it’s necessary to work with the largest models
  • Whether you can generalise from visual to language models
  • What fundamental lessons we’ve learned about how neural networks (and perhaps humans) think
  • What it means that neural networks are learning high-level concepts like ‘superheroes’, mental health, and Australiana, and can identify these themes across both text and images
  • How interpretability research might help make AI safer to deploy, and Chris’ response to skeptics
  • Why there’s such a fuss about ‘scaling laws’ and what they say about future AI progress
  • What roles Anthropic is hiring for, and who would be a good fit for them

Continue reading →

#106 – Cal Newport on an industrial revolution for office work

If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what’s the most attractive perk you could offer?

How about just not needing an email address?

According to today’s guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and on and on. Yet with offices organised the way they are today, nothing could feel more natural.

But this isn’t just a problem at the elite level — it affects almost all of us. A typical U.S. office worker checks their email 80 times a day, or once every six minutes. Data analysis by RescueTime found that a third of users checked email or Slack at least once every three minutes, averaged over a full work day.

Each time that happens our focus is broken, killing our momentum on the knowledge work we’re supposedly paid to do.

When we lament how much email and chat have reduced our focus, increased our anxiety and made our days a buzz of frenetic activity, we most naturally blame ‘weakness of will’. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes.

Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing.

Since the Industrial Revolution, a combination of technology and better organisation has allowed the manufacturing industry to produce a hundred times as much with the same number of people.

Cal says that by comparison, it’s not clear that specialised knowledge workers like scientists, authors, or senior managers are any more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way to coordinate its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year.
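The implied arithmetic is easy to check. A minimal sketch, assuming knowledge work accounts for roughly $10 trillion of global output per year (an illustrative figure consistent with the claim, not one stated in the episode):

```python
# Back-of-the-envelope check of the productivity claim. The $10 trillion
# knowledge-sector figure is an illustrative assumption, not from the episode.
knowledge_sector_output = 10e12   # rough global knowledge-work output, $/year
productivity_gain = 0.01          # a 1% coordination improvement

extra_output = knowledge_sector_output * productivity_gain
print(f"extra output: ~${extra_output / 1e9:.0f} billion per year")
```

A 1% gain on $10 trillion is $100 billion per year, matching the order of magnitude in the text; the conclusion scales linearly with whatever base figure you prefer.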

On Cal’s account, those opportunities are staring us in the face. Modern factories operated by top firms are structured with painstaking care and two centuries of accumulated experience to ensure staff can get the greatest amount possible done.

By contrast, most knowledge work today operates with no deliberate structure at all. Instead of carefully constructed processes to get the most out of each person, we just hand out tasks and leave people to organise themselves organically in whatever way feels easiest to them.

Since the 1990s, when everyone got an email address and most lost their assistants, that lack of direction has led to what Cal calls the ‘hyperactive hive mind’: everyone sends emails and chats to everyone else, all throughout the day, whenever they need anything.

Rather than acting as strategic thinkers, managers work as human switchboards, answering and forwarding dozens of emails on any and every topic to keep the system from seizing up.

Finding a time for four people to meet might mean an eight-email thread. Annoying enough! But each of those four has to keep checking in to make sure the thread is progressing, and answer any new questions that come up. So in aggregate those four might interrupt their train of thought and check their email 20, 30 or even 40 times in the process of coordinating a single meeting.

Cal points out that this is so normal we don’t even think of it as a way of organising work, but it is: it’s what happens when management does nothing to enable teams to decide on a better way of coordinating themselves. And if any individual tries to opt out and focus on one thing for an entire day, they’re throwing a wrench in the ‘hyperactive hive mind’, which explains why calls for individual discipline have done so little to fix the problem.

A few industries have made progress taming the ‘hyperactive hive mind’. Cal points to tech support ticketing systems, which throttle correspondence and keep engineers focused on one problem at a time until they can’t get any further, at which point that problem is parked and they’re given a single new problem to work on next.

He also points to ‘extreme programming’, a system in which two software engineers sit side-by-side in front of one computer and together write code to solve a specific problem for their entire work day. As they work, those software engineers have no email account and no phone number. All incoming and outgoing communication with the rest of the world is run through a dedicated liaison officer so they can maintain 100% focus. Usually after six hours of such intense work they need to go home and rest.

But on Cal’s telling, in this interview and in A World Without Email, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won’t just help people do higher quality work, it will free them from the 24/7 anxiety that there’s someone somewhere they haven’t gotten back to.

In this interview Cal and Rob cover that, as well as:

  • Is the hyperactive hive mind really one of the world’s most pressing problems?
  • The historical origins of the ‘hyperactive hive mind’
  • The harm caused by attention switching
  • Who’s working to solve the problem and how
  • Why it took more than a century to come up with the ‘assembly line’ method for factory organisation
  • Cal’s top productivity advice for high school students, university students, and early-career employees
  • And much more

Continue reading →

#105 – Alexander Berger on improving global health and wellbeing in clear and direct ways

The effective altruist research community tries to identify the highest impact things people can do to improve the world. Unsurprisingly, given the difficulty of such a massive and open-ended project, very different schools of thought have arisen about how to do the most good.

Today’s guest, Alexander Berger, leads Open Philanthropy’s ‘Global Health and Wellbeing’ programme, where he oversees around $175 million in grants each year, and ultimately aspires to disburse billions in the most impactful ways he and his team can identify.

This programme is the flagship effort representing one major effective altruist approach: try to improve the health and wellbeing of humans and animals that are alive today, in clearly identifiable ways, applying an especially analytical and empirical mindset.

The programme makes grants to tackle easily prevented illnesses among the world’s poorest people, offer cash to people living in extreme poverty, prevent cruelty to billions of farm animals, advance biomedical science, and improve criminal justice and immigration policy in the United States.

Open Philanthropy’s researchers rely on empirical information to guide their decisions where it’s available, and where it’s not, they aim to maximise expected benefits to recipients through careful analysis of the gains different projects would offer and their relative likelihoods of success.

Job opportunities at Open Philanthropy

Alexander’s Global Health and Wellbeing team is hiring two new Program Officers to oversee work to reduce air pollution in South Asia — which hugely damages the health of hundreds of millions — and to improve foreign aid policy in rich countries, so that it does more to help the world’s poorest people improve their circumstances. They’re also seeking new generalist researchers.

Learn more about these and other vacancies here.

Disclaimer of conflict of interest: 80,000 Hours and our parent organisation, the Centre for Effective Altruism, have received substantial funding from Open Philanthropy.

This ‘global health and wellbeing’ approach — sometimes referred to as ‘neartermism’ — contrasts with another big school of thought in effective altruism, known as ‘longtermism’, which aims to steer the long-term future of humanity and its descendants in a positive direction. Longtermism bets that while it’s harder to figure out how to benefit future generations than people alive today, the total number of people who might live in the future is far greater than the number alive today, and this gain in scale more than offsets that lower tractability.

The debate between these two very different theories of how to best improve the world has been one of the most significant within effective altruist research since its inception. Alexander first joined the influential charity evaluator GiveWell in 2011, and since then has conducted research alongside top thinkers on global health and wellbeing and longtermism alike, ultimately deciding to dedicate his efforts to improving the world today in identifiable ways.

In this conversation Alexander advocates for that choice, explaining the case in favour of adopting the ‘global health and wellbeing’ mindset, while going through the arguments for the longtermist approach that he finds most and least convincing.

Rob and Alexander also tackle:

  • Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger
  • Why it’s shockingly hard to find ways to give away large amounts of money that are more cost effective than distributing anti-malaria bed nets
  • How much you gain from working with tight feedback loops
  • Open Philanthropy’s biggest wins
  • Why Open Philanthropy engages in ‘worldview diversification’ by having both a global health and wellbeing programme and a longtermist programme as well
  • Whether funding science and political advocacy is a good way to have more social impact
  • Whether our effects on future generations are predictable or unforeseeable
  • What problems the global health and wellbeing team works to solve and why
  • Opportunities to work at Open Philanthropy

Continue reading →

#104 – Dr Pardis Sabeti on the Sentinel system for detecting and stopping pandemics

When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn’t a familiar disease like the flu — that we were dealing with something new.

How much death and destruction could we have avoided if we’d had a hero who could? That’s what former Assistant Secretary of Defense Andy Weber asked on the show back in March.

Today’s guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic — it just might be her.

She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques.

The first method, called SHERLOCK, uses CRISPR gene editing to detect familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples.

Rapid diagnostic tests [are a] terrific technology, but usually it takes about six months to develop a new one because the proteins are a little more bespoke… Whereas the genome sequence, it’s just literally like a code, you just put it in and you immediately can target… You type it out and you have it going.

If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive.

Most infections all look the same — Lassa looks like Ebola, which looks like malaria, which looks like typhoid, and other things at varying stages. So you don’t want to have to know exactly what you’re looking for in a lot of cases; you want to do a broad differential that you test for.

If neither SHERLOCK nor CARMEN detects a known pathogen, it’s time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the DNA in a patient sample lets you identify and track every virus it contains — known and unknown.

Those are the kinds of technologies that we can have in the kinds of labs that we could have in every country on the planet, and even in a lot of regional centers. Then if something comes up and all the standard tests that you’ve run don’t know what it is, you can basically try to put it through.

If Pardis and her team succeed, a potential patient zero in a future pandemic may:

  1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative
  2. Take the CARMEN test for a much broader range of illnesses — which will also come back negative
  3. Have their sample sent for metagenomic sequencing, which will reveal that they’re carrying a new virus we’ll have to contend with
  4. At all levels, information will be recorded in a cloud-based data system that shares data in real time; the hospital will be alerted and told to quarantine the patient
  5. The world will be able to react weeks — or even months — faster, potentially saving millions of lives

It’s a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as:

  • How do you scale these technologies, including to remote and rural areas?
  • Will doctors everywhere be able to operate them?
  • Who will pay for it?
  • How do you maintain the public’s trust and protect against misuse of sequencing data?
  • How do you avoid drowning in the data the system produces?

In this conversation Pardis and Rob address all those questions, as well as:

  • Pardis’ history with trying to control emerging contagious diseases
  • The potential of mRNA vaccines
  • Other emerging technologies
  • How to best educate people about pandemics
  • The pros and cons of gain-of-function research
  • Turning mistakes into exercises you can learn from
  • Overcoming enormous life challenges
  • Why it’s so important to work with people you can laugh with
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#103 – Max Roser on building the world's first great source of COVID-19 data at Our World in Data

History is filled with stories of great people stepping up in times of crisis. Presidents averting wars; soldiers leading troops away from certain death; data scientists sleeping on the office floor to launch a new webpage a few days sooner.

That last one is barely a joke — by our lights, people like today’s guest Max Roser should be viewed with similar admiration by COVID-19 historians.

Max runs Our World in Data, a small education nonprofit which began the pandemic with just six staff. But since last February his team has supplied essential COVID statistics to over 130 million users — among them the BBC, the Financial Times, The New York Times, the OECD, the World Bank, the IMF, Donald Trump, Tedros Adhanom, and Dr. Anthony Fauci, just to name a few.

An economist at Oxford University, Max Roser founded Our World in Data as a small side project in 2011 and has led it since, including through the wild ride of 2020. In today’s interview, Max explains how he and his team realized that if they didn’t start making COVID data accessible and easy to make sense of, it wasn’t clear when anyone would.

But Our World in Data wasn’t naturally set up to become the world’s go-to source for COVID updates. Up until then their specialty had been in-depth articles explaining century-length trends in metrics like life expectancy — to the point that their graphing software was only set up to present yearly data.

Then the team realized that the World Health Organization was publishing numbers that flatly contradicted one another, most of the press was embarrassingly out of its depth, and countries were posting case data as images buried deep in their sites, where nobody would find them. Even worse, nobody was reporting or compiling how many tests different countries were doing, rendering all those case figures largely meaningless.

As a result, trying to make sense of the pandemic was a time-consuming nightmare. If you were leading a national COVID response, learning what other countries were doing and whether it was working would take weeks of study — and that meant, with the walls falling in around you, it simply wasn’t going to happen. Ministries of health around the world were flying blind.

Disbelief ultimately turned to determination, and the Our World in Data team committed to do whatever had to be done to fix the situation. Their software was redesigned overnight to handle daily data, and for the next few months Max and colleagues like Edouard Mathieu and Hannah Ritchie did little but sleep and compile COVID data.

In this episode Max explains how Our World in Data went about filling a huge gap that never should have been there in the first place — and how they had to do it all again in December 2020 when, eleven months into the pandemic, there was still nobody else to compile global vaccination statistics.

We also talk about:

  • Our World in Data’s early struggles to get funding
  • Why government agencies are so bad at presenting data
  • Which agencies did a good job during the COVID pandemic (shout out to the European CDC)
  • How much impact Our World in Data has by helping people understand the world
  • How to deal with the unreliability of development statistics
  • Why research shouldn’t be published as a PDF
  • Why academia under-incentivises data collection
  • The history of war
  • And much more

Final note: We also want to acknowledge other groups that did great work collecting and presenting COVID-19 data early on during the pandemic, including the Financial Times, Johns Hopkins University (which produced the first case map), the European CDC (who compiled a lot of the data that Our World in Data relied on), the Human Mortality Database (who compiled figures on excess mortality), and no doubt many others.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Sofia Davis-Fogel

Continue reading →

#102 – Tom Moynihan on why prior generations missed some of the biggest priorities of all

It can be tough to get people to truly care about reducing existential risks today. But spare a thought for the longtermists of the 17th century: they were surrounded by people who thought extinction was literally impossible.

Today’s guest Tom Moynihan, intellectual historian and author of the book X-Risk: How Humanity Discovered Its Own Extinction, says that until the 18th century, almost no one — including early atheists — could imagine that humanity or life could simply disappear because of an act of nature.

This is largely because of the prevalence of the ‘principle of plenitude’, which Tom defines as saying:

Whatever can happen will happen. In its stronger form it says whatever can happen will happen reliably and recurrently. And in its strongest form it says that all that can happen is happening right now. And that’s the way things will be forever.

This has the implication that if humanity ever disappeared for some reason, then it would have to reappear. So why would you ever worry about extinction?

Here are 4 more commonly held beliefs from generations past that Tom shares in the interview:

  • All regions of matter that can be populated will be populated: In other words, there are aliens on every planet, because it would be a massive waste of real estate if all of them were just inorganic masses, where nothing interesting was going on. This also led to the idea that if you dug deep into the Earth, you’d potentially find thriving societies.
  • Aliens were human-like, and shared the same values as us: they would have the same moral beliefs, and the same aesthetic beliefs. The idea that aliens might be very different from us only arrived in the 20th century.
  • Fossils were rocks that had gotten a bit too big for their britches and were trying to act like animals: they couldn’t actually move, so becoming an imprint of an animal was the next best thing.
  • All future generations were contained in miniature form, Russian-doll style, in the sperm of the first man: preformation was the idea that within the ovule or the sperm of an animal is contained its offspring in miniature form, and the French philosopher Malebranche said, well, if one is contained in the other one, then surely that goes on forever.

And here are another three that weren’t held widely, but were proposed by scholars and taken seriously:

  • Life preceded the existence of rocks: Living things, like germs or microorganisms, came first, and they extruded the earth.
  • No idea can be wrong: Nothing we can say about the world is wrong in a strong sense, because at some point in the future or the past, it has been true.
  • Maybe we were living before the Trojan War: Aristotle said that we might actually be living before Troy, because it — like every other event — will repeat at some future date. And he said that actually, the set of possibilities might be so narrow that it might be safer to say that we actually live before Troy.

But Tom tries to be magnanimous when faced with these incredibly misguided worldviews.

I think that something almost similar to scope neglect can happen, where we see the sheer extent of ignorance in the past and therefore think that is boundless. And this could lead you to think therefore our progress is also made insignificant within this boundless sea, but no, I think it’s structured. There are bounds to ignorance and we’re making progress, but within a space that’s potentially far bigger than we can currently think of.

In this nearly four-hour long interview, Tom and Rob cover all of these ideas, as well as:

  • How we know the ancients really believed such things
  • How we should respond to wacky old ideas
  • How we moved on from these theories
  • How future intellectual historians might view our beliefs today
  • The distinction between ‘apocalypse’ and ‘extinction’
  • The history of probability
  • Utopias and dystopias
  • Big ideas that haven’t flowed through into all relevant fields yet
  • Intellectual history as a possible high-impact career
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#101 – Robert Wright on using cognitive empathy to save the world

In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program.

But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they’d defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia.

According to today’s guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people were willing to look at things from Saddam Hussein’s perspective.

He calls this ‘cognitive empathy’. It’s not feeling-your-pain-type empathy — it’s just trying to understand how another person thinks.

He says if you pitched this kind of thing back in 2003 you’d be shouted down as a ‘Saddam apologist’ — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea.

The two Roberts in today’s episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could significantly improve discourse around international relations.

They feel that if we could spread the idea that it’s worth understanding what dictators are thinking and calculating, based on their country’s history and interests, then we’d be less likely to make terrible foreign policy errors.

But how do you actually do that?

Bob’s new ‘Apocalypse Aversion Project’ is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with.

And in particular he thinks that might come from enough individuals “transcending the psychology of tribalism”. He doesn’t just mean rage, hatred, and violence; he’s also talking about cognitive biases.

Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might have even been able to avoid the invasion of Iraq.

Rob pushes back on how realistic this approach really is, asking questions like:

  • Haven’t people been trying to do this since the beginning of time?
  • Is there a really good novel angle that will move the needle and change how a lot of people think and behave?
  • Wouldn’t it be better to focus on a much narrower task, like getting more mindfulness and meditation and reflectiveness among the U.S. foreign policy elite?

But despite the differences in approaches, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals.

This is a crossover episode, also appearing on The Wright Show, with Bob and Rob taking turns interviewing each other.

Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as:

  • Specific risks like climate change and new technologies
  • How to achieve social cohesion
  • The pros and cons of society-wide surveillance
  • How Rob got into effective altruism
  • And much more

If you’re interested to hear more of Bob’s interviews you can subscribe to The Wright Show anywhere you’re getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#100 – Having a successful career with depression, anxiety and imposter syndrome

Today’s episode is one of the most remarkable and truly unique pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!).

The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it’s rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so.

The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort.

Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes.

If you’re in a hurry, we’ve extracted the key advice that Howie has to share in a section below.

Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world.

Here are a few quotes from early reviewers:

I think there’s a big difference between admitting you have depression/seeing a psych and giving a warts-and-all account of a major depressive episode like Howie does in this episode… His description was relatable and really inspiring.

Someone who works on mental health issues said:

This episode is perhaps the most vivid and tangible example of what it is like to experience psychological distress that I’ve ever encountered. Even though the content of Howie and Keiran’s discussion was serious, I thought they both managed to converse about it in an approachable and not-overly-somber way.

And another reviewer said:

I found Howie’s reflections on what is actually going on in his head when he engages in negative self-talk to be considerably more illuminating than anything I’ve heard from my therapist.

We also hope that the episode will:

  1. Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.

  2. Give insight into what it’s like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully.

Several early listeners have even made specific behavioral changes due to listening to the episode — including people who generally have good mental health but were convinced it’s well worth the low cost of setting up a plan in case they have problems in the future.

So we think this episode will be valuable for:

  • People who have experienced mental health problems or might in future;
  • People who have had trouble with stress, anxiety, low mood, low self-esteem, imposter syndrome, and similar issues, even if their experience isn’t well described as ‘mental illness’;
  • People who have never experienced these problems but want to learn about what it’s like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts.

If you don’t want to hear or read the most intense section, you can skip the chapter called ‘Disaster’. And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’.

We’ve collected a large list of high quality resources for overcoming mental health problems in our links section below.

If you’re feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the U.S. (800-273-8255) and Samaritans in the U.K. (116 123). You may also want to find and save a number for a local service where possible.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#99 – Leah Garcés on turning adversaries into allies to change the chicken industry

For a chance to prevent enormous amounts of suffering, would you be brave enough to drive five hours to a remote location to meet a man who seems likely to be your enemy, knowing that it might be an ambush?

Today’s guest — Leah Garcés — was.

That man was a chicken farmer named Craig Watts, and that ambush never happened. Instead, Leah and Craig forged a friendship and a partnership focused on reducing suffering on factory farms.

Leah, now president of Mercy For Animals (MFA), tried for years to get access to a chicken farm to document the horrors she knew were happening behind closed doors. It made sense that no one would let her in — why would the evil chicken farmers behind these atrocities ever be willing to help her take them down?

But after sitting with Craig on his living room floor for hours and listening to his story, she discovered that he wasn’t evil at all — in fact he was just stuck in a cycle he couldn’t escape, forced to use methods he didn’t endorse.

Most chicken farmers have enormous debts they are constantly struggling to pay off, make very little money, and have to work in terrible conditions — their main activity most days is finding and killing the sick chickens in their flock. Craig was one of very few farmers close to finally paying off his debts, which made him slightly less vulnerable to retaliation. That, combined with his natural tenacity and bravery, opened up the possibility for him to work with Leah.

Craig let Leah openly film inside the chicken houses, and shared highly confidential documents about the antibiotics put into the feed. That led to a viral video, and a New York Times story. The villain of that video was Jim Perdue, CEO of one of the biggest meat companies in the world. They show him saying, “Farmers are happy. Chickens are happy. There’s a lot of space. They’re clean.” And then they show the grim reality.

For years, Perdue wouldn’t speak to Leah. But remarkably, when they actually met in person, she again managed to forge a meaningful relationship with a natural adversary. She was able to put aside her utter contempt for the chicken industry and see Craig and Jim as people, not cartoonish villains.

Leah believes that you need to be willing to sit down with anyone who has the power to solve a problem that you don’t — recognising them as human beings with a lifetime of complicated decisions behind their actions. And she stresses that finding or making a connection is really important. In the case of Jim Perdue, it was the fact they both had adopted children. Because of this, they were able to forget that they were supposed to be enemies in that moment, talk about their experience as parents, and build some trust.

The other lesson that Leah highlights is that you need to look for win-wins and start there, rather than starting with disagreements. With Craig Watts, instead of opening with “How do I end his job?”, she thought, “How can I find him a better job?” If you find solutions where everybody wins, you don’t need to spend resources fighting the former enemy. They’ll come to you.

Typically animal activists are seen as coming into rural areas to take away jobs and choices — but MFA are trying to do the opposite. They want to create new opportunities, and give farmers a level of freedom they haven’t had since they first set foot on the factory farming debt treadmill.

It turns out that conditions in chicken houses are perfect for growing hemp or mushrooms, so Mercy For Animals have started their ‘Transfarmation project’ to help farmers like Craig escape from the prison of factory farming by converting their production from animals to plants. To convince farmers to leave behind a life of producing suffering, all you need to do is find them something better — which for many of them is almost anything else.

Leah and Rob also talk about:

  • Mercy For Animals’ overall strategy for ending factory farming sooner rather than later
  • Why conditions for farmers are so bad
  • The importance of building on past work
  • The benefits of creating a ranking and scoring companies against each other
  • Why we should drive up the price of factory-farmed meat by any means necessary
  • The difficulty of enforcing corporate pledges
  • Her disagreements with others in the animal welfare movement
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe and always successful, but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful, you’ll be given a drug that makes you forget the experience.

You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.”

So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour.

Which patient would you rather be?

Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future.

Christian Tarsney, a philosopher at Oxford University’s Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences.

That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past?

One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it.

But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn’t care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about!

Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven’t played yet are still on the way.

If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction.

It’s a live debate that’s playing out in the philosophy of time, as well as in physics. And Christian says that even if you could show that time had a direction, it would still be hard to explain why we should care more about the future than the past — at least in a way that doesn’t just sound like “Well, the past is in the past and the future is in the future”.

For Christian, there are two big practical implications of these past, present, and future ethical comparison cases.

The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people’s past goals, including the goals of people who are now dead.

The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born?

Christian and Rob also cover several other big topics, including:

  • A possible solution to moral fanaticism, where you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome
  • How much of humanity’s resources we should spend on improving the long-term future
  • How large the expected value of the continued existence of Earth-originating civilization might be
  • How we should respond to uncertainty about the state of the world
  • The state of global priorities research
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Sofia Davis-Fogel

Continue reading →

#97 – Mike Berkowitz on keeping the U.S. a liberal democratic country

Donald Trump’s attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official certification of electoral votes — but there were others who refused. These included Brad Raffensperger and Brian Kemp in Georgia, and Vice President Mike Pence.

Although one could say that the latter Republicans showed great courage, the key to the split may lie less in differences of moral character or commitment to democracy, and more in what was being asked of them. Trump wanted the first group to break norms, but he wanted the second group to break the law.

And while norms were indeed shattered, laws were upheld.

Today’s guest Mike Berkowitz, executive director of the Democracy Funders Network, points out a problem that became clear over the course of the Trump presidency: So many of the things that we thought were laws were actually just customs.

So once you have leaders who don’t buy into those customs — like, say, that a president shouldn’t tell the Department of Justice who it should and shouldn’t be prosecuting — there’s nothing preventing said customs from being violated.

And what happens if current laws change?

A recent Georgia bill took away some of the powers of Georgia’s Secretary of State — Brad Raffensperger. Mike thinks that’s clearly retribution for Raffensperger’s refusal to overturn the 2020 election results. But he also thinks it means that the next time someone tries to overturn the results of an election, they could get much further than Trump did in 2020.

In this interview Mike covers what he thinks are the three most important levers to push on to preserve liberal democracy in the United States:

  1. Reforming the political system, by e.g. introducing new voting methods
  2. Revitalizing local journalism
  3. Reducing partisan hatred within the United States

Mike says that American democracy, like democracy elsewhere in the world, is not an inevitability. The U.S. has institutions that are really important for the functioning of democracy, but they don’t automatically protect themselves — they need people to stand up and protect them.

In addition to the changes listed above, Mike also thinks that we need to harden more norms into laws, such that individuals have fewer opportunities to undermine the system.

And inasmuch as laws provided the foundation for the likes of Raffensperger, Kemp, and Pence to exhibit political courage, if we can succeed in creating and maintaining the right laws — we may see many others following their lead.

As Founding Father James Madison put it: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.”

Mike and Rob also talk about:

  • What sorts of terrible scenarios we should actually be worried about, i.e. the difference between being overly alarmist and properly alarmist
  • How to reduce perverse incentives for political actors, including those to overturn election results
  • The best opportunities for donations in this space
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#96 – Nina Schick on disinformation and the rise of synthetic media

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.?

Today’s guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry for making a very sophisticated ‘deepfake’ video today are a lot higher than people think.

But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that’s currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff?

Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as:

  • Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
  • If photoshop didn’t lead to total chaos, why should this be any different?

But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein — a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive.

She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied.

Consider Trump’s infamous Access Hollywood tape. If that happened in 2020 instead of 2016, he would have almost certainly claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”?

Nina says that undeniably, this technology is going to give bad actors a lot of scope for not having accountability for their actions.

As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can’t agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society?

Nina and Rob also talk about a bunch of other topics, including:

  • The history of disinformation, and groups who sow disinformation professionally
  • How deepfake pornography is used to attack and silence women activists
  • The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes
  • Whether we should make it illegal to make a deepfake of someone without their permission
  • And the coolest positive uses of this technology

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#95 – Kelly Wanser on whether to deliberately intervene in the climate

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, or chemically induced snow for skiers in Colorado.

100 years? 50 years? 20?

Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well.

Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate.

Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have.

Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy.

After considering everything from mirrors in space, to shiny objects on the ocean, to materials spread on Arctic ice, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere.

Cloud brightening is a climate control approach that sprays a fine mist of sea water into clouds to make them ‘whiter’, so they reflect even more sunlight back into space.

These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter.

Kelly says scientists estimate that we’re already lowering the global temperature this way by 0.5–1.1°C, without even intending to.

While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn’t been funding to measure how much temperature change you get for a given amount of spray.

And we won’t want to dive into these methods head first, because the atmosphere is a complex system we can’t yet properly model, and there are many things to check beforehand. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming, the climate is surprisingly understudied.

The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as:

  • It being riskier than doing nothing
  • That it will inevitably be dangerously political
  • And the risk of the ‘double catastrophe’, where a pandemic stops our climate interventions and temperatures sky-rocket at the worst time

Kelly and Rob also talk about:

  • The many climate interventions that are already happening
  • The most promising ideas in the field
  • And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#94 – Ezra Klein on aligning journalism, politics, and what matters most

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs?

When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously?

Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there’s little pre-existing infrastructure to push them.

He points out that taxes have long been considered hugely important in D.C. political circles — and maybe once they were. Either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know will cover them (often journalists hired specifically to write about tax policy).

To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there’s very little infrastructure for thinking about it. There isn’t a committee in Congress that primarily deals with AI, no one has a dedicated AI position in the executive branch of the U.S. Government, and there are no big AI think tanks in D.C. producing weekly articles for journalists they know will report on them.

On top of this, the status quo always has a psychological advantage. If something was thought important by previous generations, we naturally assume it must be important today as well — think of how students continued learning ancient Greek long after it had ceased to be useful even in most scholarly careers.

All of this generates a strong ‘path dependence’ that can lock the media into covering less important topics despite having no intention to do so.

According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important.”

One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: “This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.”

Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “…like catnip for readers.”

Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can’t make the audience interested in it, that is your failure — never the audience’s failure.

But is that really true? In today’s episode we explore that claim, as well as:

  • How many hours of news the average person should consume
  • Where the progressive movement is failing to live up to its values
  • Why Ezra thinks ‘price gouging’ is a bad idea
  • Where the FDA has failed on rapid at-home testing for COVID-19
  • Whether we should be more worried about tail-risk scenarios
  • And his biggest critiques of the effective altruism community

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

COVID-19 has provided a vivid reminder of the damage biological threats can do. But the threat doesn’t come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United States, but developed in large numbers by the Soviet Union, right up until its collapse — have the potential to spread globally and kill just as many as an all-out nuclear war.

For five years, today’s guest, Andy Weber, was the U.S. Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While people primarily associate the Pentagon with waging wars (a perception shared by most within the Pentagon itself), Andy is quick to point out that you don’t have national security if your population remains at grave risk from natural and lab-created diseases.

Andy’s current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology.

He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons, and end natural pandemics in the process: mass genetic sequencing and mRNA vaccines.

First, advances in mass genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from all over the human environment. You cast a wide net, and if you start seeing DNA sequences that you don’t recognise spreading through the population — that can set off an alarm.

Andy notes that while the necessary desktop sequencers may be expensive enough that they’re only in hospitals today, they’re rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It’s only a matter of time before they’re cheap enough to put in every home.

In the world Andy envisions, each morning before you brush your teeth you also breathe into a tube. Your sequencer can tell you if you have any of 300 known pathogens, while simultaneously scanning for any unknown viruses. It’s hooked up to your WiFi and reports into a public health surveillance system, which can check to see whether any novel DNA sequences are being passed from person to person. New contagious diseases can be detected and investigated within days — long before they run out of control.

The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make any protein we choose, triggering a protective immune response from the body.

Until now it has taken a long time to invent and test any new vaccine, and there was then a laborious process of scaling up the equipment necessary to manufacture it. That leaves a new disease or bioweapon months or years to wreak havoc.

But using the sequencing technology above, we can quickly get the genetic codes that correspond to the surface proteins of any new pathogen, and switch them into the mRNA vaccines we’re already making. Inventing a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words.

So long as we maintained enough capacity to manufacture and deliver mRNA vaccines, a whole country could in principle be vaccinated against a new disease in months.

Together these technologies could make advanced bioweapons a threat of the past. And in the process humanity’s oldest and deadliest enemy — contagious disease — could be brought under control like never before.

Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States – and so we were also able to get his forthright views on a bunch of interesting other topics, such as:

  • The chances that COVID-19 escaped from a research facility
  • Whether a US president can really truly launch nuclear weapons unilaterally
  • What he thinks should be the top priorities for the Biden administration
  • If Andy were 18 and starting his career over again today, what would his plan be?
  • The time he and colleagues found 600kg of unsecured, highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#92 – Brian Christian on the alignment problem

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science.

Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.

Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and of interest to people who are nervous about where AI is going as well as those who aren’t nervous at all.

Here’s a tease of 10 Hollywood-worthy stories from the episode:

  • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
  • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early nineties, using a computer with a tenth the processing capacity of an Apple Watch.
  • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
  • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
  • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
  • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than reaching its purported destination.
  • Montezuma’s Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
  • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
  • AlphaGo Zero: A computer program becomes superhuman at chess and Go in under a day by attempting to imitate itself.
  • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip.

We also cover:

  • How reinforcement learning actually works, and some of its key achievements and failures
  • How a lack of curiosity can leave AIs unable to do basic things
  • The pitfalls of getting AI to imitate how we ourselves behave
  • The benefits of getting AI to infer what we must be trying to achieve
  • Why it’s good for agents to be uncertain about what they’re doing
  • Why Brian isn’t that worried about explicit deception
  • The interviewees Brian most agrees with, and most disagrees with
  • Developments since Brian finished the manuscript
  • The effective altruism and AI safety communities
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →