Should we leave a helpful message for future civilizations, just in case humanity dies out?

…there’s two parts to the problem. The first is calling someone’s attention to a place. I think that’s the harder part by far. You can’t just bury a thing, because hundreds of millions of years is long enough that the surface of the earth is no longer the surface of the earth…

Paul Christiano

Imagine that, one day, humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are.

We could tell them hard-won lessons from history; mention some research questions we wish we’d started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we’d like to be able to tell our ancestors, that’s just the first challenge. We’d need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth’s surface quickly gets buried far underground.

But even if we figure out a satisfactory message, and a way to ensure it’s found, a civilization this far in the future won’t speak any language like our own. And being another species, they presumably won’t share as many fundamental concepts with us as humans from 1700. If we knew a way to leave them thousands of books and pictures in a material that wouldn’t break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That’s just one of many playful questions discussed in today’s episode with Christiano — a frequent writer who’s willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little bit from harmful companies might be more useful than I’d been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it’s reliably aligned with our goals, which is Paul’s main research project. He responds to the views that DeepMind’s Pushmeet Kohli espoused in a previous episode, and we discuss whether we’d be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

  • Are there any supplements people can take that make them think better?
  • What implications do our views on meta-ethics have for aligning AI with our goals?
  • Is there much of a risk that the future will contain anything optimised for causing harm?
  • An outtake about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

The new 30-person research group in DC investigating how emerging technologies could affect national security

It’s a little strange to say, “Oh, who’s going to get AI first? Who’s going to get electricity first?” It seems more like “who’s going to use it in what ways, and who’s going to be able to deploy it and actually have it be in widespread use?”

Helen Toner

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think machine learning could alter 21st century life in a similar way.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.

How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.

Hear about high-impact opportunities to help ensure AI remains safe and beneficial

  1. Submit your CV and interests within AI below.
  2. Key organisations tell us about their important positions, some of which are not yet publicly advertised.
  3. We’ll get in touch with you if there are high impact opportunities that might be a good fit. If you’re interested, we’ll make the introductions.

Tell us about your interests

Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.

Rapid advances in ML, and the many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.

We cover:

  • Why immigration is the main policy area that should be affected by AI advances today.
  • Why talking about an ‘arms race’ in AI is premature.
  • How the US could remain the leading country in machine learning for the foreseeable future.
  • Whether it’s ever possible to have a predictable effect on government policy.
  • How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
  • Whether it’s possible to become a China expert and still get a security clearance.
  • Can access to ML algorithms be restricted, or is that just not practical?
  • Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
  • Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Accurately predicting the future is central to absolutely everything. Professor Tetlock has spent 40 years studying how to do it better.

Am I a believer in climate change, or a denier, if I say ‘Well, I’m 72% confident that the UN IPCC surface temperature forecasts are correct within plus or minus 0.3°C’? … I’m flirting with the idea that they might be wrong, right?

Professor Philip Tetlock

Have you ever been infuriated by a doctor’s unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won’t tell you the chances you’ll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can’t assess the likelihood of different outcomes we’re in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul’s Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a ‘folk probability’ that draws just three distinctions — ‘impossible’, ‘possible’ and ‘certain’ — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.
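Forecasting accuracy in this line of research is typically measured with the Brier score, the mean squared error between probabilistic forecasts and what actually happened. A minimal sketch (the numbers below are made-up illustrations, not data from Tetlock's studies) shows why finer-grained probabilities beat 'folk probability':

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    Lower is better: 0.0 is perfect, and always saying 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical events; 1 means the event happened, 0 means it didn't.
outcomes = [1, 0, 1, 1, 0]

# A calibrated forecaster uses the full probability scale...
calibrated = [0.8, 0.2, 0.7, 0.9, 0.1]
# ...while a 'folk probability' forecaster only ever says
# impossible (0), possible (0.5), or certain (1).
folk = [0.5, 0.5, 1.0, 1.0, 0.0]

print(brier_score(calibrated, outcomes))  # 0.038
print(brier_score(folk, outcomes))        # 0.1
```

The folk forecaster is punished for parking every uncertain event at 0.5; the calibrated forecaster's willingness to distinguish 70% from 90% earns a much lower (better) score.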

In the aftermath of Iraq and WMDs the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014.

That was five years ago. In today’s interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, such as how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to accurately distinguish your ’70 percents’ from your ’80 percents’.)

We also bring up a few methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions their profession, as it has been for Tetlock over many decades.

We view Tetlock’s work as so core to living well that we’ve brought him back for a second and longer appearance on the show — his first appearance was back in episode 15. Some questions this time around include:

  • What would it look like to live in a world where elites across the globe were better at predicting social and political trends? What are the main barriers to this happening?
  • What are some of the best opportunities for making forecaster training content?
  • What do extrapolation algorithms actually do, and given they perform so well, can we get more access to them?
  • Have any sectors of society or government started to embrace forecasting more in the last few years?
  • If you could snap your fingers and have one organisation begin regularly using proper forecasting, which would it be?
  • When if ever should one use explicit Bayesian reasoning?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Prof Cass Sunstein on how social change happens, and why it’s so often abrupt & unpredictable

…the former Nazi said, “Opposition? How would anybody know? How would anybody know what somebody else opposes or doesn’t oppose? That a man says he opposes or doesn’t oppose depends on the circumstances, where and when, and to whom…”

Prof Cass Sunstein

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn’t despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren’t quite sure how socially acceptable their feelings would have to become before they revealed them or joined a campaign for change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people who then find a message that can spread their beliefs to millions.
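The 'variable thresholds' idea can be made concrete with a toy simulation in the style of Granovetter's classic threshold model (a sketch for illustration, not code from Sunstein's book): each person joins a movement once the fraction of people already participating exceeds their private threshold, and two nearly identical societies can end up in wildly different places.

```python
def cascade(thresholds):
    """Each round, everyone whose threshold (a fraction of the population)
    is at most the current participation rate joins in.
    Returns the number of participants once nothing changes."""
    n = len(thresholds)
    participating = 0
    while True:
        new = sum(1 for t in thresholds if t <= participating / n)
        if new == participating:
            return participating
        participating = new

# Society A: thresholds of 0%, 1%, 2%, ..., 99% — each person needs
# just one more participant than the last to be willing to act.
society_a = [i / 100 for i in range(100)]

# Society B: identical, except one person is very slightly more hesitant.
society_b = list(society_a)
society_b[1] = 0.02

print(cascade(society_a))  # 100 — a full cascade
print(cascade(society_b))  # 1 — only the instigator acts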

According to Sunstein, it’s “much, much easier” to create social change when large numbers of people secretly or latently agree with you. But ‘preference falsification’ is so pervasive that it’s no simple matter to figure out when they do.

In today’s interview, we debate with Sunstein whether this model of social change is accurate, and if so, what lessons it has for those who would like to steer the world in a more humane direction. We cover:

  • How much people misrepresent their views in democratic countries.
  • Whether the finding that groups with an existing view tend towards a more extreme position would stand up to the replication crisis.
  • When is it justified to encourage your own group to polarise?
  • Sunstein’s difficult experiences as a pioneer of animal rights law.
  • Whether activists can do better by spending half their resources on public opinion surveys.
  • Should people be more or less outspoken about their true views?
  • What might be the next social revolution to take off?
  • How can we learn about social movements that failed and disappeared?
  • How to find out what people really think.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

DeepMind’s plan to make AI systems robust & reliable, why it’s a core issue in AI design, and how to succeed at AI research

Machine learning safety work is about enablement. It’s not a sort of tax… it’s enabling the creation and development of these technologies.

Pushmeet Kohli

When you’re building a bridge, responsibility for making sure it won’t fall over isn’t handed over to a few ‘bridge not falling down engineers’. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the ‘real’ work, it’s an essential part of making AI systems work in any sense. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.

With the goal of designing systems that reliably do what we want, DeepMind have recently published work on important technical challenges for the ML community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an ‘adversary’ that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
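The core idea can be sketched with a toy example (an illustration of adversarial testing in general, not DeepMind's actual method, which uses far more efficient search than random sampling): given a model and a nominal input, an adversary searches within an allowed perturbation bound for the input where the model's behaviour deviates most.

```python
import random

def toy_model(x):
    # A stand-in 'classifier': it should output 1.0 for inputs near 1.0,
    # but it has a hidden flaw for inputs between 1.25 and 1.35.
    return 0.0 if 1.25 < x < 1.35 else 1.0

def adversary(model, nominal, radius, trials=10_000, seed=0):
    """Random-search adversary: sample inputs within `radius` of the
    nominal input and keep the one where the model's output deviates
    most from its output on the nominal input."""
    rng = random.Random(seed)
    worst_x, worst_gap = nominal, 0.0
    for _ in range(trials):
        x = nominal + rng.uniform(-radius, radius)
        gap = abs(model(x) - model(nominal))
        if gap > worst_gap:
            worst_x, worst_gap = x, gap
    return worst_x, worst_gap

x, gap = adversary(toy_model, nominal=1.0, radius=0.5)
# The adversary lands inside the flawed region (1.25, 1.35), exposing
# a failure that testing only the nominal input would never reveal.
```

Ordinary testing on typical inputs would sail past the flaw; the adversary's job is precisely to surface such rare worst cases before deployment.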

He’s also looking into ‘training specification-consistent models’ and ‘formal verification’, while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.


In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:

  • DeepMind’s work on the protein folding problem
  • Parallels between ML problems and past challenges in software development and computer security
  • How can you analyse the thinking of a neural network?
  • Unique challenges faced by DeepMind’s technical AGI safety team
  • How do you communicate with a non-human intelligence?
  • How should we conceptualize ML progress?
  • What are the biggest misunderstandings about AI safety and reliability?
  • Are there actually a lot of disagreements within the field?
  • The difficulty of forecasting AI development

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

As an addendum to the episode, we caught up with some members of the DeepMind team to learn more about roles at the organization beyond research and engineering, and how these contribute to the broader mission of developing AI for positive social impact.

A broad sketch of the kinds of roles listed on the DeepMind website may be helpful for listeners:

  • Program Managers keep the research team moving forward in a coordinated way, enabling and accelerating research.
  • The Ethics & Society team explores the real-world impacts of AI, from both an ethics research and policy perspective.
  • The Public Engagement & Communications team thinks about how to communicate about AI and its implications, engaging with audiences ranging from the AI community to the media to the broader public.
  • The Recruitment team focuses on building out the team in all of these areas, as well as research and engineering, bringing together the diverse and multidisciplinary group of people required to fulfill DeepMind’s ambitious mission.

There are many more listed opportunities across other teams, from Legal to People & Culture to the Office of the CEO, where our listeners may like to get involved.

They invite applicants from a wide range of backgrounds and skillsets so interested listeners should take a look at their open positions.

Continue reading →

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

Today we cross-posted to our podcast feed some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners or readers — but if you’re at all interested in Rob’s personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focus largely on new technologies and existential risks, but also discuss topics like:

  • Why Rob is wary of fiction
  • Egalitarianism in the evolution of hunter gatherers
  • How to stop social media screwing with politics
  • Careers in government versus business

The second interview is with Prof Andrew Leigh — the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than Rob usually covers, like:

  • What advice would he give to his teenage self?
  • Which person has most shaped his view of living an ethical life?
  • His approach to giving to the homeless
  • What does he do to maximise his own happiness?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Recap: why do some organisations say their recent hires are worth so much?

Our 2018 survey found that for a second year, a significant fraction of organisations reported that they’d want to be compensated hundreds of thousands or sometimes millions of dollars for the loss of a recent hire for three years.

There was some debate last October about whether those figures could be accurate, why they were so high, and what they mean. In the current post, I outline some rough notes summarising the different explanations for why people in the survey estimated that the value of recent hires might be high, though I don’t seek firm conclusions about which considerations are playing the biggest role.

In short, we consider four explanations:

  1. The estimates might be wrong.
  2. There might be large differences in the value-add of different hires.
  3. The organisations might be able to fundraise easily.
  4. Retaining a recent hire allows the organisation to avoid running a hiring process.

Overall, we take the figures as evidence that leaders of the effective altruism community, when surveyed, think the value-add of recent hires at these organisations is very high — plausibly more valuable than donating six figures (or possibly even more) per year to the same organisations. However, we do not think the precise numbers are a reliable answer to decision-relevant questions for job seekers, funders, or potential employers. We think it’s likely that mistakes are driving up these estimates. Even ignoring the high probability of mistakes,

Continue reading →

80,000 Hours Annual Review – December 2018

This annual review summarises our annual impact evaluation, and outlines our progress, plans, weaknesses and fundraising needs. It’s supplemented by a more detailed document that acts as a (less polished) appendix adding more detail to each section. Both documents were initially prepared in Dec 2018. We delayed their release until we heard back from some of our largest donors so that other stakeholders would be fully informed about our funding situation before we asked for their support. Except where otherwise stated, we haven’t updated the review with data from 2019 so empirical claims are generally “as of December 2018.” You can also see a glossary of key terms used in the reviews. You can find our previous evaluations here.

What does 80,000 Hours do?

80,000 Hours aims to solve the most pressing skill bottlenecks in the world’s most pressing problems.

We do this by carrying out research to identify the careers that best solve these problems, and using this research to provide free online content and in-person support. Our work is especially aimed at helping talented graduates aged 20-35 enter higher-impact careers.

The content aims to attract people who might be able to solve these bottlenecks and help them find new high-impact options. The in-person support aims to identify promising people and help them enter paths that are a good fit for them by providing advice, introductions and placements into specific positions.


Continue reading →

Career advice I wish I’d been given when I was young

Note: A reader who prefers to remain anonymous — but whose career we think did a lot of good — passed us this list of advice which they were grateful to have received, or wish they’d been given when they were younger.

We thought it was very interesting, including where it doesn’t line up exactly with our usual views, and so are publishing it here with their permission.

The advice is targeted towards people sympathetic to the principles of effective altruism, especially those with an interest in public policy careers, but we think much of it is more broadly useful.

  1. Don’t focus too much on long-term plans. Focus on interesting projects and you’ll build a resumé that stands out — take on multiple part-time consultancies and volunteer projects in parallel to quickly build it out. Back in my 30s, most of the things on my resumé were projects that involved 10% of my time each, and about half of them didn’t pay me any money. Those projects sounded fancy and helped me to get good full-time jobs later on.
  2. Find good thinkers and cold-call the ones you most admire. Many years ago I was lucky that people like Peter Singer, Peter Unger, John Broome, and Derek Parfit were kind enough to respond to my letters. (Any readers who are famous should take the time to respond to strangers’ emails.)

    I was similarly lucky that some of the policy professionals whose work I was most impressed with replied to me when I wrote out of the blue to say that I wanted to work for them.

Continue reading →

How to have a big impact in government & huge organisations, based on 16 years’ experience in the White House

…if your perception of the government is based on reading the press, what do you read about? You read about scandal. You read about gridlock. It would be as if your perception of New York were based on only reading the crime pages.

Tom Kalil

You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible – before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993.

He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today’s interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren’t; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, ‘Team Kalil’ built up a white board of principles. For example, ‘the schedule is your friend’: setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or ‘talk to who owns the paper’. People would wonder how Tom could get so many lines into the President’s speeches. The answer was “figure out who’s writing the speech, find them with the document, and tell them to add the line.” Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

Are you an American interested in working on AI policy?

We’ve helped dozens of people transition into policy careers. We can offer introductions to people and funding opportunities, and we can help answer specific questions you might have.

If you are a US citizen interested in building expertise to work on US AI policy, apply for our free coaching service.

Apply for coaching

In today’s episode we get down to nuts & bolts, and discuss:

  • How did Tom spin work on a primary campaign into a job in the next White House?
  • Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
  • How do you get people to do things when you don’t have formal power over them?
  • What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
  • Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
  • What are ‘policy entrepreneurs’ and why do they matter?
  • What is the role for prizes in promoting science and technology? What are other promising policy ideas?
  • Why you can get more done by not taking credit.
  • What can the White House do if an agency isn’t doing what it wants?
  • How can the effective altruism community improve the maturity of our policy recommendations?
  • How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Animals in the wild often suffer a great deal. What, if anything, should we do about that?

Nature isn’t good or bad. It doesn’t say anything about happiness or suffering. What we can do is look at what drives our existence, and then figure out what experiences animals are most likely to have as a result.

Persis Eskander

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences.

Most animals are hunted by predators, and constantly have to remain vigilant lest they be killed, and perhaps experience the terror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter wild animals freeze to death and in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems.

But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number – trillions to quintillions, depending on which animals you count – could make the scale of the problem larger than most other near-term concerns.

Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can’t survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals really feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could some day allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don’t know. And Persis thinks we’re far from knowing enough to start interfering with ecosystems. But that’s all the more reason to start considering these questions.

There are a few concrete steps we could take today, like improving the way wild caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today’s interview we explore wild animal welfare as a new field of research, and discuss:

  • Do we have a moral duty towards wild animals?
  • How should we measure the number of wild animals?
  • What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
  • Is there a danger in imagining how we as humans would feel if we were put into their situation?
  • Should we eliminate parasites and predators?
  • How important are insects?
  • Interventions worth rolling out today
  • How strongly should we focus on just avoiding humans going in and making things worse?
  • How does this compare to work on farmed animal suffering?
  • The most compelling arguments for not dedicating resources to wild animal welfare
  • Is there much of a case for the idea that this work could improve the very long-term future of humanity?
  • Would increasing concern for wild animals improve our values?
  • How do you get academics to take an interest in this?
  • How could autonomous drones improve wild animal welfare?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly cover:

  • The importance of figuring out your values
  • Chemistry, psychology, and other different paths towards working on wild animal welfare
  • How to break into new fields

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

The team trying to end poverty by founding well-governed ‘charter’ cities

What China did was urbanization combined with special economic zones. They looked and said, “Wait, Hong Kong’s rich. Taiwan is rich. They’re Chinese, we’re Chinese. Why are they doing well and we’re starving?”

Dr Mark Lutter

Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it’s not easy to found a new country.

This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better ‘pseudo-countries’ off the ground. The poor could then voluntarily migrate there in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions.

The ‘seasteading movement’ imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and World Bank President Paul Romer suggested ‘charter cities’, where a host country would invite another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons.

Now Dr Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of ‘charter cities’, with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the ‘special economic zones’ that worked miracles for Taiwan and China among others. But rather than keep the rest of the country’s rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to “best practices in commercial law.”

Also listen to: Rob on The Good Life: Andrew Leigh in Conversation — on ‘making the most of your 80,000 hours’.

The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen’s Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament.

Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants.

CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement them. They’re currently in the process of influencing a new prospective satellite city in Zambia.

Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is?

We discuss those questions, as well as:

  • How did Mark get a new organisation off the ground, with fundraising and other staff?
  • What made China’s ‘special economic zones’ so successful?
  • What are the biggest challenges in getting new cities off the ground?
  • What are the top criticisms of charter cities, and why aren’t they worried?
  • How did Mark find and hire Tamara? How did he know this was a good idea?
  • Who do they need to talk to to make charter cities happen?
  • How does their idea fit into the broader story of governance innovation?
  • Should people care about this idea if they aren’t focussed on tackling poverty?
  • Why aren’t people already doing this?
  • Why does Tamara support more people starting families?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

OpenAI can teach algorithms to write articles, win video games & manipulate objects. How can policy keep up with AI advances?

I would recommend everyone who has calibrated intuitions about AI timelines spend some time doing stuff with real robots and it will probably … how should I put this? … further calibrate your intuitions in quite a humbling way.

Jack Clark

Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm.

How is this possible and what does it show?

In today’s interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems.

A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different characters, moving them around a map to attack an enemy.

Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map.

When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you’re unable to see the entire map and perceive what’s there by moving around — metaphorically ‘touching’ the space.

Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it

This is true of many apparently distinct problems in life. With the right general-purpose software, each of these different sensory inputs can be compressed down to a fundamental computational problem we already know how to solve.

OpenAI used an algorithm called Proximal Policy Optimization (PPO), which is fairly robust — in the sense that you can throw it at many different problems, not worry too much about tuning it, and it will do okay.
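The robustness Jack describes comes largely from PPO's clipped objective, which can be sketched in a few lines. This is the standard formulation from the published PPO paper, not OpenAI's actual Dactyl or Dota training code; the function name and epsilon value here are illustrative.

```python
# Sketch of PPO's clipped surrogate objective for a single action.
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Clipped surrogate objective.

    ratio:     pi_new(a|s) / pi_old(a|s), the probability ratio
    advantage: estimated advantage of the action taken
    epsilon:   clip range; updates that push the ratio outside
               [1 - epsilon, 1 + epsilon] earn no extra reward
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - epsilon, 1 + epsilon) * advantage
    # Taking the minimum makes the objective pessimistic: large policy
    # changes stop being rewarded, which is part of what makes PPO
    # stable across very different tasks without much tuning.
    return min(unclipped, clipped)

# Doubling an advantageous action's probability gains no more than
# moving to the edge of the clip range:
print(ppo_clip_objective(2.0, 1.0))  # 1.2, not 2.0
```

Because the clipping discourages any single update from moving the policy too far, the same hyperparameters tend to "do okay" whether the observations come from joint sensors or a game map.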

Jack emphasises that this algorithm wasn’t easy to create, and they were incredibly excited about it working on both tasks. But he also says that the creation of such increasingly ‘broad-spectrum’ algorithms has been the story of the last few years, and that the invention of software like PPO will have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

  • What are the most significant changes in the AI policy world over the last year or two?
  • How much is the field of AI policy still in the phase of just doing research and figuring out what should be done, versus actually trying to change things in the real world?
  • What capabilities are likely to develop over the next five, 10, 15, 20 years?
  • How much should we focus on the next couple of years, versus the next couple of decades?
  • How should we approach possible malicious uses of AI?
  • What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
  • Publication norms for AI research
  • Where do we stand in terms of arms races between countries or different AI labs?
  • The case for creating a newsletter
  • Should the AI community have a closer relationship to the military?
  • Working at OpenAI vs. working in the US government
  • How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

  • The reaction to OpenAI’s release of GPT-2
  • Jack’s critique of our US AI policy article
  • How valuable are roles in government?
  • Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Find your highest impact role: 104 new vacancies in our February 2019 job board updates

Our job board continues to get big updates every two weeks, and now lists 235 vacancies, with 104 additional opportunities in the last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your time to prepare.

The job board is a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

Can journalists still write about important things?

I think it’s certainly fair to say that there is more good journalism than people realize. And the reason for that is that a lot of the stuff that gets people very angry is not the best stuff out there.

Kelsey Piper

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risks.” Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox’s Future Perfect aspires to be more or less that.

Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours’ work.

But according to Kelsey Piper, staff writer for this new section on Vox’s website focused on effective altruist themes, Future Perfect’s goal is to run in the opposite direction and make room for more substantive coverage that’s not tied to the news cycle.

They hope that in the long term, talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them.

Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join “that weird Silicon Valley apocalypse thing”? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I’m glad you’re working on it.”

Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that’s right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world’s most pressing problems.

Kelsey points out that one needn’t take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today’s episode we discuss that path, as well as:

  • What’s the day to day life of a Vox journalist like?
  • How can good journalism get funded?
  • Are there meaningful tradeoffs between doing what’s in the interest of Vox, and doing what’s good?
  • How concerned should we be about the risk of effective altruism being perceived as partisan?
  • How well can short articles communicate complicated ideas?
  • Are there alternative business models that could fund high quality journalism on a larger scale?
  • How do you approach the case for taking AI seriously to a broader audience?
  • How valuable might it be for media outlets to do Tetlock-style forecasting?
  • Is it really a good idea to heavily tax billionaires?
  • How do you avoid the pressure to get clicks?
  • How possible is it to predict which articles are going to be popular?
  • How did Kelsey build the skills necessary to work at Vox?
  • General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues – Keiran Harris and Michelle Hutchinson – to quickly discuss:

  • The risk political polarisation poses to long-termist causes
  • How should specialists keep journalism available as a career option?
  • Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Radical institutional reforms that make capitalism & democracy work better, and how to get them

…a lot of libertarians are into the idea of decentralised knowledge when it comes to Hayek and the market. But when we talk about politics, suddenly they think that there’s no knowledge out there. And I think that… that’s nuts. … politics is just like all these other things – there’s lots of local information out there.

Prof Glen Weyl

Imagine you were put in charge of planning out a country’s economy – determining who should work where and what they should make – without prices. You would surely struggle to collect all the information you need about what people want and who can most efficiently make it from an office building in the capital city.

Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society and solve this so-called ‘knowledge problem’.

But when it comes to politics and voting – which also aim to aggregate the preferences and knowledge found in millions of individuals – the enthusiasm for finding clever institutional designs turns to skepticism.

Today’s guest, freewheeling economist Glen Weyl, won’t have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but he has already moved on, saying “in the 6 months since the book came out I’ve made more intellectual progress than in the whole 10 years before that.”

He believes we desperately need more efficient, equitable and decentralised ways to organise society that take advantage of what each person knows, and his research agenda has already made some breakthroughs.

Despite a background in the best economics departments in the world – Harvard, Princeton, Yale and the University of Chicago – he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. (You can sign up for their conference in March here.)

Economist Alex Tabarrok called his latest proposal, known as ‘liberal radicalism’, “a quantum leap in public-goods mechanism-design.” The goal is to accurately measure how much the public actually values a good they all have to share, like a scientific research finding. Alex observes that under liberal radicalism “almost magically… citizens will voluntarily contribute exactly the amount that correctly signals how much society as a whole values the public good. Amazing!” But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people’s selfishness so effectively that it might even be an overcorrection.
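The "almost magically" part refers to the mechanism's funding rule, which is simple enough to sketch. This assumes the standard 'liberal radicalism' (quadratic funding) formula — total funding equals the square of the sum of the square roots of individual contributions, with the gap covered by the subsidy pool — and the function name is mine; real proposals add many refinements.

```python
# A minimal sketch of the 'liberal radicalism' funding rule.
from math import sqrt

def liberal_radicalism_funding(contributions):
    """Total funding = (sum of sqrt(individual contributions))^2.

    The difference between this total and the raw sum of
    contributions is paid out of a central subsidy pool.
    """
    return sum(sqrt(c) for c in contributions) ** 2

# Many small donors signal broad public value and attract a large
# subsidy; one big donor giving the same total attracts none.
many_small = [1.0] * 100   # 100 people give $1 each
one_large = [100.0]        # 1 person gives $100

print(liberal_radicalism_funding(many_small))  # 10000.0 -> $9,900 subsidy
print(liberal_radicalism_funding(one_large))   # 100.0   -> no subsidy
```

The example also shows where the two worries in the text come from: the subsidy required can be very large, and the rule rewards breadth of support so strongly that it may overcorrect for selfishness.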

An earlier proposal – ‘quadratic voting’ (QV) – would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process.

Instead everyone would be given ‘voice credits’ which they could spread across elections as they chose. QV follows a square root rule: 1 voice credit gets you 1 vote, 4 voice credits get you 2 votes, 9 voice credits get you 3 votes, and so on. It’s not immediately apparent, but this method is on average the ideal way of letting people impose their desires on the rest of society to a greater and greater degree, at an ever-escalating cost. To economists it’s an idea that’s obvious, though only in retrospect, and it is already being taken up by business.
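The square-root rule can be written out in miniature: spending n voice credits buys the square root of n votes, so each extra vote costs more than the last. An illustrative sketch, not any official QV implementation:

```python
# Quadratic voting's square-root pricing, in two tiny functions.
from math import isqrt

def votes_for_credits(credits):
    """Whole votes purchasable with a given number of voice credits."""
    return isqrt(credits)

def cost_of_votes(votes):
    """Voice credits needed to cast this many votes."""
    return votes ** 2

# 1 credit -> 1 vote, 4 credits -> 2 votes, 9 credits -> 3 votes...
print([votes_for_credits(n) for n in (1, 4, 9, 16)])  # [1, 2, 3, 4]

# The marginal price of each additional vote rises: 1, 3, 5, 7 credits.
print([cost_of_votes(v) - cost_of_votes(v - 1) for v in (1, 2, 3, 4)])
```

The rising marginal price is the point: mild preferences are cheap to express once, but swamping an election with votes on a single issue becomes prohibitively expensive.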

Weyl points to studies showing that people are more likely to vote strongly not only about issues they care more about, but issues they know more about. He expects that allowing people to specialise and indicate when they know what they’re talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance.

But these and indeed all of Weyl’s proposals have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of overambitious social planning on the part of intellectuals. I raise these concerns to see how he responds.

Weyl hopes a creative spirit in figuring out how to make collective decision-making work for the modern world can restore faith in liberal democracy and prevent a resurgence of reactionary ideas during a future recession. But as big a topic as all that is, this extended conversation covers more:

  • How should we think about blockchain as a technology, and the community dedicated to it?
  • How could auctions inspire an alternative to private property?
  • Why is Glen wary of mathematical styles of approaching issues?
  • Is high modernism underrated?
  • Should we think of the world as going well or badly?
  • What are the biggest intellectual errors of the effective altruism community? And the rationality community?
  • Should migrants be sponsored by communities?
  • Could we provide people with a sustainable living by treating their data as labour?
  • The potential importance of artists in promoting ideas
  • How does liberal radicalism actually work?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

The case for building expertise to work on US AI policy, and how to do it

At 80,000 Hours we think a significant number of people should build expertise to work on United States (US) policy relevant to the long-term effects of the development and use of artificial intelligence (AI).

In this article we go into more detail on this claim, as well as discussing arguments in favor and against. We also briefly outline which specific career paths to aim for and discuss which sorts of people we think might suit these roles best.

This article is based on multiple conversations with three senior US Government officials, three federal employees working on science and technology issues, three congressional staffers, and several other people who have served as advisors to government from within academia and non-profits. We also spoke with several research scientists at top AI labs and in academia, as well as relevant experts from foundations and nonprofits.

We have hired Niel Bowerman as our in-house specialist on AI policy careers. If you are a US citizen interested in pursuing a career in AI public policy, please let us know and Niel may be able to work with you to help you enter this career path.

Still in her 20s, Terah Lyons has risen to the top of the artificial intelligence (AI) policy world.

Less than two years after finishing her undergraduate degree at Harvard, she was working in the Obama White House, writing a report laying out the administration’s policies on AI.

Continue reading →

The CIA analyst who foresaw Trump in 2013 and his theory of why politics is changing

…the elites that ran our institutions had the authority to provide information, frame it & explain the world. That’s completely gone, and with it there’s been a bleeding away of expert authority, and a public has been created that’s essentially very angry…

Martin Gurri

Politics in rich countries seems to be going nuts. What’s the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration?

Martin Gurri spent decades as a CIA analyst, and in his 2014 book The Revolt of The Public and the Crisis of Authority in the New Millennium he predicted political turbulence for an entirely different reason: new communication technologies were flipping the balance of power between the public and traditional authorities.

In 1959 the President could control the narrative by leaning on his friends at four TV stations, who felt it was proper to present the nation’s leader in a positive light, no matter their flaws. Today, it’s impossible to prevent someone from broadcasting any grievance online, whether it’s a contrarian insight or an insane conspiracy theory.

According to Gurri, trust in society’s institutions – police, journalists, scientists and more – has been undermined by constant criticism from outsiders. Exposed to a cacophony of conflicting opinions on every issue, the public takes fewer truths for granted. We are now free to see our leaders as the flawed human beings they always have been, and are not amused.

Suspicious they are being betrayed by elites, the public can also use technology to coordinate spontaneously and express its anger. Keen to ‘throw the bastards out’, protesters take to the streets, united by what they don’t like but without a shared agenda for how to move forward or the institutional infrastructure to figure out how to fix things. Some popular movements have come to view any attempt to exercise power over others as suspect.

If Gurri is to be believed, protest movements in Egypt, Spain, Greece and Israel in 2011 followed this script, while Brexit, Trump and the French yellow vests movement subsequently vindicated his theory.

In this model, politics won’t return to its old equilibrium any time soon. The leaders of tomorrow will need a new message and style if they hope to maintain any legitimacy in this less hierarchical world. Otherwise, we’re in for decades of grinding conflict between traditional centres of authority and the general public, who doubt both their loyalty and competence.

But how much should we believe this theory? Why do Canada and Australia remain pools of calm in the storm? Aren’t some malcontents quite concrete in their demands? And are protest movements actually more common (or more nihilistic) than they were decades ago?

In today’s episode we ask these questions and add an hour-long discussion with two of Rob’s colleagues – Keiran Harris and Michelle Hutchinson – to further explore the ideas in the book.

The conversation covers:

  • What’s changed about the public’s relationship to information and authority?
  • Are protesters today usually united for or against something?
  • What sorts of people are participating in these new movements?
  • Are we elites or the public?
  • Is the number of street protests and the level of dissatisfaction with governments actually higher than before?
  • How do we know that the internet is driving this rather than some other phenomenon?
  • How do technological changes enable social and political change?
  • The historical role of television
  • Are people also more disillusioned now with sports heroes and actors?
  • What are the best arguments against this thesis?
  • How should we think about countries like Canada, Australia, Spain, and China using this model?
  • Has public opinion shifted as much as it seems?
  • How can we get to a point where people view the system and politicians as legitimate and respectable, given the competitive pressures against being honest about the limits of your power and knowledge?
  • Which countries are finding good ways to make politics work in this new era?
  • What are the implications for the threat of totalitarianism?
  • What is this going to do to international relations? Will it make it harder for countries to cooperate and avoid conflict?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

We could feed all 8 billion people through a nuclear winter. Dr David Denkenberger is working to make it practical.

I was reading this paper called Fungi & Sustainability – the premise was that after an asteroid impact, humans would go extinct and the world would be ruled by mushrooms, which would grow just fine in the dark. I thought… why don’t we just eat the mushrooms and not go extinct?

Dr David Denkenberger

If a nuclear winter or asteroid impact blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he’s to be believed, nobody need starve at all.

Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by sudden upwelling of ocean nutrients – and many more.

Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he’s out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, he says, the only thing that would prevent us from feeding the world is insufficient preparation.

Not content to just write a book pointing this out, David has gone on to found a growing non-profit – the Alliance to Feed the Earth in Disasters – to brace the world to feed everyone come what may. He expects that today only 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we’re likely to invest, David hopes a plan to inform people ahead of time would save 30%, and a decent research and development scheme 80%.

According to David’s published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment.

These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an ‘everyone for themselves’ mentality, which would otherwise cause trade and civilization to unravel.

But some worry that David’s cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his non-profit’s work would actually matter in a post-apocalyptic world. In our extensive conversation, we cover:

  • How could the sun end up getting blocked, or agriculture otherwise be decimated?
  • What are all the ways we could eat nonetheless? What kind of life would this be?
  • Can these methods be scaled up fast?
  • What is his organisation, ALLFED, actually working on?
  • How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach?
  • How would more food affect the post-apocalyptic world? Won’t people figure it out at that point anyway?
  • Why not just leave guidebooks with this information in every city?
  • Would these preparations make nuclear war more likely?
  • What kind of people is ALLFED trying to hire?
  • What would ALLFED do with more money? What have been their biggest mistakes?
  • How he ended up doing this work. And his other engineering proposals for improving the world, including how to prevent a supervolcano explosion.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Find your highest impact role: 77 new vacancies in our December job board updates

Thanks to the sterling work of Maria Gutierrez, our job board continues to get big updates every two weeks, and now lists 169 vacancies, with 77 additional opportunities in the last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your time to prepare.

The job board is a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →