#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you’re building a bridge, responsibility for making sure it won’t fall over isn’t handed over to a few ‘bridge not falling down engineers’. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the ‘real’ work, it’s an essential part of making AI systems work in any sense. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.

With the goal of designing systems that reliably do what we want, DeepMind have recently published work on important technical challenges for the ML community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an ‘adversary’ that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
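As a toy illustration of this kind of adversarial spec testing (a sketch only, not DeepMind’s actual tooling, which uses far more efficient search over a model’s inputs): scan candidate inputs for one that makes the model violate its specification, and surface it before deployment.

```python
def worst_case_input(model, spec_ok, candidates):
    """Adversarial-style specification testing (toy sketch): scan candidate
    inputs and return the first counterexample that makes the model's output
    violate the spec, or None if no candidate does."""
    for x in candidates:
        if not spec_ok(model(x)):
            return x  # a concrete failure case to investigate pre-deployment
    return None

# Example: a controller whose output is specified to stay in [0, 1].
model = lambda x: x * 0.5
spec_ok = lambda y: 0.0 <= y <= 1.0
worst_case_input(model, spec_ok, [0.0, 1.0, 2.5])  # → 2.5 (output 1.25 breaks the spec)
```

A real adversary would search the input space with gradients or optimisation rather than enumerating candidates, but the contract is the same: find an input, any input, that breaks the specification.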

He’s also looking into ‘training specification-consistent models’ and ‘formal verification’, while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.

In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:

  • DeepMind’s work on the protein folding problem
  • Parallels between ML problems and past challenges in software development and computer security
  • How can you analyse the thinking of a neural network?
  • Unique challenges faced by DeepMind’s technical AGI safety team
  • How do you communicate with a non-human intelligence?
  • How should we conceptualize ML progress?
  • What are the biggest misunderstandings about AI safety and reliability?
  • Are there actually a lot of disagreements within the field?
  • The difficulty of forecasting AI development

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.



As an addendum to the episode, we caught up with some members of the DeepMind team to learn more about roles at the organization beyond research and engineering, and how these contribute to the broader mission of developing AI for positive social impact.

A broad sketch of the kinds of roles listed on the DeepMind website may be helpful for listeners:

  • Program Managers keep the research team moving forward in a coordinated way, enabling and accelerating research.
  • The Ethics & Society team explores the real-world impacts of AI, from both an ethics research and policy perspective.
  • The Public Engagement & Communications team thinks about how to communicate about AI and its implications, engaging with audiences ranging from the AI community to the media to the broader public.
  • The Recruitment team focuses on building out the team in all of these areas, as well as research and engineering, bringing together the diverse and multidisciplinary group of people required to fulfill DeepMind’s ambitious mission.

There are many more listed opportunities across other teams, from Legal to People & Culture to the Office of the CEO, where our listeners may like to get involved.

They invite applicants from a wide range of backgrounds and skill sets, so interested listeners should take a look at their open positions.


Continue reading →

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

Today we cross-posted to our podcast feed some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners or readers — but if you’re at all interested in Rob’s personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discuss topics like:

  • Why Rob is wary of fiction
  • Egalitarianism in the evolution of hunter-gatherers
  • How to stop social media screwing with politics
  • Careers in government versus business

The second interview is with Prof Andrew Leigh — the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than Rob usually covers, like:

  • What advice would he give to his teenage self?
  • Which person has most shaped his view of living an ethical life?
  • His approach to giving to the homeless
  • What does he do to maximise his own happiness?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Recap: why do some organisations say their recent hires are worth so much?

Our 2018 survey found that, for a second year, a significant fraction of organisations reported that they would want compensation of hundreds of thousands, or sometimes millions, of dollars for the loss of a recent hire for three years.

There was some debate last October about whether those figures could be accurate, why they were so high, and what they mean. In the current post, I outline some rough notes summarising the different explanations for why people in the survey estimated that the value of recent hires might be high, though I don’t seek firm conclusions about which considerations are playing the biggest role.

In short, we consider four explanations:

  1. The estimates might be wrong.
  2. There might be large differences in the value-add of different hires.
  3. The organisations might be able to fundraise easily.
  4. Retaining a recent hire allows the organisation to avoid running a hiring process.

Overall, we take the figures as evidence that leaders of the effective altruism community, when surveyed, think the value-add of recent hires at these organisations is very high — plausibly more valuable than donating six figures (or possibly even more) per year to the same organisations. However, we do not think the precise numbers are a reliable answer to decision-relevant questions for job seekers, funders, or potential employers. We think it’s likely that mistakes are driving up these estimates. Even ignoring the high probability of mistakes,

Continue reading →

80,000 Hours Annual Review – December 2018


This annual review summarises our annual impact evaluation, and outlines our progress, plans, weaknesses and fundraising needs. It’s supplemented by a more detailed document that acts as a (less polished) appendix adding more detail to each section. Both documents were initially prepared in Dec 2018. We delayed their release until we heard back from some of our largest donors so that other stakeholders would be fully informed about our funding situation before we asked for their support. Except where otherwise stated, we haven’t updated the review with data from 2019 so empirical claims are generally “as of December 2018.” You can also see a glossary of key terms used in the reviews. You can find our previous evaluations here.

What does 80,000 Hours do?

80,000 Hours aims to solve the most pressing skill bottlenecks in the world’s most pressing problems.

We do this by carrying out research to identify the careers that best solve these problems, and using this research to provide free online content and in-person support. Our work is especially aimed at helping talented graduates aged 20-35 enter higher-impact careers.

The content aims to attract people who might be able to solve these bottlenecks and help them find new high-impact options. The in-person support aims to identify promising people and help them enter paths that are a good fit for them by providing advice, introductions and placements into specific positions.

Currently,

Continue reading →

Career advice I wish I’d been given when I was young

Note: A reader who prefers to remain anonymous — but whose career we think did a lot of good — passed us this list of advice which they were grateful to have received, or wish they’d been given when they were younger.

We thought it was very interesting, including where it doesn’t line up exactly with our usual views, and so are publishing it here with their permission.

The advice is targeted towards people sympathetic to the principles of effective altruism, especially those with an interest in public policy careers, but we think much of it is more broadly useful.

  1. Don’t focus too much on long-term plans. Focus on interesting projects and you’ll build a resumé that stands out — take on multiple part-time consultancies and volunteer projects in parallel to quickly build it out. Back in my 30s, most of the things on my resumé were projects that involved 10% of my time each, and about half of them didn’t pay me any money. Those projects sounded fancy and helped me to get good full-time jobs later on.
  2. Find good thinkers and cold-call the ones you most admire. Many years ago I was lucky that people like Peter Singer, Peter Unger, John Broome, and Derek Parfit were kind enough to respond to my letters. (Any readers who are famous should take the time to respond to strangers’ emails.)

    I was similarly lucky that some of the policy professionals whose work I was most impressed with replied to me when I wrote out of the blue to say that I wanted to work for them.

Continue reading →

#57 – Tom Kalil on how to do the most good in government

You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible – before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993.

He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today’s interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren’t; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, ‘Team Kalil’ built up a white board of principles. For example, ‘the schedule is your friend’: setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or ‘talk to who owns the paper’. People would wonder how Tom could get so many lines into the President’s speeches. The answer was “figure out who’s writing the speech, find them with the document, and tell them to add the line.” Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

American interested in working on AI policy?

We’ve helped dozens of people transition into policy careers. We can offer introductions to people and funding opportunities, and we can help answer specific questions you might have.

If you are a US citizen interested in building expertise to work on US AI policy, apply for our free coaching service.

Apply for coaching

In today’s episode we get down to nuts & bolts, and discuss:

  • How did Tom spin work on a primary campaign into a job in the next White House?
  • Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
  • How do you get people to do things when you don’t have formal power over them?
  • What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
  • Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
  • What are ‘policy entrepreneurs’ and why do they matter?
  • What is the role for prizes in promoting science and technology? What are other promising policy ideas?
  • Why you can get more done by not taking credit.
  • What can the White House do if an agency isn’t doing what it wants?
  • How can the effective altruism community improve the maturity of our policy recommendations?
  • How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#56 – Persis Eskander on wild animal welfare and what, if anything, to do about it

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences.

Most animals are hunted by predators, and constantly have to remain vigilant lest they be killed, and perhaps experience the terror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter wild animals freeze to death and in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems.

But according to Persis Eskander, researcher at Open Philanthropy, if we sum up the negative experiences of all wild animals, their sheer number – trillions to quintillions, depending on which animals you count – could make the scale of the problem larger than most other near-term concerns.

Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can’t survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals really feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could some day allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don’t know. And Persis thinks we’re far from knowing enough to start interfering with ecosystems. But that’s all the more reason to start considering these questions.

There are a few concrete steps we could take today, like improving the way wild caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today’s interview we explore wild animal welfare as a new field of research, and discuss:

  • Do we have a moral duty towards wild animals?
  • How should we measure the number of wild animals?
  • What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
  • Is there a danger in imagining how we as humans would feel if we were put into their situation?
  • Should we eliminate parasites and predators?
  • How important are insects?
  • Interventions worth rolling out today
  • How strongly should we focus on just avoiding humans going in and making things worse?
  • How does this compare to work on farmed animal suffering?
  • The most compelling arguments for not dedicating resources to wild animal welfare
  • Is there much of a case for the idea that this work could improve the very long-term future of humanity?
  • Would increasing concern for wild animals improve our values?
  • How do you get academics to take an interest in this?
  • How could autonomous drones improve wild animal welfare?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly cover:

  • The importance of figuring out your values
  • Chemistry, psychology, and other different paths towards working on wild animal welfare
  • How to break into new fields

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#55 – Mark Lutter & Tamara Winter on founding charter cities with outstanding governance to end poverty

Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it’s not easy to found a new country.

This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better ‘pseudo-countries’ off the ground. The poor could then voluntarily migrate to them in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions.

The ‘seasteading movement’ imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and former World Bank Chief Economist Paul Romer suggested ‘charter cities’, where a host country would invite another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons.

Now Dr Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of ‘charter cities’, with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the ‘special economic zones’ that worked miracles for Taiwan and China, among others. But rather than keep the rest of the country’s rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to “best practices in commercial law.”

Also listen to: Rob on The Good Life: Andrew Leigh in Conversation — on ‘making the most of your 80,000 hours’.

The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen’s Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament.

Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants.

CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement. They’re currently in the process of influencing a new prospective satellite city in Zambia.

Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is?

We discuss those questions, as well as:

  • How did Mark get a new organisation off the ground, with fundraising and other staff?
  • What made China’s ‘special economic zones’ so successful?
  • What are the biggest challenges in getting new cities off the ground?
  • What are the top criticisms of charter cities, and why aren’t they worried?
  • How did Mark find and hire Tamara? How did he know this was a good idea?
  • Who do they need to talk to to make charter cities happen?
  • How does their idea fit into the broader story of governance innovation?
  • Should people care about this idea if they aren’t focussed on tackling poverty?
  • Why aren’t people already doing this?
  • Why does Tamara support more people starting families?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#54 – Askell, Brundage & Clark from OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm.

How is this possible and what does it show?

In today’s interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems.

A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several characters, moving them around a map to attack an enemy.

Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map.

When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive its entire shape. In Dota 2, you can’t see the entire map, and instead perceive what’s there by moving around — metaphorically ‘touching’ the space.

Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it

This is true of many apparently distinct problems in life. With the right general-purpose software, different sensory inputs can be compressed down to a fundamental computational problem we already know how to solve.

OpenAI used an algorithm called Proximal Policy Optimization (PPO), which is fairly robust — in the sense that you can throw it at many different problems, not worry too much about tuning it, and it will do okay.
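PPO’s robustness largely comes down to one trick, its clipped surrogate objective, which can be sketched in a few lines. This is a simplified illustration, not OpenAI’s actual implementation; the full algorithm also needs a policy network, a value function, advantage estimation, and an optimiser.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO's clipped surrogate objective (to be maximised by gradient ascent)."""
    ratio = np.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Taking the elementwise minimum keeps updates conservative: the policy
    # gains nothing by pushing the probability ratio outside [1-eps, 1+eps],
    # which is a big part of why PPO does okay across tasks without tuning.
    return float(np.mean(np.minimum(ratio * advantages, clipped * advantages)))
```

For example, if the new policy doubles an action’s probability (ratio 2.0) on a positive-advantage step, the clip caps its contribution at 1 + eps, so there is no incentive for a destabilisingly large policy change.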

Jack emphasises that this algorithm wasn’t easy to create, and they were incredibly excited about it working on both tasks. But he also says that the creation of such increasingly ‘broad-spectrum’ algorithms has been the story of the last few years, and that the invention of software like PPO will have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

  • What are the most significant changes in the AI policy world over the last year or two?
  • How much is the field of AI policy still in the phase of just doing research and figuring out what should be done, versus actually trying to change things in the real world?
  • What capabilities are likely to develop over the next five, 10, 15, 20 years?
  • How much should we focus on the next couple of years, versus the next couple of decades?
  • How should we approach possible malicious uses of AI?
  • What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
  • Publication norms for AI research
  • Where do we stand in terms of arms races between countries or different AI labs?
  • The case for creating a newsletter
  • Should the AI community have a closer relationship to the military?
  • Working at OpenAI vs. working in the US government
  • How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

  • The reaction to OpenAI’s release of GPT-2
  • Jack’s critique of our US AI policy article
  • How valuable are roles in government?
  • Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Find your highest impact role: 104 new vacancies in our February 2019 job board updates

Our job board continues to get big updates every two weeks, and now lists 235 vacancies, with 104 additional opportunities in the last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your time to prepare.

The job board is a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

#53 – Kelsey Piper on the room for important advocacy within journalism

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risks.” Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox’s Future Perfect aspires to be more or less that.

Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours’ work.

But according to Kelsey Piper, staff writer for this new section on Vox’s website focused on effective altruist themes, Future Perfect’s goal is to run in the opposite direction and make room for more substantive coverage that’s not tied to the news cycle.

They hope that in the long term, talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them.

Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join “that weird Silicon Valley apocalypse thing”? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I’m glad you’re working on it.”

Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that’s right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world’s most pressing problems.

Kelsey points out that one needn’t take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today’s episode we discuss that path, as well as:

  • What’s the day to day life of a Vox journalist like?
  • How can good journalism get funded?
  • Are there meaningful tradeoffs between doing what’s in the interest of Vox, and doing what’s good?
  • How concerned should we be about the risk of effective altruism being perceived as partisan?
  • How well can short articles effectively communicate complicated ideas?
  • Are there alternative business models that could fund high quality journalism on a larger scale?
  • How do you approach the case for taking AI seriously to a broader audience?
  • How valuable might it be for media outlets to do Tetlock-style forecasting?
  • Is it really a good idea to heavily tax billionaires?
  • How do you avoid the pressure to get clicks?
  • How possible is it to predict which articles are going to be popular?
  • How did Kelsey build the skills necessary to work at Vox?
  • General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues – Keiran Harris and Michelle Hutchinson – to quickly discuss:

  • The risk political polarisation poses to long-termist causes
  • How should specialists keep journalism available as a career option?
  • Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#52 – Glen Weyl on uprooting capitalism and democracy for a just society

Imagine you were put in charge of planning out a country’s economy – determining who should work where and what they should make – without prices. You would surely struggle to collect all the information you need about what people want and who can most efficiently make it from an office building in the capital city.

Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society and solve this so-called ‘knowledge problem’.

But when it comes to politics and voting – which also aim to aggregate the preferences and knowledge found in millions of individuals – the enthusiasm for finding clever institutional designs turns to skepticism.

Today’s guest, freewheeling economist Glen Weyl, won’t have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but he has already moved on, saying “in the 6 months since the book came out I’ve made more intellectual progress than in the whole 10 years before that.”

He believes we desperately need more efficient, equitable and decentralised ways to organise society that take advantage of what each person knows, and his research agenda has already made some breakthroughs.

Despite a background in the best economics departments in the world – Harvard, Princeton, Yale and the University of Chicago – he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. (You can sign up for their conference in March here.)

Economist Alex Tabarrok called his latest proposal, known as ‘liberal radicalism’, “a quantum leap in public-goods mechanism-design.” The goal is to accurately measure how much the public actually values a good they all have to share, like a scientific research finding. Alex observes that under liberal radicalism “almost magically… citizens will voluntarily contribute exactly the amount that correctly signals how much society as a whole values the public good. Amazing!” But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people’s selfishness so effectively that it might even be an overcorrection.

An earlier proposal – ‘quadratic voting’ (QV) – would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process.

Instead everyone would be given ‘voice credits’ which they could spread across elections as they chose. QV follows a square root rule: 1 voice credit gets you 1 vote, 4 voice credits get you 2 votes, 9 voice credits get you 3 votes, and so on. It’s not immediately apparent, but this method is on average the ideal way of allowing people to impose their preferences on the rest of society at an ever-escalating cost. To economists it’s an idea that’s obvious, though only in retrospect, and it’s already being taken up by business.
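The square root rule above can be sketched in a few lines of Python (an illustration only; `qv_cost` and `max_votes` are made-up helper names, not part of any QV implementation):

```python
import math

def qv_cost(votes: int) -> int:
    """Under quadratic voting, casting v votes costs v**2 voice credits."""
    return votes ** 2

def max_votes(credits: int) -> int:
    """The most votes a given budget of voice credits can buy."""
    return math.isqrt(credits)
```

Note the rising marginal cost: the third vote costs 5 extra credits (9 − 4), the fourth costs 7 (16 − 9), and so on, which is exactly what makes expressing ever-stronger preferences ever more expensive.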

Weyl points to studies showing that people are more likely to vote strongly not only about issues they care more about, but issues they know more about. He expects that allowing people to specialise and indicate when they know what they’re talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance.

But these and indeed all of Weyl’s proposals have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of overambitious social planning on the part of intellectuals. I raise these concerns to see how he responds.

Weyl hopes a creative spirit in figuring out how to make collective decision-making work for the modern world can restore faith in liberal democracy and prevent a resurgence of reactionary ideas during a future recession. But as big a topic as all that is, this extended conversation covers more:

  • How should we think about blockchain as a technology, and the community dedicated to it?
  • How could auctions inspire an alternative to private property?
  • Why is Glen wary of mathematical styles of approaching issues?
  • Is high modernism underrated?
  • Should we think of the world as going well or badly?
  • What are the biggest intellectual errors of the effective altruism community? And the rationality community?
  • Should migrants be sponsored by communities?
  • Could we provide people with a sustainable living by treating their data as labour?
  • The potential importance of artists in promoting ideas
  • How does liberal radicalism actually work?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#51 – Martin Gurri on the revolt of the public & crisis of authority in the information age

Politics in rich countries seems to be going nuts. What’s the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration?

Martin Gurri spent decades as a CIA analyst and in his 2014 book The Revolt of The Public and the Crisis of Authority in the New Millennium, predicted political turbulence for an entirely different reason: new communication technologies were flipping the balance of power between the public and traditional authorities.

In 1959 the President could control the narrative by leaning on his friends at four TV stations, who felt it was proper to present the nation’s leader in a positive light, no matter their flaws. Today, it’s impossible to prevent someone from broadcasting any grievance online, whether it’s a contrarian insight or an insane conspiracy theory.

According to Gurri, trust in society’s institutions – police, journalists, scientists and more – has been undermined by constant criticism from outsiders, and, exposed to a cacophony of conflicting opinions on every issue, the public takes fewer truths for granted. We are now free to see our leaders as the flawed human beings they always have been, and are not amused.

Suspicious they are being betrayed by elites, the public can also use technology to coordinate spontaneously and express its anger. Keen to ‘throw the bastards out’, protesters take to the streets, united by what they don’t like, but without a shared agenda for how to move forward or the institutional infrastructure to figure out how to fix things. Some popular movements have come to view any attempt to exercise power over others as suspect.

If Gurri is to be believed, protest movements in Egypt, Spain, Greece and Israel in 2011 followed this script, while Brexit, Trump and the French yellow vests movement subsequently vindicated his theory.

In this model, politics won’t return to its old equilibrium any time soon. The leaders of tomorrow will need a new message and style if they hope to maintain any legitimacy in this less hierarchical world. Otherwise, we’re in for decades of grinding conflict between traditional centres of authority and the general public, who doubt both their loyalty and competence.

But how much should we believe this theory? Why do Canada and Australia remain pools of calm in the storm? Aren’t some malcontents quite concrete in their demands? And are protest movements actually more common (or more nihilistic) than they were decades ago?

In today’s episode we ask these questions and add an hour-long discussion with two of Rob’s colleagues – Keiran Harris and Michelle Hutchinson – to further explore the ideas in the book.

The conversation covers:

  • What’s changed about the public’s relationship to information and authority?
  • Are protesters today usually united for or against something?
  • What sorts of people are participating in these new movements?
  • Are we elites or the public?
  • Is the number of street protests and the level of dissatisfaction with governments actually higher than before?
  • How do we know that the internet is driving this rather than some other phenomenon?
  • How do technological changes enable social and political change?
  • The historical role of television
  • Are people also more disillusioned now with sports heroes and actors?
  • What are the best arguments against this thesis?
  • How should we think about countries like Canada, Australia, Spain, and China using this model?
  • Has public opinion shifted as much as it seems?
  • How can we get to a point where people view the system and politicians as legitimate and respectable, given the competitive pressures against being honest about the limits of your power and knowledge?
  • Which countries are finding good ways to make politics work in this new era?
  • What are the implications for the threat of totalitarianism?
  • What is this going to do to international relations? Will it make it harder for countries to cooperate and avoid conflict?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#50 – David Denkenberger on how to feed all 8 billion people through an asteroid/nuclear winter

If a nuclear winter or asteroid impact blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he’s to be believed, nobody need starve at all.

Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by sudden upwelling of ocean nutrients – and many more.

Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he’s out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, he says, the only thing that would prevent us from feeding the world is insufficient preparation.

Not content to just write a book pointing this out, David has gone on to found a growing nonprofit – the Alliance to Feed the Earth in Disasters – to brace the world to feed everyone come what may. He expects that today 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we’re likely to invest, David hopes a plan to inform people ahead of time would save 30%, and a decent research and development scheme 80%.

According to David’s published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment.

These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an ‘everyone for themselves’ mentality, which then causes trade and civilization to unravel.

But some worry that David’s cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his nonprofit’s work would actually matter in a post-apocalyptic world. In our extensive conversation, we cover:

  • How could the sun end up getting blocked, or agriculture otherwise be decimated?
  • What are all the ways we could eat nonetheless? What kind of life would this be?
  • Can these methods be scaled up fast?
  • What is his organisation, ALLFED, actually working on?
  • How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach?
  • How would more food affect the post-apocalyptic world? Won’t people figure it out at that point anyway?
  • Why not just leave guidebooks with this information in every city?
  • Would these preparations make nuclear war more likely?
  • What kind of people is ALLFED trying to hire?
  • What would ALLFED do with more money? What have been their biggest mistakes?
  • How he ended up doing this work. And his other engineering proposals for improving the world, including how to prevent a supervolcano explosion.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Find your highest impact role: 77 new vacancies in our December job board updates

Thanks to the sterling work of Maria Gutierrez, our job board continues to get big updates every 2 weeks, and now lists 169 vacancies, with 77 additional opportunities in the last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your time to prepare.

The job board is a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

#49 – Rachel Glennerster on a year's worth of education for under $1 and other development best buys

If I told you it’s possible to deliver an extra year of ideal primary-level education for 30 cents, would you believe me? Hopefully not – the claim is absurd on its face.

But it may be true nonetheless. The very best education interventions are phenomenally cost-effective, but they’re not the kinds of things you’d expect, says this week’s guest, Dr Rachel Glennerster.

She’s Chief Economist at the UK’s foreign aid agency DFID, and used to run J-PAL, the world-famous anti-poverty research centre based at MIT’s Economics Department, where she studied the impact of a wide range of approaches to improving education, health, and political institutions. According to Glennerster:

“…when we looked at the cost effectiveness of education programs, there were a ton of zeros, and there were a ton of zeros on the things that we spend most of our money on. So more teachers, more books, more inputs, like smaller class sizes – at least in the developing world – seem to have no impact, and that’s where most government money gets spent.”

“But measurements for the top ones – the most cost effective programs – say they deliver 460 LAYS per £100 spent ($US130). LAYS are Learning-Adjusted Years of Schooling. Each one is the equivalent of the best possible year of education you can have – Singapore-level.”

“…the two programs that come out as spectacularly effective… well, the first is just rearranging kids in a class.”

“You have to test the kids, so that you can put the kids who are performing at grade two level in the grade two class, and the kids who are performing at grade four level in the grade four class, even if they’re different ages – and they learn so much better. So that’s why it’s so phenomenally cost effective, because it really doesn’t cost anything.”

“The other one is providing information. So sending information over the phone [for example about how much more people earn if they do well in school and graduate]. So these really small nudges. Now none of those nudges will individually transform any kid’s life, but they are so cheap that you get these fantastic returns on investment – and we do very little of that kind of thing.”

(See the links section below to learn more about these kinds of results.)

In this episode, Dr Glennerster shares her decades of accumulated wisdom on which anti-poverty programs are overrated, which are neglected opportunities, and how we can know the difference, across a range of fields including health, empowering women and macroeconomic policy.

Regular listeners will be wondering – have we forgotten all about the lessons from episode 30 of the show with Dr Eva Vivalt? She threw several buckets of cold water on the hope that we could accurately measure the effectiveness of social programs at all.

According to Eva, her dataset of hundreds of randomised controlled trials indicates that social science findings don’t generalize well at all. The results of a trial at a school in Namibia tell us remarkably little about how a similar program will perform if delivered at another school in Namibia – let alone if it’s attempted in India instead.

Rachel offers a different and more optimistic interpretation of Eva’s findings.

Firstly, Rachel thinks it will often be possible to anticipate where studies will generalise and where they won’t. Studies are being lumped together that vary a great deal in i) how serious the problem is to start, ii) how well the program is delivered, iii) the details of the intervention itself. It’s no surprise that they have very variable results.

Rachel also points out that even if randomised trials can never accurately measure the effectiveness of every individual program, they can help us discover regularities of human behaviour that can inform everything we do. For instance, dozens of studies have shown that charging for preventative health measures like vaccinations will greatly reduce the number of people who take them up.

To learn more and figure out who you sympathise with, you’ll just have to listen to the episode.

Regardless, Vivalt and Glennerster agree that we should continue to run these kinds of studies, and today’s episode delves into the latest ideas in global health and development. We discuss:

  • How has aid work developed over the past 3 decades?
  • What’s the right balance of RCT work?
  • Do RCTs distract from broad economic growth and progress in these societies?
  • Overrated/underrated: charter cities, getting along with colleagues, cash transfers, cracking down on tax havens, micronutrient supplementation, pre-registration
  • The importance of using your judgement, experience, and priors
  • Things that reoccur in every culture
  • Do we produce too many programs where the quality of implementation matters?
  • Has the “empirical revolution” gone too far?
  • The increasing usage of Bayesian statistics
  • High impact gender equality interventions
  • Should we mostly focus on reforming macroeconomic policy in developing countries?
  • How important are markets for carbon?
  • What should we think about the impact the US and UK had in eastern Europe after the Cold War?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

A simple checklist for overcoming life and career setbacks

At 80,000 Hours we focus a lot on developing ambitious plans to dramatically improve the world.

Something we haven’t written so much about is how to overcome the challenges – heartbreak, rejection, failure, illness, grief, conflict and more – that are sure to arise as we attempt to follow through on those plans, and which risk throwing us off course.

We don’t have particular expertise on this topic, but I wanted to share an approach that some friends and I have found useful, and which might help you as well.

When bad things happen in life, the thoughts we then have about them have a big impact on how much they harm us. Even where we can’t avoid the direct suffering inflicted by a problem, we can at least avoid hurting ourselves further, by ruminating about it and getting trapped in a cycle of negative thoughts.

In the case of the minor annoyances we face every day, maintaining our equanimity can almost entirely eliminate the harm they cause us. And even when we face serious adversity, ensuring we think about it the right way can limit the damage, and save us from falling into depression or another negative spiral. (Though this list isn’t really suitable for seriously traumatic events.)

To help myself with this, I’ve made a checklist of questions I try to work through when something unpleasant happens, in order to reframe the situation and get over it as quickly as possible.

Continue reading →

#48 – Brian Christian on better living through the wisdom of computer science

Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn’t get much done? Computer scientists have a term for this – thrashing – and it’s a common reason our computers freeze up. The solution, for people as well as laptops, is to ‘work dumber’: pick something at random and finish it, without wasting time thinking about the bigger picture.

Ever wonder why people reply more if you ask them for a meeting at 2pm on Tuesday, than if you offer to talk at whatever happens to be the most convenient time in the next month? The first requires a two-second check of the calendar; the latter implicitly asks them to solve a vexing optimisation problem.

What about estimating the probability of something you can’t model, and which has never happened before? Math has got your back: the likelihood is no higher than 1 in the number of times it hasn’t happened, plus one. So if 5 people have tried a new drug and survived, the chance of the next one dying is at most 1 in 6.
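The rule quoted above fits in a one-line function (a sketch of the rule as stated; `max_failure_probability` is a hypothetical name):

```python
def max_failure_probability(clean_trials: int) -> float:
    """Upper bound from the rule of thumb above: after n trials with no
    failure observed, the chance the next trial fails is at most 1 / (n + 1)."""
    return 1 / (clean_trials + 1)

# Five people survived the drug, so the bound on the next death is 1/6.
```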

Bestselling author Brian Christian studied computer science, and in the book Algorithms to Live By he’s out to find the lessons it can offer for a better life. In addition to the above he looks into when to quit your job, when to marry, the best way to sell your house, how long to spend on a difficult decision, and how much randomness to inject into your life.

In each case computer science gives us a theoretically optimal solution. In this episode we think hard about whether its models match our reality.

One genre of problems Brian explores in his book is ‘optimal stopping problems’, the canonical example of which is ‘the secretary problem’. Imagine you’re hiring a secretary: you receive n applicants, they show up in a random order, and you interview them one after another. You either have to hire that person on the spot and dismiss everybody else, or send them away and lose the option to hire them in future.

It turns out most of life can be viewed this way – a series of unique opportunities you pass by that will never be available in exactly the same way again.

So how do you attempt to hire the very best candidate in the pool? There’s a risk that you stop before you see the best, and a risk that you set your standards too high and let the best candidate pass you by.

Mathematicians of the mid-twentieth century produced the elegant solution: spend exactly one over e, or approximately 37% of your search, just establishing a baseline without hiring anyone, no matter how promising they seem. Then immediately hire the next person who’s better than anyone you’ve seen so far.

It turns out that your odds of success in this scenario are also 37%. And the optimal strategy and the odds of success are identical regardless of the size of the pool. So as n goes to infinity you still want to follow this 37% rule, and you still have a 37% chance of success. Even if you interview a million people.
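A quick Monte Carlo simulation reproduces both 37% figures (a sketch under the assumptions above: candidates arrive in uniformly random order, and success means hiring the single best applicant; the function name is made up):

```python
import math
import random

def secretary_sim(n=100, trials=20000, seed=0):
    """Estimate the success rate of the 37% rule: skip the first n/e
    candidates to set a baseline, then hire the first one who beats it."""
    rng = random.Random(seed)
    cutoff = int(n / math.e)  # roughly 37% of the pool
    wins = 0
    for _ in range(trials):
        candidates = [rng.random() for _ in range(n)]
        best_seen = max(candidates[:cutoff])  # baseline: best of the skipped prefix
        hired = None
        for score in candidates[cutoff:]:
            if score > best_seen:
                hired = score  # first candidate better than everyone seen so far
                break
        if hired is not None and hired == max(candidates):
            wins += 1  # we hired the single best applicant
    return wins / trials
```

With n = 100, the estimated success rate comes out close to 1/e ≈ 0.37, matching the theory, and it stays there as n grows.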

But if you have the option to go back, say by apologising to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%.

Today’s episode focuses on Brian’s book-length exploration of how insights from computer algorithms can and can’t be applied to our everyday lives. We cover:

  • Is it really important that people know these different models and try to apply them?
  • What’s it like being a human confederate in the Turing test competition? What can you do to seem incredibly human?
  • Is trying to detect fake social media accounts a losing battle?
  • The canonical explore/exploit problem in computer science: the multi-armed bandit
  • How can we characterize a computational model of what people are actually doing, and is there a rigorous way to analyse just how good their instincts actually are?
  • What’s the value of cardinal information above and beyond ordinal information?
  • What’s the optimal way to buy or sell a house?
  • Why is information economics so important?
  • The martyrdom of being a music critic
  • ‘Simulated annealing’, and the best practices in optimisation
  • What kind of decisions should people randomize more in life?
  • Is the world more static than it used to be?
  • How much time should we spend on prioritisation? When does the best solution require less precision?
  • How do you predict the duration of something when you don’t even know the scale of how long it’s going to last?
  • How many heists should you go on if you have a certain fixed probability of getting arrested and having all of your assets seized?
  • Are pro and con lists valuable?
  • Computational kindness, and the best way to schedule meetings
  • How should we approach a world of immense political polarisation?
  • How would this conversation have changed if there wasn’t an audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Think twice before talking about ‘talent gaps’ — clarifying nine misconceptions

After pushing the idea of ‘talent gaps’ in 2015, we’ve noticed increasing confusion about the term.

This is partly our fault. So, here’s a quick list of common misconceptions about talent gaps and how they can be fixed. This is all pretty rough and we’re still refining our own views, but we hope this might start to clarify this issue, while we work on better explaining the idea in our key content.

1. Problem areas are constrained by specific skills, not ‘talent’

Problem areas are rarely generically ‘talent constrained’. They’re instead constrained by specific skills and abilities. It’s nearly always clearer to talk about the specific needs of the field, ideally down to the level of specific profiles of people, rather than talent and funding in general.

For instance, work to positively shape the development of AI is highly constrained by the following:

  • ML researchers, especially those able to do field-defining work, who are interested in and understand AI safety, the alignment problem, and other issues relevant to the long-term development of AI.
  • People skilled in operations, especially those able to run nonprofits with under 50 people or academic institutes, and who are interested in and understand issues related to the long-term development of AI.
  • Strategy and policy researchers able to do disentanglement research in pre-paradigmatic fields.
  • People with the policy expertise and career capital to work in influential government positions who are also knowledgeable about and dedicated to the issue.

Continue reading →

#47 – Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

After dropping out of his ML PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to help shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.

He decided to apply to OpenAI, spent 6 weeks preparing for the interview, and actually landed the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.

On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and left her computational neuroscience PhD to become a research engineer at Google Brain. They share this piece of advice for those interested in this career path: just dive in. If you’re trying to get good at something, just start doing that thing, and figure out that way what’s necessary to be able to do it well.

To go with this episode, Catherine has even written a simple step-by-step guide to help others copy her and Daniel’s success.

Daniel thinks the key for him was nailing the job interview.

OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he’d be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working.

Daniel emphasizes that the most important thing was to practice exactly those things that he knew he needed to be able to do. He also received an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.

Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they’re right, it could greatly increase our ability to quickly get new people into ML roles in which they can make a difference.

Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.

Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:

  • What is the field of AI safety? How could your projects contribute?
  • What are OpenAI and Google Brain doing?
  • Why would one decide to work on AI?
  • The pros and cons of ML PhDs
  • Do you learn more on the job, or while doing a PhD?
  • Why did Daniel think OpenAI had the best approach? What did that mean?
  • Controversial issues within ML
  • What are some of the problems that are ready for software engineers?
  • What’s required to be a good ML engineer? Is replicating papers a good way of determining suitability?
  • What fraction of software developers could make similar transitions?
  • How in-demand are research engineers?
  • The development of Dota 2 bots
  • What’s the organisational structure of ML groups? Are there similarities to an academic lab?
  • The fluidity of roles in ML
  • Do research scientists have more influence on the vision of an org?
  • What’s the value of working in orgs not specifically focused on safety?
  • Has learning more made you more or less worried about the future?
  • The value of AI policy work
  • Advice for people considering 23andMe

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →