Advice on how to read our advice

We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views.

To help us get on the same page, here's some advice about our advice for those about to launch into reading our site.

We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation.

What follows is a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face.

We’ve been wrong before, and we’ll be wrong again

We still have a lot to learn about how people can best have a positive impact with their careers. This means, unfortunately, that we make mistakes and change our advice over time. It also means that in a couple of years, we'll no longer stand by some of the claims we make today.

Our positions can change because the world changes — for instance, a problem that was more pressing in the past can receive lots of attention and become less pressing over time. Our positions can also change as we learn more —

Continue reading →

Before committing to management consulting, consider directly entering priority paths, policy, startups, and other options

Many people we advise seem to think that management consulting is the best way to establish their career and gain career capital in their first one or two jobs after their undergraduate degree.

Because of this, people we advise often don’t spend much time generating additional options once they’ve received a management consulting offer, or considering alternatives before they apply to consulting in the first place. However, we think that for people who share our ‘longtermist’ view of global priorities, there are often even better options for career capital.

We’ve even met people who already have PhDs from top programmes in relevant areas but who think they need to do consulting to gain even more career capital, which we think is rarely the best option.

This is even more true of other prestigious generalist corporate jobs, such as investment banking, corporate law, and professional services, and perhaps also of options like Teach for America (if you don't intend to go into education) and MBAs. We provide a little more detail on these alternatives below.

We think this mistaken impression is in part due to our old career guide, which featured consulting and other prestigious corporate jobs prominently in our article on career capital. (We explain how our views have changed over time and the mistakes we made presenting them in the appendix.)

We want to clarify that while we think consulting is a good option for career capital early in your career (especially for practical “do-er” types),

Continue reading →

Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

Nobody is in favor of the power going down. Nobody is in favor of all cell phones not working. But an election? There are sides. Half of the country will want the result to stand and half the country will want the result overturned; they’ll decide on their course of action based on the result, not based on what’s right.

Bruce Schneier

November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.

November 3 2020, 11:46PM: The NY Times, Washington Post and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don’t see how they can figure it out.

What on Earth happens next?

Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result.

Unfortunately the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later.

The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.

Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn’t fair.

Schneier thinks there's a need to agree on how such a situation should be handled before it actually happens, since afterwards America would fall into severe infighting as everyone tried to turn the situation to their political advantage.

And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.

He likes the system in Minnesota: you get a paper ballot with ovals you fill in, which are then fed into a computerised reader. The computer reads the ballot, and the paper falls into a locked box that’s available for recounts. That gives you the speed of electronic voting, with the security of a paper ballot.

On the back end, he wants risk-limiting audits that are automatically triggered based on the margin of victory: a large margin of victory calls for only a small audit, while a small margin requires a large one.
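That inverse relationship can be sketched in a few lines. This is only a toy heuristic to illustrate the idea; real risk-limiting audits (such as the BRAVO method) use more careful sequential statistics, and the constant here is an assumption for illustration:

```python
import math

def toy_audit_size(margin: float, risk_limit: float = 0.05) -> int:
    """Toy heuristic for how many ballots to hand-check: the required
    sample shrinks rapidly as the reported margin of victory grows.
    This is NOT the statistical procedure used in real risk-limiting
    audits; it just captures the margin-vs-audit-size tradeoff."""
    if not 0 < margin < 1:
        raise ValueError("margin must be a fraction between 0 and 1")
    return math.ceil(math.log(1 / risk_limit) / margin ** 2)

# A 10-point landslide needs only a few hundred ballots checked,
# while a 1-point squeaker needs tens of thousands.
landslide = toy_audit_size(0.10)  # 300
squeaker = toy_audit_size(0.01)   # 29958
```

The point is qualitative: when the reported margin is wide, even a small random sample of paper ballots makes a wrong outcome very unlikely to survive scrutiny; narrow margins demand far larger samples.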

Those two things would do an enormous amount to improve voting security, and we should move to that as soon as possible.

According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they're designing, because they have a bureaucrat's rather than a hacker's mindset.

The ideal computer security expert walks into a shop and thinks, “You know, here’s how I would shoplift.” They automatically see where the cameras are, whether there are alarms, and where the security guards aren’t watching.

In this impassioned episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn’t have access to them.

We also cover:

  • How can we have surveillance of dangerous actors, without falling back into authoritarianism?
  • When if ever should information about weaknesses in society’s security be kept secret?
  • How secure are nuclear weapons systems around the world?
  • How worried should we be about deep-fakes?
  • The similarities between hacking computers and hacking our biology in the future
  • Schneier’s critiques of blockchain technology
  • How technologists could be vital in shaping policy
  • What are the most consequential computer security problems today?
  • Could a career in information security be very useful for reducing global catastrophic risks?
  • What are some of the most widely held but incorrect beliefs among computer security people?
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

How useful are long-term career plans?

There are two main types of mistakes one can make with career plans: having an overly rigid and specific plan, and having no long-term plans at all. We see both issues in our advising.

In the rest of this article, we give some arguments for and against long-term career planning, and explain how we aim to strike the balance between both types of mistake.

“Plans are useless but planning is essential.” — Dwight D. Eisenhower

Some arguments against long-term plans

Many of our readers get paralysed thinking about long-term options. It’s easy to see your choice of career as a single decision that you have to “get right” immediately, creating a lot of anxiety. In reality, most of the time you’re only committing to a job for a couple of years, and you’ll have many opportunities to shift course in the future.

Your preferences will also change over your career (more than you think), the world will change (including which problems are most pressing and what the key bottlenecks are), and you will learn a huge amount about your skills and which options are best.

Most people we know who are having a big impact today wouldn't have predicted ten years ago that they'd be doing the kind of work they're doing now.

This means it's not useful to make detailed long-term plans. Having an overly detailed long-term plan might even cause you to fixate too much on that one path,

Continue reading →

Anonymous answers: How have you seen talented people fail in their work?

The following are excerpts from interviews with people whose work we respect and who would like to remain anonymous. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.

This is the second in this series of posts with anonymous answers. The first release had answers to the question: “Is there any career advice you’d be hesitant to give if it were going to be attributed to you?”

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

Anonymous advice: What’s good career advice you wouldn’t want to have your name on?

The following are excerpts from interviews with people whose work we respect and who would like to remain anonymous. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. But we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is targeted towards people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.

This is the first in a series of posts with anonymous answers to a range of questions. The second is: How have you seen talented people fail in their work?

Just landed on our site for the first time? After this you might like to read about our key ideas.

Continue reading →

Cross-posting two interviews with Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long term is really possible

Today on our podcast feed, we're releasing some interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast.

To listen, subscribe to the 80,000 Hours Podcast by searching for 80,000 Hours wherever you get your podcasts, or find us on Apple Podcasts, Google Podcasts, Spotify or SoundCloud.

If you’ve listened to absolutely everything on our podcast feed, you’ll have heard four interviews with me already, but fortunately I think these two don’t include too much repetition, and I’ve gotten a decent amount of positive feedback on both. 

First up, I speak with David Kadavy on Love Your Work

This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics. 

At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar. 

The second interview is with Jeremiah Johnson on the Neoliberal Podcast. It starts at 2 hours and 15 minutes into this recording. 

Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with that term.

Continue reading →

Have we helped you have a bigger social impact? Our annual impact survey 2019

Briefly, once a year, we at 80,000 Hours ask you to tell us if we’ve helped you have a larger social impact.

We and our donors need to know which of our programs are helping people enough to continue or scale up, and it’s only by hearing your stories that we can make these decisions well.

You can also let us know where we’ve fallen short, which helps us fix problems with our advice.

So, if our podcast, job board, articles, advising or other services have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how:

https://80000hours.org/impact-survey/

We’ve refreshed the survey this year, hopefully making it easier to fill out than in the past.

We’ll keep this appeal up for the next week, but it would be great if you could fill it out now so we can start working through your stories.

Thanks so much!

Continue reading →

Vitalik Buterin on effective altruism, better ways to fund public goods, the blockchain’s problems so far, and how it could yet change the world

We’re talking about a general purpose infrastructure for funding public goods in the same way that money is a general purpose infrastructure for funding private goods. There’s definitely a lot of challenges. But at the same time, if we can make that work… it’s huge.

Vitalik Buterin

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future?

Today’s guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin, which added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire.

At the same time, far from indulging hype about these so-called ‘blockchain’ technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, “blockchains as they currently exist are in many ways a joke, right?”

But Buterin is not just a realist. He’s also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals.

By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of ‘cryptoeconomics’. Economist Tyler Cowen has observed that, “at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers — without knowing about the papers at all.”

Though its applications have faced major social and technical problems, Ethereum has been used to crowdsource investment for projects and enforce contracts without the need for a central authority. But the proposals for new ways of coordinating people are far more ambitious than that.

For instance, along with previous guest Glen Weyl, Vitalik has helped develop a model for so-called ‘quadratic funding’, which in principle could transform the provision of ‘public goods’. That is, goods that people benefit from whether they help pay for them or not.

Examples of goods that are fully or partially public goods include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world.

The innovative leap of the ‘quadratic funding’ formula is that individuals can in principle be given the incentive to voluntarily contribute amounts that together signal to a government how much society as a whole values a public good, how much should be spent on it, and where that funding should be directed.

But these and other related proposals face major hurdles. They’re vulnerable to collusion, might be used to fund scams, and have been tested only at a small scale. Not to mention that anything with a square root sign in it is going to struggle to achieve widespread societal legitimacy. Is the prize large enough to justify efforts to overcome these challenges?

In today’s extensive three-hour interview, Buterin and I cover:

  • What the blockchain has accomplished so far, and what it might achieve in the next decade;
  • Why many social problems can be viewed as a coordination failure to provide a public good;
  • Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work;
  • His view of ‘effective altruism’ and ‘long-termism’;
  • The difficulty of establishing true identities and preventing collusion, and why this is an important enabling technology;
  • Why he is optimistic about ‘quadratic funding’, but pessimistic about replacing existing voting with ‘quadratic voting’;
  • When it’s good and bad for private entities to censor online speech;
  • Why humanity might have to abandon living in cities;
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

How replaceable are the top candidates in large hiring rounds? Why the answer flips depending on the distribution of applicant ability

As more and more people apply for a job, the value of each extra application goes down. But does it go down quickly, or only very gradually?

This question matters, because for many of the jobs we discuss, lots of people apply and the application process is highly competitive. When this happens, some of our readers have the sense that, if a lot of people are already applying for a job, there’s no point in them applying as well. After all, there must be someone else suitable in the applicant pool already — someone who would do a similarly good job, even if you were to turn down an offer. So, the logic goes, if you take the job, you’re fully ‘replaceable’, and therefore not having much social impact.

By contrast, 80,000 Hours and many of the organisations we help with hiring often feel differently, saying:

  • Even when many people would be interested in taking a job, the difference between the best and the second best applicant is often large. So losing your best option would still be really costly.
  • Even when you have a large applicant pool, it’s useful to keep hearing about more potential hires, in the hope of finding someone who’ll be significantly more productive than everyone you’re currently aware of.

Which of these positions is correct? I threw together some simple models in an Excel spreadsheet to explore the disagreement.
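I can't reproduce the spreadsheet here, but a quick simulation sketches the kind of model involved. The distributions and parameters below are illustrative assumptions, not the article's actual numbers:

```python
import random
import statistics

def mean_top_gap(n_applicants, draw, trials=20_000):
    """Average gap in ability between the best and second-best
    applicant when n_applicants are drawn from a distribution."""
    gaps = []
    for _ in range(trials):
        pool = sorted(draw() for _ in range(n_applicants))
        gaps.append(pool[-1] - pool[-2])
    return statistics.mean(gaps)

random.seed(0)
# Thin-tailed ability (normal): in a pool of 100, the top two
# candidates bunch close together, so the best hire is fairly
# replaceable.
thin = mean_top_gap(100, lambda: random.gauss(0, 1))
# Heavy-tailed ability (log-normal): in the same size pool, the
# best candidate is typically far ahead of the runner-up.
heavy = mean_top_gap(100, lambda: random.lognormvariate(0, 1))
```

Under the heavy-tailed distribution the best-to-second-best gap stays large even as the pool grows, which is the sense in which the answer 'flips' depending on the distribution of applicant ability.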

In short,

Continue reading →

Should we leave a helpful message for future civilizations, just in case humanity dies out?

…there’s two parts to the problem. The first is calling someone’s attention to a place. I think that’s the harder part by far. You can’t just bury a thing, because hundreds of millions of years is long enough that the surface of the earth is no longer the surface of the earth…

Paul Christiano

Imagine that, one day, humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably is.

We could tell them hard-won lessons from history; mention some research questions we wish we’d started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we’d like to be able to tell our ancestors, that’s just the first challenge. We’d need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth’s surface quickly gets buried far underground.

But even if we figure out a satisfactory message and a way to ensure it’s found, a civilisation that far in the future won’t speak any language like our own. And being another species, they presumably won’t share as many fundamental concepts with us as humans from 1700 would. If we knew a way to leave them thousands of books and pictures in a material that wouldn’t break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That’s just one of many playful questions discussed in today’s episode with Christiano — a frequent writer who’s willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little bit from harmful companies might be more useful than I’d been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it’s reliably aligned with our goals, which is Paul’s main research project. He responds to the views that DeepMind’s Pushmeet Kohli espoused in a previous episode, and we discuss whether we’d be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

  • Are there any supplements people can take that make them think better?
  • What implications do our views on meta-ethics have for aligning AI with our goals?
  • Is there much of a risk that the future will contain anything optimised for causing harm?
  • An outtake about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

The new 30-person research group in DC investigating how emerging technologies could affect national security

It’s a little strange to say, “Oh, who’s going to get AI first? Who’s going to get electricity first?” It seems more like “who’s going to use it in what ways, and who’s going to be able to deploy it and actually have it be in widespread use?”

Helen Toner

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think machine learning could alter 21st century life in a similar way.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.

How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.

Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.

Rapid advances in ML, and the many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.

We cover:

  • Why immigration is the main policy area that should be affected by AI advances today.
  • Why talking about an ‘arms race’ in AI is premature.
  • How the US could remain the leading country in machine learning for the foreseeable future.
  • Whether it’s ever possible to have a predictable effect on government policy.
  • How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
  • Whether it’s possible to become a China expert and still get a security clearance.
  • Whether access to ML algorithms can be restricted, or whether that’s just not practical.
  • Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
  • Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Accurately predicting the future is central to absolutely everything. Professor Tetlock has spent 40 years studying how to do it better.

Am I a believer in climate change, or a denier, if I say ‘Well, I’m 72% confident that the UN IPCC surface temperature forecasts are correct within plus or minus 0.3°C’? … I’m flirting with the idea that they might be wrong, right?

Professor Philip Tetlock

Have you ever been infuriated by a doctor’s unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won’t tell you the chances you’ll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can’t assess the likelihood of different outcomes we’re in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul’s Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a ‘folk probability’ that draws just three distinctions — ‘impossible’, ‘possible’ and ‘certain’ — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.

In the aftermath of the Iraq WMD intelligence failure, the US intelligence community hired him to figure out how to prevent the same thing from ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller in 2014.

That was five years ago. In today’s interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, such as how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to accurately distinguish your ’70 percents’ from your ’80 percents’.)

We also bring up a few methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions their profession, as it has been for Tetlock over many decades.

We view Tetlock’s work as so core to living well that we’ve brought him back for a second and longer appearance on the show — his first appearance was back in episode 15. Some questions this time around include:

  • What would it look like to live in a world where elites across the globe were better at predicting social and political trends? What are the main barriers to this happening?
  • What are some of the best opportunities for making forecaster training content?
  • What do extrapolation algorithms actually do, and given they perform so well, can we get more access to them?
  • Have any sectors of society or government started to embrace forecasting more in the last few years?
  • If you could snap your fingers and have one organisation begin regularly using proper forecasting, which would it be?
  • When if ever should one use explicit Bayesian reasoning?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Prof Cass Sunstein on how social change happens, and why it’s so often abrupt & unpredictable

…the former Nazi said, “Opposition? How would anybody know? How would anybody know what somebody else opposes or doesn’t oppose? That a man says he opposes or doesn’t oppose depends on the circumstances, where and when, and to whom…”

Prof Cass Sunstein

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn’t despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have seen gay marriage, #MeToo and the Arab Spring, as well as nativism, Euroscepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren’t quite sure how socially acceptable their feelings would have to become before they revealed them or joined a campaign for change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people who then find a message that can spread their beliefs to millions.
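
The threshold dynamic described above can be made concrete with a toy simulation in the style of Granovetter's classic threshold model: each person joins once the fraction already participating reaches their private threshold. The thresholds below are invented purely for illustration:

```python
def cascade(thresholds):
    """Run a threshold cascade: a person joins once the fraction
    already participating meets their private threshold; repeat
    until no one else changes their mind."""
    n = len(thresholds)
    joined = [t == 0 for t in thresholds]
    while True:
        frac = sum(joined) / n
        updated = [j or t <= frac for j, t in zip(joined, thresholds)]
        if updated == joined:
            return sum(joined)
        joined = updated

# Two almost identical societies, radically different outcomes:
print(cascade([0.0, 0.1, 0.2, 0.3, 0.4]))  # one instigator: all 5 join
print(cascade([0.1, 0.1, 0.2, 0.3, 0.4]))  # no instigator: nobody moves
```

Change a single threshold and the outcome flips from total revolution to total quiescence, which is one reason observers armed only with opinion polls find these shifts so hard to predict.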

According to Sunstein, it’s “much, much easier” to create social change when large numbers of people secretly or latently agree with you. But ‘preference falsification’ is so pervasive that it’s no simple matter to figure out when they do.

In today’s interview, we debate with Sunstein whether this model of social change is accurate, and if so, what lessons it has for those who would like to steer the world in a more humane direction. We cover:

  • How much people misrepresent their views in democratic countries.
  • Whether the finding that groups with an existing view tend towards a more extreme position would stand up to the scrutiny of the replication crisis.
  • When is it justified to encourage your own group to polarise?
  • Sunstein’s difficult experiences as a pioneer of animal rights law.
  • Whether activists can do better by spending half their resources on public opinion surveys.
  • Should people be more or less outspoken about their true views?
  • What might be the next social revolution to take off?
  • How can we learn about social movements that failed and disappeared?
  • How to find out what people really think.


Continue reading →

DeepMind’s plan to make AI systems robust & reliable, why it’s a core issue in AI design, and how to succeed at AI research

Machine learning safety work is about enablement. It’s not a sort of tax… it’s enabling the creation and development of these technologies.

Pushmeet Kohli

When you’re building a bridge, responsibility for making sure it won’t fall over isn’t handed over to a few ‘bridge not falling down engineers’. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the ‘real’ work, it’s an essential part of making AI systems work in any sense. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.

With the goal of designing systems that reliably do what we want, DeepMind have recently published work on important technical challenges for the ML community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an ‘adversary’ that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
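
The adversarial idea can be illustrated with a deliberately simple sketch: plain random search over perturbed inputs stands in for the efficient methods discussed in the episode, and a toy loss function has a rare failure spike. Both are assumptions for illustration, not DeepMind's actual techniques:

```python
import random

def worst_case_input(model, base_input, budget=1000, radius=0.5):
    """Adversarially search near base_input for the input the model
    handles worst (highest loss), so rare failures surface before
    deployment. Plain random search stands in for smarter methods."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    worst_x, worst_loss = base_input, model(base_input)
    for _ in range(budget):
        x = base_input + random.uniform(-radius, radius)
        loss = model(x)
        if loss > worst_loss:
            worst_x, worst_loss = x, loss
    return worst_x, worst_loss

# Toy loss with a rare failure spike near x = 0.4 that casual testing misses:
toy_loss = lambda x: 100.0 if 0.39 < x < 0.41 else abs(x)
x, loss = worst_case_input(toy_loss, base_input=0.0)
print(x, loss)  # the adversary lands inside the failure spike
```

A handful of ordinary test points would likely step right over the spike; the adversary, by spending its whole budget hunting for high loss, surfaces it.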

He’s also looking into ‘training specification-consistent models’ and ‘formal verification’, while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.

In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:

  • DeepMind’s work on the protein folding problem
  • Parallels between ML problems and past challenges in software development and computer security
  • How can you analyse the thinking of a neural network?
  • Unique challenges faced by DeepMind’s technical AGI safety team
  • How do you communicate with a non-human intelligence?
  • How should we conceptualize ML progress?
  • What are the biggest misunderstandings about AI safety and reliability?
  • Are there actually a lot of disagreements within the field?
  • The difficulty of forecasting AI development

As an addendum to the episode, we caught up with some members of the DeepMind team to learn more about roles at the organization beyond research and engineering, and how these contribute to the broader mission of developing AI for positive social impact.

A broad sketch of the kinds of roles listed on the DeepMind website may be helpful for listeners:

  • Program Managers keep the research team moving forward in a coordinated way, enabling and accelerating research.
  • The Ethics & Society team explores the real-world impacts of AI, from both an ethics research and policy perspective.
  • The Public Engagement & Communications team thinks about how to communicate about AI and its implications, engaging with audiences ranging from the AI community to the media to the broader public.
  • The Recruitment team focuses on building out the team in all of these areas, as well as research and engineering, bringing together the diverse and multidisciplinary group of people required to fulfill DeepMind’s ambitious mission.

There are many more listed opportunities across other teams, from Legal to People & Culture to the Office of the CEO, where our listeners may like to get involved.

They invite applicants from a wide range of backgrounds and skillsets, so interested listeners should take a look at their open positions.


Continue reading →

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

Today we cross-posted to our podcast feed some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners or readers — but if you’re at all interested in Rob’s personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discuss topics like:

  • Why Rob is wary of fiction
  • Egalitarianism in the evolution of hunter gatherers
  • How to stop social media screwing with politics
  • Careers in government versus business

The second interview is with Prof Andrew Leigh — the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than Rob usually covers, like:

  • What advice would he give to his teenage self?
  • Which person has most shaped his view of living an ethical life?
  • His approach to giving to the homeless
  • What does he do to maximise his own happiness?


Continue reading →

Recap: why do some organisations say their recent hires are worth so much?

Our 2018 survey found that, for the second year running, a significant fraction of organisations said they would need to be compensated hundreds of thousands, or sometimes millions, of dollars to make up for losing a recent hire for three years.

There was some debate last October about whether those figures could be accurate, why they were so high, and what they mean. In this post, I outline some rough notes summarising the explanations people in the survey gave for why the value of recent hires might be so high, though I don’t reach firm conclusions about which considerations play the biggest role.

In short, we consider four explanations:

  1. The estimates might be wrong.
  2. There might be large differences in the value-add of different hires.
  3. The organisations might be able to fundraise easily.
  4. Retaining a recent hire allows the organisation to avoid running a hiring process.

Overall, we take the figures as evidence that leaders of the effective altruism community, when surveyed, think the value-add of recent hires at these organisations is very high — plausibly more valuable than donating six figures (or possibly even more) per year to the same organisations. However, we do not think the precise numbers are a reliable answer to decision-relevant questions for job seekers, funders, or potential employers. We think it’s likely that mistakes are driving up these estimates. Even ignoring the high probability of mistakes,

Continue reading →

80,000 Hours Annual Review – December 2018

This annual review summarises our annual impact evaluation, and outlines our progress, plans, weaknesses and fundraising needs. It’s supplemented by a more detailed document that acts as a (less polished) appendix adding more detail to each section. Both documents were initially prepared in Dec 2018. We delayed their release until we heard back from some of our largest donors so that other stakeholders would be fully informed about our funding situation before we asked for their support. Except where otherwise stated, we haven’t updated the review with data from 2019 so empirical claims are generally “as of December 2018.” You can also see a glossary of key terms used in the reviews. You can find our previous evaluations here.

What does 80,000 Hours do?

80,000 Hours aims to solve the most pressing skill bottlenecks in the world’s most pressing problems.

We do this by carrying out research to identify the careers that best solve these problems, and using this research to provide free online content and in-person support. Our work is especially aimed at helping talented graduates aged 20-35 enter higher-impact careers.

The content aims to attract people who might be able to solve these bottlenecks and help them find new high-impact options. The in-person support aims to identify promising people and help them enter paths that are a good fit for them by providing advice, introductions and placements into specific positions.

Currently,

Continue reading →

Career advice I wish I’d been given when I was young

Note: A reader who prefers to remain anonymous — but whose career we think did a lot of good — passed us this list of advice which they were grateful to have received, or wish they’d been given when they were younger.

We thought it was very interesting, including where it doesn’t line up exactly with our usual views, and so are publishing it here with their permission.

The advice is targeted towards people sympathetic to the principles of effective altruism, especially those with an interest in public policy careers, but we think much of it is more broadly useful.

  1. Don’t focus too much on long-term plans. Focus on interesting projects and you’ll build a resumé that stands out — take on multiple part-time consultancies and volunteer projects in parallel to quickly build it out. Back in my 30s, most of the things on my resumé were projects that involved 10% of my time each, and about half of them didn’t pay me any money. Those projects sounded fancy and helped me to get good full-time jobs later on.
  2. Find good thinkers and cold-call the ones you most admire. Many years ago I was lucky that people like Peter Singer, Peter Unger, John Broome, and Derek Parfit were kind enough to respond to my letters. (Any readers who are famous should take the time to respond to strangers’ emails.)

    I was similarly lucky that some of the policy professionals whose work I was most impressed with replied to me when I wrote out of the blue to say that I wanted to work for them.

Continue reading →