Blog post by Anonymous · Published November 29th, 2019
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.
This is the fifth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read 80,000 Hours’ career guide.
Blog post by Anonymous · Published November 22nd, 2019
This is the fourth in this series of posts with anonymous answers. You can find the complete collection here.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security.
Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. And in 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS).
But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation.
In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, as well as by other countries.
What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next.
Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the last.
Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda (GHSA) discussed at length in episode 27.
Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the National Commission on Terrorist Attacks Upon the United States (9/11 Commission). Bonnie was the lead staff member conducting research and interviews, and preparing commission reports, on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11.
She’s also a retired Naval Reserve officer and received several awards for her service. Bonnie remembers the military fondly. She didn’t want that life 24 hours a day, which is why she never went full time. But she liked the rules, loved the camaraderie and remembers it as a time filled with laughter.
And as if that all weren’t curious enough, four years ago Bonnie decided to go vegan. We talk about her work so far as well as:
How listeners can start a career like hers
The history of Cooperative Threat Reduction work
Mistakes made by Mr Obama and Mr Trump
Biggest uncontrolled nuclear material threats today
Biggest security issues in the world today
The Biological Weapons Convention
Where Bonnie disagrees with her colleagues working on peace and security
The implications for countries who give up WMDs
The fallout from a change in government
Networking, the value of attention, and being a vegan in DC
And the best 2020 Presidential candidates.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Blog post by Anonymous · Published November 18th, 2019
This is the third in this series of posts with anonymous answers. You can find the complete collection here.
We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views.
To help get on the same page, here’s some advice about our advice, for those about to launch into reading our site.
We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation.
What follows is a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face.
We’ve been wrong before, and we’ll be wrong again
We still have a lot to learn about how people can best have a positive impact with their careers. This means, unfortunately, that we make mistakes and change our advice over time. And this means that in a couple of years, we’ll no longer stand by some of the claims we make today.
Our positions can change because the world changes — for instance, a problem that was more pressing in the past can receive lots of attention and become less pressing over time. Our positions can also change as we learn more —
Many people we advise seem to think that management consulting is the best way to establish their career and gain career capital in their first one or two jobs after their undergraduate degree.
Because of this, people we advise often don’t spend much time generating additional options once they’ve received a management consulting offer, or considering alternatives before they apply to consulting in the first place. However, we think that for people who share our ‘longtermist’ view of global priorities, there are often even better options for career capital.
We’ve even met people who already have PhDs from top programmes in relevant areas but who think they need to do consulting to gain even more career capital, which we think is rarely the best option.
This is even more true of other prestigious generalist corporate jobs, such as investment banking, corporate law, and professional services, and perhaps also of options like Teach for America (if you don’t intend to go into education) and MBAs. We provide a little more detail on these alternatives below.
We think this mistaken impression is in part due to our old career guide, which featured consulting and other prestigious corporate jobs prominently in our article on career capital. (We explain how our views have changed over time and the mistakes we made presenting them in the appendix.)
We want to clarify that while we think consulting is a good option for career capital early in your career (especially for practical “do-er”
November 3, 2020, 10:32 PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.
November 3, 2020, 11:46 PM: The NY Times, Washington Post and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don’t see how they can figure it out.
What on Earth happens next?
Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result.
Unfortunately the US has no recovery system for a situation like this, unlike Parliamentary democracies, which can just rerun the election a few weeks later.
The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.
Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn’t fair.
Schneier thinks there’s a need to agree how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage.
And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.
He likes the system in Minnesota: you get a paper ballot with ovals you fill in, which are then fed into a computerised reader. The computer reads the ballot, and the paper falls into a locked box that’s available for recounts. That gives you the speed of electronic voting, with the security of a paper ballot.
On the back-end, he wants risk limiting audits that are automatically triggered based on the margin of victory. If there’s a large margin of victory, you need a small audit. For a small margin of victory, you need a large audit.
Those two things would do an enormous amount to improve voting security, and we should move to that as soon as possible.
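To get a feel for why margin drives audit size, here’s a toy calculation. This is an illustration only, not the statistics a real risk-limiting audit (such as the BRAVO method) would use; the formula below simply captures the rough inverse-square relationship between margin and sample size.

```python
import math

def rough_rla_sample_size(margin: float, risk_limit: float = 0.05) -> int:
    """Very rough ballot-polling audit size, scaling with 1/margin^2.

    margin: winner's vote share minus runner-up's (e.g. 0.10 for 55%-45%).
    risk_limit: maximum chance the audit confirms a wrong outcome.

    Simplified for illustration; real risk-limiting audits use sequential
    tests with exact stopping rules.
    """
    if not 0 < margin < 1:
        raise ValueError("margin must be between 0 and 1")
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

# A landslide needs only a small hand count; a squeaker needs a large one.
print(rough_rla_sample_size(0.20))  # wide margin -> few ballots
print(rough_rla_sample_size(0.01))  # razor-thin margin -> many ballots
```

The key point survives the simplification: audits sized this way give strong statistical confidence in clear races almost for free, while concentrating the expensive hand counting on the close races where it matters.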
According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weakness of what they’re designing, because they have a bureaucrat’s rather than hacker’s mindset.
The ideal computer security expert walks into a shop and thinks, “You know, here’s how I would shoplift.” They automatically see where the cameras are, whether there are alarms, and where the security guards aren’t watching.
In this impassioned episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn’t have access to them.
We also cover:
How can we have surveillance of dangerous actors, without falling back into authoritarianism?
When if ever should information about weaknesses in society’s security be kept secret?
How secure are nuclear weapons systems around the world?
How worried should we be about deep-fakes?
The similarities between hacking computers and hacking our biology in the future
Schneier’s critiques of blockchain technology
How technologists could be vital in shaping policy
What are the most consequential computer security problems today?
Could a career in information security be very useful for reducing global catastrophic risks?
What are some of the most widely-held but incorrect beliefs among computer security people?
And more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Article by Benjamin Todd · Last updated October 2019 · First published October 2019
There are two main types of mistakes one can make with career plans: having an overly rigid and specific plan, and having no long-term plans at all. We see both issues in our advising.
In the rest of this article, we give some arguments for and against long-term career planning, and explain how we aim to strike the balance between both types of mistake.
“Plans are useless but planning is essential.” — Dwight D. Eisenhower
We start with the arguments against long-term plans, and then cover the arguments in favour of them.
Some arguments against long-term plans
Many of our readers get paralysed thinking about long-term options. It’s easy to see your choice of career as a single decision that you have to “get right” immediately, creating a lot of anxiety. In reality, most of the time you’re only committing to a job for a couple of years, and you’ll have many opportunities to shift course in the future.
Your preferences will also change over your career (more than you think), the world will change (including which problems are most pressing and what the key bottlenecks are), and you will learn a huge amount about your skills and which options are best.
Most people we know who are having a big impact today wouldn’t, ten years ago, have predicted they’d be doing the kind of work they’re doing now.
This means it’s not useful to make detailed long-term plans.
Blog post by Anonymous · Published October 10th, 2019
This is the second in this series of posts with anonymous answers. You can find the complete collection here.
Blog post by Anonymous · Published October 3rd, 2019
This is the first in a series of posts with anonymous answers to a range of questions. You can find the complete collection here.
If you’ve listened to absolutely everything on our podcast feed, you’ll have heard four interviews with me already, but fortunately I think these two don’t include too much repetition, and I’ve gotten a decent amount of positive feedback on both.
This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics.
At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar.
The second interview is with Jeremiah Johnson on the Neoliberal Podcast. It starts 2 hours and 15 minutes into the recording.
Blog post by Robert Wiblin · Published September 17th, 2019
Briefly, once a year, we at 80,000 Hours ask you to tell us if we’ve helped you have a larger social impact.
We and our donors need to know which of our programs are helping people enough to continue or scale up, and it’s only by hearing your stories that we can make these decisions well.
You can also let us know where we’ve fallen short, which helps us fix problems with our advice.
So, if our podcast, job board, articles, advising or other services have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how:
Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future?
Today’s guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin that added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire.
At the same time, far from indulging hype about these so-called ‘blockchain’ technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, “blockchains as they currently exist are in many ways a joke, right?”
But Buterin is not just a realist. He’s also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals.
By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of ‘cryptoeconomics’. Economist Tyler Cowen has observed that, “at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers — without knowing about the papers at all.”
Though its applications have faced major social and technical problems, Ethereum has been used to crowdsource investment for projects and enforce contracts without the need for a central authority. But the proposals for new ways of coordinating people are far more ambitious than that.
For instance, along with previous guest Glen Weyl, Vitalik has helped develop a model for so-called ‘quadratic funding’, which in principle could transform the provision of ‘public goods’. That is, goods that people benefit from whether they help pay for them or not.
Examples of goods that are fully or partially public goods include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world.
The innovative leap of the ‘quadratic funding’ formula is that individuals can in principle be given the incentive to voluntarily contribute amounts that together signal to a government how much society as a whole values a public good, how much should be spent on it, and where that funding should be directed.
But these and other related proposals face major hurdles. They’re vulnerable to collusion, might be used to fund scams, and have only been tested at a small scale. Not to mention that anything with a square root sign in it is going to struggle to achieve widespread societal legitimacy. Is the prize large enough to justify efforts to overcome these challenges?
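That square root sign is doing real work, and the core formula is simple enough to sketch. In the quadratic funding mechanism (from the ‘Liberal Radicalism’ proposal by Buterin, Zoë Hitzig and Glen Weyl), a project’s subsidised total is the square of the sum of the square roots of individual contributions, with a matching pool paying the difference. A minimal sketch:

```python
import math

def quadratic_funding_match(contributions: list[float]) -> float:
    """Matching-pool top-up under quadratic funding.

    The subsidised total for a project is the square of the sum of the
    square roots of individual contributions; the match is the gap
    between that and what was actually contributed.
    """
    subsidised_total = sum(math.sqrt(c) for c in contributions) ** 2
    return subsidised_total - sum(contributions)

# Many small donors attract a far larger match than one big donor giving
# the same total, because breadth of support signals public value.
print(quadratic_funding_match([1.0] * 100))  # 100 donors giving $1 each
print(quadratic_funding_match([100.0]))      # 1 donor giving $100
```

This is why collusion matters so much: one donor splitting $100 across 100 fake identities looks identical to 100 genuine supporters, which is part of why robust identity systems come up repeatedly in the interview.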
In today’s extensive three-hour interview, Buterin and I cover:
What the blockchain has accomplished so far, and what it might achieve in the next decade;
Why many social problems can be viewed as a coordination failure to provide a public good;
Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work;
His view of ‘effective altruism’ and ‘long-termism’;
The difficulty of establishing true identities and preventing collusion, and why this is an important enabling technology;
Why he is optimistic about ‘quadratic funding’, but pessimistic about replacing existing voting with ‘quadratic voting’;
When it’s good and bad for private entities to censor online speech;
Why humanity might have to abandon living in cities;
And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Blog post by Robert Wiblin · Published August 13th, 2019
As more and more people apply for a job, the value of each extra application goes down. But does it go down quickly, or only very gradually?
This question matters, because for many of the jobs we discuss, lots of people apply and the application process is highly competitive. When this happens, some of our readers have the sense that, if a lot of people are already applying for a job, there’s no point in them applying as well. After all, there must be someone else suitable in the applicant pool already — someone who would do a similarly good job, even if you were to turn down an offer. So, the logic goes, if you take the job, you’re fully ‘replaceable’, and therefore not having much social impact.
By contrast, 80,000 Hours and many of the organisations we help with hiring often feel differently, saying:
Even when many people would be interested in taking a job, the difference between the best and the second best applicant is often large. So losing your best option would still be really costly.
Even when you have a large applicant pool, it’s useful to keep hearing about more potential hires, in the hope of finding someone who’ll be significantly more productive than everyone you’re currently aware of.
Which of these positions is correct? I threw together some simple models in an Excel spreadsheet to explore this disagreement.
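To illustrate the shape of the disagreement, here’s a toy Monte Carlo version of one such model (a stand-in for illustration, not the actual spreadsheet): if each applicant’s quality is an independent standard normal draw, the expected quality of the best applicant keeps rising as the pool grows, but at a diminishing rate.

```python
import random
import statistics

def expected_best(n_applicants: int, trials: int = 20000) -> float:
    """Monte Carlo estimate of the expected quality of the best applicant,
    where each applicant's quality is an independent N(0, 1) draw."""
    random.seed(0)  # fixed seed so repeated runs agree
    return statistics.fmean(
        max(random.gauss(0, 1) for _ in range(n_applicants))
        for _ in range(trials)
    )

# Best-of-N rises quickly at first, then more slowly: extra applicants
# keep adding value, but each one adds less than the last.
for n in (1, 10, 100):
    print(n, round(expected_best(n), 2))
```

Under this toy model, both positions capture something real: the marginal applicant is worth much less in a pool of 100 than in a pool of 10, yet never worth nothing, and the gap between the best and second-best candidate stays meaningful.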
Imagine that, one day, humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?
In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably is.
We could tell them hard-won lessons from history; mention some research questions we wish we’d started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.
But, as Christiano points out, even if we could satisfactorily figure out what we’d like to be able to tell our ancestors, that’s just the first challenge. We’d need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth’s surface quickly gets buried far underground.
But even if we figure out a satisfactory message, and a way to ensure it’s found, a civilisation this far in the future won’t speak any language like our own. And being another species, they presumably won’t share as many fundamental concepts with us as humans from 1700. If we knew a way to leave them thousands of books and pictures in a material that wouldn’t break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?
That’s just one of many playful questions discussed in today’s episode with Christiano — a frequent writer who’s willing to brave questions that others find too strange or hard to grapple with.
We also talk about why divesting a little bit from harmful companies might be more useful than I’d been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide filled conference rooms make us a lot stupider.
Finally, we get a big update on progress in machine learning and efforts to make sure it’s reliably aligned with our goals, which is Paul’s main research project. He responds to the views that DeepMind’s Pushmeet Kohli espoused in a previous episode, and we discuss whether we’d be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.
Some other issues that come up along the way include:
Are there any supplements people can take that make them think better?
What implications do our views on meta-ethics have for aligning AI with our goals?
Is there much of a risk that the future will contain anything optimised for causing harm?
Interested in applying this thinking to your career?
If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.
Some think machine learning could alter 21st century life in a similar way.
In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.
Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.
Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.
Rapid advances in ML, and the many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.
But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?
In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.
We cover:
Why immigration is the main policy area that should be affected by AI advances today.
Why talking about an ‘arms race’ in AI is premature.
How the US could remain the leading country in machine learning for the foreseeable future.
Whether it’s ever possible to have a predictable effect on government policy.
How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
Whether it’s possible to become a China expert and still get a security clearance.
Whether access to ML algorithms can be restricted, or if that’s just not practical.
Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
Whether AI could help stabilise authoritarian regimes.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Have you ever been infuriated by a doctor’s unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won’t tell you the chances you’ll win your case?
Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can’t assess the likelihood of different outcomes we’re in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul’s Drag Race.
Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.
He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.
Along with other psychologists, he identified that many ordinary people are attracted to a ‘folk probability’ that draws just three distinctions — ‘impossible’, ‘possible’ and ‘certain’ — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.
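One way to see why finer-grained probabilities matter is the Brier score, the standard accuracy measure used in forecasting tournaments like Tetlock's. The toy example below is our own illustration (not from the interview): a 'folk' forecaster limited to 'certain' (1.0) or 'possible' (0.5) is beaten by a forecaster who can say 70%.

```python
def brier(prob, outcomes):
    """Brier score: mean squared error between a forecast probability and
    what actually happened (0 = perfect, lower is better)."""
    return sum((prob - o) ** 2 for o in outcomes) / len(outcomes)

# Ten repeated events of the same kind, seven of which occur.
outcomes = [1] * 7 + [0] * 3

print(brier(1.0, outcomes))  # folk 'certain'  -> 0.30
print(brier(0.5, outcomes))  # folk 'possible' -> 0.25
print(brier(0.7, outcomes))  # calibrated 70%  -> 0.21
```

The calibrated forecast scores best because the scoring rule is 'proper': over many events, reporting your honest probability minimises your expected penalty, which is what rewards learning to tell your 70 percents from your 80 percents.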
In the aftermath of Iraq and WMDs, the US intelligence community hired him to figure out how to ensure the same mistakes were never made again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014.
That was five years ago. In today’s interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.
We discuss how his work can be applied to your personal life to answer high-stakes questions, such as how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by Open Philanthropy and Clearer Thinking that teaches you to accurately distinguish your ’70 percents’ from your ’80 percents’.)
We also bring up a few methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions their profession, as it has been for Tetlock over many decades.
We view Tetlock’s work as so core to living well that we’ve brought him back for a second and longer appearance on the show — his first appearance was back in episode 15. Some questions this time around include:
What would it look like to live in a world where elites across the globe were better at predicting social and political trends? What are the main barriers to this happening?
What are some of the best opportunities for making forecaster training content?
What do extrapolation algorithms actually do, and given they perform so well, can we get more access to them?
Have any sectors of society or government started to embrace forecasting more in the last few years?
If you could snap your fingers and have one organisation begin regularly using proper forecasting, which would it be?
When, if ever, should one use explicit Bayesian reasoning?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn’t despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.
The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.
In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroscepticism and Hindu nationalism.
How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?
Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.
He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.
In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren’t quite sure how socially acceptable their feelings would have to become before they revealed them or joined a campaign for change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people who then find a message that can spread their beliefs to millions.
According to Sunstein, it’s “much, much easier” to create social change when large numbers of people secretly or latently agree with you. But ‘preference falsification’ is so pervasive that it’s no simple matter to figure out when they do.
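The interaction of hidden preferences and variable thresholds is often illustrated with Granovetter's classic threshold model of collective behaviour. The sketch below is our own illustration, not from Sunstein's book: each person joins a movement only once the share of people already participating meets their personal threshold, and a tiny shift in one person's threshold can mean the difference between a society-wide cascade and nothing at all.

```python
def cascade(thresholds):
    """Granovetter-style threshold cascade.

    Each person joins once the fraction already participating meets
    their personal threshold; returns the final number of participants.
    """
    n = len(thresholds)
    participating = 0
    while True:
        frac = participating / n
        new = sum(1 for t in thresholds if t <= frac)
        if new == participating:
            return participating
        participating = new

# Two societies with almost identical threshold distributions:
uniform = [i / 100 for i in range(100)]  # thresholds 0.00, 0.01, ..., 0.99
tweaked = list(uniform)
tweaked[1] = 0.02                        # one person is slightly more cautious

print(cascade(uniform))  # 100 -- every single person ends up joining
print(cascade(tweaked))  # 1   -- the cascade stalls with one lone instigator
```

The two societies would look identical to an opinion pollster, yet one produces a revolution and the other fizzles — which is exactly why, on this model, social change is so hard to predict in advance.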
In today’s interview, we debate with Sunstein whether this model of social change is accurate, and if so, what lessons it has for those who would like to steer the world in a more humane direction. We cover:
How much people misrepresent their views in democratic countries.
Whether the finding that groups with an existing view tend towards a more extreme position would survive the replication crisis.
When is it justified to encourage your own group to polarise?
Sunstein’s difficult experiences as a pioneer of animal rights law.
Whether activists can do better by spending half their resources on public opinion surveys.
Should people be more or less outspoken about their true views?
What might be the next social revolution to take off?
How can we learn about social movements that failed and disappeared?
How to find out what people really think.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
When you’re building a bridge, responsibility for making sure it won’t fall over isn’t handed over to a few ‘bridge not falling down engineers’. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.
When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.
Far from being an overhead on the ‘real’ work, it’s an essential part of making AI systems work in any sense. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.
Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.
With the goal of designing systems that reliably do what we want, DeepMind have recently published work on important technical challenges for the ML community.
For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an ‘adversary’ that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
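The adversarial search idea can be sketched in miniature. The snippet below is purely illustrative — a hand-rolled logistic classifier and a simple gradient-ascent attack in the style of the fast gradient sign method, not DeepMind's actual tooling — but it shows the core move: an 'adversary' searching a small neighbourhood of an input for the point where the model performs worst.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed logistic classifier standing in for a trained network.
w = rng.normal(size=8)

def model(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def worst_case_input(x, true_label, eps=0.1, steps=50, lr=0.02):
    """Adversary: nudge x within an eps-ball to maximise the model's
    loss on the true label (gradient ascent with sign steps)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = model(x_adv)
        # Gradient of the negative log-likelihood w.r.t. the input.
        grad = (p - true_label) * w
        x_adv = x_adv + lr * np.sign(grad)        # step uphill in loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the allowed region
    return x_adv

x = rng.normal(size=8)
x_adv = worst_case_input(x, true_label=1)
print(model(x), model(x_adv))  # the adversary drives the correct-class probability down
```

If even the adversary's best effort can't push the model below an acceptable error rate, that's evidence the model conforms to its specification across the whole neighbourhood — a much stronger guarantee than passing a fixed test set.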
In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:
DeepMind’s work on the protein folding problem
Parallels between ML problems and past challenges in software development and computer security
How can you analyse the thinking of a neural network?
Unique challenges faced by DeepMind’s technical AGI safety team
How do you communicate with a non-human intelligence?
How should we conceptualize ML progress?
What are the biggest misunderstandings about AI safety and reliability?
Are there actually a lot of disagreements within the field?
The difficulty of forecasting AI development
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
As an addendum to the episode, we caught up with some members of the DeepMind team to learn more about roles at the organisation beyond research and engineering, and how these contribute to the broader mission of developing AI for positive social impact.
A broad sketch of the kinds of roles listed on the DeepMind website may be helpful for listeners:
Program Managers keep the research team moving forward in a coordinated way, enabling and accelerating research.
The Ethics & Society team explores the real-world impacts of AI, from both an ethics research and policy perspective.
The Public Engagement & Communications team thinks about how to communicate about AI and its implications, engaging with audiences ranging from the AI community to the media to the broader public.
The Recruitment team focuses on building out the team in all of these areas, as well as research and engineering, bringing together the diverse and multidisciplinary group of people required to fulfill DeepMind’s ambitious mission.
There are many more listed opportunities across other teams, from Legal to People & Culture to the Office of the CEO, where our listeners may like to get involved.
They invite applicants from a wide range of backgrounds and skill sets, so interested listeners should take a look at their open positions.