Risks from atomically precise manufacturing

Both the risks and benefits of advances in atomically precise manufacturing seem like they might be significant, and there is currently little effort to shape the trajectory of this technology. However, there is also relatively little investment going into developing atomically precise manufacturing, which reduces the urgency of the issue.

Continue reading →

Expression of interest: Head of Operations

80,000 Hours

80,000 Hours provides research and support to help people switch into careers that effectively tackle the world’s most pressing problems.

We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

The internal systems team

This role is on the internal systems team, which exists to build the organisation and systems that help 80,000 Hours achieve its mission.

We oversee 80,000 Hours’ office, tech systems, organisation-wide metrics and impact evaluation, as well as HR, recruiting, finances, and much of our fundraising.

We currently have four full-time staff and some part-time staff, and we receive support from the Centre for Effective Altruism (our fiscal sponsor).

The role

As 80,000 Hours’ Head of Operations, you would:

  • Oversee a wide range of our internal operations, including team-wide processes, much of our fundraising, our office, finances, tech systems, data practices, and external relations.
  • Manage a team of two operations specialists, including investing in their professional development and identifying opportunities for advancement where appropriate.
  • Grow your team to build capacity in the areas you oversee, including identifying 80,000 Hours’ operational needs and designing roles that will address these.
  • Develop our internal operations strategy — in particular,

Continue reading →

    Open position: Marketer

    Applications for this position are now closed.

    We’re looking for a new marketer to help us expand our readership and scale up our marketing channels.

    We’d like to support the person in this role to take on more responsibility over time as we expand our marketing team.

    80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

    We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

    Even so, about 90% of US college graduates have never heard of effective altruism, and we estimate that just 0.5% of students at top colleges are highly engaged in EA. As a marketer with 80,000 Hours, you would help us achieve our goal of reaching all students and recent graduates who might be interested in our work. We anticipate that the right person in this role could help us grow our readership to 5–10 times its current size, and lead to hundreds or thousands of additional people pursuing high-impact careers.

    We’re looking for a marketing generalist who will:

    • Start managing (and eventually own) our two largest existing marketing channels:
      • Sponsorships with people who have large audiences,

    Continue reading →

      #134 – Ian Morris on what big-picture history teaches us

      Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

      Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

      Why such big systematic changes — and why these changes specifically?

      That’s the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

      There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the ‘right’ way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

      In Foragers, Farmers, and Fossil Fuels, Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

      On this theory, it’s technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

      There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

      Big though it is, what drives human values is only one of several major questions Ian has tackled over his career.

      In Why the West Rules—For Now, he set out to understand why the Industrial Revolution happened in England, and why Europe then went on to dominate much of the rest of the world, rather than industrialisation kicking off somewhere else like China, with China going on to establish colonies in Europe. (In a word: geography.)

      In War! What is it Good For?, he tried to explain why violent conflicts sometimes lead to longer lives and higher incomes (i.e. wars build empires, which suppress interpersonal violence internally), while at other times they have the exact opposite effect (i.e. advances in military technology allow nomads to raid and pull apart these empires).

      In today’s episode, we discuss all of Ian’s major books, taking on topics such as:

      • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions
      • Whether or not wars can still lead to less violence today
      • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration
      • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces”
      • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa
      • In what sense Ian thinks Brexit was “10,000 years in the making”
      • The most common misconceptions about macrohistory

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      #133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

      On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

      That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

      Max’s primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity’s future including nuclear war, synthetic biology, and AI.

      Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a podcast and website called ‘Improve The News’ to help readers separate facts from spin.

      But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind.

      You can now give an AI system like GPT-3 the text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

      So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

      He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

      His favourite MIT project so far involved taking a bunch of data from the 100 most complicated or famous physics equations, creating an Excel spreadsheet with each of the variables and the results, and saying to the computer, “OK, here’s the data. Can you figure out what the formula is?”

      For general formulas, this is really hard. About 400 years ago, Johannes Kepler managed to get hold of the data that Tycho Brahe had gathered regarding how the planets move around the solar system. Kepler spent four years staring at the data until he figured out what it meant: that the planets move in elliptical orbits.

      Max’s team’s code was able to discover that in just an hour.
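
      To give a flavour of the idea, here is a minimal sketch of that kind of formula search (my own toy illustration, not the actual code from Max’s group; the planet table and the brute-force loop below are just assumptions for the example). It “rediscovers” Kepler’s third law by testing simple power-law exponents against real planetary data:

      ```python
      import itertools
      import numpy as np

      # (semi-major axis in AU, orbital period in years) for six planets
      data = np.array([
          [0.387, 0.241],   # Mercury
          [0.723, 0.615],   # Venus
          [1.000, 1.000],   # Earth
          [1.524, 1.881],   # Mars
          [5.203, 11.862],  # Jupiter
          [9.537, 29.457],  # Saturn
      ])
      a, T = data[:, 0], data[:, 1]

      # Candidate formulas: T = a**p for simple rational exponents p = num/den.
      # Keep whichever exponent fits the data best.
      best_p, best_err = None, float("inf")
      for num, den in itertools.product(range(1, 5), repeat=2):
          p = num / den
          err = np.mean((a**p - T) ** 2)
          if err < best_err:
              best_p, best_err = p, err

      print(f"best exponent: {best_p}")  # 1.5, i.e. T = a^(3/2), so T^2 = a^3
      ```

      Real symbolic regression systems search a vastly larger space of formulas far more cleverly than this, but the setup is the same: hand over columns of data and ask the machine which equation generated them.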

      Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare?

      Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

      They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.

      They also cover:

      • Whether we would be able to understand what superintelligent systems were doing
      • The value of encouraging people to think about the positive future they want
      • How to give machines goals
      • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
      • Whether we’re sleepwalking into disaster
      • Whether people actually just want their biases confirmed
      • Why Max is worried about government-backed fact-checking
      • And much more

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Know what you’re optimising for

      There is (sometimes) such a thing as a free lunch

      You live in a world where most people, most of the time, think of things as categorical, rather than continuous. People either agree with you or they don’t. Food is healthy or unhealthy. Your career is ‘good for the world,’ or it’s neutral, or maybe it’s even bad — but it’s only the category that matters, not the size of the benefit or harm. Ideas are wrong, or they are right. Predictions end up confirmed or falsified.

      In my view, one of the central ideas of effective altruism is the realisation that ‘doing good’ is not such a binary. That it matters not just whether we help others at all, but how much we help. That helping more is better than helping less, and helping a lot more is a lot better.

      For me, this is also a useful framing for thinking rationally. Here, rather than ‘goodness,’ the continuous quantity is truth. The central realisation is that ideas are not simply true or false; they are all flawed attempts to model reality, and just how flawed is up for grabs. If we’re wrong, our response should not be to give up, but to try to be less wrong.

      When you realise something is continuous that most people are treating as binary,

      Continue reading →

      #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

      If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

      This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

      Today’s guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic, focusing on computer and information security. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store them on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

      The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

      If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

      If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

      As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

      If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

      We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

      In today’s conversation, Rob and Nova cover:

      • How good or bad information security is today
      • The most secure computer systems that exist today
      • How to design an AI training compute centre for maximum efficiency
      • Whether ‘formal verification’ can help us design trustworthy systems
      • How wide the practical gap is between AI capabilities and AI safety
      • How to disincentivise hackers
      • What listeners can do to strengthen their own security practices
      • Jobs at Anthropic
      • And a few more things as well

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell and Beppe Rådvik
      Transcriptions: Katy Moore

      Continue reading →

      #131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

      “We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”

      If you were a contestant on such a TV show, you’d love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.

      Today’s guest, Lewis Dartnell, has gone as far as anyone in compiling this information, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.

      But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around.

      As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it’s just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”

      So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology as the microchips used in computers — so actually making solar panels would be incredibly difficult.

      Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.

      A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.

      The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.

      This is clearly not a trivial task. Lewis’s own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.

      And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?

      If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.

      As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.

      They cover:

      • The biggest impediments to bouncing back
      • The reality of humans trying to actually do this
      • The most valuable pro-resilience adjustments we can make today
      • How to recover without much coal or oil
      • How to feed the Earth in disasters
      • And the most exciting recent findings in astrobiology

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Let’s get serious about preventing the next pandemic

      I recently argued that it’s time for our community to be more ambitious.

      And when it comes to preventing pandemics, it’s starting to happen.

      Andrew Snyder-Beattie is a programme officer at Open Philanthropy, a foundation which has more than $1 billion available to fund big pandemic-prevention projects. He and Ethan Alley (co-CEO at Alvea) recently wrote an exciting list of projects they’d like to see founded, including:

      • An international early detection centre
      • Actually good PPE
      • Rapid and broad-spectrum antivirals and vaccines
      • A bioweapons watchdog
      • Self-sterilising buildings
      • Refuges

      One of these ideas is already happening. Alvea aims to produce a cheap, flexible vaccine platform using a new type of vaccine (DNA vaccines), starting with an Omicron-specific shot. In two months, they hired 35 people and started preclinical trials.

      The above are technical solutions, which make it possible for a relatively small number of people to make a significant difference to the problem. But policy change is also an important angle.

      Guarding Against Pandemics was an effort to lobby the US government for $30 billion in funding for pandemic prevention. Unfortunately the relevant bill didn’t pass, but the sum at stake made it clearly worth trying.

      Efforts in the UK might be more successful —

      Continue reading →

      Clay Graubard and Robert de Neufville on forecasting the war in Ukraine

      In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.

      They cover:

      • Their early predictions for the war
      • The performance of the Russian military
      • The risk of use of nuclear weapons
      • The most interesting remaining topics on Russia and Ukraine
      • General lessons we can take from the war
      • The evolution of the forecasting space
      • What Robert and Clay were reading back in February
      • Forecasters vs. subject matter experts
      • Ways to get involved with the forecasting community
      • Impressive past predictions
      • And more

      Who this episode is for:

      • People interested in forecasting
      • People interested in the war in Ukraine
      • People who prefer to know how likely they are to die in a nuclear war

      Who this episode isn’t for:

      • People who’d hate it if a friend said they were 65% likely to come out for drinks
      • People who’d prefer if their death from nuclear war was a total surprise

      Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

      Continue reading →

      #130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

      Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you’re all bunched up on a few tables in a basement office.

      But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You’re the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

      You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have.

      This is roughly the situation faced by today’s guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

      Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

      While surely a huge success, it brings with it risks that he’s never had to consider before:

      • Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
      • Being seen as profligate could strike onlookers as selfish and disreputable.
      • Folks might start pretending to agree with their agenda just to get grants.
      • People working on nearby issues that are less flush with funding may end up resentful.
      • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
      • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

      But all these ‘risks of commission’ have to be weighed against ‘risk of omission’: the failure to achieve all you could have if you’d been truly ambitious.

      Having people look askance at you for paying high salaries to attract the staff you want is unpleasant.

      But failing to prevent the next pandemic because you didn’t have the necessary medical experts on your grantmaking team is worse than unpleasant — it’s a true disaster. Yet few will complain, because they’ll never know what might have been if you’d only set frugality aside.

      Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today’s episode, Rob and Will discuss the above as well as:

      • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
      • Why are so many nonfiction books full of factual errors?
      • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
      • What does Will disagree with his colleagues on?
      • Should we focus on existential risks more or less the same way, whether we care about future generations or not?
      • Are potatoes one of the most important technologies ever developed?
      • And plenty more.

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Climate change

      Climate change is going to significantly and negatively impact the world. Its impacts on the poorest people in our society and our planet’s biodiversity are cause for particular concern. Looking at the worst possible scenarios, it could be an important factor that increases existential threats from other sources, like great power conflicts, nuclear war, or pandemics. But because the worst potential consequences seem to run through those other sources, and these other risks seem larger and more neglected, we think most readers can have a greater impact in expectation working directly on one of these other risks.

      We think your personal carbon footprint is much less important than what you do for work, and that some ways of making a difference on climate change are likely to be much more effective than others. In particular, you could use your career to help develop technology or advocate for policy that would reduce our current emissions, or research technology that could remove carbon from the atmosphere in the future.

      Continue reading →

      Effective altruism and the current funding situation

      This post gives an overview of how I’m thinking about the “funding in EA” issue, building on many conversations. Although I’m involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk which has a related theme, though with different emphases. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.

      Main points

      • EA is in a very different funding situation than it was when it was founded. This is both an enormous responsibility and an incredible opportunity.
      • It means the norms and culture that made sense at EA’s founding will have to adapt. It’s good that there’s now a serious conversation about this.
      • There are two ways we could fail to respond correctly:
        • By commission: we damage, unnecessarily, the aspects of EA culture that make it valuable; we support harmful projects; or we just spend most of our money in a way that’s below the bar.
        • By omission: we aren’t ambitious enough, and fail to make full use of the opportunities we now have available to us. Failure by omission is much less salient than failure by commission, but it’s no less real, and may be more likely.
      • Though it’s hard, we need to inhabit both modes of mind at once.

      Continue reading →

      Data collection for AI alignment

      Why might becoming an expert in data collection for AI alignment be high impact?

      We think it’s crucial that we work to positively shape the development of AI, including through technical research on how to ensure that any potentially transformative AI we develop does what we want it to do (known as the alignment problem). If we don’t find ways to align AI with our values and goals — or worse, don’t find ways to prevent AI from actively harming us or otherwise working against our values — the development of AI could pose an existential threat to humanity.

      There are lots of different proposals for building aligned AI, and it’s unclear which (if any) of these approaches will work. A sizeable subset of these approaches requires humans to give data to machine learning models, including AI safety via debate, microscope AI, and iterated amplification.

      These proposals involve collecting human data on tasks like:

      • Evaluating whether a critique of an argument was good
      • Breaking a difficult question into easier subquestions
      • Examining the outputs of tools that interpret deep neural networks
      • Using one model as a tool to make a judgement on how good or bad the outputs of another model are
      • Finding ways to make models behave badly (e.g. generating adversarial examples by hand)

      Collecting this data —

      Continue reading →

      #129 – Dr James Tibenderana on the state of the art in malaria control and elimination

      The good news is deaths from malaria have been cut by a third since 2005. The bad news is it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa.

      We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth’s surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019?

      That’s one of many questions I put to today’s guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad.

      In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far.

      And as James explains, while COVID-19 may have an ‘R’ (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens of people, allowing cases to quickly explode.

      The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where they’ve reached people, but there are tens of millions of young children yet to be covered, simply due to a lack of funding.

      Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there’s enthusiasm to find new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria.

      The RTS,S vaccine is the first-ever vaccine that attacks a protozoan parasite, as opposed to a virus or bacterium. Under development for decades, it’s a great scientific achievement. But James points out that even after three doses, it’s still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half.

      On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. Of course these genes would normally be eliminated by natural selection, but by using a ‘gene drive,’ you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread through the population and, at low cost, ultimately eliminate the mosquitoes that carry malaria, thereby largely ridding the world of the disease.
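
      To see why a drive would spread so fast, here is a toy calculation (my own illustrative sketch, not something from the episode: the drive_frequency function and its assumptions of random mating, no fitness cost, and perfect inheritance are invented purely for the example):

      ```python
      # Toy model of a 'perfect' gene drive: every offspring of a carrier
      # inherits the drive allele. With drive allele frequency q, the
      # non-drive allele frequency (1 - q) is squared each generation,
      # so the drive races towards fixation.

      def drive_frequency(q, generations):
          history = [q]
          for _ in range(generations):
              q = 1 - (1 - q) ** 2
              history.append(q)
          return history

      print([round(x, 3) for x in drive_frequency(0.01, 10)])
      # [0.01, 0.02, 0.039, 0.077, 0.149, 0.275, 0.474, 0.724, 0.924, 0.994, 1.0]
      ```

      Real gene drives are leakier than this, and resistance can evolve, but the basic dynamic is why a released drive could sweep through an entire mosquito species within a relatively small number of generations.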

      Because a single country embracing this method would have global effects, James cautions that it’s important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we’ve made a mistake. Groups like Target Malaria are working on exactly these two issues.

      James also emphasises that with thousands of similar mosquito species out there, most of which don’t carry malaria, for better or worse gene drives may not make any difference to the total number of mosquitoes.

      In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including:

      • How malaria spreads and the symptoms it causes
      • The use of insecticides and poison baits
      • How big a problem insecticide resistance is
      • How malaria was eliminated in North America and Europe
      • Whether funding is a key bottleneck right now
      • The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world
      • And much more

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ryan Kessler
      Transcriptions: Katy Moore

      Continue reading →

      Leadership change at 80,000 Hours

      Hi readers!

      We’ve decided that Howie will become CEO and that I’ll become President of 80,000 Hours.

      After ten years in the role, I’d become less excited about overseeing several aspects of the organisation’s ongoing operations. We asked the board to investigate, and they recommended Howie Lempel as the best person to take the org to its next level of scale.

      In the President role I hope I’ll be able to focus on my most valuable contributions – providing advice on org strategy & the website, writing, and helping with outreach – and won’t have set responsibilities.

      I also have a growing list of other projects in effective altruism that I’m excited to explore.

      Howie and I expect the transition to be smooth – in part because Howie is already doing several parts of the role as Chief of Staff. We intend for Howie to officially become CEO this week, and to complete the transfer in about a month.

      I’m excited to explore this new role and for 80,000 Hours to continue growing and getting the next generation working on the world’s most pressing problems.

      Ben

      Note from Howie:

      Hi everyone,

      I’m really looking forward to taking on this new role and leading 80,000 Hours as we continue to grow.

      I’m going to send an initial update on our plans as part of our post-Q2 email update.

      Continue reading →

      #128 – Chris Blattman on the five reasons wars happen

      In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.

      Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.

      The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today’s episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what those researchers think they’ve learned.

      Chris’s first point is that while organised violence may feel like it’s all around us, it’s actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.

      In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn’t — so they can see what a healthy society looks like and what’s missing in the places where war does take hold.

      Chris argues that social scientists have generated five cogent models of when war can be ‘rational’ for both sides of a conflict:

      1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
      2. Intangible incentives — such as an intrinsic desire for revenge.
      3. Uncertainty — such as both sides underestimating each other’s resolve to fight.
      4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
      5. Misperceptions — such as our inability to see the world through other people’s eyes.

      In today’s interview, we walk through how each of the five explanations work and what specific wars or actions they might explain.

      In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).

      The interview also covers:

      • What Chris and Rob got wrong about the war in Ukraine
      • What causes might not fit into these five categories
      • The role of people’s choice to escalate or deescalate a conflict
      • How great power wars or nuclear wars are different, and what can be done to prevent them
      • How much representative government helps to prevent war
      • And much more

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      My experience with imposter syndrome — and how to (partly) overcome it

      I’ve felt like an imposter since my first year of university.

      I was accepted to the university that I believed was well out of my league — my ‘stretch’ school. I’d gotten good grades in high school, but I’d never seen myself as especially smart: I wasn’t selected for gifted programmes in elementary school like some of my friends were, and my standardised test scores were in the bottom half of those attending my university.

      I was pretty confident I got into the university because of some fluke in the system (my top hypothesis was that I was admitted as part of an affirmative action initiative) — and that belief stayed with me (and was amplified) during the decade that followed.

      Throughout that decade, there was evidence at various points that I really was good at my work, but I could always come up with an explanation for why that evidence was unreliable.

      For example, as an undergraduate, I was the only first-year student in my biology department to get a research internship at the Mayo Clinic — one of the most prestigious biomedical institutions in the US. But I felt I only got the internship because I’d met the right person at the right time, and tricked them into thinking I was smarter than I was by saying smart-sounding things.

      Likewise, during my final year of university, I was given an award for being the top performer in my sociology department.

      Continue reading →

      Open position: writer

      About the 80,000 Hours web team

      80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

      We’ve had over 8 million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Community Survey.

      Our articles are read by thousands, and are among the most important ways we help people shift their careers towards higher-impact options.

      The role

      As a writer, you would:

      • Research, outline, and write new articles for the 80,000 Hours website — e.g. new career reviews.
      • Rewrite or update older articles with information and resources — e.g. about rapidly evolving global problems.
      • Generate ideas for new pieces.
      • Talk to experts and readers to help prioritise our new articles and updates.
      • Generally help grow the impact of the site.

      Some of the types of pieces you could work on include:

      Continue reading →

        #127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good

        This podcast highlighted Sam Bankman-Fried as a positive example of someone ambitiously pursuing a high-impact career. To say the least, we no longer endorse that. See our statement for why.

        The show’s host, Rob Wiblin, has also released some personal comments on this episode and the FTX bankruptcy on The 80,000 Hours Podcast feed, which you can listen to here.

        If you were offered a 100% chance of $1 million to keep for yourself, or a 10% chance of $15 million, it would make total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.

        But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome — and so swing for the fences.
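
        Spelling out that expected value arithmetic (a worked example added for clarity, not a quote from the episode):

        $$
        \text{EV}_{\text{gamble}} = 0.1 \times \$15\text{ billion} = \$1.5\text{ billion},
        \qquad
        \text{EV}_{\text{sure thing}} = 1.0 \times \$1\text{ billion} = \$1\text{ billion}.
        $$

        For an individual, the extra millions barely improve life, so the sure thing wins; for a donor whose giving is small relative to the world’s problems, each dollar does roughly as much good as the last, so the gamble’s 50% higher expected value is what matters.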

        This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.

        Added 30 November 2022: What I meant to refer to as totally rational in the above paragraph is thinking about the ‘expected value’ of one’s actions, not maximizing expected dollar returns as if you were entirely ‘risk-neutral’. See clarifications on what I (Rob Wiblin) think about risk-aversion here.

        Despite that, Sam still drives a Corolla and sleeps on a beanbag, because the only reason he started FTX was to make money to give it away. In 2020, when he was 5% as rich as he is now, he was nonetheless the second biggest individual donor to Joe Biden’s general election campaign.

        In today’s conversation, Sam outlines how at every stage in FTX’s development, he and his team were able to choose the high-risk path to maximise expected value — precisely because they weren’t out to earn money for themselves.

        This year his philanthropy has kicked into high gear with the launch of the FTX Future Fund, which has the initial ambition of giving away hundreds of millions a year and hopes to soon escalate to over a billion a year.

        The Fund is run by previous guest of the show Nick Beckstead, and embodies the same risk-loving attitude Sam has learned from entrepreneurship and trading on financial markets. Unlike most foundations, the Future Fund:

        • Is open to supporting young people trying to get their first big break
        • Makes applying for a grant surprisingly straightforward
        • Is willing to make bets on projects it thinks will most likely fail, just because they have positive expected value.

        Their website lists both areas of interest and more concrete project ideas they are looking to support. The hope is these will inspire entrepreneurs to come forward, seize the mantle, and be the champions who actually make these things happen. Some of the project proposals are pretty natural, such as:

        Some might raise an eyebrow:

        And others are quirkier still:

        While these ideas may seem pretty random, they all stem from a particular underlying moral and empirical vision that the Future Fund has laid out.

        In this conversation, we speak with Sam about the hopes he and the Fund have for how the long-term future of humanity might go incredibly well, the fears they hold about how it could go incredibly badly, and what levers they might be able to pull to slightly nudge us towards the former.

        Listeners who want to launch an ambitious project to improve humanity’s future should not only listen to the episode, but also look at the full list of the kind of things Sam and his colleagues are hoping to fund, see if they’re inspired, and if so, apply to get the ball rolling.

        On top of that we also cover:

        • How Sam feels now about giving $5 million to Biden’s general election campaign
        • His fears and hopes for artificial intelligence
        • Whether or not blockchain technology actually has useful real-world applications
        • What lessons Sam learned from some serious early setbacks
        • Why he fears the effective altruism community is too conservative
        • Why Sam is as authentic now as he was before he was a celebrity
        • And much more.

        Note: Sam has donated to 80,000 Hours in the past

        Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

        Producer: Keiran Harris
        Audio mastering: Ben Cordell
        Transcriptions: Katy Moore


        Continue reading →