Since the launch of our marketing programme in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
Even so, it seems like there’s considerable room to grow further — we’re not nearly at the ceiling of what we think we can achieve. So, we’re looking for a new team lead to help us bring the marketing team to its full potential.
We anticipate that the right person in this role could help us massively increase our readership, and lead to hundreds or thousands of additional people pursuing high-impact careers.
As some indication of what success in the role might look like, over the next couple of years your team might have:
Cost-effectively deployed $5 million to reach people in our target audience.
Worked with some of the largest and most well-regarded YouTube channels (for instance, we have run sponsorships with Veritasium, Kurzgesagt, and Wendover Productions).
Designed digital ad campaigns that reached hundreds of millions of people.
Driven hundreds of thousands of additional newsletter subscriptions.
Blog post by Cody Fenwick · Published July 16th, 2024
The idea this week: people pursuing altruistic careers often struggle with imposter syndrome, anxiety, and moral perfectionism. And we’ve spent a lot of time trying to understand what helps.
More than 20% of working US adults said their work harmed their mental health in 2023, according to a survey from the American Psychological Association.
Jobs can put a strain on anyone. And if you aim — like many of our readers do — to help others with your career, your work may feel extra demanding.
Work that you feel really matters can be much more interesting and fulfilling. But it can also sometimes be a double-edged sword — after all, your success doesn’t only matter for you but also for those you’re trying to help.
So this week, we want to share a roundup of some of our top content on mental health:
An interview with our previous CEO on having a successful career with depression, anxiety, and imposter syndrome — this is one of our most popular interviews ever. It gives a remarkably honest and insightful account of what struggles with mental health can feel like from the inside, how they can derail a career, and how you can get back on track. It also provides lots of practical tips for how you can navigate these issues, and tries to offer a corrective to common advice that doesn’t work for everyone.
In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.
They cover:
The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
What happens during the brief window in which the US president would have to decide whether to order nuclear retaliation after hearing news of a possible incoming attack.
The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!
If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?
It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.
As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement” — without any “robust, well-founded scientific consensus on how to answer them.” But if AI ‘evals’ and interpretability advance to the point that it’s possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or ‘best-guess’ answers to far more cases.
If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.
That’s because when it’s hard to assess whether a line has been crossed, we usually give people much more discretion. For instance, a journalist who invents an interview that never happened will get fired, because that’s an unambiguous violation of honesty norms — but so long as there’s no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than information that contradicts it.
Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.
To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.
To start, advance investment in preventing, detecting, and containing pandemics would likely have been at a much higher and more sensible level, because it would have been straightforward to confirm which efforts passed a cost-benefit test for government spending. Politicians refusing to fund such efforts when the wisdom of doing so is an agreed and established fact would seem like malpractice.
Low-level Chinese officials in Wuhan would have been seeking advice from AI advisors instructed to recommend actions in the interests of the Chinese government as a whole. As soon as unexplained illnesses started appearing, that advice would have been to escalate and quarantine to prevent a possible new pandemic from escaping control, rather than to stick their heads in the sand, as happened in reality. And once AI advisors had flagged the need to warn national leaders, ignoring the problem would have been a career-ending move.
From there, these AI advisors could have recommended stopping travel out of Wuhan in November or December 2019, perhaps fully containing the virus, as was achieved with SARS-1 in 2003. Had the virus nevertheless gone global, President Trump would have been getting excellent advice on what would most likely ensure his reelection. Among other things, that would have meant funding Operation Warp Speed far more than it in fact was, as well as accelerating the vaccine approval process, and building extra manufacturing capacity earlier. Vaccines might have reached everyone far faster.
These are just a handful of simple departures from the real course of events that we can imagine — in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest here.
In the past we’ve usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts those technologies will create — and the same is likely true for AI.
Carl Shulman and host Rob Wiblin discuss the above, as well as:
The risk of society using AI to lock in its values.
The difficulty of preventing coups once AI is key to the military and police.
What international treaties we need to make this go well.
How to make AI superhuman at forecasting the future.
Whether AI will be able to help us with intractable philosophical questions.
Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
Why Carl doesn’t support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we’re closer to ‘crunch time.’
Opportunities for listeners to contribute to making the future go well.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent’s worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
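As a quick sanity check on that cost figure — taking a typical retail electricity price of roughly $0.15 per kilowatt-hour, an illustrative assumption rather than a figure from the episode:

$$20\,\mathrm{W} \times 1\,\mathrm{h} = 0.02\,\mathrm{kWh}, \qquad 0.02\,\mathrm{kWh} \times \$0.15/\mathrm{kWh} = \$0.003$$

That’s about a third of a cent per hour of brain-equivalent work.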
Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.
Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It’s a world where the technical challenges around controlling robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires — and prompting a rush to build billions of them and cash in.
It’s a world where, overnight, the number of human beings becomes irrelevant to the rate of economic growth, which is instead driven by how quickly the entire machine economy can copy all its components. Judging by how quickly complex biological systems replicate themselves (some can do so in days), the machine economy copying itself every few months could be a conservative estimate.
It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
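To see how fast such growth compounds, suppose — purely for the arithmetic — that the machine economy doubles every four months:

$$2^{3} = 8\text{-fold growth per year}, \qquad 2^{10} = 1024 \approx 1{,}000\text{-fold growth in roughly } 3.3\ \text{years}$$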
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.
And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — and the toughest one to engineer your way around is the Earth’s capacity to shed waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth can radiate into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.
This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?
In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
If we’re heading towards the above, how come economic growth remains slow now and isn’t even accelerating?
Why have computers and computer chips had so little effect on economic productivity so far?
Are self-replicating biological systems a good comparison for self-replicating machine systems?
Isn’t this just too crazy and weird to be plausible?
What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
Might there not be severely declining returns to bigger brains and more training?
Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious, or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Blog post by Cody Fenwick · Published June 21st, 2024
The idea this week: the cynical case against voting and getting involved in politics doesn’t hold up.
Does your vote matter? Around half of the world’s population is expected to see national elections this year, and voters in places like Taiwan, India, and Mexico have already gone to the polls. The UK and France both recently scheduled elections.
And of course, the 2024 US national election campaigns are off and running, with control of the House of Representatives, the Senate, and the White House in contention — as well as many state houses, governorships, and other important offices.
Sometimes people think that their vote doesn’t matter because they’re just a drop in the ocean.
But my colleague Rob has explored the research on this topic, and he concluded that voting can actually be a surprisingly impactful way to spend your time. So it’s not just your civic duty — it can also be a big opportunity to influence the world for the better.
That’s because, while the chance your vote will change the outcome of an election is small, it can still matter a lot given the massive impact governments can have.
To take a simple model: if US government discretionary spending is $6.4 trillion over four years, and you have a 1 in 10 million chance of changing the outcome of the national election, then the expected value of your vote — in terms of influence over that spending alone — is $640,000.
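Spelled out, the arithmetic of that simple model is just:

$$\$6.4\ \text{trillion} \times \frac{1}{10{,}000{,}000} = \$640{,}000$$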
In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.
They cover:
Why our intuitions seem so unreliable for answering fundamental questions about reality.
What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.
They cover:
How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC sped up the development of three vaccines that saved around 700,000 lives in low-income countries.
The challenges of making pull mechanisms work in practice, from initial design through implementation.
Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
The massive benefits of accelerating vaccine development — in some cases even by just a few days or weeks.
The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
“Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.
They cover:
Whether scientific progress is actually net positive for humanity.
Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
Why Matt is sceptical that AGI could really cause explosive economic growth.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
If transformative AI might come soon and you want to help that go well, one strategy you might adopt is building something useful that will improve as AI gets more capable.
That way if AI accelerates, your ability to help accelerates too.
Here’s an example: organisations that use AI to improve epistemics — our ability to know what’s true — and make better decisions on that basis.
This was the most interesting impact-oriented entrepreneurial idea I came across when I visited the San Francisco Bay area in February. (Thank you to Carl Shulman, who first suggested it.)
Navigating the deployment of AI is going to involve successfully making many crazy hard judgement calls, such as “What’s the probability this system isn’t aligned?” and “What might the economic effects of deployment be?”
Being able to make these kinds of decisions a little bit better could therefore be worth a huge amount. And that’s true given almost any future scenario.
So the idea is to set up organisations that use AI to improve forecasting and decision-making in ways that can be eventually applied to these kinds of questions.
Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.
What Zach thinks are the best and worst arguments for settling space.
Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist).
How little we know about how microgravity and radiation affect even adults, much less the children potentially born in a space settlement.
A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.
Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.
How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing.
The current state of space law and how it might set us up for international conflict.
How space cannibalism legal loopholes might work on the International Space Station.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
If we’d been trying to do that in 1950, one thing would have been at the top of everyone’s minds: the terrifying threat of nuclear annihilation. Indeed, many of the world’s greatest thinkers, politicians, and communicators devoted their careers to understanding and reducing the threat — people like Thomas Schelling, Carl Sagan and even, in his later years, Albert Einstein.
Cooling relations between the US and Russia mean that existing arms reduction treaties (like New START) are looking very likely to lapse.
Rising tensions in the Middle East, alongside the collapse of the Iran deal, mean we could very possibly see a new nuclear weapons state in the near future.
In today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India, to save the lives of vulnerable newborn infants.
They cover:
The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.
The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.
The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.
How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond.
How targeted health interventions stack up against direct cash transfers.
Plus, a sneak peek into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Readers naturally focus most on the top of the list. But while we want readers to consider our top-ranked paths (and we think it’s good to be transparent about what we think are the best opportunities to do good), you shouldn’t underrate the personal factors that will make one path or another a better fit for you — both in terms of social impact and personal satisfaction.
So this week we wanted to highlight a few paths and career steps (in no particular order) that we think people should consider if they want to have a lot of impact:
Public discourse shapes the way societies understand and react to key problems in the world, and journalists have a significant role in shaping it. So if you can become an influential journalist, you might be able to have a big impact by drawing attention to pressing world problems, how to solve them, and how to generally think well about these issues.
The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
The occasional tension between ending factory farming and curbing climate change.
How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.
Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is.
As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. So in today’s episode, host Rob Wiblin asks Zvi for his takes on:
US-China negotiations
Whether AI progress has stalled
The biggest wins and losses for alignment in 2023
EU and White House AI regulations
Which major AI lab has the best safety strategy
The pros and cons of the Pause AI movement
Recent breakthroughs in capabilities
In what situations it’s morally acceptable to work at AI labs
Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.
Zvi and Rob also talk about:
The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and the new think tank he founded (Balsa Research) to develop innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
In this episode of 80k After Hours, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to avert civilisational collapse from global catastrophic risks — things like great power war, frontier military technologies, and nuclear winter.
They cover:
How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.
How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.
Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage after a catastrophe — over traditional preventative measures.
Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.
And on a more personal note, Christian’s experience of having a stutter.
Who this episode is for:
People interested in the most cost-effective ways to prevent nuclear war, such as:
Deescalating after accidental nuclear use.
Civil defence and war termination.
Mitigating nuclear winter.
Who this episode isn’t for:
People interested in the least cost-effective ways to prevent nuclear war, such as:
Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.
Creating a TV show called The Real Housewives of Nuclear Winter about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.
A multibillion dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.
Producer: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Ben Cordell and Milo McGuire
Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris
Transcriptions: Katy Moore
Blog post by Arden Koehler · Published March 15th, 2024
The idea this week: working on a highly neglected or pre-paradigmatic issue could be a way to make a big positive difference.
We usually focus on how people can help tackle what we think are the biggest global catastrophic risks. But there are lots of other pressing problems we think also deserve more attention — some of which are especially highly neglected.
Compared to our top-ranked issues, these problems generally don’t have well-developed fields dedicated to them. So we don’t have as much concrete advice about how to tackle them, and they might be full of dead ends.
But if you can find ways to meaningfully contribute (and have the kind of self-directed mindset necessary), doing so could well be your top option.
If we put aside risks of extinction, one of the biggest dangers to the long-term future of humanity might be the potential for an ultra-long-lasting and terrible political regime. As technology advances and globalisation and homogenisation increase, a stable form of totalitarianism potentially could take hold, enabled by improved surveillance, advanced lie detection, or an obedient AI workforce. We’re not sure how big or tractable these risks are, but more research into the area could be highly valuable. Read more.
In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.
They cover:
How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
The importance of hype in making valuable things happen.
How to recognise warning signs that someone is untrustworthy or likely to hurt you.
Whether Registered Reports are successfully solving reproducibility issues in science.
The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
The potential harms of lightgassing, which is the opposite of gaslighting.
How Spencer’s team used non-statistical methods to test whether astrology works.
Whether there’s any social value in retaliation.
And much more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore