Get this episode by subscribing to our podcast: search for 80,000 Hours wherever you get your podcasts.
Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.
In this episode of the 80,000 Hours Podcast Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up. We also discuss how she came up with the term ‘open source software’ (and how she had to get someone else to propose it).
Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.
We dive into:
- Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?
- Whether the poor security of computer systems poses a catastrophic risk for the world.
- Could all our essential services be taken down at once? And if so, what can be done about it? Christine makes a radical proposal for solving the problem.
- Will AIs designed for wide-scale automated hacking make computers more or less secure?
- Would it be good to radically extend human lifespan? Is it sensible to cryogenically freeze yourself in the hope of being resurrected in the future?
- Could atomically precise manufacturing (nanotechnology) really work? Why was it initially so controversial and why did people stop worrying about it?
- Should people who try to do good in their careers work long hours and take low salaries? Or should they take care of themselves first of all?
- How she thinks the effective altruism community resembles the scene she was involved with when she was young, and where it might be going wrong.
If you subscribe to The 80,000 Hours Podcast, you can listen at leisure on your phone, speed up the conversation if you like, and find out about future episodes. You can do so by searching for ‘80,000 Hours’ in your podcasting app (include the comma).
Below you’ll find the full interview, along with a coaching application form, brief summary and extra resources to learn more.
Three key points
- A social scene similar to the effective altruism community existed in the Bay Area in the 70s, focussed on moving humans into space to deal with environmental limits. In the late 80s, as space colonisation seemed increasingly far-off, some people in that scene moved on to thinking about the risks posed by new technologies. The focus was initially nanotechnology, and later biotechnology and artificial intelligence as further analysis suggested nanotechnology was not such a large risk. Some of those people, including Christine, work at the Foresight Institute. They try to fill the gap left by for-profit companies that push for rapid technological advances, by trying to foresee and avert the dangers future technologies will pose, while also promoting positive uses of these technologies.
- Present day computer systems are fundamentally insecure, allowing hacking by state-level actors to take down almost any service on the internet, including essential services such as the electricity grid. Automated hacking by algorithms in future could allow computer systems around the world to be rapidly taken down. Christine believes the only way to effectively deal with this problem is to change the operating systems we all use to those that have been designed for maximum security from the ground up. Christine and two colleagues recently released a paper on tackling this issue.
- It’s important to take care of your own health and welfare in order to be able to continue working hard on useful things for decades. Christine also advocates young people making risky bets on difficult projects to tackle the world’s biggest problems while they still have the flexibility to do so. Her impression was that effective altruism used to be too focussed on maximising easily measured outcomes, but this is improving now.
- We also discuss life extension research, cryonics, and how to choose a life partner.
Extra resources to learn more
- Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks by Christine Peterson, Mark S. Miller and Allison Duettmann
- Engines of Creation: The Coming Era of Nanotechnology by Eric Drexler
- Making sense of long-term indirect effects – Robert Wiblin, EA Global 2016
- Podcast: We aren’t that worried about the next pandemic. Here’s why we should be – and specifically what we can do to stop it.
- Participate in Foresight’s Vision Weekend, Dec. 2-3, San Francisco
- Volunteer for the Foresight Institute
Hey podcast listeners, this is Robert Wiblin, director of research at 80,000 Hours.
I recorded this episode with Christine at Effective Altruism Global in San Francisco last month.
Christine has been working on risks from new technologies for decades. She starts by describing the young and idealistic social scene in Silicon Valley that she was part of in the 80s, which in some ways resembles today’s effective altruism community.
We then talk about how the fundamental lack of security in our computer systems could end up posing a threat to civilization.
We then talk about how you can plan out your life in order to accomplish more, including taking proper care of yourself, choosing a good life partner and taking risks at the right time.
As always you can apply for coaching if you want to work on any of the problems discussed in this episode. You can subscribe by searching for 80,000 Hours in your podcasting software.
And now I bring you Christine Peterson.
Robert Wiblin: Today, I’m speaking with Christine Peterson. Christine is co-founder of the Foresight Institute, a nonprofit focused on speeding up the benefits and reducing the risks from coming revolutionary technologies, especially nanotechnology, AI, and longevity advances. She’s also credited with coining the term open source software. Christine also thinks the EA community should explore the high-leverage opportunities available when working on problems at the earliest possible upstream stages, where measurement is most challenging, so we’ll get to discussing that. Thanks for coming on the podcast, Christine.
Christine Peterson: Oh, it’s so fun to be here, Rob.
Robert Wiblin: We’re going to spend a lot of time going through Christine’s perspective on a bunch of different technologies, and how could they affect the world, and make the future either better or worse, but first it looks like you’ve done a ton of stuff in the course of your life. Tell us a bit about how you ended up where you are today.
Christine Peterson: Wow. When I was growing up, I wasn’t … I can’t say I was particularly altruistic. I wasn’t raised to be especially altruistic. My parents didn’t spend a huge amount of time doing charitable work, but when I went away to college, I met a new kind of folks, people who were focused on very ambitious goals. At that time, the primary focus among young people was environmental, as it often is today as well. That was the overriding concern among young people at that point, and so we were all searching for answers. How can we solve the environmental problems facing the Earth today?
Robert Wiblin: How did you first get pulled into efforts to try to make a really big difference to the world? What were you doing when you were in your twenties or thirties?
Christine Peterson: The first altruistic effort that really got my attention was, oddly, perhaps, space settlement. The listeners may say, “What? How is that your number one? How is this altruistic, and number two, how does this fit in with this environmental concern?” At the time, back then, the concern was running out of resources and overpopulation. Those were the two main issues. We weren’t really looking so much at climate change back then, so we were thinking about, how can we support larger numbers of people? At that time, the population was growing. It wasn’t slowing as it seems to be now, and also where can we find more resources? We were looking for, where can people live? Where are there resources? The Earth is by definition limited, and remember, this was not that long after the big space program of the US, so we were all very aware. We’d grown up watching tremendous numbers of space launches, and men walking on the moon for the first time. It was very exciting, so we were very aware of space, and space resources.
It was starting to become known that the asteroids had tremendous amounts of resources. We were starting to learn what’s out there, and realized wow, there actually are resources out there, especially immense amounts obviously of solar power, more than you could ever use, 24 hours a day up there, of course, and continuous. We thought, wow. There’s energy. There’s resources. You could actually live in space, and this at the time was a relatively new idea. Prior to that, only in science fiction was that explored. It wasn’t taken seriously, but increasingly this was seen as an actual option, and I think it is a real option. It will happen someday, so we young, idealistic people were saying, “Hey, let’s do space settlement as another way to deal with environmental issues.” We didn’t pretend it solved all the problems, but it would clearly help relieve the overpopulation burden. It would make a lot more resources available to the human species, without having to continually take them out of the Earth.
The idea was that it would lift the burden of human civilization off our fragile biosphere, and at the same time, as we all know, right now we have all our eggs in one basket here on Earth. There are existential risks that could occur that would actually wipe out all life on Earth, and so colonizing space is another way to deal with that. It has an existential risk benefit as well.
Robert Wiblin: This is the Elon Musk SpaceX strategy, trying to go to Mars.
Christine Peterson: It is, although because this preceded Elon, we were looking initially more at the moon, but also free-standing space settlements, perhaps at the L5 point. Some of your older listeners may remember the L5 Society, which was a very idealistic, very young organization dedicated to building free-standing space settlements, where you create the gravity by rotating the settlement.
Robert Wiblin: Right, so I don’t know that much about the history of this movement. What became of it?
Christine Peterson: I think the basic ideas are still there. Obviously people like Elon Musk are carrying forward the concepts, and I think free-standing space settlements will happen eventually when the economic time is right. I think we were too early, but I think the basic concepts we developed were right. I think they will happen eventually. The activists from that movement largely moved on to nanotechnology, which is what I did myself, when we realized wow, this is another way to address the huge environmental challenges that we want to take on.
Robert Wiblin: Is that because they thought it was a more promising technology, they could see more of a path towards nanotechnology?
Christine Peterson: I would say it was … We all understood that the space settlement vision, although technically feasible, is extremely expensive to start. Once you get it going, yes, then you can mine the asteroids, and there’s tremendous value there, but the up-front costs are immense, so … The US at this point, the space program was kind of faltering, and we could see, wow, this is not taking off as we had hoped, as fast as we wished, but nanotechnology is based on the science of chemistry, and that’s a small science. The investments compared to space are more manageable, so we became a little more practical, which is kind of typical of … You get your-
Robert Wiblin: People as they get older, and-
Christine Peterson: It’s true. You say, “All right, let’s … Now we really want to get something done that is actually gonna work.” I think Foresight attracted a lot of these former, super-idealistic young people who were starting to, instead of being in their twenties, now they’re in their late twenties, they’re in their early thirties, and they’re looking for, all right, how can we get more leverage to help our environmental problems?
Robert Wiblin: In the last few years, you’ve encountered the effective altruism movement. What do you make of it? Is it similar to the groups you were involved with when you were in your twenties?
Christine Peterson: Absolutely, and it’s the same kind of folks, extremely intelligent people, quite idealistic, quite ambitious, which I think is appropriate. I think every, when you have a new generation of extremely intelligent, very idealistic people, you want them to be ambitious. You want them to take on the hardest problems in the world, and that’s what effective altruism is doing.
Robert Wiblin: Yeah. Do you have any advice for us? Do we need to maybe get more practical and focus on things that are easier to do, like the space people did in the eighties and seventies?
Christine Peterson: My initial exposure to effective altruism was to some of the earliest documents and the earliest visions, and in some of those there was a very high emphasis on measurement. There was a lot of discussion of bed nets. We all like bed nets… They’re a good thing, right? We like bed nets. However, somehow I got the impression that it was over-emphasized, and that effective altruists were perhaps overly focused on measurement, overly focused on near-term goals, and I … My gut reaction was, no, no. You guys are the most intelligent, most ambitious, most energetic. You’re at a time of your life where you don’t have a lot of burdens on you. You’re not raising kids yet. Now is not the time to focus on near-term, easy to measure goals. Now is the time to take on the biggest, hardest, most revolutionary things you possibly can, and throw yourselves at them, because some of you will succeed.
Not most of you, but some of you will succeed, and that’s super important, so we don’t … What we do not want to do is all go work on Wall Street to make money for bed nets. It would help with the bed net issue, but there are bigger … We can get much more leverage by taking on harder problems, and that’s why I’m kind of advocating people look at problems that are challenges that are longer term, more abstract. You don’t get the warm fuzzies that you get from things like bed nets. If you work on something that’s quantifiable, and that saves lives, you get major warm fuzzies from that. The idea that I personally saved 100 lives, that’s huge, right? However, my advocacy would be that only people who need warm fuzzies should do things that produce warm fuzzies, because if there’s … I know there’s some fraction of effective altruists who don’t need that. They are very abstract thinkers. They’re very long term. They generate their own excitement about what they’re doing, and they have a small group of friends who also feel that way.
They can get all the social support they need from that. They don’t need external …
Robert Wiblin: Validation.
Christine Peterson: They don’t need it, no, and they don’t need very short-term rewards. They have very long time horizons, and then those of you who are listening who have long time horizons hopefully are resonating with this and saying, “Yes, I don’t need these short-term rewards. I am willing to work on a project for 20, 30, 40 years. I’m even willing to work on projects that extend beyond my own lifespan. I will do that.” Human poverty. We are not going to fix that soon. That’s a really hard problem. The environmental issues are a hard problem, so if you want to work on those, you have to be willing to really postpone gratification, but if you’re good at that, and I know some of you are very good at that, I would urge you to do it. Take on something super hard, because the number of people on the planet who will do that is tiny, so we need all of you who can do it to do it.
Robert Wiblin: I think that’s a fair criticism, perhaps, that even in the early days of effective altruism, when I was involved I think, we were talking in some places about these really big technological challenges, and how you could get, have very high-risk, high return projects. Most people, if you only read about it for 15 minutes or an hour, then you would mostly encounter the ideas that were easiest to explain, that you put first, which is, yeah, bed nets, and quantification, and all of that. I think fortunately, the focus is getting brought up as we mature. Have you found effective altruism changing in the culture in the groups that you’re involved with? We’re relatively new, but are we having any impact as far as you can see?
Christine Peterson: Yes, I think so. I think because it’s a broad movement, I think it’s attracting more young people to come together in one movement. Before, I think we probably had perhaps … We had a lot of young people involved in many different causes, but they weren’t coordinating. They weren’t coming together at the kind of meeting we’re at today, Effective Altruism Global, where you have people from throughout the movement coming together at least once a year to compare notes, to give each other … I said that you don’t get warm fuzzies in some of our work. Human beings do need some positive feedback, and the way we do it is at these kinds of events. We see our friends, maybe someone we only see once a year, but these are people we feel very close to. When they say, “I really admire what you’re doing,” that can last you a whole year.
Robert Wiblin: I agree. You’re a co-founder at the Foresight Institute, which has been around for a couple of decades, and you’re still involved with that. What does the Foresight Institute do?
Christine Peterson: Our goal is to maximize the benefits and minimize the downsides of coming advanced technologies. Our primary focus has been on nanotechnology. We are permitted by our originating documents to take on any technology. We are ramping up in our work on artificial intelligence. Just this Thursday, we were taking advantage of the fact that so many effective altruists were coming into town here for this event that we said, “Wow, let’s get them in one day early and do a satellite event on artificial general intelligence.” Because so many great people were coming into town, plus the fact that we have a lot of good folks here in the Bay Area as well, we got together an excellent group of folks to brainstorm about the future of artificial intelligence, and how to, again, maximize the benefits, but more important, in terms of AI, is minimize the downsides. The for-profit sector focuses on delivering benefits. The nonprofit sector, at least for Foresight, we focus on minimizing downsides, because for-profit companies don’t do that, and the government is very slow.
The government hasn’t even figured out about AI at all yet, and they’re not going to notice it until it’s way too late. As you know, as your listeners probably know, there are a number of groups out there now, some of them pretty well-funded, looking at the future of AI. It turns out they don’t talk to each other enough, and that was something we had kind of noticed, and we said, “Well, we can work … We can fix that, because they’re coming here. Let’s pull them in a day early and do a heavy duty, serious workshop and make them really work together.” We did that. It worked really well. I’m thrilled about it.
Robert Wiblin: What kind of specific things did you talk about?
Christine Peterson: The initial workshop goal was to take note of the fact that time frames for AGI have shortened. People have been kind of noticing that, either one by one or in small groups.
Robert Wiblin: You mean people are expecting artificial general intelligence to be invented sooner.
Christine Peterson: That’s right.
Robert Wiblin: What kind of times are we talking about?
Christine Peterson: Let’s see, so you always get a range, no matter any … Whatever group, even if you ask one person, you never get a single number, and that’s correct. We can’t predict the future. However, if you picture kind of a bell curve, and say, “Where is the … Where do we start seeing significant percentages of chances?” We’re starting to hear some numbers under 10 years, which is kind of surprising. It’s certainly-
Robert Wiblin: Bit alarming.
Christine Peterson: It is a little alarming, yes. That doesn’t mean that’s the peak of the bell curve. The peak of the bell curve is still out there farther, but people are realizing, gee, there is some chance, some non-trivial chance that it could be under 10 years, and that means we have basically no time. That changes strategies. If that’s true, then the AI organizations’ primary focus may stay with the longer time frames, because statistically that’s still perhaps likely, but we need to have strategies that are robust across a variety of time frames, including relatively near-term ones.
Robert Wiblin: Perhaps different people adopting different strategies. Some people are focusing on, what do we do if artificial intelligence is invented in eight years, and what do we do if it comes in 25 years? Other people are thinking, what if it takes 60 years?
Christine Peterson: We will need to interview those individual groups. I think each one is, they’re still all realizing, of course, obviously, it’s still a bell curve. That hasn’t changed, it’s just that it has shifted a little bit to the left in terms of slightly shorter time frames. I think everyone is realizing, no, we really need to go to these robust strategies, where whatever we’re working on is useful across different time frames. I think it was good to get all the groups together. We got almost all of them in one room and said, “All right. Let’s all acknowledge this,” and we also wanted to talk a little bit about, compare different countries, start to speculate about what societal reactions might be, and the main goal was to get the groups talking, and we also designed a series of future workshops. Those will happen if funding can be found for them.
Robert Wiblin: Do you know if the timelines for AGI development are shortening because it’s turned out to be an easier problem than we thought, or is it because the for-profit sector is just shoveling as much money as they can at this problem?
Christine Peterson: I think the feeling was more in the latter, which is people are seeing that this is very powerful technology, even in the early stages, even way before you get to AGI, just what we have now is extremely profitable technology. Obviously it has applications in terms of military use, so yeah. Lots and lots of investment in terms of money, and also some of the brightest minds, right? This is attracting some of the brightest people in the world, around the world.
Robert Wiblin: Yeah. How did the Foresight Institute get off the ground?
Christine Peterson: What happened was I was at MIT as an undergrad, and one of my friends who actually is also, I don’t know directly or indirectly in the effective altruist movement, because he works at Future of Humanity Institute, which is one of the EA groups, Eric Drexler was also an undergrad at that time. We were both interested in space, and then he was the one who had the insights, the original insights that, wow, atomically precise manufacturing, what we called nanotechnology back then, is technically possible.
Robert Wiblin: Do you want to explain what atomically precise manufacturing is?
Christine Peterson: Sure.
Robert Wiblin: A lot of people have misconceptions about what nanotechnology refers to.
Christine Peterson: The term is used in a lot of different ways. The particular type of nanotechnology that excited us as undergrads at MIT at the time was saying, if you look at biological systems, and at that time some of the early work was being done seeing how biological systems build things with DNA, and RNA, and proteins, all that. We were realizing, wow, this is not unique to life. You could build artificial systems that could do something very like this, but even better. You could build products, both small products and eventually large products, with every atom in a designed location. Obviously you have to follow the rules of chemistry. There’s no way around that, but as long as you stay within them, then you could construct pretty much whatever you wanted with atomic precision, being inspired by what we see, that’s how it’s done in nature. That was the fundamental insight that he had, and we were both very young at the time.
When you hear revolutionary things as a very young person, you’re not that surprised, because you don’t have a baseline to compare it with, so when I heard these insights from him, I thought, “Okay. Sure. Why not?” I knew enough chemistry at that stage to say, “Well, this doesn’t seem to violate the laws of chemistry, which is critical.” That’s the first thing you check. Is this physically possible, and if it is physically possible, then you have to say, “All right. How would we get there? How long is this going to take, and how expensive is it going to be?”
Robert Wiblin: You could have gone into a lab in academia or industry and tried to develop atomically precise manufacturing, but instead, you made a nonprofit that’s focused on the risks, and rewards, and so on. Why do that?
Christine Peterson: We could see that this technology, when it reached its full extent, would have tremendously revolutionary consequences for society. Some of them would be, many of them would be positive. There’s the huge environmental benefits, huge medical benefits, and then of course there’s always the problem of military use. Military offensive use, so we felt gee, rather than going into the lab and doing this ourselves, we really should first try to get the word out about both the positive and the negative results of this world that’s coming, and that was the decision that we made: all right, we will open up this information to the world so that we aren’t alone in this.
Robert Wiblin: Quite a lot of people are skeptical that you can do a lot of preparatory work to make sure that new technologies are used well and don’t cause harms, and I guess to some extent it’s been a little while now, and atomically precise manufacturing isn’t here, and it doesn’t seem like it’s exactly around the corner. One could say that some of the early work there might have been wasted, or perhaps it was premature. What do you think of those criticisms?
Christine Peterson: It’s a great debate to have. I wish I knew the answer, and to go back to something we talked about before, this is one of those cases where you’re doing a one-off, unique activity, and because there is no way to run any kind of a control, we will never be sure. We’ll never know. We can guess. We can speculate whether it was a good thing or not to do that, but it’s literally impossible to know. It’s just speculation. All we can have is kind of a gut feel, and say, “Well, I think it was better that we did this, that we didn’t,” or we can say, “Well I wish we had just gone in the lab and done it.” I don’t know how to make that evaluation.
Robert Wiblin: Yeah. Who funds a startup nonprofit focused on making a technology that doesn’t exist yet safe?
Christine Peterson: We realized that they were powerful ideas, and that if, for example, a book could be written that conveyed them in a persuasive way, it would start a movement, and that was true. The book was written. I didn’t write the book. I helped comment on it, but my role was more of an earn-to-give situation. I spent about five years in the only job I’ve ever had that wasn’t altruistic, just to make some money. I did my activism in my spare time, and the money that I was making went into making sure this book happened. That was my earn-to-give phase.
Robert Wiblin: What’s the book’s name?
Christine Peterson: The name of the book is Engines of Creation. It is still in print. It is still inspirational. I try to read it every now and then, because it is still a super inspirational book.
Robert Wiblin: That’s by Drexler, right?
Christine Peterson: That’s right.
Robert Wiblin: Let’s dive deeper into the nanotechnology question, so in effective altruism, there’s a lot of interest in this issue of revolutionary technologies, and how they could transform society, but nanotechnology hasn’t gotten so much attention, perhaps because people just don’t think that it’s going to … They think that AI’s going to come sooner, or that perhaps biotechnologies are going to come sooner. Should we be more focused on it?
Christine Peterson: I think people who think that AI is going to come sooner and biotech is going to come sooner, I would agree with that. I think that is probably true. We were having debates 20 or 30 years ago, which would come first, nanotech or AI? Back then it really wasn’t clear, and of course today it’s not 100% clear, but I think most people at this point are betting AI will be first. That’s part of the reason why Foresight is starting to ramp up our AI work. We are making the same observation as everyone else, saying, “Wow. This is moving fast. So much money is piling in. It’s a worldwide effort.” It looks like this means that nanotechnology will still come but it will probably arrive in a world with AI, and that’s a different looking space.
Robert Wiblin: What kinds of scenarios would we be worried about if atomically precise manufacturing turned out to be a lot easier to create and perhaps we could actually develop it in 10 or 15 years? What are the risks?
Christine Peterson: The primary downside would be deliberate abuse. In the early days, we were looking at accident scenarios, and those are still conceivable, but I think in terms of likelihood of problems, most people would say, no, the real issue is deliberate abuse. For example, smart weapons, very smart, very targeted weapons.
Robert Wiblin: How would you target atomically precise manufacturing machines? Wouldn’t they just tend to spread out of control, and blow back on whoever tried to use them?
Christine Peterson: I would say that to some extent, this is a software issue. These devices would need to be controlled with software, and as we all know, if you look at hardware systems and software systems, the software ones are much harder to understand. They’re hard to … It’s hard to get software to do what you really want, so to the extent that any type of machine goes haywire and comes back, and bites its originators, if software’s involved, often software is the issue. In fact, software security, computer security, is a huge, huge issue. As far as I can tell, no other organization is taking it as seriously as Foresight is. That is in part because first of all, it’s an immediate problem. It’s happening right now. The risks are very high. The vulnerability is high, and it does affect how AGI will play out in the future.
Robert Wiblin: All right. We’ll come back to that one. Nanotechnology might be a lower probability risk, but it’s also more neglected. How many people in the world do you think are working on risks from nanotechnology?
Christine Peterson: Oh, very few at this point. There are very few. Actually there are very few people working on biological and chemical risks compared to the magnitude of those problems also. There just aren’t that many people working on risk of that, risks of those types.
Robert Wiblin: Yeah, so it still could be useful potentially, even if we think it’s quite unlikely that nanotechnology is coming soon, or is coming before other breakthrough technologies.
Christine Peterson: If we feel that AGI is coming before molecular nanotechnology, then it’s still worth thinking about scenarios involving it, but you have to do it in a totally different way, because basically you have to solve the AGI problem first. That makes it very hard to even start to think about molecular nanotechnology, because first you have to fix the AGI problem, and we’re nowhere near that.
Robert Wiblin: I guess I’m thinking, it now looks maybe 90% likely that we’ll get artificial general intelligence before nanotechnology, but that could turn out to be wrong. Maybe nanotechnology will turn out to be a lot easier to develop than AI, or it will turn out that machine learning isn’t actually the paradigm that we need to develop AI. Maybe you want to have a few people in the back pocket, working on what we would do if atomically precise manufacturing came first.
Christine Peterson: Yeah, so I think that’s true. I think it would be wise for society as a whole to recognize that possibility. Society as a whole right now is not very good at allocating people to low probability risks, and so frankly, we need … We don’t have enough people working even on the major risks right now, so once we get that under control, then we can start to say, “All right. Let’s allocate some folks [crosstalk 00:31:43] to the less likely things.”
Robert Wiblin: Yeah. How can you tell if the Foresight Institute is making a difference? What kind of metrics do you focus on?
Christine Peterson: That comes back again to this challenge of measurability. I realized pretty early on, I think we all did, that Foresight takes fairly complex positions on things: we’re not 100% pro-technology and we’re not 100% anti-technology. We take a balanced approach. Only a small segment of people have the time horizon and the balance, and can get passionate about balance, right? It’s an unusual thing to get passionate about, so I knew we were always going to be a small organization, but what that does is it gives us the freedom to work on precisely what we think is the most important thing. What you have to do when you think about this is look back at the things that we’ve taken on over time and say, “All right. How did that go?” For example, in the early days of Foresight, even the best scientists were still taking the position that atomic precision was not possible. Even a Nobel Prize-winning physicist was arguing that no, we will never control matter atom by atom.
The education effort has been tremendous, and I think we were making good progress, and then finally there was an experiment that showed that you actually can place atoms with precision, and then the debate was over. Thank goodness.
Robert Wiblin: Tell us a bit about that controversy.
Christine Peterson: It’s funny, because Richard Feynman, who many of you know was a wonderful, brilliant physicist, gave a talk in 1959 where he said that this was going to be possible. It’s not as though nobody knew. If you were a brilliant physicist, you could see as early as ’59 that this was going to happen, but that knowledge seemed not to have been taken up by the scientific community. We had a wide variety of people in science who, if you look at their credentials, you’d say, “Wow, I can believe this person on this issue,” who were completely confused. They didn’t have the level of understanding of science that Feynman had, so we did what we tend to do at Foresight: we bring the right people together. That’s our goal. First figure out who are the right people, then bring them together, and so we had a series of meetings where we would bring in the very best people we could, some of whom understood the point and many of whom did not, and just make sure that when they left, everybody got it.
Basically you seed the community.
Robert Wiblin: In the eighties, and I only vaguely know this story, Drexler was saying that atomically precise manufacturing was going to be possible, and there were some pretty prominent naysayers, right, who were writing articles saying effectively that he was a crank, admittedly a very well-credentialed crank, but that he was just totally wrong about this. Has the debate been settled, do you think, and what really was the disagreement? You would think we would have understood the laws of physics and chemistry well enough that it wouldn’t be possible to have a disagreement about something as specific as this.
Christine Peterson: You would think. I agree. Here’s what happened. There was a particular scientist, unfortunately now deceased, by the name of Richard Smalley, and he was at Rice University. He read Engines of Creation and got very excited. My understanding is he gave copies of the book to the board of trustees at Rice and said, “We want to be a leader in this at Rice,” got a bunch of money put aside for that, and started doing it. Rice today is in fact doing a lot in this space, but then the topic of the risks of molecular nanotechnology was getting a lot of press coverage.
Robert Wiblin: This is the gray goo idea.
Christine Peterson: Yes. Gray goo, or military use, or whatever, right? General downsides. It was getting too much attention in the press, and this upset Professor Smalley, so he decided, all right, this was a mistake to talk about. I don’t think he was arguing that molecular nanotechnology was not possible. I think he was arguing that we need to stop talking about these potential downsides in public. I think that was really what bothered him, and understandably so. That’s really what the debate was about, and then when the press coverage died down, the whole issue kind of went away, because the problem that was causing those debates went away. Then unfortunately, Dr. Smalley passed away, so he’s not around to ask, “Were you ever persuaded?” I don’t think the idea of artificial molecular machines is really controversial today.
Robert Wiblin: Okay. Let’s push on. You and a colleague are doing a one-hour workshop tomorrow here at EA Global. What’s the goal of the workshop? What are you talking about?
Christine Peterson: A theme that’s come up more than once as we’ve been talking is this challenge of deciding what challenges to take on when measurement is difficult, and that’s the topic we’re going to take on: shall we do things where measurement is not just difficult but perhaps impossible? In fact, if you look at the things I’ve done, I would say measurement is almost impossible on all of them. I can retroactively come up with a measurement scenario for the coining of the term open source software, but in fact, the amount of time it would’ve taken to implement that scenario was more than the time it took me to do the work, so there’d be no point. It was faster to do the work than it would’ve been to figure out whether to do the work, so I just did the work.
Robert Wiblin: The Foresight Institute was kind of at the bleeding edge of this nanotechnology issue, and this question of revolutionary technologies, and I guess effective altruism is now similarly kind of a young movement with a bunch of new ideas. I’m curious to know, what kind of challenges did you have early on, and might that be similar to some of the challenges that we might face in the future?
Christine Peterson: Yes. I think there are some similarities. I would say that any early movement is going to attract a wide variety of folks, many of whom are extremely competent and have great social skills, some of whom are technically competent but maybe don’t have such great social skills. On the fringe you’ll sometimes get some folks who are enthusiastic, but it’s very hard to figure out how they can contribute in any way. In the early days of a movement, you want to welcome everyone. We’re all very enthusiastic. We’re excited. We’re small. We want everyone to come in. We want everyone to participate. But then you realize, gosh, while the vast majority of folks who are coming in are useful and helpful people, some of them, although they want to help, have some issue: either they don’t know what they don’t know, for example, they think their technical skills are better than they actually are, or they have some serious social skill issues, or even personality disorders.
You just get everyone. It’s like a cross section of the whole population, so the challenge for a new movement is to reconcile the goal of getting everyone involved, and making everyone feel welcome, which we would love to be able to do with the fact that not everyone has the same skillset. Some folks are even challenging to work with at all. How can we still allow them to contribute, and to feel part of the group, without slowing everybody else down?
Robert Wiblin: How did you work around that problem? Did you manage to do a good job of it?
Christine Peterson: I think we did. I think first you have to admit it. There’s a stage in a young movement where you go from saying, “Everybody is equally welcome and can perform equally well in any task,” to realizing, okay, that’s just not right. Then, as people come to us, you figure out, all right, what are their real skills? Sometimes the person, himself or herself, knows that, and sometimes they don’t, and then how can we direct them into a role in the organization that is the highest use of their time? Sometimes there are folks where the best use of their time for the movement is in an earning-to-give role, and they can be made welcome at open events, where their contributions are appreciated and they’re given those warm fuzzies we all need, but we don’t necessarily put them in a full-time role at the organization.
Robert Wiblin: Let’s talk a bit more about space settlement. Do you still think that that’s an interesting priority? What do you think of Elon Musk’s strategy with SpaceX?
Christine Peterson: I think it will happen eventually, and I think it’s something that should happen for existential risk reasons. I think it’s also something that should happen for environmental reasons, so I think it will happen, and I’m still in favor of it. There’s Elon’s goal of Mars, other folks are more interested in the moon, and then there’s the L5 crowd that still advocates for freestanding space settlements. Those are the three options, and choosing between them is basically a technical and economic choice. The decision shouldn’t be made based on emotion, or on what’s the sexiest goal. It should be based on what is economically and technically the most feasible. Elon thinks Mars is it, and there are other people who feel differently, but one thing about Elon is, he does get stuff done, so you have to give him credit for that. He may succeed at his goal just because he’s pushing so hard, and he has some money to throw at it.
Robert Wiblin: I feel like Elon gets more done in a month than I might hope to accomplish in a lifetime. That said, I’ve been something of a critic of this idea of space colonization as a way of dealing with existential risk. The idea is to kind of back up humanity on Mars, so if there’s ever a disaster on Earth, you have a second copy, and potentially they can come back and recolonize Earth if humanity is in deep trouble or even extinct. At least that’s one reason you might do it. I think if that were your goal, wouldn’t it be a lot cheaper just to stick people under a mountain, in a mine shaft, or dig very deep and create a bunker that’s extremely well-stocked, where people could live for a long time, or put people under the sea, or in Antarctica? Those places are all very difficult to have free-standing independent colonies, but they’re still a lot easier than Mars, I would think.
Christine Peterson: I think you make a good point, and I think for some existential risk scenarios, that would be the way to go. Longer term, though, we don’t really know when a very, very large rock is going to hit the Earth and really mess it up completely. And you can go beyond that and say, “Well, there are some existential risk scenarios where even being on Mars isn’t good enough.” In the very long term, you want to keep going. You want to get out of the solar system entirely, but that is again a very long-term goal. Yes, I would agree with you that those other scenarios you described are ways to address some existential risk scenarios, but I should also mention that if there’s anyone who thinks that colonizing Mars is a way to deal with the AI risk to humanity, I don’t see that as an answer. If there were a Mars colony and there were a problem with AI on the Earth, I don’t think Mars would be independent of that.
Robert Wiblin: Yeah. My impression is Elon now agrees with that.
Christine Peterson: That’s good, because I … A lot of folks had thought he was seeing it the other way, so it’s good to hear that that has been clarified.
Robert Wiblin: You had some involvement in the free software movement when it was developing, right? What did you contribute there?
Christine Peterson: What happened was Foresight was developing some free software that enabled annotations to be made on webpages. That was written by Ka-Ping Yee, who is a very active altruist in the developing world now. We had decided this would be free software, so we were in communication with a lot of folks in that movement, including the ones who were about to release the Netscape code, the first time a major company released its code as free software. Our group, and I especially, felt that the term free software was holding back the movement, and there’s a very simple reason: when new people were introduced to the term free software, they all thought it meant free as in price. Every single time, that’s what they thought, and you’d have to go into a long explanation saying, “No, that’s not what we mean. We really mean this other thing.” Yes, it’s free, but that’s not what we mean, and people would just glaze over.
Richard Stallman would say, “Well, we mean free as in freedom not free as in beer,” and now you’re in a discussion of alcoholic beverage prices, which is not the goal.
Robert Wiblin: Fairly poor branding, I guess.
Christine Peterson: It was awful, and much as we all love Richard, it was a real problem, so we all kind of felt the term was wrong, and we would talk about it and try to brainstorm new terms, and we just weren’t coming up with anything. Then on my own, probably in the shower, you know how it is when you have ideas in the shower, I thought, “Well, you know, how about just open source? That’s pretty clear. It’s not great, but it’s better than free software.” I asked a few people, and most of them said, “Yeah, that’s okay.” One guy, who was in PR, said, “No, the word open’s overused,” but he was overruled by the fact that other people liked it. Now the challenge was introducing it. How do you change the name of a movement? What if somebody decided that effective altruism was a bad name and they had a better name? How would they introduce it? You have an active movement. You can’t just rename it. It’s not easy to do. You have to have the movement agree with you.
I was not a coder. To have prestige in the free software community, you write free software, and if you can’t write software at all, you have no prestige at all. You have no standing, so …
Robert Wiblin: What happened?
Christine Peterson: What happened was I was helping out Eric Raymond. He was visiting in the Bay Area. He had prestige in the community, so we had a meeting. It wasn’t about the name at all. We weren’t discussing that, but one of the other people in the room knew about this proposed new name that I was hoping to introduce, but I could see given my lack of prestige in this group, I had no standing to even bring up the issue. I think people probably thought that I was either Eric Raymond’s chauffeur or possibly his girlfriend, which I was not. I was married at the time, so we were talking away on other issues, me actually being silent, when this other fellow who knew the name, he just used it. He didn’t propose it. He just used it as a term, and I thought, “Whoa. Okay. Let’s see what happens now,” and they talked away for a while, and then somebody else just used it. It’s like, whoa. The meme has jumped. The virus has spread to a new mind, automatically. We didn’t even have to suggest it. It just jumped, and that’s when I thought, okay. This is going to work.
It was not until quite a bit later that the community actually held a vote of the leadership, and I was not there, which is appropriate, as I was not a leader in that community. The leaders of the community made an active decision to rename it. They voted between the term sourceware, which was also a good term, and open source, and I guess open source got more votes.
Robert Wiblin: We had a slightly similar experience on a much smaller scale at 80,000 Hours in the very early days. Some people might remember that earning to give used to be called professional philanthropy for about a year, but we found that was quite a confusing term to a lot of people, because they were imagining Bill Gates and Zuckerberg. It put more of an emphasis on being a philanthropist and giving the money away, rather than on going out and trying to make the money. We basically decided in one day, we’re going to call this [inaudible 00:49:52] now. We sent out an email to the professional philanthropy Google Group, and then we just started calling it that, and basically the switch was overnight. It was very easy, but the group was a lot smaller then. I think trying to rename it today, or trying to rename effective altruism, would be a real uphill battle.
Christine Peterson: Fortunately both earning to give and effective altruism are pretty darn good terms, I think.
Robert Wiblin: We talked about computer security briefly earlier. Is that something we should be recommending that more people go into and specialize in, becoming real experts who can contribute and make computers safer?
Christine Peterson: I would say yes. If you look at the future, the future is run by computers. Nothing will not be computerized, right? We’re already largely there, and the problem is these computers are in almost every case insecure. It’s not going to be very long before automated software, and I’m not referring to AGI here, I’m referring to the AI of today, or maybe the AI of today plus two or three years, is going to be able to automatically probe for flaws in software security. What that means is it’s going to have the capability to take these systems all down, and our civilization now is dependent on these machines. We will not get food. We will not get water. We will not get electricity if they are taken down, so the scenarios are pretty serious. I wouldn’t say it’s an existential risk for humanity, but it is a huge catastrophic risk.
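The kind of automated probing Christine describes already exists in primitive form as fuzzing: throw large volumes of random input at a piece of software and record whatever crashes it. As a hedged sketch (the deliberately buggy `parse_record` target below is hypothetical, invented purely for illustration):

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical, deliberately buggy parser standing in for real-world software."""
    key, _, value = data.partition("=")
    if not key:
        raise ValueError("missing key")
    # Bug: assumes the value is always a decimal integer.
    return {key: int(value)}

def fuzz(target, trials=10_000, seed=0):
    """Throw random inputs at `target` and record every input that crashes it."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + "=-. "
    crashes = []
    for _ in range(trials):
        data = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(parse_record)
print(f"{len(found)} crashing inputs found in 10,000 trials")
```

Real tools such as AFL or libFuzzer are far more sophisticated, using coverage feedback to steer input generation, but the principle is the same: the attacker’s search can be automated.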
Robert Wiblin: What can we do? Is it actually possible to make computers secure, or is this just … Do they have to be air gapped or something really extreme to make sure that people can’t break into them?
Christine Peterson: Fortunately they don’t have to be air gapped, because that would basically mean we can’t have networking, and networking is absolutely necessary. No, it turns out there are two trusted code bases that we can build on. It is possible to build secure software. For example, coming from one side, in terms of a trusted operating system we have the seL4 system, which has been formally verified as secure, so we can build on that. Also, blockchain has proven so far to be pretty darn reliable, and if you think about it, the Bitcoin code is code that has something like a $40 billion bug bounty on it at this point, and it has not been taken down. That’s pretty impressive, so yes, secure software is possible. It just needs to be in a very hostile environment. It needs to be attacked continually to be proven secure, but that can be done.
Robert Wiblin: Is seL4 only proven secure because people have tried to crack it for a long time and couldn’t succeed, or is there some way of mathematically proving that this software just cannot, even in principle, be broken?
Christine Peterson: My impression is that both ways work. Here’s how I think about it as a non-programmer: for a small enough piece of code, you can sometimes do the mathematical proof. For something really big, perhaps you can’t do it that way. Then what you have to do is just run these continual attacks, and that would perhaps give you the level of comfort that you need.
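Christine’s two routes to confidence can be illustrated at toy scale (both functions below are hypothetical illustrations, not real verified code): when the input space is small enough you can check every case exhaustively, which is the brute-force cousin of a machine-checked proof like seL4’s, and when the space is too large to enumerate you fall back on sustained random attack.

```python
import itertools
import random

def saturating_add(a: int, b: int) -> int:
    """Toy 'secure' operation: addition that clamps at 255 instead of overflowing."""
    return min(a + b, 255)

# Route 1: exhaustive verification. Feasible only because the domain is tiny:
# check the safety property (result fits in 8 bits) for all 256 x 256 inputs.
assert all(0 <= saturating_add(a, b) <= 255
           for a, b in itertools.product(range(256), repeat=2))

# Route 2: continual attack. When the domain is far too big to enumerate,
# hammer the code with random hostile inputs and check the property each time.
rng = random.Random(42)
for _ in range(100_000):
    a, b = rng.randrange(2**64), rng.randrange(2**64)
    assert 0 <= saturating_add(a, b) <= 255

print("exhaustive check and random attack both passed")
```

A formal proof like seL4’s goes further than either route here: it reasons symbolically over all possible inputs at once, rather than enumerating or sampling them.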
Robert Wiblin: I imagine if you had these AI algorithms that were extremely good at probing software for weaknesses, then you could also use them to test your own software.
Christine Peterson: Absolutely.
Robert Wiblin: I guess you just need to make sure that the people developing the software are on the frontier of that technology, so they’re not out-foxed.
Christine Peterson: Absolutely. You need to have a really good red team attacking, and fortunately the US has an excellent red team. It’s called the National Security Agency. They do this all the time and they’re very good at it, so if we could get them-
Robert Wiblin: They don’t always share their findings though, do they?
Christine Peterson: No, they do not, and that is something that we can work with them on, and say, “Okay, there’s been a proposal by … ” I won’t say who proposed this, because I’m not sure I’m allowed to say who it was, but maybe he’ll step forward, that what we do is say, “All right. The entire treasure trove of vulnerabilities that the NSA is holding will be released in 10 years. You have a 10 year deadline. You have to get off all these insecure software systems, start over, and build secure software.”
Robert Wiblin: Everyone has to quit Windows? Really?
Christine Peterson: Yeah, and Apple, and the whole deal. Yeah, they’re insecure.
Robert Wiblin: Won’t they just patch them?
Christine Peterson: You can’t. They’re fundamentally insecure.
Robert Wiblin: Say they release that treasure trove of all the vulnerabilities, and then Microsoft and Apple just have a very busy month or something fixing them up. Then isn’t it good?
Christine Peterson: I don’t think that will work, because I think you really have to change paradigms. This patching business is like a pail with innumerable holes in it. You patch them, and then it rusts through in another area. They’re just fundamentally not secure systems.
Robert Wiblin: Would these secure systems be user-friendly? Is there a reason that they’re not used now?
Christine Peterson: There is a reason why they’re not used now. It takes a little more work to work with them. Basically you have to think about security the whole time you’re building, rather than first designing the system and then trying to add a security layer on top. That doesn’t work. It’s a new way of thinking, basically a new paradigm, and it’s a new set of operating systems, so yeah, people will resist it for obvious reasons, and there’s no reason to make the change until the computing environment gets hostile enough that you are forced to do it. But those of us who look forward, like effective altruists tend to do, can say, “We can see this coming. We know this is coming. There’s no doubt about it. We have a certain amount of time to get ready. How about if we try to get ready?” Now, many people will not get ready, but we can do what we can. Securing the electricity supply, for example, would be nice.
Robert Wiblin: The most important thing.
Christine Peterson: For example, yeah.
Robert Wiblin: This is a pretty new topic for me, so I’ll try to find some links that we can put up in the notes on the show, and I’ll be interested to learn more about that. If someone listening wants to go into computer security, I know this isn’t your area, but do you have any advice for what maybe they should study, or where they should get jobs to build up their skills?
Christine Peterson: For sure. In the show notes here, we’ll link to a paper. The most technical co-author is Mark Miller, who is affiliated with Foresight and also has a day job at Google working on computer security. It’s a paper where we address AGI risk and cyber risk, and it gives references to most of the things we’ve been talking about, including the seL4 work. That will lead you into Mark’s publication list, and he’s published a great deal about this.
Robert Wiblin: Okay. Let’s talk about the last category of technology that I have a big interest in. You’ve done some work to try to prevent aging and increase humans’ healthy lifespans. Why do you see this as such an important area to work in?
Christine Peterson: I think there are two ways to come at it. One is, if you just look at the number of human life years lost to aging, it far outweighs any other disease that we’re tackling as an EA group. If human health and disease is your concern, I think aging wins in terms of the total number of human life years lost. It wins by orders of magnitude, so there’s that argument. From a very personal perspective, most of our listeners perhaps are young EAs, but imagine yourself as an older person. You’ve built up these decades of experience doing effective altruism, and now it’s going to disappear. You can try to pass that on to young people, and older people do try to do that, but there are a lot of losses there, so in terms of the intellectual capital of the EA movement itself, aging is going to decimate it. It’s going to be awful.
Robert Wiblin: Yeah. I think this would be one of the biggest economic effects of reducing aging. At the moment, people tend to study for the first 18 years of their life, maybe 30 years if they’re going through and finishing a PhD, but if people were living 200 or 300 years, or really an indefinite lifespan, if we managed to basically just stop aging altogether, then people could spend 50 years training until they were, by today’s standards, absolute world experts, and then they could work in the field for the rest of their lives developing even more expertise. It could be an enormous transformation in terms of productivity.
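Rob’s back-of-envelope point can be made concrete with a toy calculation using made-up numbers (a fixed 30 years of training and a 5-year frail period at the end are assumptions for illustration only): the longer the lifespan, the larger the fraction of life spent applying accumulated expertise.

```python
def productive_fraction(lifespan_years, training_years=30, frail_years=5):
    """Fraction of life spent working after training (an illustrative toy model only)."""
    working_years = max(lifespan_years - training_years - frail_years, 0)
    return working_years / lifespan_years

# Compare an ordinary lifespan with two radically extended ones.
for lifespan in (80, 150, 300):
    share = productive_fraction(lifespan)
    print(f"lifespan {lifespan:>3} years: {share:.0%} of life spent applying expertise")
```

Under these assumptions the productive share climbs from a bit over half at 80 years toward nearly nine tenths at 300, which is the intuition behind the productivity claim.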
Christine Peterson: That’s right, and another way to look at the same issue is to say, “All right, right now what’s happening is we’re extending the lifespan, but we’re not extending the health span.” In other words, we’re having increasingly long periods at the end of life where people are frail. They’re in nursing homes. They’re in memory care. They have Alzheimer’s. Not only does this create tremendous amounts of human misery, it sucks up immense amounts of capital and money that we need for other things. It’s a tremendous dead weight on society. I think we’re evolved not to think about this problem, and we have to really try hard to say, “All right. I’m not gonna get emotional about this. I’m gonna think hard and clearly about this.”
Robert Wiblin: In terms of benefit to me personally, it’s hard to think of anything that would be more valuable. But in terms of benefit to the entire world, I could spend time trying to save my own life and the lives of all the people I know who are alive now, or I could work on some of these other problems that admittedly might not make sure that I live that much longer, but would mean future generations survive, because humanity won’t go extinct from some disaster, or civilization won’t be really thrown off track. Where do you stand on that trade-off: life extension seems particularly good for the present generation, whereas some other programs might be better if you’re thinking about the lives of future generations as well?
Christine Peterson: There’s a number of ways to come at that. First of all, I’m glad people are thinking about these different questions. The goal isn’t for all EAs to work on aging, right? The goal is for us each to think these things through and try to find our point of leverage. In terms of why I look at this in particular as a high leverage, we mentioned the large number of human life years lost. There’s also the point that although initially this may seem like a first world problem, it may seem selfish to work on aging because it will help us, and it will help the wealthy nations, it turns out that if you look at developing nations and the poorer countries, because of advances in healthcare over there, increasingly the problems they’re having are aging-related as well, and the problem there is that they’re being faced with these extraordinarily expensive problems of aging without having gone through the process of becoming wealthy countries first. Even though the burden on wealthy countries is immense in terms of the cost of these frail elders, if you think about the cost to the developing countries, it’s proportionately much, much harder on them.
We tend to think of aging as a first world problem. It totally is not. It’s a problem that effects even poor countries now, and they are the least able to handle it, so I think it’s really a global problem.
Robert Wiblin: Do you want to comment on the technical side of anti-aging work?
Christine Peterson: Aging is a tremendously difficult technical problem. However, I kind of like the strategy that Aubrey de Grey argues for, which is a repair scenario. Rather than trying to prevent all aging processes, which is really hard, although in the long term perhaps we can do it, in the near term we can probably focus on doing repairs. I think he identifies something like seven major processes, whatever the exact number is, and he’s got some pretty creative ideas. Some of them will work, and some of them will probably need modification, but I think we can come up with some workarounds, some shortcuts, some tricks that will gain us time while we work on the really deep, fundamental issues.
Robert Wiblin: You don’t think it’s too impractical to imagine that we could make significant progress in aging within a couple of decades? Is it possible that this could actually help me or you, or is it more something that we’re doing for our children?
Christine Peterson: There’s disagreement on that. For the folks who are listening to this, many of whom are in their twenties, I absolutely think it could help that generation. Whether it could help my generation, which is one generation older, I think it could. I think it’s quite possible, and then there’s always the backup emergency strategy of cryonics. Some of the people I know in EA are signed up for cryonics. I am signed up, so the worst-case scenario is, well, we’ll just have to wait it out, and hope that advanced technology eventually kicks in and can do the repairs.
Robert Wiblin: What do you think are the odds of that?
Christine Peterson: I think the technical odds are excellent. I think the challenges come in terms of things like societal disruption that would disturb the organizational structure that has been set up to make sure the people who are in cryonic suspension are well taken care of at the right temperatures, and the electricity doesn’t fail. Remember, we just talked about electricity failures. So there are societal issues, but I think the technology, the science, will eventually work out.
Robert Wiblin: As I understand it, we’re pretty good at doing the preservation, and we’re getting better at figuring out, what do you need to put through someone’s body before you freeze them, to make sure that all of the structures aren’t disturbed at all, but do you really think it’s that likely that at some point in the future we’ll be able to reanimate people effectively, or undo the damage that caused them to die? I guess I feel like I’m fairly optimistic about this, but that still seems to me not super likely. We’re talking about like 5% or 10% rather than 50%.
Christine Peterson: Okay. I want to kind of turn that around. You made two points, and I kind of want to tweak both. One is the question of whether damage is caused by the current suspension procedures. Even though it has improved, damage still is caused. I think it's reparable damage, but there is some damage still. They are coming up with new procedures, involving pressure I believe, which reduce damage even more. Whatever repair procedures take place in the future, for people who are suspended today, there is going to have to be some repair work on the damage itself. Then the second part was, what about doing repairs on the original problem that caused the person to die, whether it be cancer, or heart disease, or Alzheimer's, whatever. On that one, interestingly, I'm more optimistic, in the sense that by the time these repairs are being attempted, we will have such an incredible amount of data on what a healthy body looks like that we will know.
We’ll know in great detail, right down to the molecular level, what healthy bodies look like and how they function. Then we have the challenge of saying, “All right. How are we going to do the repairs? How are we actually going to repair this person, getting rid of the cancer, or cleaning out the heart disease or getting rid of the plaques in the brain for Alzheimer’s?” If you have a model of what a healthy body looks like, if you really have excellent data on that, then you know what you want to build, which brings it down to, all right, we have an engineering problem. How do we get those molecules where we want them to be? If you take a long enough time horizon, you can say, “All right. We will someday figure that out. We don’t know when, but as long as the person is stable, it’s like they’ve been given first aid. They’re stable. They can wait. They can wait a really long time if they have to. We’ll just wait until the technology’s there, however long it takes.”
Robert Wiblin: How much does it cost roughly?
Christine Peterson: For a young person it's quite cheap. You usually pay for it with life insurance, and if you're young, life insurance is super cheap. It's a few hundred dollars a year in dues, so for young people it's very cheap. If you wait until you're 80 and you haven't bought the life insurance, then I think one version of it is under $100,000, so it's pricey. I would say, just for peace of mind, why not do it when you're young?
Robert Wiblin: Yeah, so it’s expensive, but it’s not beyond the reach of everyone.
Christine Peterson: If you think about what we spend on healthcare for people who are in their last five years of life, it’s huge, so this is … It’s not too different from that.
Robert Wiblin: Should I get cryonics? I’ve thought about it.
Christine Peterson: It’s so cheap.
Robert Wiblin: Yeah. Why not?
Christine Peterson: Why not?
Robert Wiblin: Yeah, so there is a lot of paperwork to go through.
Christine Peterson: There’s some paperwork, but EAs are smart. It’s not that big a deal. Actually they had a party at CFAR. I think it was at CFAR. They had a little cryonics party to help you with the paperwork, so tell them they need to do that again, and now that you live in Berkeley, you can just go to the party.
Robert Wiblin: Do I have to stay near California or something?
Christine Peterson: No, no, no, no, no. It’s worldwide, just wherever you want. Yeah.
Robert Wiblin: What if I die when I’m in China? Challenge.
Christine Peterson: What they've said is, "Hey, if you're going to go outside the US, just let us know so we can be aware." It would make it much harder, certainly, but …
Robert Wiblin: Anywhere in the US, as long as they can keep you cool, so.
Christine Peterson: They will do as best they can no matter where you are. They’ll do their best for you.
Robert Wiblin: Okay. Interesting. Yeah. I suppose one reservation I have is just a bit more skepticism than you about the likelihood of your body remaining frozen for long enough for these technologies to be created. Another one is just, it makes me nervous to think, what world might I be waking up into? It’s true, I didn’t get to choose the world that I was born into, so it’s kind of a similar problem, but I just worry that I might be brought back into a world that I wouldn’t want to live in. Of course, you can’t always choose that.
Christine Peterson: You always have options for not participating.
Robert Wiblin: Yeah. I suppose that’s true. Do you think it would be good for technology to advance more quickly, or perhaps even more slowly?
Christine Peterson: Wow. Great question. I don't know the answer to that. I really don't. I think if there were some magical way we could make defensive technology go faster and offensive technology go slower, that's what we would want. There are ways to speed up defensive technologies, basically by throwing money at those specifically, so I would say that's what we would want. But I don't know what the trade-offs are right now. I think in general offense is easier than defense, and that's a scary thing.
Robert Wiblin: Why do you think that?
Christine Peterson: I think defense is sort of inherently challenging. It’s certainly challenging in software, although there is hope. Remember we used to all get spam, and now we don’t get that much spam. We found a way to deal with that.
Robert Wiblin: I guess it’s hard to do the defense until you know what the offense is, so they kind of get first strike.
Christine Peterson: That’s right. That’s right, so the proposal has been, well, you can model the offense, and then build the defense, and just hope that you build it before the offense reaches the stage where it’s actually implemented in the physical world, so that’s one way to do it, is to say you imagine all the attacks that could possibly happen, whether it’s physical or software, and then try to build defenses against them. It’s hard. It’s very hard.
Robert Wiblin: Yeah. I think relative to a lot of other people I know, I'm relatively pessimistic about the value of speeding up technology. It's not that I'm confident that it's a bad idea to just speed up GDP growth, or speed up technological advancement, but I don't see the arguments in favor of going faster as being that compelling. I have a talk where I go through some of the arguments here, which I'll put up a link to. Broadly speaking, you were talking about offensive versus defensive technology, because technology can both make the world better and also create new potential for abuse. There are some technologies that make the world safer and more secure, and allow people to guard against the risks from other technological developments, and there are other things, like missile technology or rockets, where it seems like the downside risks are larger than the upside.
Overall, it seems like technology for the last few hundred years has been making the world better and better on a day-to-day basis. My impression is it's also made it riskier and riskier. It's true, we don't have wars as often as we used to, but if we now have a single great power war between two nuclear powers, then that's basically the end of civilization as we know it for now. We might be able to rebuild at some point in the future, but it would be a catastrophic setback. The state of the world is getting better, but it's becoming more and more variable, and I'm just not sure whether moving forward faster actually makes it happen in a more safe and sustainable way, or whether it just means that we're rushing headlong into a disaster. I'm just agnostic on that question.
Christine Peterson: It's a big question. One reason I don't spend too much time on it is that I don't feel that I have much control over that. What I do have some control over, or possible influence on, is this ratio of offense to defense. My personal current crusade might be: hey, we need to do defensive software, which means computer security. We need to do defensive biology, which means preventing aging. So just focusing more on the defense. There's another factor that you didn't mention here, which is that in addition to things getting more dangerous in terms of a great power confrontation, one thing that's definitely happening is that smaller and smaller groups of people are becoming able to carry out offensive attacks with powerful technologies. For example, it's coming that eventually smaller groups of people are going to have biological weapons of some kind. They're going to be able to develop new viruses, or tweaks on viruses.
That’s another risk, and that’s happening, too, so yeah, things are riskier for sure, and that’s why we need more and more focus on defensive technologies.
Robert Wiblin: Yeah. We have two episodes about the risks from synthetic biology that I think should be out by the time this episode goes on the site, so I'll put up a link to those if people are interested to learn more. I completely agree that because so many people are trying to work on technological progress in general, or trying to grow the economy, even if just to start a business and support their family, each one of us has relatively little influence over those things. That's probably another reason to think that just trying to increase GDP growth, or increase technological advancement in general, probably isn't that high-leverage an area, because it's not that neglected. I do sometimes speak to people whose plan for doing good in the world is just to grow the economy, and to an extent, when I was in the Australian government working as an economist, that's basically what I was doing: trying to increase economic productivity.
I think now I just have a lot more question marks about how valuable that is, really. Completely changing track, do you have any other advice that people can use to kind of plan out their lives in order to accomplish as much as they can in the long term? Where might they live? How might they organize their personal life?
Christine Peterson: I do, I do. Because so many of our listeners, so many EAs, are in the early stages of their careers, they still have a lot of choices to make in terms of what their life path is going to be. I just have a couple of points on that. One is we've all heard the phrase "put on your own oxygen mask before helping others." What that is about is that if your goal is to maximize the good you do in the world, you're thinking about a career of decades at least, and who knows, if anti-aging works out it could be longer. Let's say at least decades, right? One thing that happens routinely with altruists, whether they are in the EA movement or outside it, is they throw themselves at these problems without taking care of themselves, without taking care of other parts of their lives, and then they burn out or flame out in some serious way. Then they're very unhappy. Perhaps they leave the altruistic movement. So the goal is to come up with enough balance in your life that you have the capacity and the stamina, are getting enough positive feedback, and have the material necessities of your life sufficiently in place that you can have the lifestyle you need to continue for decades.
For some people, that's fine. They can live in a closet and eat cardboard; some people have very low material needs. That's cool, and if they think that they can sustain that for decades, that's great. Most of us aren't like that. Most of us want to have a comfortable place to live. We want to have good food to eat. We want to have good friends. We want to have an occasional vacation. Some of us want to have families, right? Not everyone, but many people do, and so when you're planning your effective altruism career, whether it's earning to give, or working at a particular project, or starting your own nonprofit as I did, you have to think about these things and say, "All right. How am I gonna get the income I need to have a decent lifestyle that will keep me going for decades, or if I've decided to have a family, how am I gonna pay for that family? Do I want to find a life partner?" I personally think a life partner is a very valuable thing to have, especially for altruists, because it's very handy to have someone who, for example, either is …
Perhaps the person is not an EA quite to the extent that you are. That person may have a steady job, which is very handy to have if you're trying to pay rent or a mortgage. Or maybe the person is a fellow EA, which can be a great deal of fun, and the two of you can take turns making sure there's enough money to keep the household going. There are just a lot of benefits to picking the right life partner if you're an EA.
Robert Wiblin: Yeah. I saw you have on YouTube a couple of talks you’ve given. One was called Finding Love and a Life Partner. Do you have any other advice on that issue? What approach did you take?
Christine Peterson: I do. I kind of lay it out in the talk, and I actually have a book draft, which I'm happy to send to anybody who emails me; I'm easy to find on the internet. Basically, I lay out a strategy for finding a life partner who matches what you want in terms of character. Obviously, as effective altruists, we need to find life partners who either are fellow effective altruists, or at least are enthusiastic about our efforts in that area. They don't have to be full-time. They don't even have to become part of the movement, but they have to support our work.
Robert Wiblin: What about other topics, like where do you think people should live? I guess you were brought up in the Bay Area near SF, or you moved here.
Christine Peterson: No, I moved here. I was brought up in New York State, then went to MIT, and then did a little bit of earning to give, and then a bunch of us realized, and we were very young, idealistic folks, very much like EA folks. We were in our twenties. We were scattered all over the country, but we wanted to work together on altruistic projects, and we said, “Where are we all willing to move to?” The only place that everyone was willing to move to was the Bay Area, San Francisco Bay Area, so that’s what we did. We all moved here, and we did our work together, and it was great. We don’t need all effective altruists to move to the Bay Area. We don’t really want that. It’s a very expensive area to live in, and we want altruism to happen all over the world, so each one of us has to think about, all right, what specific things am I working on? Where am I going to find … Can I do it alone? Do I need a team? Where can I pull that team together?
It may not be in the Bay Area. There are a lot of great folks here, and it's a great place of ferment in terms of EA, but there are other good places as well. What you might consider doing, and a lot of people do, is come here for some period of time. Some people stay. Some people take what they've learned and go elsewhere. They may go home to where they came from, or they may start in a totally different place.
Robert Wiblin: Yeah. On this topic of life strategy, when I was younger, maybe five or 10 years ago, I used to think that there was a lot of tension between having a good life personally and doing a lot of good. I've learned that, while sometimes there are conflicts, sometimes trade-offs to be sure, by and large it's very hard to be highly productive and get a lot of work done for many years unless your life is in reasonably good order: unless you have good personal relationships, you're taking care of your mental health and getting treated for anything that you need to get treated for, you're taking care of your physical health as well, and you just feel comfortable in your life. If you don't have enough money to support yourself, and that makes you anxious, or you don't have enough money to get healthcare, or you don't have friends to support you, it's just so hard to get a project off the ground and stick with it year after year after year. It's just highly likely that you're going to give up at some point.
Christine Peterson: That's very true, so it's a balancing act, and I think our goal should be to maximize the good work that we do over a lifespan. That doesn't mean maximizing the amount of work you do right now. It may mean pacing yourself, taking care of your financial needs, taking care of your family issues, and especially taking care of your health, right? That's super critical. You don't want to burn the candle at both ends for too long. When you're really young, like in your early twenties, you can work long, long hours and be fine, but don't assume you'll be able to do that forever. Eventually you're going to pace yourself. You're going to say, "Hey, you know, I need that vacation," and I do recommend vacations for EAs, because of what happens to me, and this happens every single time: it seems like a luxury, but I go away for two weeks, or ideally three, which actually works better, and then I come back so supercharged, with new ideas, new enthusiasm.
You’re just raring to go, and all that excitement that you were kind of dragging before suddenly, your work looks fun again, so I would say even though it’s hard to get away, try to get away for two or ideally three weeks a year at least, and by vacation, remember the vacation is about vacating, right? Go someplace else. Get out of your apartment. Get out of your house. Go somewhere far away. You can probably stay with EAs somewhere, right? They probably would be thrilled to have you crash on their couch, and tell them about what you’re doing, and hear what they’re doing. Get out now and then.
Robert Wiblin: Yeah. As I get older, I'm starting to notice some of the first signs of aging: aches and pains where I didn't have them before. I don't have quite the same energy to go out several nights in a row. I get hungover where I didn't used to. I guess the anti-aging work can't come soon enough, as far as I'm personally concerned.
Christine Peterson: You’ve got that right.
Robert Wiblin: Are there any other lessons that we can learn from your career, and any other examples that you’ve set that people could follow?
Christine Peterson: Yeah. I would say one thing I've found very useful for what I like to do is: if you have a passion for a new topic, it's perfectly okay to start a whole new organization around it, and that's what we did. We probably could've found an existing organization that would allow us to work with them, but that doesn't give you the freedom that you have if you start your own thing. It's very easy to start a nonprofit. The paperwork isn't that hard. The challenge is raising the money, but if you have a passionate cause that's relatively new, and you can find people who are excited about it and will donate, it's perfectly fine to start your own group, and we did that. We were still in our twenties when we started our own nonprofit, and it worked out very well, because if you keep it small, you're terribly nimble. There's not a lot of politics. Those of you who've worked in organizations, whether for-profit or nonprofit, know that politics is the mind killer that makes everything not fun.
When things aren't fun, it's very hard to get good work done, so the way to keep it non-political, at least the way we've done it, is to have very high standards of behavior for everyone involved, and to keep things small so you can move fast and make decisions quickly. That has worked well for us, and I don't think it's necessary to go work in a big nonprofit first. For example, when Luke Muehlhauser took over at MIRI, the Machine Intelligence Research Institute, I don't think he had nonprofit experience, so what he did was go around to everybody he knew who had nonprofit experience, interview them at great length, and take copious notes. I was one of the people on his list. I'm sure he talked to many others. Basically, rather than spending many years learning these lessons by working in another organization, he went around and got the same exact information through interviews, much quicker.
He did a great job at MIRI, and I think it's because he did those interviews and took them very seriously. It helps if you can find advisors who will tell you the truth about things like what I just said about the politics. When you're picking your board, make sure that there's no politics. Just do not tolerate politics. We don't do that stuff in our organization. I think that was really smart of Luke, and it worked well; MIRI, I think, did well under his direction because of that. Yeah, I'm an enthusiast of: if you can't find an organization that's doing what you want to do, just start your own. Just make sure you have that support, and then you can do it. You can do it in your twenties. I did.
Robert Wiblin: You said that early in your career, you were driven a lot by concerns about overpopulation and resource scarcity. It seems like people in the effective altruism community may be less worried about those risks than people in the general population. We tend to be more concerned about other threats from new technologies, not so much climate change, but perhaps new weapons that our countries might use against one another, or ways that technologies could accidentally turn out to be really catastrophic. Are you still concerned about environmental problems, or have you also kind of moved on to these other perverse effects of technology?
Christine Peterson: That’s a good point, and I think it’s appropriate really that EAs are focusing elsewhere, and here’s why. Back when I was first thinking about these things, there wasn’t really much societal support for environmental improvement. There was no department that was … There was no Environmental Protection Agency, these kinds of things, or it was very small. Back then, these things were underrepresented in terms of people working on them, but now we have this huge bureaucracy, and lots and lots and lots of people, and lots of regulation, and lots of … There’s academic work. There’s huge amounts of work in the mainstream world on environmental problems, so since EAs are looking for leverage points and places that need more attention, it’s perfectly reasonable for them to say, “Hey, other folks are already addressing this.” There’s huge conferences where all the countries come together to talk about these issues. It’s not a leverage point for us.
We should look for problems that are either newer or, if not new, at least not sufficiently appreciated. We should look for places where our efforts as individuals or as an EA movement, which is still relatively small, can make a huge difference, and that is not by taking on a big global problem where huge numbers of government employees are already looking at it. It makes sense to me that EAs are changing their focus, and so for me also, I'm not looking at traditional solutions to environmental problems. I'm looking for unusual ones, where something unusual can be done, something that's not well-understood, not well-represented yet. I think that makes a lot of sense given the size of our movement.
Robert Wiblin: Yeah. I think that's something that is very easy for people to misunderstand when they read 80,000 Hours materials. We have a problem profile on climate change, and we don't take, and I certainly don't take, a contrarian view at all on how bad climate change will be, or how big a threat it is; I just take the scientific consensus basically as a given. The reason that perhaps we don't prioritize it so much is that we've looked at how much money has already been spent on the problem, and you can find about $300 billion worth of spending globally that is in some form meant to tackle climate change, which is about 0.4 or 0.5% of global GDP. That's vastly more than is spent, say, on work to promote peace, or even on all work to promote international cooperation, that kind of thing. It really is quite a large budget, and there are other problems that seem in the ballpark of being as serious as climate change but attract a hundredth as much spending, and that's why we tend to focus on those.
Christine Peterson: It makes perfect sense to me. I think that for our movement, given its size and that most of us are still pretty young, looking for these new challenges, new leverage points, under-appreciated problems, that's where we're going to make a difference.
Robert Wiblin: Let’s try to get even more concrete with things that people could do if they’re listening to this, and then want to tackle similar problems to what you’ve spent your life working on. Can people work at the Foresight Institute? Are you hiring?
Christine Peterson: We're not hiring. We're tiny. I would say one thing that probably many EAs could do, whether it's for Foresight or any organization, and certainly something that everyone who earns to give should consider: one thing that virtually all EA efforts need is fundraising. Fundraising sounds kind of hard. It sounds difficult. It sounds unpleasant to do. It turns out, though, once you actually get into it, that it can be fun, in the sense that what you're doing is growing the team. Basically, you're reaching out to other people who share your values. These are folks who may be farther along in their careers. Many of them have mortgages, they have families, things like this, so they don't have the option to be a full-time altruist. They're kind of stuck in their earning-to-give model at this point of their lives, but they want to help. They want to be part of the team. They want to participate, and you could help them do that.
Your passion, your excitement for your cause, whether it's existential risk, or reducing animal suffering, whatever your passion is: if you can locate folks who want to be on the team but have various constraints on their time, you enable them to join the team and be part of this exciting effort. When you make those connections, it's tremendous fun. It's actually a blast, and a lot of these folks who join the team through this earn-to-give model are actually fun folks to know. That's one of the best things about being an altruist: the people you meet on this pathway through life are the best people on the planet. They are the people who care about other people, who care about animals, who care about the biosphere. These are the people who want to make a positive difference in the world, and they often are super intelligent, super nice people. Some of them have done very well in life in terms of their financial achievements, and that enables them to help your organization. It also enables them to basically help you have fun in your life, and that's important. Having fun doing your altruism is important, because that's what keeps you going for 30, 40, 50 years doing it.
It has to be fun. It doesn't have to be fun every second, but it should be fun. Another thing that makes this easier than it ever was before: there are two reasons why there's more money in the hands of young people today than there probably has ever been. Number one, high-tech jobs. People who come straight out of school and jump into a high-tech job are paid very well, so these are folks who may be almost as altruistic as you, but they are earning to give. Making connections with those folks is a great way to make a difference for your organization. Also, there's an awful lot of cryptocurrency money out there right now being held by very young people. These are folks who kind of lucked out, right? They were in the right place at the right time. They made some good decisions. Now they're sitting on a really large sum of money, and these people are often super idealistic and altruistic, so if you can connect up with them, you can pull that money toward important causes.
That’s something almost anyone can do if you care enough, if you’re willing to develop the communication skills, and if you’re willing to learn how to reach out to folks and make those personal connections. It’s a great pathway, and it’s not as hard and not as unpleasant as people tend to think. It can be actually fun.
Robert Wiblin: Let’s say you were speaking to an undergraduate who was pretty bright, and they’re open to doing kind of any major. What things would you want to see more people in the effective altruism community studying?
Christine Peterson: For folks who are technical, who have a mathematical bent, I would say, even though there are a lot of them already, we need more great computer scientists, so computer science is worth learning. It has a lot of benefits. We've already talked about the need for computer security as a vital thing that needs to be done, whether by EAs or others. It's also a great way to do earning to give. In case you decide, hey, I want to spend a few years of my life raising a family, and many people do want to do that, computer science gives you the income that enables you to both raise a family and earn to give at the same time, and then you can always do full-time EA work both before and after that family is raised, so it gives you tremendous flexibility. Computer science is a great option there. The sciences in general, too. I studied chemistry. It's a great background for anything in the physical sciences, if you decide that kind of thing is what you'd like to do. I think it's possible to be an EA with probably just about any background.
It's critical to figure out, number one, where your skillsets lie and what gifts you have, and then what excites you. What can you picture spending decades doing, possibly for not a huge amount of money? What makes you think, "Wow, I enjoy doing that. I could do that for that long"? Not too many people want to do stock trading for decades. It's pretty unpleasant work.
Robert Wiblin: Given your interests in things like life extension, and atomically precise manufacturing, and AI, and so on, do you think that we should be encouraging more people to specialize in particular areas of science and technology where they can really become experts in those particular topics, and make a real contribution, and understand them well enough that they can figure out how they can be made safe for the world?
Christine Peterson: I would say yeah. In addition to computer security, I think we do need more people in aging research, and that's a very specific field. For those of you who enjoy biology or chemistry, that's a great pathway, and there is money available, so you can do good work and get paid. For example, my husband is a hydrologist. He works for the USGS, and he gets paid a decent amount, not a huge amount of money, but you can live a very nice life on the salary, and he also knows he is doing wonderful things in his job. He's got the best of both worlds. He's getting paid to do something he loves that actually helps the environment, and there are jobs like that out there. It's worth putting some effort into trying to find them.
Robert Wiblin: The Foresight Institute isn’t hiring. Are there any other organizations that people might not be aware of that you think they should think about applying to now, or preparing themselves to be able to work at in the future?
Christine Peterson: Whatever you are most inspired by. For example, I just heard that the Future of Life Institute is looking for an AI policy person. There’s something really fun to work on, so for someone who feels that they are qualified for that, that would be something to look into. But most of these groups are pretty small frankly, so targeting an individual group is challenging, although it can be done. For example, Allison Duettmann, who is co-leading the workshop here at EAG with me, works at Foresight now, and she made a decision when she was in school. She was at the London School of Economics. She surveyed all the nonprofits and decided Foresight was the right one for her, and it took a lot of persuading on her part to convince us that this was right. It turns out she was correct. She has been a fantastic addition to our staff. So for those of you who are super persistent and you know for sure that one particular organization is where you want to be, the way she did it was she started by volunteering.
You have to get to know the leadership, and volunteering is the fastest way to get them to say yes. Once they see what a great performer you are, yeah, maybe they’ll bring you on staff. That’s how it happened with Allison.
Robert Wiblin: One thing that a lot of people can find tricky early on in their career is finding mentors who can show them the ropes, and give them all of this inside knowledge that might not be written up anywhere. Do you have any advice on how people can find mentors who can support them through their career?
Christine Peterson: Yeah. I think the thing to realize is it’s kind of a two way street: you’re asking someone with more experience to help you, and ideally, if you can think of a way, there should be some way for you to help them back. Maybe they’re 100% altruistic, but even so there should be some way for you to help them. For example, perhaps they’re writing a book, or maybe they should write a book and they don’t have time. You could say, “Hey, I would love to help you write your book. Let’s do some audio recordings. I’ll get them transcribed, put it into an outline, and try to do a draft based on what you said.” In fact, that is a strategy that Allison used with one of our senior fellows, Mark Miller. He knows a great deal about computer security and how it might affect AI. This is something Allison wanted to learn, so she said, “Hey, I will help you write a paper on this,” and that’s how we did it.
Mark didn’t have time to write, but he did have time to give us a few hours of talking, so we interviewed him. We recorded it all, transcribed it, and then massaged it into a paper, which we will link to in the references on this show. Yes, it was a lot of work for Allison, but it’s a great way to, number one, learn the content, and number two, build a great relationship with your mentor, because now your mentor is super impressed and is going to go way out of his or her way to help you in your career.
Robert Wiblin: What conferences do you go to? If someone was trying to get you as a mentor, how could they possibly meet you?
Christine Peterson: Oh, I’m easy. I’m easy to meet. I invite people over for tea at my house all the time.
Robert Wiblin: Okay, but which conferences do you go to, if people were interested in the same kinds of things as you? There’s EA Global, where we’re at, of course. Are there any others that you regularly attend?
Christine Peterson: Oh, I go to a lot of conferences. Actually, in conflict with this meeting is the Startup Societies Summit, which looks at blockchain, seasteading, and special economic zones. I go to aging conferences. I go to futurism conferences in general. I go to a wide variety, and I find all of them are valuable, so it’s pretty easy. Actually, the easiest way to meet me at a conference is to get me invited as a speaker, because then I show up, give my talk, and talk to whoever wants to talk to me, so that works great.
Robert Wiblin: What’s been the biggest downside of the path that you’ve taken?
Christine Peterson: Let’s see. I can’t say there are big downsides, actually, because I realized pretty early on that it was necessary to have that balance. Like most people in their twenties, I was heading off into, “Well, I’ll just sacrifice my whole life for this. I won’t pay attention to income. I won’t pay attention to my health. I’ll burn the candle at both ends, work every hour that I possibly can.” But I realized this is not sustainable. It’s fine for a little while, but then it’s not sustainable, so I thought, all right, we have to come up with a model where I get a decent income, I can afford health insurance, I can pay my rent, and I can go on an occasional vacation. All of that has worked out. I don’t actually have any regrets at this point. I think it worked well, and the main thing I’d recommend to young folks on this pathway is: the sooner you realize about the balance thing, the better. And put some work into selecting your life partner.
Make sure that all the character traits you need are there, and that the person appreciates your EA efforts, either joins you in them or at least is appreciative of the fact that you do them. Make sure that you have a pattern that will last for decades.
Robert Wiblin: My guest today has been Christine Peterson. Thanks so much for coming on the 80,000 Hours podcast.
Christine Peterson: Great, Rob. This has been so fun.