Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts, or click one of the buttons below:

Nobody is in favor of the power going down. Nobody is in favor of all cell phones not working. But an election? There are sides. Half of the country will want the result to stand and half the country will want the result overturned; they’ll decide on their course of action based on the result, not based on what’s right.

Bruce Schneier

November 3, 2020, 10:32PM: CNN, NBC, and Fox report that Donald Trump has narrowly won Florida, and with it, re-election.

November 3, 2020, 11:46PM: The New York Times, Washington Post, and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don’t see how they can figure it out.

What on Earth happens next?

Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result.

Unfortunately, the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later.

The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.

Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn’t fair.

Schneier thinks there’s a need to agree on how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage.

And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.

He likes the system in Minnesota: you get a paper ballot with ovals you fill in, which are then fed into a computerised reader. The computer reads the ballot, and the paper falls into a locked box that’s available for recounts. That gives you the speed of electronic voting, with the security of a paper ballot.

On the back end, he wants risk-limiting audits that are automatically triggered based on the margin of victory: a large margin of victory needs only a small audit, while a small margin of victory needs a large audit.
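The inverse relationship between margin and audit size can be sketched with a toy calculation. The function and constant below are illustrative assumptions only, not a real risk-limiting audit procedure (real RLAs, such as ballot-polling BRAVO audits, use carefully derived statistics):

```python
import math

def initial_sample_size(margin: float, risk_limit: float = 0.05) -> int:
    """Rough illustration of how audit size scales with margin.

    margin: the reported winner's lead as a fraction of ballots cast
            (0.04 means a 4-percentage-point lead).
    risk_limit: the maximum acceptable chance that the audit fails to
                catch a wrong reported outcome.

    The 2*ln(1/risk_limit)/margin^2 shape is a simplified stand-in for
    the real statistics; it captures the key property that closer races
    demand dramatically larger hand counts.
    """
    if not 0.0 < margin < 1.0:
        raise ValueError("margin must be a fraction strictly between 0 and 1")
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

# A landslide needs only a small hand count of paper ballots...
print(initial_sample_size(0.20))
# ...while a razor-thin race needs a far larger one.
print(initial_sample_size(0.005))
```

With a wide 20-point margin the sketch asks for only around 150 ballots, while at a half-percent margin it asks for hundreds of thousands, which is why very close races end up near a full hand recount.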

Those two things would do an enormous amount to improve voting security, and we should move to that as soon as possible.

According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they’re designing, because they have a bureaucrat’s rather than a hacker’s mindset.

The ideal computer security expert walks into a shop and thinks, “You know, here’s how I would shoplift.” They automatically see where the cameras are, whether there are alarms, and where the security guards aren’t watching.

In this impassioned episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn’t have access to them.

We also cover:

  • How can we have surveillance of dangerous actors, without falling back into authoritarianism?
  • When, if ever, should information about weaknesses in society’s security be kept secret?
  • How secure are nuclear weapons systems around the world?
  • How worried should we be about deep-fakes?
  • The similarities between hacking computers and hacking our biology in the future
  • Schneier’s critiques of blockchain technology
  • How technologists could be vital in shaping policy
  • What are the most consequential computer security problems today?
  • Could a career in information security be very useful for reducing global catastrophic risks?
  • What are some of the most widely held but incorrect beliefs among computer security people?
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.


We need to convince organizations that they need technologists on their staff, and whether it’s organizations working on climate change, or social justice, or human rights, that technologists are critical to what they’re doing. All legislative staffs, investigative journalism… sort of again and again, these groups need technologists. What can we do?

I think we need to push the fact that technology is intimately entwined in public policy, inseparable, and that we need more people who understand this. Maybe the way to push it is by convincing policymakers that they need technologists. Convincing that senator who questioned Facebook: maybe if you had a tech person on your staff, you could clear your questions with them first. Maybe you who are working in some kind of immigration rights office… you need a technologist, because that’s how you’re going to understand what sorts of surveillance technology are being used against the people you’re trying to represent and trying to keep safe. So I think maybe that’s where we have to push.

Uncle Milton’s ant farm… it’s two pieces of plastic, and they have maybe a quarter inch of space between them, and you fill the space with some sand. Then you get ants and you put them in the ant farm and you watch them dig tunnels, right?

Super cool, if you’re like a 12 year old boy. Now, when you buy this at a toy store, it doesn’t come with ants, and you can do one of two things: you can get some ants out of your backyard, and they don’t have a queen, so they’re going to die soon, but you know, they’ll make tunnels. Or you can, at least back when I was a kid, there’d be a card in the ant farm where you could write your name and address and send it to the company, and they’d mail you a tube of ants. Now the normal person says, “Oh that’s kind of neat, I can get a tube of ants”. Someone who’s a hacker looks and says, “Wow, I can send a tube of ants to anybody I want”, and that’s how I characterize the mentality needed to be a computer security person, to be a hacker: the ability to look at a system and sort of naturally see how it might be misused.

The problem is not lack of ideas. I used to run a movie-plot threat contest on my blog every April 1st, and the idea was to come up with the “most scary impressive computer security everybody dies” disaster scenario, and I got email from people saying, “Don’t give the terrorists ideas!”. I was like, “Are you kidding? Ideas are the easiest thing in the world to get… Execution is hard.”

So no, I am not worried that lists of bad things that can happen will give ideas to people who want to do us harm. They’ve already got the ideas. We need the ideas out in the world so that we can think about them. Please do not promulgate that myth. I think that myth is harmful and dangerous: it keeps this stuff secret, and then we’re all worried to talk about it, and the people who have the ideas, which are the bad guys, are the ones who are going to do all the thinking.

…I think we can invent a scenario, again a great movie plot, where this idea is so obscure and weird that you wouldn’t want to make it public. In general, in my world, we call this “security by obscurity”, and we laugh at it. Right: you do not want something as fragile as the idea of the thing being what makes you secure. If that’s what makes you secure, you are not secure. Because I assure you, somebody in the lab across the street or across the world is almost as good as you and will come up with the idea, if not today, then in two weeks or in a month.

So you’re not going to get enough of a head start. In general, you make these things public so the good guys can think about them. We do not get security through secrecy. That is much too fragile. We get security through actual security.

Unlike other countries, the United States doesn’t have a federal bureaucracy for elections. The security we use is that we have people from each party sitting in the same room watching each other. When you go to vote, there are poll watchers from the Republicans and Democrats, and they’re sitting at the table, all there for what is, if you think about it, a very mid-1800s threat. Their great security is against the mid-1800s way that election stealing would happen.

We’re all going to watch each other, and if you do anything suspicious, I’m going to notice, and that’ll keep all of us honest. It doesn’t work against 21st-century threats. And we are hurt by the fact that we don’t have a federal bureaucracy in charge of accuracy in elections.


Rob’s intro [00:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

This conversation with world-famous computer security expert Bruce Schneier is one of my favourites, and I think will be of interest to almost all of you out there. Bruce is a bold and opinionated commentator online, and as I hoped he brought that spirit to our interview. I disagreed with a number of the things he said, but I think that just made the interview more informative.

Before that a few quick notices.

First of all, sorry we haven’t had more episodes for you the last few months. The good news is I’ve had a busy October, and we now have about 23 hours of recorded material working its way to you.

At the end of my interview with Bruce I give a few suggestions for how to improve your own computer security, so stick around for that.

Also it has been a while since I’ve suggested other podcasts you might like to subscribe to, so here’s two.

The first is Reply All, which brands itself as a show about the internet, but actually investigates a whole range of things. I don’t imagine it will help you have more social impact, but I find it very entertaining. If you want an episode to start with, try #104 – The Case of the Phantom Caller. The solution to the mystery in that one is one I never would have guessed.

The second is Probable Causation, a podcast about the economics of crime, hosted by criminologist Jennifer Doleac. If you love diving into the details of social science research, I don’t know any other show that works through papers in such a sophisticated manner.

Alright, back to the episode with Bruce Schneier.

I only got 2 hours with Bruce, so we’ve stuck on a quick 13-minute presentation he gave the same day, called ‘Why technologists need to get involved in public policy’, at Codex’s World’s Top 50 Innovators event.

In it, he explains what it is to be a so-called ‘public interest technologist’, and why it might be a way to have a big social impact. That’s then the first topic we cover in our interview.

It’s a good talk and pretty snappy, but of course you’re welcome to skip forward 13 minutes if you’d rather just hear me and Bruce together.

Without further ado, here’s Bruce Schneier!

Bruce’s Codex talk [00:02:23]

Watch the talk here.

The interview begins [00:15:42]

Robert Wiblin: Today, I’m speaking with Bruce Schneier. Bruce is a cryptographer, computer security professional, privacy specialist, and writer, based at the Berkman Center for Internet and Society at Harvard Law School. He’s one of the world’s most widely read commentators on security issues as a columnist at The Guardian and his blog, “Schneier on Security”. He’s the author of a lot of books on security and the impact of technology including “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World” and most recently, “Click Here to Kill Everybody: Security and Survival in a Hyper-connected World”.

Thanks for coming on the podcast, Bruce.

Bruce Schneier: Thank you for having me.

Robert Wiblin: So there are three key things I’m hoping to talk about today. Firstly, the most pressing issues in cryptography and information security, and how listeners might be able to use their careers to address them. Then, how a career in information security might possibly be very useful for reducing global catastrophic risks. And finally, how we might be able to use surveillance to make the world safer without increasing the risk of recreating the Stasi or a lot of authoritarian backsliding. But first, the question I always ask is: what are you working on at the moment, and why do you think it’s very important work?

What is Bruce working on at the moment? [00:16:35]

Bruce Schneier: These days I’m thinking a lot about technology and public policy: a lot of our tech issues are deeply embedded in society and people, a lot of our policies are deeply embedded in tech, and there’s too much talking past each other. There’ve been technologists on one side and policymakers or humanities people on the other, and we need better conversations.

We need more interdisciplinary people. I’ve been trying to think about how tech people can get more involved in policy and trying to make the world safer for that and that’s really what I’ve been thinking about now because it feels like this is important and we see it in flavors of things. We see it in Facebook and democracy.

We see it in driverless car policy. We see it in surveillance policy… Future of work. Sort of again and again, tech is reshaping society and society doesn’t understand tech and that’s just not going to end well.

Robert Wiblin: Yeah, so there seems to have been a kind of explosion of interest in, well, at least in policy and in technology. How do you think things are going? It seems to me like it’s been a bit of a train wreck, and policymakers mostly don’t know what they’re talking about.

Bruce Schneier: It’s kind of been a mess. In the United States, Mark Zuckerberg came to Congress, and at a hearing a senator asked him on national television, “How does Facebook make money?”. Now, on the one hand you can laugh at the sort of ridiculousness of asking that question, but not only did he ask it, he didn’t think he would be mocked for asking it.

I mean, that sort of shows a really deep not-getting-it about technology. Now, that’s an extreme case, but I think it’s largely been not good. And it’s certainly in the news, I think, because of the United States election, Brexit… all those examples of democracy going sideways because of social media, either organically, because of the way ideas spread, or from influence by foreign or domestic actors. That’s sort of brought these problems to the fore, but they feel much deeper and much more important. I’ve been thinking about this for a while, and it’s sort of nice that other people are too. Not convinced they’re any closer to solving it though.

How technologists could be vital in shaping policy [00:18:52]

Robert Wiblin: So your talk about public interest technology made me wonder: how large a role do you think there is for philanthropists in creating demand for people to work on these problems? Because it sounded like there were already more people looking for work in this area than there were positions available, though that might well shift in future.

And listeners aren’t only looking for places to work, many are people who have money that they’re looking to give away in some high impact way. Maybe they should be giving to organizations that can hire people to do this kind of public interest technology advocacy?

Bruce Schneier: And I think that’s a hundred percent right, and the story I gave about the Ford Foundation and public interest law is really relevant here. Organizations like Ford, MacArthur, and Hewlett are funding public interest tech.

There’s the Public Interest Tech University Network, which right now is 21 universities, almost all in the US, that are going to have public interest tech programs, and the Ford Foundation is funding this. So it could be staff positions at organizations that you already support. It could be staff positions at new organizations; it could be courses and programs at universities and law clinics.

It could be more positions at your favorite public interest tech organization that already exists. It could be funding tech-driven journalism. All these things are being funded right now, just not enough and not to the degree we need. So if you’re interested in the space at all, there’s a really good report by Freedman Consulting written for Ford and MacArthur.

It’s called “A Pivotal Moment.” You can Google for that, but it’ll probably be attached to this podcast and that lists something like 40 different places of intervention where we can effectively deploy resources to move this field along and anyone can feel free to contact me if they are looking for places to deploy financial resources, because I have a list of things I think should be funded.

Robert Wiblin: Yeah, is the Electronic Frontier Foundation on the list?

Bruce Schneier: I mean, certainly. I think they are one of the most effective public interest tech organizations in the world right now. But there are lots of others, you know, and I think we really very much need a broad array of organizations, and not just one.

I should say that I’m on the board of the EFF. So, you know, I’m happy for people to support the EFF, but we can’t support one to the exclusion of all the others. It’s really important that lots of organizations engage in this space. Because really, if you don’t understand algorithmic discrimination, you don’t understand 21st century discrimination.

So if you’re working in anti-discrimination, you better have a technologist, because that’s what it’s going to look like in this century.

Robert Wiblin: Are there any groups that you think aren’t being recognized for the great work that they’re doing in this area?

Bruce Schneier: There probably are. None come to mind right now. I maintain a website, “public-interest-tech.com”, with hyphens between the words, and I list all of the public interest tech organizations and university programs and everything I could think of in this space. People writing about it, speaking about it, doing government programs. So, you know, you can look there to see sort of what’s going on. And if any listener has something in mind that’s not on that document, please email me and I will add it. It’s really meant to be a living document in this space.

Robert Wiblin: Fantastic, yeah, we’ll stick up a link to all those things. What are the downsides of going into public interest technology? Because it sounds like one of them is just that you kind of have to beat your own path a little bit, because there’s not a clear track where, you know, thousands of people have gone down this already and shown you the way.

Bruce Schneier: I mean, right now it’s not a viable career path. There are actually lots of people doing this right now, and I don’t want to minimize their efforts, but you know, they’re all unique. I mean, they’re all remarkable in that they have found their own path. And right now there is less money. I mean, just like doing any kind of public interest work in any field, you’re just not going to make the same amount of money you would in the corporate world.

So that’s a downside. Those are probably the two: it’s not a well-worn career path, and it’s less lucrative than what you would make otherwise.

Robert Wiblin: What are some benefits of it you think people don’t appreciate? I mean, it seems like you might be able to get a lot of attention, because these issues are so hot right now and there are not so many people doing advocacy.

Bruce Schneier: Yeah, I suppose that fame is possible. But you know, I really want this to be something that becomes so pedestrian that doing it doesn’t make you famous. And again, the public interest law analogy is great. Yes, there are some famous public interest lawyers, but the majority of them are defending a particular person who’s facing eviction, a particular person who’s facing discrimination or being kicked out of the country, and this kind of in-the-trenches work is public interest law.

It’s not the stuff that gets you in the newspaper, but the stuff that makes a difference. You know, we don’t have that in tech. But think about who’s doing IT security for Human Rights Watch, or Greenpeace, or Amnesty International. Those are public interest technologists doing a very important job, and they’re not getting much credit for it. And you know, you have to know that doing it is enough.

Most consequential computer security problems today [00:24:12]

Robert Wiblin: Yeah. All right. So let’s jump forward, straight into your views on what the most consequential computer security problems in the world today are, and what might be done about them. So at 80,000 Hours, we try to quantify the scale of problems in terms of how many people are affected by a problem and how severely they’re affected by it. And we also have this particular focus on catastrophic risks: risks that are sufficiently big that it wouldn’t just be easy to pick up and fix them, muddle through, and come out the other side alright. Problems that are so severe, like a global power war-

Bruce Schneier: That feels like flawed math. Your number of people times severity: what about likelihood?

Robert Wiblin: Oh yeah, absolutely. So likelihood matters as well completely.

Bruce Schneier: Okay good, because otherwise it’s a zombie apocalypse and we’re done!

Robert Wiblin: Yeah. Sorry, yeah, you’re right. So it’s expected value: likelihood, times the number of people, times how much it affects them. And I guess the reason to focus on the catastrophic risks, the ones you can’t just recover from, fix things up, patch, and move on, is that they can affect future generations as well as the current generation. Are you with me? Okay, great. So yeah, in light of that, what threats do you think are particularly serious or perhaps underestimated these days?

Bruce Schneier: Yeah, I think less about the far future because in cybersecurity the near future is kind of bad enough.

I don’t need no killer robots to make things scary. I think regular cars that connect to the internet are kind of scary enough. So a lot of the time we focus on the near-term risks, because those are the problems we have, knowing that any solutions, ideas, and technologies we develop will also help with the catastrophic risks. In computer security, the risks tend to be the way computers fail. They fail differently compared to normal things.

So we’re moving into a world where everything’s becoming a computer. Your refrigerator is a computer that keeps things cold. Your microwave oven’s a computer that makes things hot. And your phone is actually a computer that happens to make phone calls. And as things become computers, the way computers fail becomes everything. So, normal things fail kind of randomly.

Think of cars. Cars have parts. Parts have mean time between failures and they fail regularly and in everybody’s city and town there is this system of car repair shops that handle the steady state of cars that break. Computers don’t fail that way. Computers all work perfectly until one day when none of them do.

And that kind of catastrophic break, where suddenly all of Microsoft Windows has a new vulnerability, or all of TCP/IP and the internet, is a unique way of failing. So, you know, we’re having this interview in a hotel. The hotel room has a keyless entry system. For the listeners, if you’ve gone to a hotel you’ve probably seen this: it’s a card, you wave it in front of a reader, and the door opens… It’s a wireless key entry system.

That is… Several companies make that. One of them is called Onity, a company that makes these keyless locks for hotels. A few years ago, someone found a vulnerability in Onity door locks, rendering every single hotel room with this lock, around the world, insecure. Now this hotel, any hotel, has a mechanism to deal with broken locks.

They probably have a locksmith on call, and every once in a while, maybe once or twice a month, a lock is broken and they call the locksmith to fix it. The hotels don’t have any way to deal with “every one of our locks is broken”. And the way you fixed it (because, no, it wasn’t a very well-designed thing) is that you had to go to every lock manually and update the firmware.

It wasn’t easy. It wasn’t like getting an update on your phone. So here we are, a few years later, and a lot of those hotel room doors are still insecure, because it was just too hard to fix. So that is a real risk I worry about in computer security: as computers become everything, computer failure modes go to places where they’re not expected. So we’re actually worried about crashing all the cars. Or, more realistically, all the cars of one make and model year. But that’s still a lot of vehicles all at once, in a way which isn’t possible with non-computer systems.

Robert Wiblin: Yeah, so in the past you had a situation where maybe one percent of cars would break down every month, but I guess here you’re worried that because they’re all so similar, it becomes almost like a disease that could infect billions of people. You can have one failure that infects all of the computers in the world, or a large fraction of them, and that creates a much worse situation that’s a lot harder to recover from.

Bruce Schneier: And the public health model is actually a really good one. I mean, we talk about computer viruses, and we talk about, you know, getting inoculated against them. We use public health language in computer security all the time, because they do look like that on occasion. And we really do worry about ransomware against cars, and DDoS attacks against your refrigerator, and your thermostat sending spam. These are actually things we’re concerned about, because these things are all now computers.

Robert Wiblin: Yeah, so I was trying to anticipate what answers you might give to this, and I was kind of expecting this one. But how bad do you think it can realistically get? Is there any scenario in which most of the systems that most of the people in the world are using all go down in reasonably quick succession?

Is that possible in some kind of cyber war that gets a bit out of hand, or with a particularly competent terrorist group? Is there any scenario where that’s possible to envisage?

Bruce Schneier: Yeah, I’m not going to probably give an answer you’ll like because of course it’s possible. Anything’s possible. A bunch of years ago I coined the term “Movie plot threats”. And this is something done a lot in cyber security, “What’s the worst that can happen? Give me the scenario. Is this possible?”. Of course, it’s possible. But if we focus on the worst case, the movie plots, I think we ignore the more common. We ignore what’s likely to happen.

You know, my field isn’t AI threats that are all far in the future, which we can sort of sit here calmly and discuss. I’ve got Onity locks that are broken today. They’ve been broken for two years, and I’m worried about ransomware against cars now. I don’t need to think about what’s the worst that can happen, how bad it can get, what the possibilities are.

It’s near term. So I tend to duck those “what’s the worst that can happen” hypotheticals. Yes, it’s all possible. But I’m not worried about that. I’m worried about what is happening now. That’s kind of bad enough.

Robert Wiblin: Can you imagine worrying about it a little bit more? Because as it is, with the hotel doors that don’t work, as long as it’s just one set of doors at one point in time, we can just go around and fix that up, or kind of muddle through a bit, even though it’s a very serious design problem. But if you have everything go down simultaneously, or too many things go down simultaneously, then you potentially have a more cascading problem.

So it’s not only that the doors are broken, but the whole system that you used to send someone out to fix the doors is also broken, and the electricity grid is down. That kind of thing.

Bruce Schneier: That’s bad. But honestly, that’s beyond my ability to fix. I mean certainly, as everything becomes critically attached to the internet, the internet becomes more critical and you know, that’s kind of obvious.

But as soon as those things start happening, it’s no longer a computer security problem. It’s a way bigger problem. But I think about it as these things become more functional: think of autonomy, automation, physical agency. As these things stop being computers and phones and start being cars and medical devices and, as you mentioned, the power grid, things that affect life and property if they go down, if they get hacked, if they get taken over, certainly the risks are greater. I did title my latest book “Click Here To Kill Everybody” to really bring that home. So what’s changing, and it’s really interesting, is not the computers: the risks are exactly the same. What’s changing is what the computers are attached to and what they can do.

So on the one hand, your spreadsheet crashes, you lose your data. On the other hand, your embedded pacemaker crashes and you lose your life. But it could be the exact same CPU and operating system and application software and vulnerability and attack tool. It’s the same thing but because of where the computer is and what it’s doing, the risks are much greater.

So that’s what’s changing. Computers are staying the same, but they’re moving into areas where they touch life and property in a very real way.

Robert Wiblin: Given how much computers are getting embedded in everything, it’s kind of surprising that more damage hasn’t been done, because it seems like they’re very easy to hack and fail all the time, but it’s kind of hard to find that many cases where people have died because of this.

Bruce Schneier: And this is interesting, and it tells us something. Just like terrorism, right? I mean, we know our security system is so bad, yet it hardly ever happens.

It kind of shows that, while in theory this is all possible, in practice it’s kind of harder. You need the right skill sets. You need the ability, you need the willingness. And those people are rarer than we would expect if we just focus on the danger. But yeah, we’ve managed pretty well. We haven’t seen murder through automobile hacking, even though you can go on YouTube and watch videos of security researchers take over cars remotely and disable the brakes and sort of do all the things you would expect in the movies.

Robert Wiblin: Yeah. What do you think’s the limiting factor? Is it kind of the interest to do it? Just most people don’t care?

Bruce Schneier: You know, I think it’s a lot of things. I think it’s the interest to do it. It’s the fact that murder is actually illegal, and you know, someone’s not going to wake up and say, “So, you know, I really think this is interesting, let’s just kill a few people”. I mean, that doesn’t happen. These are human beings, and we actually act morally most of the time. I mean, we do have to worry about the few among us who are going to do mischief. That’s the whole point of security, right? It’s a tax on the honest to protect us from the dishonest.

But by and large, you know, things work out pretty well most of the time. It is surprising, but it’s true. Now, you see this in personal computer hacking. It’s not the catastrophes that we’re worried about, it’s conventional cybercrime: stealing money, stealing credit card information, identity theft, ransomware, and all those things that are very pedestrian and not catastrophic.

How secure are nuclear weapons systems around the world? [00:34:41]

Robert Wiblin: Yeah, in terms of the catastrophic stuff: do you worry about the security of nuclear systems, so nuclear weapons systems around the world?

Bruce Schneier: Somewhat. Those tend to be really old, really archaic, and really offline. I mean, there isn’t a web page through which you launch nuclear weapons in any country, and that’s actually a good thing; we like that. So a little bit, but not really.

Nation states are probably doing their level best to get inside each other’s weapons systems and to make sure they either don’t fire, or misfire, or point in the wrong direction. And again, we could imagine all the movie plots, but individuals, I think, largely stay away from that because they don’t even know how those systems work.

We have some examples of attacks against the power grid. Russia attacked Ukraine twice, and really did turn off power, and we know that countries have been in the US power grid. I’m sure the US has been in everybody else’s power grid, right? Our only hope is that if really bad things happen, nobody’s power will work. There are also worries that there’s been some non-nation-state hacking of our electoral infrastructure. It’s few and far between, but it does happen. These are things to worry about, I’m not saying don’t worry about them, but in terms of what keeps me up at night…

It’s not these risks. It really is that there’ll be 20% more financial fraud and that will be too much for the system to handle. So it’s more differences in degree than differences in kind, although the differences in kind matter. But the neat thing about computers is that once you fix something somewhere, the fix applies everywhere. You improve Microsoft Windows and you’re improving things in thousands of different applications, from medicine on down to toys. So, you know, you get good bang for your buck.

Robert Wiblin: Yeah. It’s an interesting way of looking at it. So if you make any contribution to something that’s used in, I guess billions of devices now, then potentially the scale of the impact is very large. You’re just going to prevent potentially a lot of crime.

Bruce Schneier: And nation-state attacks, everything. You know, Apple produces a new iPhone and it’s more secure, and not only are we all more secure from fraud and identity theft, but all our heads of state are more secure because they use iPhones too, as do our nuclear power plant operators, as do our law enforcement officers, as does everybody else… It’s infrastructure. So fixing infrastructure has enormously broad consequences throughout all of our use cases.

Robert Wiblin: Yeah, so sticking with the movie plots for a second, because some people I know will worry that if one country gets a reputation for being especially good at hacking other countries’ nuclear systems, it could destabilize nuclear deterrence.

Is that something that you’ve thought about seriously at all or that you know anyone who’s analyzed?

Bruce Schneier: So not about nuclear specifically, but yes. Cyber weapons and cyber capabilities are sort of necessarily secret. Think about, I don’t know, Iraq and weapons of mass destruction and chemical weapons.

There was a lot of searching for chemical weapons plants because they’re big and hard to hide, but cyber weapons could be a couple of guys in a basement somewhere. You can hide them easily. So not knowing each other’s capabilities is destabilizing. I don’t think of it in terms of nuclear weapons; I don’t need that to make it destabilizing.

It’s just the capability to hack infrastructure. We’ve been talking about the power grid, but we can talk about communications infrastructure, financial networks. A lot of very critical things need the internet to work. So the ability to affect them in another country is destabilizing, because we don’t know what they can do and they don’t know what we can do, and you know, fear and ignorance are how arms races are fueled.

So now we have an arms race in cyber capabilities because everyone’s scared of everybody else, and that is actually a very real risk that has been studied extensively. It is destabilizing simply because capabilities are so hard to verify, unlike nuclear weapons, unlike even chemical weapons.

Robert Wiblin: Do you worry that provocations in cyberspace, or what one country might think is just a warning shot to another country, could accidentally escalate into a more serious conflict? Or even that non-state actors could pretend to be state actors in order to try to engender a conflict between two countries?

Bruce Schneier: There’s a lot there, and again you have to step back from the movie plots and look at what’s happening today. In computer security, expertise flows downhill. So today something is a top secret NSA program, tomorrow it’s a PhD thesis, the next day it’s a hacker tool. That’s unlike nuclear weapons, which are only owned by governments.

Non-state actors are a really big part of the computer security ecosystem, and non-state actors with capabilities rivaling state actors is something very worrisome. Take the attack on Sony by North Korea; I think it was 2014.

In the first couple of months, there was a legitimate debate in my community about whether that attack was launched by a nuclear power with a twenty billion dollar military budget or a couple of guys in a basement somewhere. In a sense, that’s extraordinary. It shouldn’t be possible to be unable to tell the difference, and in computer security it is. It’s very easy to look at an attack and say, “Well, it could be criminals or it could be the Chinese government, we’re not sure”. In fact, we might know it comes from China, but we don’t know if it’s state-sponsored, state-sanctioned, or just happens to happen in the same geography as the government.

And again, this is very different from the real world. In the real world, the weaponry in a lot of ways determines who the actor is. If I look outside and I see a tank, I know the government is involved, because only governments have tanks. But in computer security, everyone uses the same tactics, and the same techniques, and the same tools. So that is destabilizing.

The other thing you asked is whether a cyber attack could be viewed more seriously by one side than the other. That’s kind of interesting to watch.

It seems like the reverse happens. Sony is an interesting example: it really was an attack against the United States by a foreign power, and we largely said, “Oh okay”; we didn’t do much about it. Russia attacking the US election was an attack by a foreign power, and Obama did very little in retaliation.

Russia attacks Ukraine, Iran attacks Saudi Arabia. These have all happened, and they’ve all happened without a lot of retaliation. I think countries are erring almost in the other direction in not responding. Why that is, is something more in international relations than cybersecurity; it’s out of my area of expertise, really: how nations view each other.

Although we do know that an act of war is not a dictionary definition; an act of war is a thing someone did that you want to use as an excuse to declare war on them. So it becomes an act of war. Now, there are a lot of rules for kind of what is and isn’t in that bucket, but most countries are very flexible in what they view as acts of war. Just last week, France came out with a document on how they view cyber attack and cyber war, and it was very restrained. Now, we do worry about attacks spilling over. We saw that when Russia attacked Ukraine with NotPetya, it affected the rest of the world. When the US and Israel attacked Iran with Stuxnet, it had lesser effects, but it did have effects around the world.

These spillover effects are common, but again, countries are maintaining more restraint rather than less, right? Now, what happens in the future? We don’t know; it’s all rapidly evolving.

Stuxnet and NotPetya [00:42:29]

Robert Wiblin: Listeners, if you’ve not read about Stuxnet, I’d definitely look that one up. It’s like one of the great wonders of the world that Stuxnet actually worked. And it’s getting pretty close to a movie plot, that one.

Bruce Schneier: Stuxnet is, and there might be a movie in the works for all I know. This was a very well-designed cyber weapon that the US and Israel aimed at Iran’s uranium enrichment facility, and it did a lot. It actually did delay the Iranian nuclear program, and we know that Obama launched cyber attacks against North Korea that delayed their nuclear program.

So well targeted, these attacks are effective. Poorly targeted, they’re less so. The Russian attacks on Ukraine were very hit and miss, and a lot of people, I think, successfully argue that they didn’t really change anything. But certainly in the event of hostilities, you can easily imagine that part of what a country would do to another country would be to disrupt their internal government operations and their power plants and their financial networks. What we have going for us is that a lot of these systems are very international: it’s hard to affect one country and not affect the rest of the world. So I think that’ll both encourage restraint by the countries that use this, and mean there’s more worldwide condemnation if these cyber weapons are used, because of the bad targeting.

Robert Wiblin: Do you think that’s what happened with NotPetya, which was aimed at Ukraine but ended up causing tens of billions of dollars of damage to other countries?

Bruce Schneier: NotPetya was sloppy. It really wasn’t designed to only hit Ukraine. Look at Stuxnet: it was really designed to stay in Iran, and in just one facility, and it seems like it was kind of by accident that it got out. Stuxnet had a self-destruct timer. If you think about it, programmers do not put in self-destruct timers; lawyers do, right?

That was something that was well considered inside the policy discussion in the United States, and it was designed with input from international attorneys who work on the laws of war, and that is kind of amazing.

Messing with democracy [00:44:44]

Robert Wiblin: Yeah, moving back to the Russia thing: it seems like potentially one of the most consequential effects of bad computer security is that it’s making it a lot easier for countries to mess with other countries’ democracies and internal conversations.

Do you think… Do you agree with that at least?

Bruce Schneier: So only part of that. It’s not bad computer security; it’s the way our systems work today. When Cambridge Analytica and Russia messed with the US election, they didn’t break any computer security. They were using Facebook as designed. And if, I don’t know, Kellogg’s did the exact same things, they would’ve gotten an award for a savvy marketing campaign.

It wasn’t a misuse of the system; it was a use of the system, and I think that’s what we’re coming to face. These platforms, as designed, have these side effects. And yeah, as much as I worry about misuse and illegal uses of systems, I worry a lot more about the legal uses.

Yes, I worry about criminals spying on me. I worry a lot more about corporations and governments using the systems as designed for perfectly legal spying and I think we all should. We’re really not thinking about the effects of these systems as we build them.

Robert Wiblin: Yeah, so I guess Cambridge Analytica seems a little bit overblown to me. I’m not sure whether you agree with this?

Bruce Schneier: I think the details of what they did, and their idea that they can map people’s personalities, seem like a whole lot of marketing hype. But certainly there are general ways of using networks to deliver propaganda (also known as campaigning) to people: to move people to more extreme positions, to galvanize people into voting.

Those are, I think, well documented and we understand them pretty well, but they are how those systems are designed, and rethinking that is not going to be easy. Because in a lot of ways those attackers are using democracy’s openness against us, and solutions that require us to back off on democracy seem like a mistake in the long run.

Robert Wiblin: Yeah, definitely. So setting aside the Cambridge Analytica, it did seem to me like the hacking of the emails by Russia potentially did swing the US election in 2016. And you can imagine this becoming more common or there being like more active disinformation campaigns. I mean, it seems like against Ukraine and against some other countries, Russia has been very effective at encouraging particular conversations that maybe a country would rather not focus on and they’ve constantly got bots on both sides of all the hot button issues in the US and other countries trying to just like engender as much internal conflict as is possible.

Do you think this could get significantly worse, or maybe have we turned a corner here and we’re figuring out how to deal with it?

Bruce Schneier: It’s hard to know. There’s a lot of pieces of that, certainly the hacking of private information and releasing it is an effective tactic. I think Russia deployed it effectively against the United States, less effectively against France in the election a couple of years ago. They had data from Macron’s campaign and released that.

Robert Wiblin: There wasn’t anything that bad in there, right? Although they were kind of stretching to-

Bruce Schneier: There wasn’t anything bad in the United States data either. In all cases, the press really was complicit in reporting on the information without really reporting on where it came from. I think we have to get better at that. Now, it’s hard to know what tactics will work in the next election. Companies like Facebook and Twitter are much better at pulling down fraudulent accounts, and they pull them down for two reasons. One is accounts that are fraudulently sourced: on Facebook, you have to say who you are, so if you claim you’re, I don’t know, a white American from Missouri and you’re actually someone at a troll farm in Vladivostok, and Facebook notices that, they’re going to pull your account down. The other is inauthentic behaviour: coordinated actions that they recognize as propaganda.

They will pull accounts down for that, and we’ve seen them pull accounts down most recently from China engaging in Hong Kong propaganda. But also, a few months ago, they pulled down a lot of accounts run by an Israeli company that was offering these services for hire to countries that really couldn’t afford to build their own troll farms, and they’ve pulled down Saudi propaganda against Oman.

Russian propaganda against some other countries, propaganda from Hungary, from Venezuela. What we’ve learned is that on this planet, right now, if you are the victim of this sort of information operation and propaganda campaign, it is most likely your own government doing it. It is more likely than a foreign government.

A lot of countries are now using these techniques against their own citizens. So we’ve gotten better at detecting these across the platforms, but tactics have changed. When you look at some of the operations being pulled down today, they are subtler, and they go across different systems, not just Facebook.

They might move from Facebook to Twitter, to blogs, to Instagram, then back to Twitter, in ways that make them harder to detect, because the companies can’t coordinate in the way that the attackers can. But I think we’re going to be better at this this time around than we were last time.

On the other hand people worry about deepfakes. Now, I’m less worried about deepfakes.

How worried should we be about deepfakes? [00:50:02]

Robert Wiblin: I’m not so worried either. People are going to learn that one pretty fast.

Bruce Schneier: I think as soon as deepfakes can be done from your smartphone, and every high schooler makes them as a joke all the time, we’ll quickly get inured to their effects. But don’t discount the generational issues. One of the things we learned about 2016 in the United States is that the people most affected by fake news were the baby boomers. Here are people who were brought up with the idea that news meant Walter Cronkite, meant accurate, and they moved into a world where news meant anybody with a website, running from wildly inaccurate all the way to accurate, and you have to be able to figure it out. But they never got that ability to figure it out.

Whereas younger people who were brought up in this world of ‘everybody’s a journalist’ were much more savvy. So we can say, you know, the high school kids will know what’s going on with deepfakes, but will their grandparents? And I think that’s the worry: the generational effects. But overall, I am more optimistic.

I think there’ll be a lot more going on, but there’ll also be a lot more active defenses. So in 2018, and it’s not widely reported, one of the reasons there was not a big propaganda campaign is that the United States actually went into the Russian troll farms and shut them down, in the weeks before and the weeks after the election.

Robert Wiblin: So they hacked them back.

Bruce Schneier: Hacked them back, right. So they weren’t actually able to do what they wanted, and the US coordinated with the FBI, who coordinated with Facebook and Twitter and others to take down all the accounts they noticed, probably by going into the attackers’ networks and getting a list.

Now, this is not discussed a lot. So we’re doing a lot more of what we call “active defense”, which equals attack, and that turns out to be powerful. So we will see; it’ll be an interesting campaign. One of the problems we have here is that pretty much everything the Russians did in 2016, if it was done by an American, would be perfectly legal.

Fake news tends not to be lies; it tends to be stuff that isn’t actually news. It’s opinion, it’s memes, it’s ways to say ‘our side good, their side bad’, and it isn’t illegal. And Facebook struggles with this. Facebook struggles with the fact that their terms of service are sort of regularly violated by the US president.

Yeah, and how do you make that work? When campaigning looks more and more like a propaganda campaign, it gets hard to delete propaganda without also deleting legitimate campaign messages. So now you’ve got a real problem.

Robert Wiblin: So it sounds like you think the glass is half full on the political side: that we’re likely to wise up and will probably respond.

I do have to wonder about the people in the troll farm in Saint Petersburg when their computer systems are getting shut down. There’s something kind of beautiful about the whole game that they’re playing here. Even though what Russia is doing is terrible, there’s something beautiful about the fact that they can have such a large influence with such a small budget and such a small number of people. It’s such a clever scheme.

It feels like a heist movie kind of thing. And then, when they get shut down in return, I wonder whether they’re like, “Yeah, fair cop, I guess we deserve that”, or whether they’re all just playing a cat and mouse game that they kind of enjoy on some level.

Bruce Schneier: You know, it is an arms race, but that’s what we like about the internet. We want the dissidents. We want the marginalized. We want the disempowered to get a voice and that’s what we like about it. And here it is the same things being used against us. So we don’t want to break what the internet has given us. And this is going to be an arms race. This is not static.

I’m cautiously optimistic about the next US election, but I think we’re going to see tactics that we haven’t seen before.

Robert Wiblin: I wonder whether it could be useful for someone to create a really outrageous deepfake that would be of great interest to 70- or 80-year-olds. Get someone the older generation are particularly keen on, and have them say something particularly strange that would attract their attention, so that they can learn that deepfakes exist now.

Bruce Schneier: My guess is that that’s going to happen. You’re going to see all these deepfakes, and they’ll be very, very good and obviously wrong. It’ll be like, “Your cousin speaking Polish fluently, wow, isn’t that amazing? Wait, what happened?” And that kind of thing, I think, will inoculate people.

One of the things we know about lies is that even if we know they’re not true, they affect us. There are interesting psychological studies: if we hear a lie enough times, even if we’re told it’s a lie, the residual ideas are in our head and still affect us. So, you know, this is going to be complicated. It’ll be interesting to watch. I just wish it wasn’t so important.

Robert Wiblin: I guess democracy has been a messy process in the past as well, so it’s possible this isn’t that much of a departure. I mean, people have run disinformation campaigns before, there’s been lots of nonsense… People would have incorrect beliefs all the time.

It’s possible that although this will be bad, it will be bad to the same extent that like democracy in the 50s was also extremely problematic.

Bruce Schneier: A lot of people say that, and again, this gets far beyond my expertise in computer security, but I think that is the optimistic way to look at it. This has always been a sloppy process.

The similarities between hacking computers and potentially hacking biology in the future [00:55:08]

Robert Wiblin: Yeah, so recently on your blog you drew an analogy between hacking computers and potentially hacking biology in the future. Do you want to explain what you meant by that, and what damage could be done if people can, in a sense, hack biological systems?

Bruce Schneier: So computers are extraordinarily complex. They’re the most complex machines mankind has ever built, and we don’t really have good engineering for how they work (I’m going to be sloppy here). We tend to use a lot of trial and error. If you write software, you know this: you write a program, you run it, it doesn’t work, you figure out why, you fix it, you run it, it doesn’t work. You do that a whole lot of times and eventually it works. With a massive program, you might do that thousands of times, then you send it out into the world and it breaks, and you patch it, and you patch it throughout its lifetime, and software kind of just barely works. The way we do it is trial and error. Now, that’s fine.

If you think about that process, the reason you can do that is that the cost of failure is zero. You run your program, it doesn’t work, and there’s no ill effect; you can just run it again. That’s not the way we design buildings. Imagine someone said: okay, we’re going to build this building, and if it collapses we’ll just figure out why and build it again. If it collapses again, we’ll do it again. We’ll do that several hundred times and eventually we’ll get a building that stays up. That would be dumb. We don’t even do that with aircraft, right? Computers have this sort of unique property that failures are free.

And the cost of making changes is free. With a building, you design it, you stare at the design, you over-engineer it, you have other people look at it, because it has to be perfect the first time. There’s no second chance. Now let’s move to synthetic biology. Synthetic biology has, in the past, looked more like building construction.

It’s hard to do, and the costs of failure are enormous. You do very small things and you pay very careful attention to the results. But we’re moving into a world where biology will look a lot more like software, with things like CRISPR that can edit genomes, and the follow-on technologies that are being thought of. You’re going to be able to program biological entities.

You’ll be able to write genetic code like you can write software. But you don’t have risk-free failure anymore. You get the genetic code wrong and you’ve created a superbug, or you’ve turned off somebody’s spleen; you’ve done something really crazy and bad. So I started to think about this risk: that we are going to bring a software mentality to this wetware-software thing where the failure modes are very different, and I’m not sure we’re thinking about them as carefully as we should.

That was the point of that essay. It’s more speculative than I usually get, so it was kind of fun to write. I wrote it with a biologist who actually knows stuff about synthetic biology, so that helped, and we just wanted to say, “Hey, you know, this is a risk worth thinking about”.

Robert Wiblin: Yeah, we’ll stick up a link to that of course.

Is this just a different framing for the concerns that people have had for a while about synthetic biology: that it’s giving us the power to create viruses or diseases that will go into the human body, or into other organisms, and make changes that could destroy them?

Bruce Schneier: It is and it isn’t. The worry there is that someone with evil intent could do that, and I’m not even approaching that risk. That risk is certainly there, and I think we’re going to have to worry about it; I don’t see any way to get around it. This is a separate risk: people with good intent just getting their first version of the attempt wrong.

Robert Wiblin: Throwing things at the wall, just trying to play around with it, and then accidentally creating seriously problematic software, basically.

Bruce Schneier: Right, because the software is biological. It’s not limited to your computer.

Robert Wiblin: Yeah, are there any other lessons we can take away from that analogy? I suppose computer security has generally been bad and is an unsolved problem and so we maybe would expect biological security to be the same.

Bruce Schneier: I mean, some of it is complexity. I often say that complexity is the worst enemy of security. I just said a few minutes ago that computers are the most complex machines mankind has ever built.

It’s actually not true: it’s the internet, which is sort of all the computers attached together, that is the most complex machine mankind has ever built, and securing it is very, very difficult because we don’t really understand how it works. When you start moving into biology, biology is at that level of complexity: the number of genes in a genome, the interactions, the way things work, the interactions between species in huge biological systems. Little changes can have huge effects.

So yeah, I think we have to really think in terms of complexity and our ability to affect biology will, you know, probably in our lifetime, approach our ability to affect software.

Bruce’s critiques of crypto [01:00:05]

Robert Wiblin: So this may be a little bit random, but you’ve been a bit of a critic of cryptocurrency and blockchain technology, and you’ve suggested in some articles that it’s pretty close to useless, or much more useless than most people think.

And I read a long quote from your article in my interview with Vitalik Buterin to see how he’d respond to it. One of the things he said was that he thinks many people who have been critical, actually including me to be honest, maybe haven’t been paying attention to the advances there have been in terms of sharding, which might allow you to scale up and do a lot more transactions, or proof of stake, which will mean that you don’t have to use up obscene amounts of electricity just to keep the thing running.

And that the critics are right now, but they might be wrong in five years’ time. Do you have any thoughts on that?

Bruce Schneier: So those things he said really don’t have anything to do with my criticism, right? Yes, right now Bitcoin is the most inefficient consensus algorithm ever invented, and it’s a disaster environmentally, and the math makes no sense and the scale makes no sense.

And yes, of course, that’s all fixable; that’s all tech. The problem with blockchain, and sort of any of that, is the notion that you can replace governance with math, which is just plain dumb. And you know this is true because even the systems that tried to replace governance with math have to resort to governance when the math turns out not to work, right?

Remember, computer security is hard. Getting this right is impossible. We don’t know how to build secure systems. We don’t know how to build perfect systems. So you’re always going to need governance. And once you need governance, you might as well admit that you need governance and build a system that uses governance, instead of pretending that the math will just make it work.

So, smart contracts, right? You can just have contracts without attorneys? Of course you can’t. Contracts involve human beings. Human beings will have disagreements, and they’ll have disagreements about what the math says, and nobody’s going to enter a contract that says, “Okay, here’s the math, but if there’s a typo in this, you’ve just lost your entire life savings. Or, if there’s no typo, and we learn about a new type of mistake in the next five years, you’ve lost your life savings. Or if you forget this really important number I’m going to write down on a piece of paper in front of you, you’ve lost your life savings”. This is ridiculous. And there’s no actual benefit to it. We can build systems that do things fast.

I don’t need cryptocurrency for very fast, very cheap exchange of value. Credit cards seem to be doing that just fine, and they can do micropayments. So my complaint with blockchain is that it provides nothing of value that I can’t get much more easily, and it gives me an enormous number of risks that I don’t actually want.

What are some of the most kind of widely-held but incorrect beliefs among computer security people? [01:03:04]

Robert Wiblin: Yeah. I wish we had time to dive into that, but I think unfortunately we’ll have to move on. What do you think are some of the most kind of widely-held but incorrect beliefs among computer security people?

Bruce Schneier: Interesting question. I think computer security people largely have it right. You know, I tend not to worry about misguided beliefs among computer security people.

Robert Wiblin: Maybe the broader community then, including amateurs or people who take a general interest, what do they misunderstand?

Bruce Schneier: I think there’s a lot of misunderstanding about how easy it is. You see this in cryptography: amateurs say, “Look, I’ve invented this secure cipher”, really sort of thinking, well, that’s easy. Now, on the one hand it is easy, but not for you; there’s an amount of math required. So I often say that anyone can design a system so secure that they themselves can’t break it. That’s easy.

So when someone comes to me and says, “Look, I have this secure system”, my first question is, “Well, who are you?” I mean, why is the fact that you can’t break this system any evidence that it is actually secure? Now, if you are a really expert system-breaker, then your inability to break it is really good evidence.

If you’re just some amateur who read a couple of books, did some stuff, and doesn’t really understand security, then the fact that you can’t break it just tells me that you don’t know how to break things very well. I mean, that’s the kind of disconnect. And in security, we can never really prove anything of value. There are proofs, but they’re not really useful in any real sense.

So all we’ve got is, “I can’t break this, and all those other really smart people can’t either”, and that’s our evidence that this is secure. So the pedigree of who is doing the analysis is extraordinarily important, and I think that’s largely missed in the wider conversation. You’ll see companies all the time saying, “I designed this secure system, and it’s your job to break it”.

No it isn’t. It’s your job to show me that it’s likely to be secure. Otherwise, I’ve got thousands of amateurs saying, “Break this, break this, break this”, and we just don’t have time for it. So there has to be some bar for presenting systems as secure, and that bar’s got to be set by people who understand security.

Robert Wiblin: Is there anything that you think the experts are getting wrong?

Bruce Schneier: I think the experts are pretty good?

Robert Wiblin: That’s interesting. Okay, I guess that’s slightly reassuring as this is the first-

Bruce Schneier: I guess it is reassuring, right?

Robert Wiblin: Yeah, it’s not the case in every field.

Bruce Schneier: Maybe it’s just that I can’t see it because I’m one of the experts, right? So if I’m getting it wrong too, then I don’t see it, and you’d have to ask someone else.

The hacking mindset [01:05:35]

Robert Wiblin: How can someone tell if they kind of have the right mindset to be a good fit for computer security? What is that mindset?

Bruce Schneier: I’ve written about this, and the story always uses Uncle Milton’s ant farm. I don’t know if listeners will remember this. This was around when I was a kid; I think it might be around today, I suppose we can just Google it. It’s two pieces of plastic with maybe a quarter inch of space between them, and you fill the space with some sand. Then you get ants, and you put them in the ant farm, and you watch them dig tunnels, right?

Super cool, if you’re, like, a 12-year-old boy. Now, when you buy this at a toy store, it doesn’t come with ants, because, well, of course it doesn’t. So you can do one of two things. You can get some ants out of your backyard; they don’t have a queen, so they’re going to die soon, but you know, they’ll make tunnels. Or, at least back when I was a kid, there’d be a card in the ant farm. You could write your name and address and send it to the company, and they’d mail you a tube of ants. Now the normal person says, “Oh, that’s kind of neat, I can get a tube of ants”. Someone who’s a hacker looks and says, “Wow, I can send a tube of ants to anybody I want”. And that’s how I characterize the mentality needed to be a computer security person, to be a hacker: the ability to look at a system and sort of naturally see how it might be misused.

To walk into a store and say, “You know, here’s how I would shoplift”. Not to actually do it, but to notice where the cameras are, whether there are alarms, and even pay attention to what the shelves look like. And that’s true for, you know, apps on your phone, and it’s true in the real world. That way of looking at the world, sort of as a hacker, is the one thing I think is hard to teach. I can teach the math, I can teach the engineering and all of that stuff. But that way of looking at the world is something that hackers do naturally and regular people don’t.

I mean, it’s why the people who design voting machines never understand the security implications: they’re not hackers. They’re voting machine designers. Whereas one of us computer security people looks at that machine and says, “Well, that’s ridiculous! You can’t have a paperless voting machine. That’s dumb; all these things can happen. What were you thinking?” They weren’t thinking of it, right?

They saw the card and they said, “Yeah, I can get some ants”. They didn’t say, “I can send ants to other people”. And that, I think, is vitally important.

Robert Wiblin: How big a filter is this? How many… What fraction of the population do you think has this kind of mindset where they look for the perverse outcome, the one that they could get away with?

Bruce Schneier: I have no idea. I bet everybody has it when they’re kids, and it’s kind of just, you know, schooled out of you, right? You have to be obedient and follow the rules. And computer security people are never good at following rules. They might do it, but they’re just not good at it, because rules always seem so arbitrary.

I remember once, many years ago, I’m at a cryptography conference, talking about some piece of math and some attack, and there was someone from the NSA. His name was Brian Snow, and he used to come to cryptography conferences back before any other NSA employee would. Very senior, great guy… I miss him a lot. And someone was talking about this attack, and at one point, I forget why, I said, you know, “Hey, that’s cheating”.

Brian looked at me with a look of, “What are you talking about? In our field, there’s no such thing as cheating”. All attacks are allowed, are legal, are right. It works or it doesn’t. There is no cheating. And of course he’s right. And you know, I don’t know what percentage of people have it. I mean, that would be sort of interesting to find some psych student who wants to research that. That would be cool. I would help with that.

Voting machines [01:09:22]

Robert Wiblin: Just quickly, voting machines, electronic voting machines. We should get rid of them, right? It’s a disaster waiting to happen.

Bruce Schneier: So what we need for voting machines is two things: a voter-verifiable paper ballot, and a risk-limiting audit.

So the first one is the way we vote. And what I like is the way they vote in my home state of Minnesota. It’s a paper ballot with ovals you fill in, and after you fill them in, you feed it into a computerised reader. You get a very, very quick count, then the paper falls into a locked box that’s available for recounts. So that gives me the speed of electronic voting with the security of the paper ballot. Touchscreen voting machines, even if they produce paper at the end, there are just too many ways for them to go wrong.

So I like optical-scan voting, but we do need the risk-limiting audit at the back end, and those are audits that are automatically triggered based on the margin of victory, right? If it’s a large margin of victory, you need a very small audit. For a small margin of victory, you need a large audit. And it’s not something that one of the candidates demands; it just automatically happens. Those two things do an enormous amount to improve voting security.
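That inverse relationship between margin and audit size can be sketched in a few lines of code. This is only an illustrative approximation: the function name and the log(1/risk)/margin rule of thumb are a simplification, not the careful statistics (e.g. Philip Stark’s methods) that a real risk-limiting audit uses.

```python
import math

def initial_audit_size(margin: float, risk_limit: float = 0.05) -> int:
    """Rough initial sample size for a ballot-comparison audit.

    margin:     winner's margin as a fraction of ballots cast (0 < margin <= 1)
    risk_limit: max chance the audit confirms a wrong outcome (e.g. 5%)

    Uses the rule of thumb that the sample grows roughly like
    log(1/risk_limit) / margin -- smaller margins mean bigger audits.
    """
    if not 0 < margin <= 1:
        raise ValueError("margin must be a fraction in (0, 1]")
    return math.ceil(math.log(1 / risk_limit) / margin)

# A 10-point landslide needs a small audit; a 1-point squeaker, a large one.
print(initial_audit_size(0.10))  # 30 ballots
print(initial_audit_size(0.01))  # 300 ballots
```

The exact numbers aren’t the point; the shape is: a close race triggers an audit an order of magnitude larger than a landslide, automatically, with no candidate having to demand it.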

We don’t have anything else that comes close. So yes, we should move to that as soon as possible. That, of course, isn’t the end of election security; that’s just one piece of the election. We have to worry about the voting rolls: the system that determines who is eligible to vote and where. Those are vulnerable as well, and we saw in the last US election that the Russians did penetrate, but not actually attack, voting roll systems in half a dozen states, and I think more.

And the belief now is that they weren’t trying to dink with the election, but that if the wrong person won, there would be credible evidence that there could have been problems, right? It was there so as to be able to cast doubt on a result in the future.

So the voting rolls, I think, are a big issue. And lastly, the tabulation systems. There was one election, and I forget what the details were, where an operation was disrupted just by calling the result wrong. And you think about the chaos that would sow, even if we get the result right eventually. There’s so much distrust going on that people aren’t going to believe it.

Robert Wiblin: Yeah, so how about this for a worst-case scenario: the US presidential election is close, but it turns out that, just all over the place, the voting systems in different states have been compromised by some other government. Maybe the Russians, maybe someone else. And then this just creates… I mean, I guess it could be a constitutional crisis of a sort, because they don’t really have a system for having a do-over election.

Bruce Schneier: And that’s a really important point. I mean, more basically, you have to understand that elections serve two purposes. The first is the obvious one: to pick the winner. But the second, but equally important, is to convince the loser to go along with it. To the extent an election fails at that second purpose, it’s a failed election. So a lot of these sorts of campaigns center around convincing the loser, and if the losing side thinks the election wasn’t fair, they’re not going to accept the result. There’s no legitimacy. And you’re right, there is no system in the United States for a do-over. So you can imagine, and I’m going to take this further: election day happens, and in one state, let’s say Florida, a state that matters in the national election, in that it’s usually close, has a lot of electoral votes and is often deciding, there are widespread problems.

Not fraud, but problems. Machines aren’t working, long lines, lots of people can’t vote. And you can sort of imagine those sorts of things statewide, and at the end of it, Florida says, “You know, we don’t know who won. We just don’t. This is wrong. We know it’s wrong”. Now what does the country do?

I mean, the rules in the United States, the way the constitution works, are that the state decides, and the state could decide by, you know, I don’t know-

Robert Wiblin: I think the legislature could, in theory, vote.

Bruce Schneier: The legislature could vote. There’s been an example, and I’m not making this up, of a local election in Texas where the result was a tie, and they decided it by playing a hand of poker. Now, you could do that, but I don’t think people are gonna stand for that. So no, we don’t have any kind of recovery system.

Robert Wiblin: It seems like, I mean, one thing you could do is just prepare ahead of time for what you would do in this situation, so you have some legitimately agreed process. Because in the UK or Australia, I think this would not be nearly so bad, because you would basically just call another election, and the same person remains Prime Minister until you get to that point.

But the problem is, in the US, everyone would have a different idea of what they want to do, the thing that would benefit them, and then whatever gets picked, half the country thinks it’s not a legitimate process to follow.

Bruce Schneier: And that’s an extremely important point. Unlike every other aspect of computer security, this is partisan, right? So I can worry about, you know, computer viruses, and I can worry about attacks against power plants. Nobody is in favor of the power going down. There’s nobody in favor of all the cell phones not working, right? There isn’t a constituency for those things. But an election, as soon as the election happens, there are sides. And half of the country wants the result to stand and half the country wants the result overturned, and they will decide on their course of action based on the result, not based on what’s right.

Unlike other countries, the United States doesn’t have a federal bureaucracy for elections. The security we use is that we have people from each party sitting in the same room watching each other. When you go to vote, there are poll watchers from the Republicans and Democrats, and they’re sitting at the table, and they’re all there to guard against what is, if you think about it, a very mid-1800s threat: that’s the way election stealing would happen.

We’re all going to watch each other, and if you do anything suspicious, I’m going to notice, and that’ll keep all of us honest. It doesn’t work against 21st-century threats. And we are hurt by the fact that we don’t have a federal bureaucracy in charge of accuracy in elections. We are both hurt and helped by the Electoral College.

Robert Wiblin: Yeah, the fact that it’s decentralized makes it a bit trickier to do this, but-

Bruce Schneier: Decentralized is both good and bad. So, if you want to flip the federal presidential election, where are you going to target? Well, in 2000 it was Florida. In 2004 it was Ohio.

In the most recent election, it’s Pennsylvania and Michigan, right? So the states change. But on the other hand, we can sometimes be only as secure as our weakest link. And fraud, even in a state that goes heavily one way, will make a difference.

There’s also just a lot more than the presidential election. There are a lot of state elections, a lot of local elections. And while you can imagine that Russia versus the United States is at least a fair fight, Russia versus some county in the middle of Nebraska isn’t even a contest. And that’s what we’re expecting. And it’s very hard to have federal security solutions for local and state elections, because those authorities don’t want federal meddling in what they’re doing.

How secretive should people be about potentially harmful information? [01:16:48]

Robert Wiblin: All right. So the thing that prompted me to reach out to you, first off, was that there were two researchers at Open Philanthropy, Claire Zabel and Luke Muehlhauser, who wrote this really nice article called “Information Security Careers for Global Catastrophic Risk Reduction”. And obviously, we’ll stick up a link to that for people to read.

It’s pretty brief and keeps a good pace. It’s very interesting. But in brief, they make this case that for people who want to improve the world in a really big way, going into computer and information security can be a really promising career, and they paint two specific ways this might be the case.

The most advanced machine learning methods are obviously just software themselves, so they could, in principle, be stolen and used by other people. And at some point the most advanced ML algorithms could become dangerous, or potentially very, very powerful, and it would be bad if the most reckless or most disreputable actor could steal them from another group and then deploy them prematurely.

And so it seems like it would be good if the very best AI programs could be kept locked down so that they can’t be stolen by North Korea or state enemies or terrorists. But currently it seems like we just don’t have very good methods to do that, because all computers are basically insecure, at least against someone who’s sufficiently well-resourced and skilled. Then another similar angle is that biological weapons, today and in the future, could end up being super dangerous and, once they’re invented, easy to deploy for someone who’s got a couple of million dollars and some relevant training. Any advances there could potentially easily be stolen by other states or non-state actors.

Probably they already are being stolen by states in most cases, which then promotes proliferation. And so it’s possible that today, if you’re someone who wants to prevent misuse of synthetic biology and wants to dream up the worst-case scenario you could possibly have… basically, maybe you just shouldn’t do it, because drawing up a list like that is actually just a huge information hazard, in that there’s a pretty high risk that someone else is going to steal it.

I guess, more generally, setting aside those two specific examples, it does seem like there could be opportunities to improve the world by making it easier for particular groups to keep secrets. I guess maybe you’ll disagree with this, but the angle is that sometimes there are things that maybe shouldn’t be published, and it might be good if those groups were able to hire computer security people who could keep their secrets for them.

Bruce Schneier: The problem is not lack of ideas. I used to run a movie-plot threat contest on my blog every April 1st, and the idea was to come up with the “most scary, impressive, computer-security, everybody-dies” disaster scenario. And I got email from people saying, “Don’t give the terrorists ideas!”. I was like, “Are you kidding? Ideas are the easiest thing in the world to get… Execution is hard”.

So no, I am not worried that lists of bad things that can happen will give the people who want to do us harm ideas. They’ve already got the ideas. We need the ideas out in the world so that we can think about them. Please do not promulgate that myth. I think that myth is harmful and dangerous: it keeps this stuff secret, we’re all worried to talk about it, and the people who have the ideas, which is to say the bad guys, are the ones who are going to do all the thinking.

Robert Wiblin: Okay. Well, what if you were actually, like, a world-class synthetic biologist, or you work on a biological weapons program in a government? Maybe you actually do have ideas that other people haven’t thought of? Maybe you just don’t want them to realize; you know, you’d be happy to delay them by 5 or 10 years in noticing that this is a thing.

Bruce Schneier: So my guess is you’d delay them by two to three weeks, if you’re lucky, but almost certainly somebody’s written a science fiction story 20 years before with the exact same idea. Ideas are not hard. Ideas are not rare. Don’t worry about it.

Robert Wiblin: But it seems like, you know, you might be able to have a better idea, a better way of implementing it, and try to find the thing that’s most dangerous, that would be most straightforward for, you know, a small group to do. Do you really want to publish that? Because maybe the concern will be that in the future it will become easier for people who aren’t that technically capable to do some particular things, and then you’re highlighting those things for, you know, a group of 10 people with some biological training.

Bruce Schneier: I think we can invent a scenario, again a great movie plot, where this idea is so obscure and weird that you wouldn’t want to make it public. In general, in my world, we call this “security by obscurity”, and we laugh at it. Right, you do not want something as fragile as the idea of the thing being what makes you secure. If that’s what makes you secure, you are not secure. Because I assure you, somebody in the lab across the street or across the world is almost as good as you and will come up with the idea, if not today, then in two weeks or in a month.

So you’re not going to get enough of a head start. In general, you make these things public so the good guys can think about them. We do not get security through secrecy. That is much too fragile. We get security through actual security.

Robert Wiblin: Do you think that there might be any differences between computer security and synthetic biology security here that could make it more appealing to keep some of the cutting-edge risks there secret?

Bruce Schneier: My guess is that we’re going to wish it were so, but in the end it’s not going to be. Yeah, it would be great if we could just tell the good guys, “Just keep quiet about the whole synthetic biology thing, and the fact that you can make a virus”. But no, sorry, in our experience that’s not the way it’s going to work.

Anyway, keep going with your thing!

Could a career in information security be very useful for reducing global catastrophic risks? [01:21:46]

Robert Wiblin: Yeah, so I guess another reason why this is an appealing career is that, you know, even if those kinds of scenarios don’t pan out, the biological stuff or the AI, it just seems like it’s a very hot area where there are lots of other opportunities to have impact, some of which we’ve talked about, and we’ll talk about later.

So basically, yeah. What do you make of this argument in general?

Bruce Schneier: There’s a lot here. In general, yes, I think computer security is a great way to improve the world primarily because it is infrastructure. It doesn’t do anything but it enables everything else to be done. If you think about it, security is kind of a weird thing because nobody actually wants to buy security. What they want is not to have the thing that security prevents.

I don’t want a door lock, but I don’t want to get burglarized. So the door lock gives me the ‘not getting burglarized’. So security is never a thing in itself, but it enables everything else. It’s core infrastructure. When you think about all of the promise of computers, from AI to autonomy and physical agency, all of the things, all the magic, all the technology, we want it to be secure.

We want it to not have any bad side effects, and computer security is how we get that. So without computer security, nothing’s going to work. With it, everything will work. So it’s extremely important. Now, that article, which you summarized quite nicely, talked about how computer security affects the world: it talked about AI and catastrophic risks and, presumably, killer robots and biology. And again, I’m less worried about that.

My risks are today, and my solutions today will carry forward. I’m going to read a few sentences from my latest book, “Click Here to Kill Everybody”, that talk about that: “I am less worried about AI. I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent.

“So, while I’m worried about intelligent and even driverless cars, most of the risks are already prevalent in internet-connected drivered cars. And while I’m worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.”

Right, so the risks today are the same risks we’re going to be worried about in those catastrophic futures that article mentions. So the neat thing about computer security is not that you’re going to prevent a catastrophe in the future; you’re preventing catastrophe tomorrow. And this isn’t theoretical, this is real. I’ve got real problems right now that, if I don’t solve them, none of that future stuff is going to work well. So come join computer security not because you’re worried about the Terminator, but because you’re worried about the iPhone.

Robert Wiblin: So I think this point maybe isn’t central, because we could still say it’s very important: could there be organizations, or could there be information in the future, that is very important to keep locked up, where currently we don’t really have good ways of doing that?

I guess the argument would be that, you know, machine learning systems are gonna become more and more powerful in the future. They’ll be responsible for doing more and more things. And so them being abused by North Korea or some other actor just becomes a bigger deal, in the same way that you could do more with, you know, the most advanced ML algorithm today than five years ago. In 10 or 20 years’ time, the stakes will be higher still; they’re probably just increasing over time.

Bruce Schneier: At some point, a difference in degree will be a difference in kind. You know, we don’t know where those inflection points are. Yeah, and we should worry about difference in degree. But all these machine learning security worries exist right now.

We’re worried about adversarial machine learning, we’re worried about model stealing. We’re worried about algorithms that can’t explain themselves, or that veer off into weird side effects, or that embed existing pathologies like discrimination and biases based on the data or the feedback we give them. These are all problems right now. And yeah, there are going to be bigger problems when these systems don’t just make parole decisions; they make, you know, left-right turning decisions billions of times a day in cars. But in a lot of ways, it’s the same thing. So the research today is extraordinarily valuable for the problems today, which will extend to the problems tomorrow.
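To make “adversarial machine learning” concrete, here is a minimal toy sketch of the gradient-sign idea behind attacks like FGSM. The model, its weights, and the step size are all invented for illustration: nudge each input feature a tiny amount in the direction that hurts the model most, and the decision flips.

```python
# Toy linear classifier: predict class 1 when sum(w_i * x_i) + b > 0.
w = [1.0, -2.0, 0.5]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return int(score(x) > 0)

def sign(v):
    return (v > 0) - (v < 0)

x = [0.2, 0.1, 0.3]
print(predict(x))  # 1 -- correctly classified as the positive class

# FGSM-style step: move each feature against the sign of the score's
# gradient (for a linear model, that gradient is just w).
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
print(predict(x_adv))  # 0 -- a small, targeted nudge flips the decision
```

Real attacks do the same thing against deep networks, using backpropagation to get the gradient; the worry is that the perturbation can be kept small enough to be invisible to a human while still flipping the model.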

I am not really worried about AI. To get there, we have to solve so many problems. That’s where my focus is. I mean, it’s nice that people are thinking about these long-term risks, but I think the near-term risks are just as interesting and even more important.

Robert Wiblin: Let’s maybe set aside the AI specifically for a second. Do you think there’s an opportunity to have a big impact as a computer security person by finding the organizations where it’s most important to secure that organization, or to ensure that it can operate without having, you know, the Chinese government steal its information, and then going and working there and hardening their systems so they can operate without that fear?

Bruce Schneier: So I think that’s definitely true. I will often get asked you know, “How do I help?”. And one of the things I’ll say is, “Find an organization you believe in, and help them”. Now, I don’t think we need to optimize here, right? There are so many problems. So many areas in computer security, so many areas and sort of ways you can help the world.

Pick the way that makes you excited to get up in the morning. Don’t pick the one that’s most ‘optimal’. You know, we are literally all in this together, and someone’s going to have to handle all the things you’re not handling. So I care less what you’re doing, as long as you’re doing something.

Because we all have to help, or this is not going to work at all. So yes, I think finding an organization that matters to you and helping them operate securely, whether it’s security from a rival government… I mean, this gets back to Human Rights Watch and Amnesty International being secure against cyber criminals.

Or sort of any organization with a budget being secure against hackers, secure against, you know, anything. Yes, that is a way to do good. Definitely.

Robert Wiblin: Yes. So maybe taking the example of Human Rights Watch or Amnesty International or maybe some other political group where it’s easy to envisage that right now.

A criticism people can make is just that maybe it’s not possible to harden a system sufficiently against an adversary. So if you’ve got Amnesty International targeting the Chinese Communist Party, maybe it’s just kind of a futile effort to try to ensure that they can keep their secrets without interference, because they’re just always going to lose.

Bruce Schneier: So this is my field. And this is not just Amnesty International, this is everybody. This ranges from, you know, you at home, to a small business, to a large multinational corporation, to a government, to a politically minded NGO. Security is never binary. It’s not “you’re secure or insecure”. It’s: what are the threats? Who is the adversary, and what are they risking? And in my world, attack is easier than defense. If someone like the NSA wanted into your computer right now, they’re in your computer. Period. End of sentence. And if they’re not, a couple of things might be true.

One is that it’s against their law; you know, you might be a US citizen or a US person. Or, for whatever reason, you’re not high up on their priority list. Right, I mean, there’s only so much they can do.

Right, so a lot of our goal, for any attacker, is just to make it so hard that they don’t bother. Against most criminals, I just have to be more secure than the average person, right? Because most criminals don’t want me; they just want, you know, some account. So transferring the risk works great. If you’re the Chinese government and you’re attacking Amnesty International, you know, you’ve got some budget, and if Amnesty International’s defenses are weaker than your budget, yep, they’re in. If they’re stronger, then they’re not.

Robert Wiblin: But I guess maybe they have a lot of other-

Bruce Schneier: They have a lot of targets, right? So where do you fit? They’ve got to worry about Taiwan, they’ve got to worry about Hong Kong, they’ve got to worry about the US. You know, maybe you’re so low on the priority list that they won’t get to you for a few years, maybe because you’ve done the bare minimum.

So there’s a lot we can do to increase the work the attacker has to go through and that is worth doing, even in a world where yes, attack is easier than defense. Because there are no absolutes, this is all relative. So doing good helps.

Robert Wiblin: So I guess, I mean, at the moment Google DeepMind publishes basically everything, but in the future they might want to keep their algorithms more secret, because they’re either more dangerous or just more commercially sensitive. And imagine they’re forecasting that they’re very likely to be close to the top of the list for something like the Chinese government wanting to steal commercially valuable information.

For someone who is likely to be close to the top of that list, where the things being stolen are worth billions and billions of dollars, is it practical to hire some really great people and harden your systems against that? Or do we just not know where things will stand in 20 years’ time?

Bruce Schneier: We don’t know where things will stand in 20 years’ time at all. Certainly it’s practical to hire people. Google hires a lot of security people, and they have withstood government attacks. Around 2010, they were penetrated by the Chinese trying to get information on Chinese dissidents, and they’ve done a lot of hardening since then. They were penetrated by the NSA, which came out in the Snowden documents, and then they did a lot of work against nation-state attackers. They consider themselves secure against most nation-state attacks, and they probably are. Now, we don’t know the details, but-

Robert Wiblin: Really? You think they’re secure? I mean, I guess they’re not secure against legal requests from the US government, but they’re secure against hacking-

Bruce Schneier: Well, they’re certainly not secure against those, in the sense that they follow them. But those are not computer security attacks. That is a legal requirement which, because they are a US corporation, they have to follow. They can choose not to, but that is a legal battle, not a security battle, and that’s very much a separate thing. That’s a matter of what our laws are and what they should be.

But yes, I think Google has spent a lot of time being secure against nations. Now, that doesn’t mean it’s impossible. But again, there are no absolutes here. So there’s a lot we can do. And I certainly think Google will get to the point where a lot of their algorithms are kept as trade secrets. You know, the obvious one now is PageRank, the algorithm by which they rank search results.

And that is secret for two reasons. One, because they don’t want rival search engines to use it, and two, because they don’t want-

Robert Wiblin: People to know how to hack or abuse it.

Bruce Schneier: They don’t want search engine optimization companies to figure out how to game the system. So you will probably see AI algorithms in that same boat. What we see now is that the AI techniques are all public, but the training data and the resulting models tend to be secret. Again, those aren’t secrets for long, right? And security will come from moving fast, not from having, you know, the secret pile. What is state of the art today won’t be state of the art six months from now, and you have to just assume these things will get out. And I think companies do.

Robert Wiblin: I guess if you have something that you really want to keep secret, do you largely have to air gap it? I mean, the Iranians tried this with their nuclear program, and even that didn’t work, though in that case they had a very concerted enemy.

Bruce Schneier: This is more complicated than the podcast probably can support. To keep a thing secret, it depends what it is, right? If it’s a small thing, you don’t write it down, or you write it down only on paper. You don’t put it on the internet. You don’t put it on computers. Air gaps are just a very slow interface. We know that air-gapped systems are broken all the time. Stuxnet was designed to cross an air gap into the Iranian nuclear facility. The United States has an air-gapped private classified internet called SIPRNet. Actually, there are a few of them. And the last time I saw figures, a few years old now but probably still true, viruses tended to jump that air gap within 24 hours, just because stuff happens-

Robert Wiblin: Someone sticks a USB drive-

Bruce Schneier: Someone sticks a drive in, someone takes a computer home. Stuff happens. So air gaps help, but they’re not a panacea. In the Snowden documents, there were any number of programs designed to cross air gaps and move data through them. So this isn’t something that’s going to solve things. If you want to keep something absolutely secret, there are things you can do, but largely, you recognize there are no absolutes.

How to develop the skills needed in computer security [01:33:44]

Robert Wiblin: Yeah, interesting. So if you want to become someone who changes the world by hardening systems that are really valuable to harden, what’s the best way to go about developing those skills? Imagine you’re talking to a 25-year-old CS grad who has some interest in computer security, but isn’t working in it yet.

Bruce Schneier: So I get that question all the time, and they always use the word ‘best’, and I tell them not to use the word ‘best’. What you want is the career that makes you excited to wake up in the morning, and the last thing you want is to be told ‘this is the best thing’ when it makes you miserable and the second-best thing would have been great.

So find what you’re excited about. Computer security is a very varied career. There are lots of different things you can do, ranging from hardcore math, to hacking, to policy, to dealing with people and users. Figure out what gets you excited and do that, because you’ll do way better for yourself and the world by doing the thing that excites you than by doing the thing that might be objectively better but doesn’t excite you. But again and again students ask it in that way: “What is the best way, what’s my best path?”. Take a random path. Just wander through the space, do different things, see what’s interesting.

Robert Wiblin: What are some promising options that people could take if they’re excited by them? Are there any courses that are interesting, or is this something you really have to learn by doing it yourself on your own systems, or by getting a job and learning on the fly?

Bruce Schneier: Well, that’s possible. We have something called the “cybersecurity skills gap” right now, which basically means that there are way more jobs than there are people to fill them, at all levels. So yes, there’s a lot of on-the-job training that goes on, where companies hire people with general skills and give them more specific skills in some aspect of computer security.

There are lots of programs. Most universities have some computer security offering, either a sub-degree or courses. So there’s any number of ways to engage, and again, poke around and see what’s exciting to you.

Robert Wiblin: So I guess if there’s such a shortfall of skills, then it’s probably easier to get in on the ground floor right now?

Bruce Schneier: ‘Shortfall’ is even an understatement. I mean hundreds of thousands of unfilled jobs. And that’s just today, and just the United States; worldwide, and into the future, there are going to be many, many more. And, you know, we were talking about AI, but we can talk about it here. This is an area where AI can actually do some good, because I think some of these jobs can be usefully automated away, and it’s not going to hurt the people, because they’ll just move into the other areas that really can’t be automated, because they’re so much more human.

But my hope is that we can have computers working alongside humans in some of these areas. There are a bunch of reasons why this will make a big difference. Attacks happen at computer speeds. Defense often happens at human speeds. That’s kind of not fair. The more defense can happen at computer speeds, the better off I think we’ll be. Some aspects of computer security, like vulnerability finding, seem really ripe for mechanization, and you could have machine learning systems find vulnerabilities, which would do an enormous amount of good. Because a lot of our insecurity stems from the fact that there are vulnerabilities in the software.

Because we’re terrible at writing secure code. We have no idea how to do it. And if computers can find vulnerabilities, well, that benefits the attacker and the defender, of course. But if you think about it, once you have this automatic system, you build it into the compilers and code-generation tools, and suddenly vulnerabilities are a thing of the past. That’s actually possible in 5, 10, 20 years, and it would make a huge difference.

Robert Wiblin: Some people worry that we’re going to have a computer security apocalypse, basically because we’ll design ML algorithms that can find bugs and security weaknesses incredibly quickly. But I guess you’re saying, well, maybe in the short run that looks bad, because potentially someone with bad intentions gets that capability early on. But in the longer term it’s actually a more general solution, because if you can just run these algorithms against every piece of software and then patch all the bugs, we end up in a better place.

Bruce Schneier: Right, and that’s where the defender wins here. The attacker finds a vulnerability-

Robert Wiblin: Because they can do it before they release it-

Bruce Schneier: Right, the defender finds it and fixes it, and it no longer exists. So right, you have this very bad intermediate time, when the vulnerabilities are being found in everything that exists today. There you’re going to see systems that monitor these insecure systems, that know those vulnerabilities because they found them too, that watch for them being exploited and, you know, destroy those attacks in the network. So you’ll see solutions like that, but the endgame is that vulnerabilities are a thing of the past. We could have this podcast 20 years from now and you could say, “Wow, remember 20 years ago when software vulnerabilities were a thing? Wasn’t that a crazy time? It’s great that we’re past that”. And that’s not unreasonable.
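As a toy illustration of the automated vulnerability finding discussed above (an editorial sketch, not anything from the episode), here is a minimal random fuzzer in Python. The `fragile_parser` target and its planted bug are invented for the example; real tools, such as coverage-guided fuzzers or the ML-based analyzers Schneier imagines, are far more sophisticated:

```python
import random
import string

def fragile_parser(data: str) -> int:
    """A deliberately buggy 'target' function: crashes on a rare input pattern."""
    if data.startswith("{") and "}" not in data:
        raise ValueError("unbalanced brace")  # the planted vulnerability
    return len(data)

def fuzz(target, trials=10_000, seed=0):
    """Throw random inputs at `target`; return the first input that crashes it."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "{}[]"
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 8)))
        try:
            target(candidate)
        except Exception:
            return candidate  # a crashing input, found automatically
    return None  # no crash found within the budget

crash = fuzz(fragile_parser)
```

Building this kind of automated search into compilers and code-generation tools, as the conversation suggests, is what would let crashing inputs like this be caught before software ever ships.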

Robert Wiblin: So I guess maybe this is the highest-leverage opportunity if you’re someone who has expertise in both ML and computer security: trying to figure out how we make defensive ML algorithms that can, in a very general sense, go out and find weaknesses and figure out how to fix them?

Bruce Schneier: Yeah, I think that’s really valuable, and then also finding weaknesses in unfixable things. We talk about the internet of things, everything becoming a computer. One of the worries here is that there’ll be a lot of vulnerable things lying around our environment for years and decades.

So your phone, your computer, is as secure as it is for two basic reasons. One, the teams of engineers at Microsoft and Apple and Google have designed them securely. And two, those engineers are constantly writing and pushing patches down to our devices when vulnerabilities are discovered. Now, that ecosystem doesn’t exist in low-cost systems, like DVRs and home routers and toys-

Robert Wiblin: Lightbulbs.

Bruce Schneier: Lightbulbs, right, and toasters that are designed offshore by third parties. They don’t have engineering teams. They often can’t be patched, and they’re going to be around for decades. So this insecure toaster, fifteen years from now, is still making toast and still sending spam or launching DDoS attacks or whatever, because it’s horribly insecure, and this is going to be a big problem.

Right? I mean, our phones and computers, we throw them away after a few years. Think of a car; actually, a car is a good example. You buy a car today. It’s two years old. You drive it for 10 years. You sell it. Somebody else buys it. They drive it for 10 years. They sell it. Probably at that point it’s put on a boat and sent somewhere in the Southern hemisphere, where someone else buys it and drives it for another 10 to 20 years. Now find a computer from 1977. Turn it on. Try to make it secure. Try to make it run. We have no idea how to secure forty-year-old consumer software. Both Apple and Microsoft deprecate operating systems after like 5 to 7 years, because it’s hard to maintain the old stuff.

So we’re going to need systems that live in our network and kind of monitor all of this old cruft. Right, the toy that someone bought in 2020 that was on the internet, and now it’s 2040 and the thing is still on the internet, even though nobody’s played with it in a decade and a half, because it somehow gets its power remotely. We can make this stuff up.

And this is going to be a security nightmare. We’re going to need some new technology to solve it. Now, there are people thinking about this; I didn’t just make this up. Again, ideas are easy; everyone thinks of everything all the time. But we really need to start thinking about how to deploy these. Do they go in the routers? Do they go in the backbone? Who’s liable? What are the regulatory mechanisms by which these things work?

Robert Wiblin: Yeah, I mean the internet of things drives me a little bit crazy. I somewhat skipped over that because it’s covered pretty well in your book and we can link to talks where you’ve described all the issues there.

Are there any particularly high-impact things that people can do? It seems like we’re heading towards a worse and worse situation, with so many little pieces of hardware being computerized, and they’re all going to end up insecure eventually, right, and often not getting patched?

Bruce Schneier: I can talk about two other things. I just talked about patching, and the way patching is going to fail in this world of low-cost, embedded, unmaintained old systems. We need to solve that; I think that’s a really big problem we need to figure out. The second thing is authentication. Authentication only ever just barely worked, you know, and we’ve got solutions. We have two-factor, which is great if you can do it, and we often can’t; we need backup systems, and they’re often terrible. But authentication is going to explode in a new way. Right now, if you authenticate, you’re doing one of two things. So I pick up my phone, put my fingerprint on the reader, then push a button and check my email.

So there are two authentications there: me authenticating to a device, and me authenticating to a remote service. Those are both me authenticating to something else. What we’re going to see the rise of is thing-to-thing authentication. The whole point of 5G is actually not for you to watch Netflix faster; it’s so things can talk to things without your intervention, and they’re going to have to authenticate. So think of all these smart-city things, or imagine a car, either a driverless car or some kind of driver-assisted car. That car is going to have to authenticate, in real time, ad hoc, to thousands of other cars and road signs and traffic signals and emergency alerts and everything, and we don’t know how to do that.

We don’t know how to authenticate thing-to-thing at that scale. We do it a little bit. Right now, when I go to my car, my phone automatically authenticates to the car and uses its microphone and speakers. But if you think about it, that’s Bluetooth. That works because I was there to set it up. I set it up manually.

That’s not going to work ad hoc as I’m driving through a city. That’s not going to work if I have a hundred different IoT devices at my home; I’m not going to pairwise-connect five thousand connections. So we don’t have an answer for that. I think that’s an area where we need a lot of good research. The third thing is supply chain security.

This is in the news a lot. Right now, it’s Huawei and 5G: should we trust Chinese-made networking equipment? Two years ago, the problem was Kaspersky: in the US, should we trust Russian-made antivirus programs?

Robert Wiblin: Yeah. Should we trust things that are shipped by USPS?

Bruce Schneier: Yeah, and that’s the point, right? The question “Can you trust a company that operates in a country you don’t trust?” is an obvious one. But all computer systems are deeply international. iPhones are American, but they’re not made in the US. The chips aren’t made in the US. Their programmers carry a hundred different passports.

And you have to trust update mechanisms and distribution mechanisms. And you mentioned shipping mechanisms; you mentioned that because you know of a very famous photograph of NSA employees opening a Cisco router that was destined for the Syrian telephone company. Supply chain is an insurmountably hard problem, because our industry is so international.

And subversion of that supply chain is so easy. I saw a paper showing you can hack an iPhone through a malicious replacement screen. So you have to trust every aspect of the supply chain, from the chips to the shipping. And we can’t. That’s a very difficult problem. Some of it, I think, is an internet-like problem. The origins of the internet were a research answer to the question: “Can I build a reliable network out of unreliable parts?”. I’m asking a similar question: “Can I build a secure network infrastructure out of insecure parts?”. And that, I think, is a research question on par with the internet.
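Returning to the thing-to-thing authentication problem raised a moment ago: when two devices already share a secret, the standard building block is a challenge-response exchange, sketched below in Python (an editorial illustration; the key setup shown is exactly the manual provisioning step that doesn’t scale).

```python
import hmac
import hashlib
import os

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Check that the responder knows the same key (constant-time comparison)."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Device A authenticates device B using a key they share,
# e.g. one installed at pairing time.
key = os.urandom(32)
challenge = os.urandom(16)             # A sends a fresh random challenge (prevents replay)
response = respond(key, challenge)     # B answers with HMAC(key, challenge)
ok = verify(key, challenge, response)  # A accepts only a correct answer
```

The catch is the pre-shared key: with n devices, pairwise keys mean n*(n-1)/2 secrets to provision (roughly five thousand for a hundred devices, matching the numbers above), which is why ad hoc, large-scale thing-to-thing authentication remains an open research problem.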

Robert Wiblin: Yeah, so I’ve found some links that discuss that, and people can definitely buy your book if they want to dive in more. Maybe this is just my personality, but I find these computer security issues, looking at all the vulnerabilities and people fighting, endlessly fascinating.

I guess for you it’s all very entertaining-

Bruce Schneier: I think it’s the best field to be in too. I’m not going to deny that.

Robert Wiblin: Yeah, I think I took the wrong path somewhere and studied economics. Yes, so I’ve read this-

Bruce Schneier: It’s funny, economics matters a lot. My security problems are actually much more economic than technical. I have a lot of tech. My problem is that it’s not being used, it’s not being deployed, because it’s not economically sound for companies to use this tech. We have a conference, WEIS, the Workshop on the Economics of Information Security, where economists and techies get together and do research on the economic models that drive computer security. So think about something like spam.

Spam was a really interesting problem that had an economic solution. Spam was a huge problem. We all had spam checkers, and they’d be on our mail servers, and they were pretty good but sort of not that great. We really wanted spam checking to be in the backbone, but the telcos had no economic incentive to deploy spam checkers at all. There was no upside or downside for them. That’s where the problem lay, and no one ever solved spam technically.

It was solved because the economics of email changed, and now there are only like, you know, seven email providers on the planet. They were big enough to internalize the problem, so they tackled spam, and now spam is not a problem at all for anybody.
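The spam checkers mentioned above were classically Bayesian text classifiers. A minimal naive-Bayes spam scorer might look like this (an editorial sketch with made-up training data; production filters combine many more signals):

```python
import math
from collections import Counter

# Toy training data, invented for illustration.
spam_docs = ["win money now", "cheap pills win", "money money now"]
ham_docs = ["meeting at noon", "lunch at noon tomorrow", "project meeting notes"]

def train(docs):
    """Count word frequencies across a set of documents."""
    counts = Counter(word for d in docs for word in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message: str) -> float:
    """Log-odds that `message` is spam, with add-one smoothing.
    Positive means spammy; negative means it looks legitimate."""
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[word] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score
```

The economic point stands: this kind of tech existed for years; spam only receded once a handful of giant providers had the incentive, and the data, to deploy it well at scale.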

Robert Wiblin: Yeah.

Bruce Schneier: And that had nothing to do with the tech. That was all about the way the economic models of email shifted, and we have lots of stories like that. So I welcome economists into computer security.

So if you ever decide that podcasting is not as exciting as you want it to be, you can come join us!

Robert Wiblin: Costs and benefits tend to sneak in everywhere; economists kind of colonize everything. So yeah-

Bruce Schneier: Psychologists as well.

Robert Wiblin: Yeah, social sciences in general.

Bruce Schneier: The human interface. A lot of our systems fail because of the people, and the way the tech interfaces with people. So psychology and sociology are also extraordinarily important in my field right now. What we’re recognizing is that we’re not building tech systems; we’re building socio-technical systems, and economics, psychology, and sociology matter so much because they are core to what we’re building. And this is different, right?

Twenty years ago, we were building tech tools. Now, when you design Facebook, economics, sociology, and psychology matter just as much as the tech, if not more. And all those groups need to be involved together in design, implementation, maintenance, upkeep, features.

Robert Wiblin: It sounds like you’re saying it’s a very interdisciplinary field, or it’s going to have to be. But sticking with the cybersecurity aspect for a second: to prepare for the interview I read this article, “How to build a cybersecurity career” by Daniel Miessler. Have you read that, and if you have, did you like it? Are there any other similar guides for people who want to figure out, “What first steps should I take if I’m really taken with this idea?”

Bruce Schneier: There are guides and that’s a good guide. I have recommended it to students who ask me. But I do tell them that they don’t need a guide.

This is not like medicine, where there is a defined career path. It’s not like law, or accounting.

Robert Wiblin: There’s just a lot of ways in?

Bruce Schneier: There are a lot of ways in, and there are so many aspects to it, so many different things you can do, and such demand, that any way in is fine. Any path is fine. Meandering around is actually beneficial. So I don’t want people to be wedded to a guide and follow the guide. I want them to follow their curiosity; they’ll learn more and do better that way.

Robert Wiblin: It’s interesting. It seems like there aren’t enough computer security people, and yet it doesn’t seem that hard to break in, if you have the right mentality and you’ve got your head screwed on right. There’s so much demand that you can just play around in your basement, figure out a whole lot of stuff, and then try to go get a job hardening systems. You can take university courses. You can learn this stuff online. Why is it that there still aren’t enough people, given that it’s so interesting and there aren’t huge barriers to entry?

Bruce Schneier: Have you met the world? Everything is so interesting!

Robert Wiblin: Okay, there’s a lot of competition.

Bruce Schneier: There’s a lot of competition

Robert Wiblin: But it pays so well!

Bruce Schneier: So does designing video games. So does everything. As a society, we have a lot of choices of cool things to do with our time, and there’s no shortage. And I think computer security takes a certain mentality. Certain people are drawn to it; other people aren’t. People who want to build and create things, you know, you want to go into the stuff that builds and creates. And in computer security we break things, right?

We tell you that you can’t do that. We hack systems. We do things a little differently around here. And only certain kinds of people like that. I think a lot of people are drawn to the more creative, building aspects of tech, and that’s fine. We need that too. There’s no computer security if there’s nothing to secure. So it takes all kinds. Actually, it doesn’t take all kinds; we just have all kinds.

Robert Wiblin: Yeah, I feel like I might have the right mentality for this in some ways, because I’m obsessed with my own security. When I’m going around doing stuff, I’m constantly seeing the weaknesses in all the systems these companies have built.

My phone was stolen last week, and when I had to reinstall my banking app, I realized that everything the bank demanded were things that I knew, things that could be solicited through social manipulation. There was no object that I had to have.

There was nothing that they couldn’t get through a phishing expedition, or just by calling up my friends to try to figure out things they might know. And I emailed the bank and called them to complain that their security was bad. To be honest, they probably know more about the economics of this than me; if this were being exploited very much, then the system wouldn’t be designed that way. But at the same time, I was looking at this and thinking, “This is a terrible system for resetting someone’s phone access to their banking”.

Bruce Schneier: Oh, it is terrible, and hackers do exploit those. And yes, the banks and other systems don’t fix them, for really two reasons. One is that the cost of the losses is cheaper than the cost of fixing. More importantly, the cost of the losses to them is cheaper than fixing it.

Robert Wiblin: But it’s a whole lot of time for you, even if you get the money back-

Bruce Schneier: But you, Mr Economist, understand the notion of externalities and a bank is not going to fix the problem if someone else has the problem. So, iI mean, in 1978, in the United States, we passed the Fair Credit Reporting Act. And one of the things it did, is it limited liabilities for credit card losses to the individual to $50. And this was a game-changer in credit card security. Before that law, credit card companies would basically charge the user for fraud. Your credit card got stolen or lost and you were stuck with the bill until like the two weeks until the company could print the new little book with bad numbers. When Congress passed that law, suddenly the credit card companies were absorbing all the losses. They couldn’t pass it to the consumer.

Robert Wiblin: And they fixed it very fast.

Bruce Schneier: And they did so many things that the consumer could never do. Think of what they did: real-time verification of card validity; microprinting on the cards, and the hologram, to make them less forgeable; shipping the card and the PIN to the user in separate envelopes; requiring activation from a phone that was recognized. Now, if you were a user bearing those losses, you couldn’t have implemented any of those things.

But the credit card company could. They just never did because they never suffered the losses.

Robert Wiblin: Giving the cost to the group that can do the most to fix the problem is just an obvious approach.

Bruce Schneier: So we in computer security try to use that principle again and again and again, because we have this tech, but it isn’t being deployed; it’s not being used because the people who could deploy it aren’t seeing the losses. Whenever I look at computer failures, I always try to look at the economic reasons, and then see where you can move the liabilities to a place that’s consolidated, so the solutions can be researched, purchased, deployed, and used.

Ubiquitous surveillance [01:52:46]

Robert Wiblin: I want to talk about ubiquitous surveillance for a minute. So it seems to me like we’re slightly stuck between a rock and a hard place here.

Maybe you won’t agree that there’s this trade-off. But there is this risk that in the future we’ll be able to design weapons that small groups can use to kill potentially billions of people, and it could become necessary to have a lot of surveillance in order to make sure that people aren’t going to do that kind of thing, or don’t have access to that kind of technology. On the other hand, building basically the surveillance state that we have today in the US and the UK and Australia means building infrastructure that could potentially be turned towards authoritarianism or totalitarianism. It’s kind of surprising to me that the government has so much information about us and so much ability to track us, and yet we haven’t seen that much backsliding away from democracy. Do you think this trade-off between security and political risk is potentially a big problem, and if so, what might be done about it?

Bruce Schneier: So there’s a bunch to unpack there. This is normally described as security versus privacy. That’s wrong. It’s really security versus security. There is security value in having our systems be safe from hacking, eavesdropping, and control, especially as our communication systems and control systems are used by nuclear power plants and elected officials and CEOs.

I mean, just take this debate about the iPhone. As long as an iPhone is used by all those people, and every police officer and judge, it has to be secure. At the same time, we want to be able to solve crimes, and there’s a security value in eavesdropping. My belief is that while that’s a debate today, in the future that debate will just disappear, because the value of securing our systems will be so much greater. Sure, it’d be fun to eavesdrop on the cars, but if they crash, that’s really, really, really bad. So we will choose security over surveillance. Now, this is a trade-off-

Robert Wiblin: But it seems like we have… Surveillance is getting worse and worse.

Bruce Schneier: Let me unpack; there’s a whole lot here. So the security value of making our systems eavesdrop-proof is greater than the value of making them eavesdroppable, simply because they are so critical to society, and that will become even more stark when you can control your car from your phone, or control your heart monitor from your phone. Now, we already have systems that give governments an extraordinary ability to spy on our private lives. We allow that willingly, because there is security value in solving crimes. And the way we do that is we have mechanisms to ensure the police won’t misuse the process. In the US, there’s an entire warrant process, right? The police go to a judge and say, “This is what we want to do. Here are the reasons. We won’t do more than this”. The judge says okay, and they do it. At the end of it, they inform the person spied on. There are all these mechanisms. So we know how to do that properly.

A lot of the debate is not about surveillance; it’s about warrantless surveillance, surveillance without a warrant. Which, I’m always amazed the police think is a good idea. Why wouldn’t they want the legal protections while doing their crimesolving? Now, you’re talking about something different. You’re talking about, again, the more catastrophic scenario, in which there are incredibly destructive capabilities that normal people can use, and the conceit in your scenario is that through surveillance you can prevent them from using them. Which is of course ridiculous once you even think about it.

I mean, right now in the United States, mass shootings are a huge problem. There is no amount of surveillance that will prevent mass shootings, right? Because guns are plentiful. The technology is so plentiful that surveillance isn’t a solution.

And that is the way to think about these future catastrophic risks. Any kind of technological risk will go through phases of availability according to difficulty, right? In the beginning, it’s like nuclear weapons: only governments can do it. Then it comes to the point where people who are highly skilled can do it, and maybe it takes a conspiracy. And then it gets to the point gun massacres are at today, where anybody can wake up in the morning and within 30 minutes kill 50 people. At that final point, surveillance is useless.

Robert Wiblin: Yeah, in the intermediate state, maybe-

Bruce Schneier: At that initial point, surveillance is irrelevant. In that intermediate state, where it takes a conspiracy and some expertise, surveillance is valuable. But at best it’s going to buy you a few years. So I believe we’re going to realize eventually that the security cost of that surveillance state, which is enormous because of all the misuses, not just by our police but by bad actors, is so great that we’re going to need some other solution. Maybe, like with gun violence, the solution is to take away the guns; that isn’t stupid. There will be more organic solutions to all these catastrophic risks. There have to be. And buying a few years, as the expertise flows downhill, won’t be enough to justify the enormous risks that the surveillance state entails.

Robert Wiblin: Yeah, the philosopher Nick Bostrom, in his paper “The Vulnerable World Hypothesis”, kind of paints this picture: what if we’re in a very unfortunate world where it turns out that synthetic biology is way more dangerous even than we think now, and it’s also going to be way easier than we imagined?

Bruce Schneier: Then surveillance isn’t gonna help.

Robert Wiblin: So then we’re just screwed no matter what.

Bruce Schneier: That’s my fear. My fear is that we’re going to reach for surveillance because, as you painted it, it seems obvious: just watch everybody and they won’t do the bad thing. But that isn’t true.

If it’s true, it’s only temporary. And if it’s temporary, it’s not going to last long enough to make enough of a difference. So I do fear that policymakers will reach for it as a tool, just as they’re doing today, just as the FBI in the United States is saying, “Just give us the ability to eavesdrop on conversations and it will be fine”.

They’re really not paying attention to the costs of that. And there are a lot of ways to investigate crime that don’t involve compromising national security.

Robert Wiblin: How much do you worry about the political implications of having this level of surveillance, and the ability to spy on your political adversaries, potentially? Do you think that we’re in a more vulnerable position now than we were 20 years ago, in that if you had a really bad leader in a country like the UK who wanted to reduce democracy, they’d have a lot of tools at their disposal? I mean, in the UK you have even fewer legal protections, very few legal protections it turns out, relative even to the US.

Bruce Schneier: Yeah, I think it is much more worrisome. And I worry about government surveillance. I worry about corporate surveillance a lot. The fact that Google knows what kind of porn everybody on this planet likes is a little creepy. And Google knows more about me than my spouse does. By a lot. And this is true for pretty much everybody.

And that is worrisome; it’s worrisome for misuse. Certainly, these databases are vulnerable to attackers and criminals and various nefarious bad guys. But they’re also vulnerable to legal uses. And when you talk about sliding into tyranny, Snowden has called it “turnkey tyranny”.

It just takes a bad leader. The infrastructure is there, and it can be misused. Then you go to a country like China, where the infrastructure is being built for this purpose, and it’s not a misuse; it’s a design feature. And China is exporting these surveillance and control technologies to other countries.

So if you’re a third-world would-be dictator, you can buy these technologies from China at a very reasonable price. And other countries as well will sell you these capabilities. So yes, I think we’re building a world where dystopia is a lot easier, and we really should step back and figure out what we’re doing. It starts with the corporations. It starts with the companies that are collecting our data.

Robert Wiblin: More than the NSA? It also seems like the NSA has a lot of information about us and it’s taking it from the companies like this-

Bruce Schneier: Yeah, that’s it-

Robert Wiblin: Yeah, so the companies collect it for what seem like somewhat more innocent commercial purposes, but then basically it’s all sitting there in a pile that the government can appropriate and use for much more nefarious purposes whenever they want. It’s this very nasty combination-

Bruce Schneier: It is. I mean, it’s the public-private surveillance partnership. As much as we talk about NSA surveillance, they actually do very little actual surveillance.

Robert Wiblin: Interesting.

Bruce Schneier: They leverage everybody else’s, right? But it’s not like they woke up in the morning and said, “Let’s spy on everybody on the internet”. They woke up and said, “Wow, you know, these corporations are spying on everybody on the internet. Let’s just get ourselves a copy of it all”. And that’s what they do. And in the US they do it through legal means: national security letters, and sort of other ways to get bulk surveillance.

They do it through illegal means. They do it through hacking, through bribery, through threats. I mean, all sorts of mechanisms. And other countries don’t do it to that extreme, but they do it, right? We know China hacked the United States OPM database, which is a database of very sensitive information about US government employees. Right, that database was collected for perfectly legal purposes.

Robert Wiblin: Mostly sensible ones too.

Bruce Schneier: Right, mostly sensible ones, and China said, you know, “I’d love to copy that” and now, that happened.

Robert Wiblin: Yeah, and now they just have it forever.

Bruce Schneier: They have it forever.

Robert Wiblin: Yeah, I mean, I guess I see this mostly as an issue that requires maybe lawyers and politicians and public policy people to figure out, because we’re not gonna get rid of surveillance completely. Even if that were sensible, you’d just never get the bureaucrats and security and intelligence people on board with that.

Bruce Schneier: But we don’t want to get rid of surveillance.

Robert Wiblin: Yeah, we want to keep significant aspects of it, but the question’s like, “What can we do to keep the good aspects of it, and get the intelligence people on board with a scheme where you can’t just turn a key and turn a country into, like, yeah, recreate the Stasi overnight?”

Bruce Schneier: Well, I mean, we have a couple of ideas. We’re okay with retail surveillance, right? A suspect of a crime being surveilled after they become a suspect. We tend to dislike wholesale surveillance, right? “Let’s watch the entire city of Baltimore and see what happens”. We know about due process.

We know about audits. We know how to do this. We just need to recognize that there’s one world, one answer. And what you said, getting policymakers to think about this, gets back to where we started. That we need technologists in the room informing this debate. It can’t be policymakers thinking about this without understanding the tech. Understanding the tech is critical, and it’s how we’re going to get these answers.

Robert Wiblin: So bringing a hacker mindset to this: it’s not only that we don’t want there to be wholesale surveillance, you know, watching everyone and collecting all the data all the time. It’s that you don’t even want it to be possible for them to do that, because you have to worry, like, what if a bad person gets in control of these systems? If the capacity is there, then there’s always a risk that they might just start doing that, and then the ability to resist becomes challenging.

Bruce Schneier: Right, and that’s designing systems to be surveillance-proof. There’s a lot of research in this, and here again the tech is outstripping the policy. We have tech that builds systems to prevent that kind of wholesale surveillance. We have systems that’ll prevent someone from dropping a malicious update on your phone and not everyone else’s phone. We have all those things. Getting them used, getting them deployed, is often a matter of policy and economics. And it’s recognizing that that’s important.

Robert Wiblin: Do we have a system to stop the government from… or, like, people with cameras just watching you everywhere on the street? I mean, we’re in London. It’s very hard to be on the street here and not be in the view of a camera. It’s going to be hard to like-

Bruce Schneier: But that’s gonna be policy-

Robert Wiblin: Yeah, that’s illegal.

Bruce Schneier: But in a sense, we are in a very unique time for cameras. They’re everywhere and you can still see them. Twenty years ago, they weren’t everywhere. In 20 years, you won’t be able to see them.

Robert Wiblin: Yeah, what about microphones?

Bruce Schneier: Same thing. The sensors will get so small and so distant that they will not be humanly perceptible anymore. And that’s going to be something that will be largely policy and not tech, because the tech will be there. And, you know, we see a lot of work… people have done some really interesting work on what’s possible. And it’s sort of two things. It’s the sensors, whether they be cameras or microphones.

That are increasingly sensitive at a distance and can hover longer, see more, see better, see through walls, see infrared, learn a lot more. And then AI systems to process that data. Because cameras that watch the entire city are useless unless human beings are watching the cameras. But if there are computers watching the cameras that’ll show the human beings everyone with a red shirt, or everyone that matches this character or walks this way or has been in these five locations on these five subsequent days, then the ability to automatically process a lot of this data brings the interpreting of the surveillance to machine speed and scale. And that kind of surveillance state is very worrisome. And that’s going to be more policy than tech.

Robert Wiblin: I feel a bit despondent about this problem because it seems like who’s going to stop the Chinese government from doing this?

Bruce Schneier: No-one’s going to stop the Chinese government from doing it to China. But honestly, look at the Chinese government’s history: there are a lot of things nobody stopped their government from doing in China. So we are never going to stop countries from doing the bad things within their borders. We and the rest of the world are becoming more moral with every passing century, and I think that will continue. I mean, I don’t think these tech developments spell the end of democracy. I just don’t.

Why is Bruce optimistic? [02:05:28]

Robert Wiblin: What gives you hope here and what do you think can be done?

Bruce Schneier: Human beings. I mean we are inherently moral and it takes us a World War or two, but we eventually do the right things. And it’s going to be noisy, it’s going to be messy. But I think we will figure this out. I don’t think this will be the end. I actually am optimistic.

Robert Wiblin: All right.

Bruce Schneier: All your killer robot scenarios and stuff.

Robert Wiblin: It’s not robots, it’s ML systems!

Bruce Schneier: Yeah, ML robots are even worse!

Robert Wiblin: Yeah, that’s true.

Bruce Schneier: With the zombie apocalypse.

Robert Wiblin: I mean, there are so many threads I wish we could have gone down further here. We’re gonna have to have a lot more episodes about computer security. Maybe just one final question: we talked a bunch about computer security movie plots. What are some actually good movies about computer and information security and hacking and so on? Is it Sneakers? Do you like Sneakers-

Bruce Schneier: I like Sneakers, I like Hackers. I like War Games. I like the old ones.

Robert Wiblin: Yeah, interesting. Why are the old ones better?

Bruce Schneier: Because they’re nostalgic and fun.

Robert Wiblin: Cool! Well, it’s been a great pleasure having you on and yeah, I really hope we can get you back at some point in the future. I mean this issue just seems like it’s gonna get more and more important.

Bruce Schneier: This was grand fun, thank you.

Rob’s outro [02:06:43]

Robert Wiblin: So at the start I promised you some advice on how to improve your own computer security practices, so here it is.

The first is that you should always use two-factor authentication for important accounts like Google, Facebook, Dropbox, Microsoft, Apple and so on.

If the only thing they offer is SMS messages then use that. But using the Google Authenticator app on your phone is better, and U2F keys that you plug into your computer are better still.
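For the curious: the six-digit codes that apps like Google Authenticator generate are time-based one-time passwords (TOTP, RFC 6238). The app and the server share a secret key; both compute an HMAC over the current 30-second time interval and truncate it to six digits, so the code proves you hold the secret without ever sending it. A minimal sketch using only Python's standard library:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The counter is the number of `step`-second intervals since the epoch;
    the rest is RFC 4226 HOTP: HMAC-SHA1, dynamic truncation, modulo 10^digits.
    """
    counter = timestamp // step
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 test vector: shared secret "12345678901234567890" at time 59
print(totp(b"12345678901234567890", 59))  # → 287082
```

The printed value matches the published RFC 6238/4226 test vectors, which is a handy way to check an implementation like this against the spec.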

The next most important thing is not to use the same password everywhere. Passwords are constantly leaked and published online, and if you use the same password in many places, a single leak lets people break into all the relevant accounts at once. To make unique passwords workable you’ll need a password manager like LastPass or 1Password. Stop procrastinating and just set that up.
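Relatedly, you can check whether a password has already shown up in a public breach without ever sending the password anywhere, using the k-anonymity scheme behind the Have I Been Pwned range API: you transmit only the first five hex characters of the password's SHA-1 hash, download every breached hash sharing that prefix, and match the remainder locally. A sketch of the client-side part (no network call is made here):

```python
import hashlib


def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent to
    the Have I Been Pwned range API and the 35-character suffix that is
    matched locally, so the full hash never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


prefix, suffix = hibp_range_parts("password")
# A real client would GET https://api.pwnedpasswords.com/range/<prefix>
# and search the response for <suffix> to see the breach count.
print(prefix)  # → 5BAA6
```

Password managers typically do a check like this for you, but it's worth knowing that the scheme leaks only a hash prefix, not the password itself.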

The next most important thing is to quickly install software updates for MacOS, Windows, Android, iPhone, Chrome, Firefox and other software you use. Most of these get a package of security fixes each month, and you should set them to update automatically.

To protect yourself against ransomware, keep full drive backups on an external hard drive that you plug in periodically.
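A proper backup tool (Time Machine, Windows File History, and the like) is the right answer here, but the principle is simple enough to sketch: copy your files into a fresh timestamped folder on the external drive each time, so malware that corrupts the current files can't also silently rewrite the older snapshots. A minimal illustration; the paths in the usage comment are hypothetical:

```python
import shutil
import time
from pathlib import Path


def backup(source: Path, drive: Path) -> Path:
    """Copy `source` into a new timestamped snapshot folder on `drive`.

    Each run creates a separate snapshot, so an infected "current" copy
    can't overwrite earlier ones, especially if the drive is unplugged
    between backups.
    """
    dest = drive / time.strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(source, dest)  # fails loudly if the snapshot exists
    return dest


# Hypothetical usage:
# backup(Path.home() / "Documents", Path("/Volumes/ExternalDrive"))
```

The key design point is the unplugged drive: ransomware can only encrypt what it can reach, so offline snapshots are the thing it can't touch.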

I asked Bruce for advice and he actually suggested that having antivirus on your computer was a good idea as well. I’ve heard mixed reports on that personally.

Finally, learn to identify phishing emails, which can easily be used to break into your accounts even if you have two-factor authentication (unless it’s U2F).

OK I just wanted to comment on one other point that came up in the interview.

I asked Bruce whether there were any differences between cybersecurity and biosecurity which might make analogising from one to the other dangerous. He said he suspected there weren’t. But my colleague Howie pointed out two of them.

The first is that once software security flaws are made public we know how to patch them, and this usually happens very quickly. If new biological threats are made public, it’s not clear we have any way to deal with them, and certainly not quickly.

The second is that billions of dollars are spent by hackers looking for software vulnerabilities, which means most problems out there are found pretty quickly, and are likely already being exploited. But there’s no way to make lots of money finding dangerous ways to use biotechnology, so few people bother trying. That means it’s more likely for dangerous possibilities to go unnoticed by anyone for years or decades, which might well be a good thing.

Alright, it’s a shame I wasn’t fast enough to raise those issues with Bruce, but I’m sure we’ll be able to revisit them in future.

The 80,000 Hours Podcast is produced by Keiran Harris.

Thanks for joining, talk to you in a week or two!

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths, from academics and activists to entrepreneurs and policymakers, to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected]
