#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

The potential of neural interfaces

Nita Farahany: Yeah, you don’t even have to think up, down, left, right. Right now, we have become so used to the interfaces we use, like a keyboard or a mouse, that we’re used to not thinking, “I’m going to move my finger left and I’m going to move my finger right.” But really what we’ve done is we’ve added friction between us and operating a device: there’s some intermediary that you have to use brainpower to operate; you have to use your brain to have your finger move left and right. And just think about the time that you lose there too.

But it’s also unnatural. I mean, we’ve learned it, so it’s become more natural. But think about right now: whoever’s listening can’t see me, but I’m moving my hands in the way that I normally do when I’m expressing a thought. I’m not thinking, “move my hand up or down or left or right”; it’s just part of how I express myself. Similarly, the idea of being able to operate a drone is you’re not thinking, “now go left” or “now go right”: if you’re looking at a screen, that is the navigation — you’re just navigating, right? You’re just intentionally navigating. And then the drones are an extension of your body; they’re an extension of your mind that are navigating based on how you are naturally navigating through space.

And that’s the difference with neural interfaces: it’s meant to be a much more natural and seamless way of interacting with the world and with other objects that become extensions of our minds. Rather than the more direct connection that we have right now with our body, it’s forging a connection with external technology without the kinds of intermediaries that we’re used to — which, if you kind of step back and look at them, are a little weird. It’s a little weird that you move your hand around to actually try to navigate through a space. Or if you’re in virtual reality, it’s weird that you have to be using a joystick to move, right? You should just be able to think about moving naturally.

Luisa Rodriguez: Totally. Yeah. That really, really helped me. I don’t know if this works, but another analogy I’m thinking of is that I’ve now got muscle memory for my keyboard. I know that the L is on the right and the A is on the left. And not only could this remove the need to learn to type, but in theory it could also remove something like the fact that I’m used to having to translate whatever kinds of thoughts I have, both verbal and visual, into linear sentences created on a Word doc, where I edit in a certain way, and I can’t backspace as quickly as I want to, or I have to switch to my mouse. It’s a mix of physical hand-eye coordination and also just something like the way of thinking.

Nita Farahany: Yeah. We’ve learned a way of expressing ourselves through chokeholds, right? But we have become accustomed to those chokeholds, and so it’s as if it’s natural — and in many ways, it is for us, because that’s what we’ve learned. That’s how we’ve wired our brains. Neural interface imagines a new world where, rather than having the chokehold, you are operating more like one with the devices that you’re operating, and you’re operating without the chokeholds in between.

There’s still going to be limitations on being able to have a full-throttled thought expressed through another medium. We have limitations of language right now of how we communicate: you can hear my words, but you can’t also see the visual images in my mind that go with those words. You can’t feel the feelings that I am feeling along with the words that I’m saying. You can pick some of that up from the tenor of my voice or pieces like that, but you’re not getting all of it.

And even when you’re interacting with a swarm of drones, there’s still these limitations. But I think people dream of a world in which brain-to-brain communication might enable sending across to another person a more full-throttled thought than we currently have. I don’t know of any technology that does that yet. I don’t know of anything that actually captures it. And part of it is, I don’t think anybody has figured out how to decode those multiple layers of thought, from cognition to metacognition to the full embodiment of thought. But I think it’s neat to think about that, the possibility of actually getting to that level of communication with one another.

Hacking neural interfaces

Nita Farahany: There was a patient who was suffering from really severe depression — to the point where she described herself as being terminally ill, like she was at the end of her life — and every different kind of treatment had failed for her. Finally, she agreed with her physicians to have electrodes implanted into her brain, and those electrodes were able to trace the specific neuronal firing patterns in her brain when she was experiencing the most severe symptoms of depression. And then, after tracing those patterns, the device was able to interrupt those signals every time they activated. So think of it like a pacemaker, but for the brain: when a signal goes wrong, it overrides it and puts a new signal in. And that meant that she now actually has a typical range of emotions, has been able to overcome depression, and now lives a life worth living.

That’s a great story, right? That’s a happy story and a happy application of this technology. But it means we’re now at the point where you can trace specific neuronal firing patterns, at least with implanted electrodes, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts? Could it be that one day, if you’re wearing an EEG headset that also has the capacity for neurostimulation, someone could pick up specific patterns of thoughts and disrupt those specific patterns — if your device is hacked, for example? Maybe.

I mean, we’re now sort of imagining a science fiction world where this is happening. But that’s how I would imagine it would first happen: you could first have very general stimulation — like I experienced at the Royal Society meeting, where suddenly I’m experiencing vertigo — from somebody hacking your device. Like, I’m wearing this headset for meditation, but it’s hacked, and suddenly I’m experiencing vertigo and I’m disabled. You know, devices get hacked. We can imagine devices getting hacked — and especially ones that have neurostimulation capacity could be hacked either in really specific patterns, or in ways that could just generally take a person out.
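The “pacemaker for the brain” analogy describes closed-loop neurostimulation: a device watches for a symptom-linked signal and fires a counter-stimulus whenever it appears. Below is a minimal, purely illustrative sketch of that loop in Python; the biomarker, threshold, and device functions are invented stand-ins, not any real neurostimulator’s API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for a device interface; no real API is implied.
def read_biomarker() -> float:
    """Simulate a symptom-linked biomarker (e.g. power in a frequency
    band traced during the mapping phase): noise plus occasional spikes."""
    return rng.random() + (2.0 if rng.random() < 0.05 else 0.0)

def deliver_pulse() -> None:
    """Simulate a brief counter-stimulation pulse."""
    print("pulse delivered")

SYMPTOM_THRESHOLD = 1.5  # assumed calibration from the tracing phase

# Pacemaker-style loop: detect the traced pattern, interrupt it.
for _ in range(200):
    if read_biomarker() > SYMPTOM_THRESHOLD:
        deliver_pulse()
```

The same detect-and-act structure is what makes the hacking scenario concrete: whoever controls the stimulation side of the loop controls what gets written into the brain.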

Neural data and mental privacy

Nita Farahany: What does it mean to leave your brainwave collection on? It means multifunctional devices, right? So the primary devices that are coming are earbuds, headphones, and watches that pick up brain activity, but also let you take conference calls, listen to music, do a podcast. All of those things. And so, passively, it’s collecting brainwave activity while you use it in every other way. People are used to multifunctional watches, they’re used to rings, they’re used to all of these devices. It is another form of quantification of brain activity.

Then what does it mean? So you do it to unlock your app on your phone. Now you’re interacting with an app on your phone. How did you react to the advertisement that just popped up? Are you engaged? Is your mind wandering? Did you experience pleasure, interest, curiosity? It’s your actual reaction to everything. A political message ad pops up on your phone. Did you react in disgust? Did you react in curiosity and interest?

I mean, these are all the kinds of things that can start to be picked up, and it’s your reaction to both explicit content, and also subliminally primed or unconsciously primed content — all of which can be captured, right?
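To give a rough sense of how such passive readouts work today, consumer EEG analysis often reduces to band-power features. The sketch below computes one classic heuristic from the human-factors literature, the beta/(alpha + theta) “engagement index”, on synthetic data; the sample rate and the choice of this particular index are assumptions for illustration, not details from the episode.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sample rate in Hz of a hypothetical consumer EEG earbud

def band_power(x: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Average power of signal x in the [lo, hi] Hz band (Welch PSD)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def engagement_index(eeg: np.ndarray, fs: int = FS) -> float:
    """Crude attention proxy: beta / (alpha + theta) band-power ratio."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 12)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# Demo on 10 seconds of synthetic "EEG" (white-noise stand-in).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * FS)
print(f"engagement index: {engagement_index(eeg):.2f}")
```

An app could log a scalar like this every few seconds alongside whatever was on screen, which is all it would take to build the kind of reaction profile described above.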

Luisa Rodriguez: Yeah, I find myself drawn to the benefits. But also, I’m not the kind of person who’s super privacy-oriented, and I can easily see myself being like, “Who cares if they know my reaction to a song? I feel fine about that.” But then I can just really easily imagine the slippery slope where the technology keeps getting better and better, and it picks up more complex thoughts. And also, I’m not even correctly thinking about all the ways this data could be used. I’m probably imagining these kind of benign cases, but actually there are probably 100 different uses that I’m not even thinking of, and some of them might actually bother me.

Nita Farahany: Some of them might be totally fine. And some people — and you’re right, which is a lot of people — are not that worried about their privacy in general. So they may react to this and say, “That’s fine. Maybe I’m just going to get much better advertisements.” And that’s OK. If people choose that, if they’re OK with giving up their mental privacy, that’s fine. I’m fine with people making choices that are informed choices, and deciding to do whatever they will do.

I would guess there is a lot more going on in your mind than you think that you want other people to know. I would just ask you: Do you ever tell a little white lie? Do you ever tell a friend that you like their couch when you walk in? Or if you have a partner, do you ever tell them that their new shirt looks great? Or like, “No, you can’t tell about that giant zit on your forehead. You look terrific.”

There’s a lot of things that are like that. Or your instant reaction to something is disgust, but you have a higher order way of thinking about it. Or, less benignly, you harbour some biases that you’re trying to work on. You realise you grew up with some ingrained societal and structural biases, and you’re working on that. So your instant reaction to somebody with a different colour of skin or a different hairstyle or a different whatever — pick your bias — is one that you’re not proud of and you recognise it, you sense it in yourself, because that’s something you’re working on. And your higher-order cognitive processing kicks in, and you think, “No, that is not me. That is not who I want to be.” But your brain would reveal it, right?

Or you’re figuring out your sexual orientation, you’re figuring out your gender identity when you’re much younger, and your reaction to advertisements or your reaction to stimuli around you gives you away well before you’re ready to share that with the world. There’s a lot of that. Maybe you don’t have it in your life, but you might. You might have some of that in your life.

It’s hard to imagine that world, is just what I would say, because we’re so used to all of the rest of our private information that we in some ways intentionally express. Or like, yeah, I drove there, so you picked it up on my GPS. Or I typed that, but I intentionally typed it. There’s a lot of filtering that you’re doing that you’re just not even fully aware of. And just imagine the filter being gone. Filter is gone: all of it can be picked up and decoded by other people. And we haven’t even gotten to manipulating your brain based on what it is that people learn about you. This is just the passive decoding of information.

Will neurodata be used to convict criminals the way Fitbit data is?

Nita Farahany: So the Fitbit cases are passive collection of data, meaning you have your Fitbit on, and it’s tracking your movements and activities, and you’re not consciously creating the information. And then later, the police subpoena that information and use it to confirm or to try to show that you weren’t doing what you said you were doing at the time.

With brain data, it’s a little bit different in the UAE context: there, it’s been used as a tool of interrogation. So instead of passive creation of data, a person is hauled into law enforcement, into the police station, and then they are required to put on a headset, like an EEG headset. Again, these headsets can be like earbuds or headphones, but just imagine a cap that has dry electrodes picking up the person’s brainwave activity.

Then they’re shown a series of images or read a series of prompts, and law enforcement are looking for what are called event-related potentials; they’re looking for automatic reactions in the brain. And here what they’re looking for is recognition — you know, you say a terrorist name that the person shouldn’t know, there’s no context in which they should know it, and they recognise it; their brain shows recognition memory. Or you show them crime scene details and their brain shows recognition memory.

And in the UAE, it’s been used apparently to obtain murder convictions by doing this. Similar technology has been used for years in India. And there’s been a really interesting set of legal challenges to the constitutionality of doing that in India, but in countries around the world, this technology apparently has already been used in a number of cases to obtain criminal convictions.
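Mechanically, these recognition tests rest on event-related potentials such as the P300: the EEG is averaged time-locked to each stimulus, and a recognised “probe” item produces a larger positive deflection roughly 300 to 600 ms after onset than irrelevant items do. Here is a toy version of that averaging step, with a fake P300 injected into synthetic data so the contrast is visible; all the numbers are assumptions.

```python
import numpy as np

FS = 256               # sample rate in Hz (assumed)
EPOCH = int(0.8 * FS)  # analyse 800 ms after each stimulus

def average_erp(eeg: np.ndarray, onsets: np.ndarray) -> np.ndarray:
    """Average the EEG across epochs time-locked to stimulus onsets."""
    return np.stack([eeg[t:t + EPOCH] for t in onsets]).mean(axis=0)

def p300_amplitude(erp: np.ndarray) -> float:
    """Mean amplitude in the 300-600 ms window, where a recognition
    response (P300) appears as a positive deflection."""
    return float(erp[int(0.3 * FS):int(0.6 * FS)].mean())

rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * FS)      # one minute of fake EEG

probe_onsets = np.arange(10, 50) * FS   # stimuli the subject "recognises"
for t in probe_onsets:                  # inject a fake P300 bump
    eeg[t + int(0.3 * FS):t + int(0.6 * FS)] += 1.0
irrelevant_onsets = probe_onsets + FS // 2  # stimuli with no recognition

probe = p300_amplitude(average_erp(eeg, probe_onsets))
irrel = p300_amplitude(average_erp(eeg, irrelevant_onsets))
print(f"probe response: {probe:.2f}, irrelevant response: {irrel:.2f}")
```

Real systems layer classifiers and statistical tests on top, but this averaged-epoch contrast is the core signal the interrogator is looking for.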

I have not gotten verification of this other case yet, but MIT Tech Review reported on this, and I reached out to the woman who made the comment about it at a conference. Apparently a patient who suffers from epilepsy has implanted electrodes in their brain — and this is not uncommon with some conditions like this — that can either be used to control the epileptic seizures or detect them earlier, something like that.

So this person had implanted electrodes. And I say that just because the data is being captured regularly all the time: if you have implanted electrodes it’s passively always collecting brain data. And the person was accused of a crime, and they sought their brain data from the company — the defendant themselves, rather than the government in this case — to try to show that they were having an epileptic seizure at the time, not that they were violently assaulting somebody. And that would be the first case of its kind, if that turns out to be true.

And really, it’s just like the Fitbit data, where people would say, “Google, provide my Fitbit data, because I want to show I was actually asleep at the time, not that I was moving around, and I couldn’t have killed somebody because I was asleep at the time.” Or “My pattern and alibi fits with what the data shows.” The brain data is going to be a lot more compelling than the Fitbit data in those instances. And just like the person can ask for the data, so too can the government then subpoena from a third party, the person who actually operates the device, that data as well.

How companies might use neural data in the workplace

Nita Farahany: I wrote a couple of scenarios in the book. Most of it is grounded in ‘here’s exactly what’s happening today.’ But I wanted to help people understand, no matter what their frame of reference is, why it would be problematic. I wanted to try to help people who really strongly believe in freedom of contract in the workplace — so kind of the staunchest libertarian, who thinks, “OK, but the market will take care of itself” — understand why, in a context like this, the market can’t just take care of itself.

The kind of scenario that I painted in the book for that was to imagine this: You’ve got your employee who’s wearing these earbuds to take their conference calls and do everything else, right? And there’s asymmetry in information — that is, the employer can see what the person’s brain is doing at any given time, but of course, the employee can’t see what the employer’s brain is doing at any given time.

So the employer calls the employee up and says, “Hey, I wanted to let you know that you did great last quarter, and so you’re going to get a raise. I’m delighted to let you know that you’re going to get a 2% raise in salary.” And the employee’s brain data shows that they are just thrilled. Like, they’re just so happy: “Hooray, I’m getting a 2% raise!” But they know better than to say, “Hooray!” — they know that would give away their negotiating position right away — so they say, “Thanks so much. I was actually hoping for a bigger raise. I was really hoping for 10%.” And while that’s happening, they’re afraid, right? And you register that in the brainwave activity. And the employer says, “I’m going to think about it and I’ll get back to you.”

And then they go and look at the brain data, and they see that the person was overjoyed when they got the 2%, and super fearful when they countered with the 10%. They have this additional asymmetry of knowledge, which really frustrates freedom of contract. It turns out the employer could easily handle the 10% — they’ve got the funds: their revenue really went up last quarter, they could have easily done it — but they have this information. They come back the next day and say, “So sorry, we can only afford 2%.” And the person feels relieved, but still content, and the employer walks away having gained a significant advantage from what the brain data revealed.

And that is to just help people understand that in every conversation, your reaction to every piece of information, can suddenly be gleaned. It’s not just whether you’re paying attention or your mind is wandering. It is your reaction to company-level policy as it’s flashed up and how you actually feel about it. It is working with other people in the company where your brain starts to synchronise with theirs — because when people are working together, you start to see brainwave synchrony between them — and maybe you guys are planning for collective action to unionise against the company, but you see a bunch of brainwaves that are synchronising in ways that they shouldn’t, and you’re able to triangulate that with all of the other information that you’re surveilling them on, and you prevent them from doing so.

So I’m describing a scenario in the workplace where the employer is just looking at broad emotional brain states. I would not be surprised if in a few years — or even less, really — what we’re talking about is decoding more complex thought.
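The “brainwave synchrony” mentioned above is often quantified with the phase-locking value (PLV): each signal’s instantaneous phase is extracted with a Hilbert transform, and the PLV measures how consistently the phase difference between two recordings holds over time. Here is a minimal sketch on synthetic signals; pairing PLV with this workplace scenario is an illustration, not something described in the episode.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """PLV between two signals: 1.0 means perfectly phase-locked,
    values near 0 mean no consistent phase relationship."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)        # 10 s at 256 Hz (assumed)
rhythm = np.sin(2 * np.pi * 10 * t)  # shared 10 Hz alpha-like rhythm

a = rhythm + 0.3 * rng.standard_normal(t.size)  # two "synchronised" signals
b = rhythm + 0.3 * rng.standard_normal(t.size)
c = rng.standard_normal(t.size)                 # an independent signal

print(f"synced pair:      {phase_locking_value(a, b):.2f}")  # close to 1
print(f"independent pair: {phase_locking_value(a, c):.2f}")  # close to 0
```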

The risks of getting really deep into someone's brain

Nita Farahany: The more people who have brain-computer interface technology as implanted neurotechnology, the more that they need to have a better sense of “Where am I and where do I end, and where does the technology begin? And how do I understand the interrelationship between me and the technology?”

I was talking to a researcher, a scientist, recently, who does a lot of work in deep brain stimulation. She was talking with me about her hearing loss and how she has started to wear hearing aids, and that that’s required her to sort of reestablish her sense of self in the world, because her concept of hearing is fundamentally changed. And we were talking about that in relationship to deep brain stimulation, where she sees patients who are suffering from intractable depression, and they then have an implanted device, and it takes about a year before they start to develop a sense of, “This is me, and that’s the technology, and here’s where I end, and here’s where the technology begins, and here’s me plus technology” — like this new concept of self. And I think we have to get to this place — whether it’s with implanted neurotechnology, or wearable neurotechnology, or just me and my mobile device — to start to update human thinking about us in relationship to our technology and our concept of self as a relational self.

We talked earlier about hacking. We could get into the dark side of all of this. But before we even get to the risks, the question is: How do people understand themselves? And one thing people have worried about a lot with these technologies is a discontinuity of self. There’s you, and then there’s you after the implant. And maybe you after the implant is a fundamentally different person. Or maybe, accidentally in the surgery, parts of the empathetic you got damaged, and suddenly you are a violent killer or something like that.

There are all those kinds of things that might emerge, but I think probably the most fundamental one that people have really grappled with is: how do you get informed consent? How does somebody truly understand what it means to be a different person, before and after, in relation to a technology implanted in their brain? And how do you think about that future self and make decisions that are truly informed when you can’t have any idea of what that actually is like?

Luisa Rodriguez: Yeah. Out of curiosity, can you take me into the dark side? What are some of those less likely, but maybe scarier risks?

Nita Farahany: Yeah, I’m happy to go there. Although I’ll say this: I do a lot on the ethics of neurotechnology, and I am far more concerned from an ethical perspective about widescale, consumer-based neurotechnology than I am about implanted neurotechnology. That’s true both because there’s a very different risk-benefit calculus for the people who are currently part of the population who would receive implanted neurotechnology, and because it’s happening in a really tightly regulated space, as opposed to consumer technology, where there are almost no regulations and it’s just the wild west.

But in the dystopian world — and with all of those caveats, which I think are really important — I think it’s still possible, without really good cybersecurity measures, that there’s a backdoor into the chips: that some bad actor could gain access to implanted electrodes in a person’s brain. And if they’re both read and write devices — not just intruding on a person’s mental privacy, but with the capacity to stimulate the brain and change how a person behaves — there’s no way we would really even know that’s happening, right? When something is invisibly happening in a person’s brain that changes their behaviour, how do you have any idea whether it’s happening because somebody has hacked into their device, versus coming from their own will or intentionality?

And we have to understand people’s relationship to their technology, and we have to be able to somehow observe that something has happened to a person, which would lead us to investigate whether something has happened to their device — whether somebody has gained access to it or interfered with it or something like that.

You know, we’re dealing with such small, tiny patient populations. It’s not like the president of the United States has implanted neurotechnology, where some foreign actor is going to say it’s worth it to hack into their device and turn them into the Manchurian candidate. But in the imagined sci-fi world, what could go wrong if this goes to scale — if Elon Musk really does get a brain-computer interface device into every one of our brains — is that we’d have almost no idea that a person had been hacked, and that their behaviour was not their own.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
