The Battle for Your Brain, with Nita A. Farahany

March 14, 2023 - 72 min listen

The time has come to extend human rights to cognitive rights, proposes Nita A. Farahany, a professor at Duke Law School, in her newly published book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. In it she surveys the wide range of already-deployed devices that can sample various forms of brain activity. In the book and in this wide-ranging Artificial Intelligence & Equality podcast with Carnegie-Uehiro Fellow Wendell Wallach, Farahany explains how even the limited cognitive information collected by neurotechnologies can be combined with other data to improve self-understanding or to manipulate attitudes or states of mind.


WENDELL WALLACH: Welcome. I'm Wendell Wallach, co-director of the Artificial Intelligence & Equality Initiative (AIEI) at Carnegie Council for Ethics in International Affairs. This podcast is the second in our series on neuroethics. The first was with Dr. Joseph Fins, with whom we discussed his research on using neurotechnologies to communicate with minimally conscious patients. Joe described neuroethics as "the ethics of technology," and I think that will become even clearer today as we talk about the breadth of neurotechnologies already deployed with my colleague and friend Nita Farahany. We are especially delighted to welcome her today, the publication date of her marvelous new book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

Before we get into the heart of the matter, let me tell you a bit about Nita. She is a leading scholar of the ethical, legal, and social implications of the biosciences and emerging technologies, particularly those related to neuroscience and behavioral genetics. Nita is a professor of law and philosophy at Duke Law School and the founding director of Duke University Science & Society. In 2010 Nita was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues, where she served until 2017. Nita is also part of the World Economic Forum's expert network. In fact, we first met at a World Economic Forum event in Tianjin, China.

Congratulations, Nita, on the publication of The Battle for Your Brain.

NITA FARAHANY: Thank you, Wendell. I'm delighted to be with you today. I can't imagine anyone I would rather have this conversation with, or celebrate this with, than you, given our long shared history.

WENDELL WALLACH: Thank you.

To start out, let's talk a little bit about the manipulation of our brain and behavior with the use of neurotechnologies, because I think that is what comes immediately to mind for many of our listeners when they hear about technologies designed to get in touch with what is going on in our inner lives. Tell us how precise the manipulation of the brain is, what you find acceptable, and what you think is truly over the line.

NITA FARAHANY: That is a good, big, giant place to start, Wendell.

There are all kinds of predictive algorithms that can already tell, with a startling degree of accuracy, what we are thinking or feeling in general ways. Think about a platform like TikTok and the algorithms that power it. Part of the reason governments like the United States are so worried is that after a person spends minutes to hours on a platform like that, the algorithm becomes better and better able to tell what that person's preferences, desires, and biases are, to segment them, and to give them much more of what those preferences reveal. That can subtly manipulate and change people's behavior by shaping their views and making them think that whatever they are interested in, whatever their bias is, there is a whole lot of it in the world. It becomes their entire world as the algorithm more precisely shapes what they are interacting with.

Think too about the ways the rest of the technology we interact with is designed to hack the shortcuts in our brains: autoplay features that keep you onscreen watching the next show; a like button that plays on your craving for social reciprocity, a shortcut in your brain that has you coming back time and again; notifications bunched together in just the precise way to keep you addicted to platforms. Our brains are being manipulated all the time. So when I wrote The Battle for Your Brain I did not write it about neurotechnology in isolation. I wrote it about neurotechnology as integrated into that broader environment, as well as about the technologies that are using advanced understandings of the brain, through developments in neurotechnology and neuroscience, to manipulate the brain more precisely.

When I think about manipulation and what it is, it is not an easy line to draw. We try to persuade each other all the time. I am trying to persuade you and listeners today about the importance of the battle for our brains. But when do we cross from persuading other people, trying to bring them around to your perspective, sharing knowledge with them, or inspiring them with your call to action, into what we would call impermissible, unethical manipulation?

In The Battle for Your Brain I proposed a line that is different from what some other people have proposed, by going through the categories of neuromarketing (marketing to our brains based on a better understanding of them), addictive technologies, disinformation, and the use of our brain heuristics and shortcuts, and by looking at a startling new marketing strategy called "dream incubation." I argue that the line we need to draw in all of that may be different from what some other people think.

WENDELL WALLACH: Dream incubation—tell us what that is.

NITA FARAHANY: Dream incubation kind of creeped me out, to be perfectly honest, when I first read about it, Wendell. It is a marketing technique where researchers, together with marketers, have tried to figure out whether you could use the suggestible state of mind just upon waking, when blood flow has not yet been fully restored to the prefrontal cortex and distributed across the brain and your brain is at its most suggestible, to essentially plant preferences, desires, or even associations.

Coors was regularly excluded from advertising during the National Football League's Super Bowl and wanted to figure out if there was some other marketing tactic or technique they could come up with. What they decided to do was reach out to a researcher who had been studying dream incubation.

What she found is that there is a suggestible state of mind during the period just after waking up, before full blood flow is restored to all parts of the brain, and that if during that suggestible window you play things like a soundscape or show visual images, you could potentially lead a person, as they fall back asleep, to think about whatever it is you have primed them to think about.

If you have primed them to think about, for example, mountains and water in connection with Coors, that it's refreshing, and to carry that positive association into their dreams as they fall back asleep, then on the next waking, while the person was still in that state of mind where they could remember, the researchers would ask what they dreamed about. Sure enough, based on self-reports, they were dreaming about mountains, water, and this refreshing association with Coors. That idea, that you can use the time a person is unconscious, asleep, as a time to market to them, seems pretty chilling to me actually.

WENDELL WALLACH: Might a neuromarketing company, Coors, or somebody else be able to know, unbeknownst to you, that you were in that vulnerable state?

NITA FARAHANY: Potentially. I should begin by saying that the hope with this kind of research is that people could use the suggestibility of sleep state to do things like work on post-traumatic stress disorder (PTSD) and overcome traumatic memories, that there could be therapeutic and valuable applications for it. I am not troubled by somebody consenting to the use of dream incubation for therapeutic purposes or really for any purpose. I am troubled by exactly what you asked about, which is the possibility that it could be used in ways that are not fully consented to.

For example, people wear biosensors to sleep that track their sleep activity, whether that is a watch or a sleep mask with sensors built into it or a Fitbit that picks up their sleep activity. Those can pick up the moment at which you have those jostles, those movements where you are at an awake-enough state that you are in that suggestible state of mind. With increasing use of biosensors for picking up brain activity that could become even more precise.

Given the ubiquity with which people have cellphones in their bedrooms and bedside tables or other in-home smart devices like Google Home or Amazon Echo devices that could play music, you could imagine a world in which there is integration between these things. Your Apple Watch senses that you are waking up and then starts to play a soundscape for dream incubation.

Again, for therapeutic reasons that might be just fine. You might actually program that and think, This is just what I need to fall back asleep, but if it was done without consent for marketing, for micromarketing, or even to try to shape a person's views, political preferences, or ideology, the possibilities of using a suggestible state of mind as a time to target the brain could be profoundly problematic.

WENDELL WALLACH: What about in more ubiquitous kinds of applications like neuromarketing generally? Are there other areas than dream incubation where you would want to see informed consent?

NITA FARAHANY: The tricky thing about neuromarketing is that you have informed consent from the people to undergo the studies but not informed consent by people who are subject to the marketing. I am less troubled, I should say, by neuromarketing as a practice and as a tactic. I don't think it is that different from other marketing practices which we have found to be permissible for a very long time.

What feels uncomfortable about neuromarketing to many people is the very reason it has become so popular and widespread: the belief that people's self-reports, their perception of their own preferences, do not align with or predict their actual responses to advertisements as well as their brain-based responses do. You show a series of advertisements to a person without looking at their brain activity, and they say, "I like this one," "I like that one," or "This is the one that is most engaging." Then you show them the same material using brain-based detection techniques, whether that is electroencephalography (EEG), functional magnetic-resonance imaging (fMRI), or some other technology, and their brains show very different levels of activation, interest, and immersion, and show it with precision: "This is the moment at which they stopped looking, their engagement dropped off; they didn't feel joy when they saw this; they felt disgust or boredom." You use those insights to change the advertisement until you get the reaction you are hoping for, notwithstanding what the person has self-reported.

That idea of going with what our brains reveal rather than what people say feels to some like bypassing conscious preferences, bypassing the conscious mind. I am less troubled by it because the truth is that there is not a separate unconscious and subconscious mind in that way. We have one mind, and it integrates all kinds of unconscious and subconscious primes in our daily life, and this is a more effective marketing technique that is generally aimed at giving people more of what they want.

I understand why people are uncomfortable with it, and I think it is possible that it could be put to ends that we would as a society come to decide as wrong. As it is currently used I don't think it violates our freedom of thought, I don't think it interferes with our mental privacy, even if it does sometimes feel uncomfortable given the precision of information and insights it gains about human behavior.
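
To make the kind of "engagement" readout described above concrete, here is a minimal sketch in Python of the classic beta / (alpha + theta) band-power ratio, a long-standing engagement heuristic from the EEG literature. The sampling rate, band edges, window length, and synthetic data are illustrative assumptions, not any neuromarketing firm's actual pipeline.

```python
# A minimal sketch, assuming single-channel EEG sampled at 256 Hz.
# The beta/(alpha+theta) ratio is a textbook engagement heuristic,
# not the proprietary measure of any neuromarketing vendor.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(eeg, fs, lo, hi):
    """Mean power spectral density between lo and hi Hz (Welch estimate)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def engagement_index(eeg, fs=FS):
    """Higher values suggest attention/engagement; lower, drowsiness or boredom."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 12)
    beta = band_power(eeg, fs, 12, 30)
    return beta / (alpha + theta)

# Toy usage: score each 2-second window of a 60-second ad viewing to see
# where engagement "dropped off." Random noise stands in for real EEG.
rng = np.random.default_rng(0)
recording = rng.standard_normal(FS * 60)
scores = [engagement_index(w) for w in recording.reshape(-1, FS * 2)]
print(f"windows: {len(scores)}, min: {min(scores):.2f}, max: {max(scores):.2f}")
```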

WENDELL WALLACH: Great, great.

Let's talk about other applications of neurotechnologies and other environments and what is already going on. One of the things I found fascinating about your book was how many examples you had of neurotechnologies already being used in the workplace. Tell us about that.

NITA FARAHANY: One of the things that has been interesting as I have talked with people about The Battle for Your Brain is that many people will say, "What if we could do X, Y, or Z?" and I say, "We're already doing that."

They will say, "I know, but what if your boss could monitor your brain in the workplace?"

"Yeah, that is already also happening in the workplace."

I wanted to focus the book on documenting as many real-world examples as I could find of the technology to help people understand that this is not a science fiction book. This is not about the future. This is about a future that has already arrived, and the question is just the scale that it will reach before we do something about it.

The book is very much a call to action, to say, "This is already happening." It will happen much more rapidly now that these devices are becoming multifunctional, that is, now that brain sensors are being embedded into everyday devices like earphones, headphones, and watches.

In the workplace the thing that was surprising to me was that it has been around for a while. There is a company called SmartCap out of Australia that has been selling EEG devices that track electrical activity from the brain for more than a decade already. More than 5,000 companies worldwide are using SmartCap technology, which uses dry sensors built into hard hats, baseball caps, and brims to pick up brain activity. They are using it to monitor the fatigue levels of people in mines, on factory floors, and of truck drivers. They have done pilots in the United States and in other countries worldwide.

They are not the only one. There are also hundreds of thousands of these devices in use across Asia, where workers are required to have their brain activity monitored.

It is important to note what brain activity is being monitored. It is not as though people are having their real-time inner monologue decoded by their employers. What is being monitored right now is brainwave activity from a few sensors worn on the scalp, which can pick up some basic things like fatigue levels, attention, engagement, frustration, and boredom, these kinds of basic brain states. And yet, given the right cues and environment, it can also be used to probe brains for information, recognition, and preconscious signals. There are already some disturbing reports coming out of China, for example, about that kind of testing of people's brainwave activity to pick up things like political ideology or adherence to the party line.

The fact is it is happening, and I think it is going to go much more wide-scale. There are also brain wellness programs happening worldwide. In a lot of companies, even U.S.-based ones, rather than being used as a tool for surveillance, neurotech headsets have been given to employees as part of brain wellness programs to improve their stress levels and enable them to undertake meditation and "brain breaks," and that can all be positive. I think that can be a great thing: cognitive ergonomics, trying to design a better workplace.

One of my concerns about brain wellness programs, or health wellness programs generally, is that, as well-intentioned as they are, the data captured through them is not subject to the Health Insurance Portability and Accountability Act (HIPAA). That means the privacy an employee would enjoy for health records, which employers cannot access or use, does not apply: data gathered in wellness programs, which could include brainwave data, can generally be accessed and used by employers for any purpose they want.

WENDELL WALLACH: Could they sell it to a neuromarketer?

NITA FARAHANY: Yes.

WENDELL WALLACH: Because that is the problem. We are moving into this realm where different kinds of data are getting picked up in very different contexts. What happens if it gets consolidated?

NITA FARAHANY: Brain data has already been commodified. Entertech, a Chinese company that sells a device called FlowTime, which is used in the United States, has entered into an agreement with Singularity to sell huge sets of brain data. I am sure that is just one of many examples of the commodification of this data. A lot of the training data that neurotech companies rely on is data they obtained through wellness programs, which they have then been able to repackage and sell for insights to companies from L'Oréal to IKEA.

WENDELL WALLACH: "Wellness" sounds great, and wellness can be used to increase productivity of course, but have you encountered examples where the monitoring of this data is being used in a negative fashion to increase productivity or create fear in workers that they will be disciplined if they don't maintain certain attention levels, for example?

NITA FARAHANY: Already in China there are reports about employees who have their brain metrics tracked. The metrics are used for decisions about their jobs and their employment, and even their emotional stability is assessed during the workday. If their emotional levels suggest that they could be disruptive, they are sent home.

Students in China who have had their brain data measured and monitored during the school day have reported a chilling effect, a fear of being punished if their brain metrics show that they are not paying as close attention as they ought to be. Some have reported being punished by their parents, who look over the data, or by their teachers, and fears about it being used by the state and feeding into the broader surveillance society that China has established. So already there are reports of discomfort with this.

In Australia, where SmartCap has been rolled out, the workers at one of the mines were represented by a union, and they organized against the use of the SmartCap devices at their mine because they felt it would be Big Brother, a big corporation looking at their brains, and they were deeply uncomfortable with that. Because the union had the right to review and approve the introduction of any new technology in the workplace, they were able to successfully keep it out, which speaks to that same fear of misuse.

WENDELL WALLACH: This is mainly workplace, and you mentioned students at one point, but how are governments getting involved in this? Are they collecting biometric data, including your brain activity?

NITA FARAHANY: Governments have already started to use this technology in a number of different ways. Some of the earliest documented uses are of brain-wearable EEG headsets to probe or interrogate criminal suspects. There is a characteristic pattern in the brain called the P300 response, which is a preconscious signal of recognition. For example, I show you a picture of your wife, and your brain registers recognition, which I can pick up through recognition memory.

That technology was developed in the United States to pair the P300 signal with probes: going through a police file to find things that only a criminal suspect or somebody on the investigative team would know, then showing those to the person and seeing whether or not they signaled recognition memory. It has not been used very much in the United States, for a variety of reasons we could go into, but it has been exported abroad, and many police departments to this day are using some version of that technology to interrogate criminal suspects.

There is also a lot of focus on the development of brain-functional biometrics: neural signatures used to authenticate a person. If you are wearing one of these brain-wearable devices that pick up electrical activity in your brain and you sing the first stanza of your favorite song in your head, and then I sing that same song in my head, your brainwave activity will look different from mine. So we could record yours as a baseline, and that would be your password; the next time you want to unlock something, you would unlock it with that neural signature. Governments are investing a lot in figuring out whether functional biometrics could become a biometric they use to authenticate individuals. Then there is significant investment in military applications of the technology, whether that is to build stronger militaries or to pursue more disconcerting cognitive warfare.
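
As an illustration of how such a "neural password" could work in principle, here is a minimal sketch: enroll a template of band-power features from several recordings of the user's imagined song, then authenticate by cosine similarity against that template. The channel count, feature choice, and threshold are assumptions for illustration, not any government's or vendor's actual scheme.

```python
# A minimal sketch of functional-biometric authentication, assuming
# 4-channel EEG at 256 Hz. Band-power features and a cosine-similarity
# threshold are illustrative choices, not a real deployed scheme.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = [(4, 8), (8, 12), (12, 30)]  # theta, alpha, beta

def features(eeg, fs=FS):
    """Per-channel band powers flattened into one feature vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS]
    return np.concatenate(feats)

def enroll(recordings):
    """Average several recordings of the imagined song into a template."""
    return np.mean([features(r) for r in recordings], axis=0)

def authenticate(template, attempt, threshold=0.9):
    """Accept if the new attempt's features are close enough to the template."""
    f = features(attempt)
    cos = f @ template / (np.linalg.norm(f) * np.linalg.norm(template))
    return cos >= threshold

# Toy usage with random noise standing in for 10 s of 4-channel EEG.
# (White noise makes all feature vectors look alike; real per-person
# EEG responses are what would make the signatures distinguishable.)
rng = np.random.default_rng(1)
template = enroll([rng.standard_normal((4, FS * 10)) for _ in range(5)])
print(authenticate(template, rng.standard_normal((4, FS * 10))))
```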

WENDELL WALLACH: Tell us more about cognitive warfare. That was intriguing.

NITA FARAHANY: Cognitive warfare is scary, not just intriguing, Wendell.

WENDELL WALLACH: I was intrigued when you brought that up in the book, but yes, I was also scared.

NITA FARAHANY: There was a report sponsored by the North Atlantic Treaty Organization (NATO) a couple of years ago entitled "Cognitive Warfare." It focused on the idea that the "sixth domain" of warfare is warfare for the human brain, for the human mind, and that happens in a few different ways. One of them is information and propaganda warfare, and we are seeing a lot of that. We saw it with Cambridge Analytica. We have seen it in attempts to use platforms like social media and disinformation and propaganda campaigns as a battle for the brain.

The other area is significant investment in brain-computer interfaces, brain-wearables, and neural interface devices, both to enhance militaries and soldiers and their capabilities and to interfere with others. Imagine a world with widespread neural interfaces, the world I have described and that I think is coming, where people are wearing their ear buds and headphones, and all of that brain activity is being tracked and can also be used and interfered with by other countries.

These are not always just single-directional devices. They can be bidirectional devices that write to the brain as well as read from it: You are watching TikTok, and you have your brain-sensing ear buds in. Not only can I tell how you are moving through the screen, but I am also picking up your brain activity and your emotional reactions to the different videos you are looking at, and I am giving you more of the same, or trying to amp you up and make you frustrated and angry.

There are also worries about hacking information from people's brains. With the same kind of P300 response we were talking about, researchers have tried to figure out whether you could hack information. If I am wearing these all day in my workplace environment and I type in my personal identification number (PIN), can you decode that from brain activity? It seems the answer is yes. Is there some sort of on/off switch I could have to make sure that when I am entering sensitive information it is not being recorded? Or could you put subliminal probes into my on-screen environment that would probe my brain for things like my PIN? These are all possibilities within the world of cognitive warfare.

I am going to tell you one more. There is this whole domain people really worry about, which we can get into or not as you want, about weapons to target or disable the brain, and that is I think the scariest category of all. This is like the Havana syndrome claims, claims that countries are developing electromagnetic or other weapons to try to target and take out brains, even to eliminate people's capacity for choice.

What do you think about that, Wendell?

WENDELL WALLACH: That is truly scary, but I get confused when we get into this subject area. On the one hand there is all this speculation about what is coming or might be coming, but it seems to me that some of the speculation assumes more exacting feedback about what is going on in your mind than we can actually get now, or than we can get unless you have been in a situation where you are donning all the appropriate technology.

On the other hand, it seems to me what we are dealing with here are some pretty crude tools that are picking up some baseline information about mental activity; they become powerful because they are able to combine it with other activities, such as what we are clicking on the screen or what we are actually buying on Amazon while that brain activity is going on.

Help me understand. Are we still largely in this realm of applying crude brain information, or are we truly moving into the more science fiction-y realm where more exacting information about your brain is accessible without your having willingly agreed to be, let's say, in an fMRI machine?

NITA FARAHANY: Let me answer your question sideways first. There have been a couple of criminal cases recently where passively collected Fitbit data was introduced into the case: a person who, for example, said they were asleep at the time and so could not possibly have been the killer, and their Fitbit data showed they were in fact asleep at the time; or Fitbit data showing they were quite active and moving at the time they said they were asleep, contradicting the account they gave.

I say that because when you describe a world in which people have voluntarily gotten into an fMRI that is different than the world I'm imagining. The world I'm imagining is the world that has already arrived, where people are wearing multifunctional devices like ear buds, headphones, and watches that have brain sensors in them that can pick up brain activity.

How much can we decode? It would be science fiction for sure to say that you can take the deep inner monologue a person is having inside their brain, unexpressed speech that they have not intentionally communicated, and decode it with a couple of sensors in each ear. That you cannot do. You might be able to make some crude inferences if a person is surfing on Amazon with those two ear buds in their ears, but you cannot pick up complex thought from these devices.

You may not even be able to do that with fMRI. What you might be able to decode with fMRI is what a person is imagining in the moment, but not the full dialogue inside their brain. Real-world mindreading is not happening with these devices in that way.

WENDELL WALLACH: Right.

NITA FARAHANY: But that does not mean there isn't a lot of information you can decode from the brain. You call it crude, or some people call it crude, but we could not before pick up engagement, boredom, frustration, mind-wandering, attention, fatigue levels, simple numbers, shapes, and even faces that you are seeing in your brain. This is a form of mindreading. It is just not the full resolution of everything in your brain, and the question is: Is that information dangerous or risky? Is it different than what I can pick up otherwise? Sometimes. Right now the algorithm and your surfing on TikTok may tell me more about what is going on in your mind than one of these brain-sensing ear buds would.

But if you combine them, which is the reality, we are not talking about brain activity in isolation. We are talking about the missing piece of the puzzle that corporations and governments have about people: the unexpressed emotions, the unexpressed inner feelings and biases, uncommunicated through swipes, movements, or anything else. You think you have a poker face? It turns out your poker face can be decoded through the emotions revealed by your brain activity.

There is some form of mind-reading that is happening with these devices. Will they get more precise over time? Will we get even more resolution and more information over time? Absolutely, but right now, today, can they tell your complex thoughts? No, and I don't think they ever will from one little sensor in each ear.

WENDELL WALLACH: I remember talking, already ten years ago, with Rosalind Picard, the godmother of affective computing, about the extent to which we would consider it acceptable to have technologies that could deduce our emotions, not even with these sensors, but from facial expressions or other actions. It seems to me that we may not mind if your computer knows that you are happy, but do we want the computer to know that you are vulnerable, that you are in that kind of state? From what you are saying, it would seem that might be pretty easy to know with technologies that you are donning without knowing what they pick up.

NITA FARAHANY: Yes, I think that is right. I think it will be pretty easy to know some things that you don't even know, including the suggestible state that you are in, when you are the most vulnerable, when you are the weakest, when you are the most tired, when you are the most likely to pull the trigger on buying something that you shouldn't buy because your resolve is at its weakest in that moment in time.

There are a lot of things that can be beneficial about those kinds of insights. I am hardly somebody who is just dystopian about the technology, but what I want people to move away from is dismissing the technology because they think it cannot yet read enough for us to need to act now. It can already do more than enough that we should be concerned about it, and if we wait until we have technology that can decode full-resolution thought, it will be far too late to do anything about it.

Right now is the right moment. There is more than enough widespread use of the technology, there are more than enough applications and capabilities in these technologies to justify action, but we are not so far down the path that it is pervasive and that we can do everything with the technology that we will one day be able to do. I consider that the "sweet spot" of acting.

WENDELL WALLACH: I guess what I am trying to grasp here—and I hope that we are being clear about it—is that on one level we are not getting into your brain in a way where we know exactly what you are thinking. We are dealing with general characteristics and the ability to put those general characteristics sometimes together with other activities you are engaged in, such as striking your keyboard or purchasing something, some activity you might be engaged in on the web.

You used this one example, and I am intrigued by it, about guessing your password. I wonder how much research has actually been done on that. Has it been demonstrated with a high degree of probability that you can guess somebody's password with a combination of neurotechnologies?

NITA FARAHANY: There have not been a ton of studies on this. There have been a few. If there have been more, at least they have not been peer-reviewed studies that I have come across.

There are two different ways in which that presents a potential concern. One is that your brain activity is being passively monitored while you are doing things like entering your PIN into your bank account on the screen of your computer. That has a neural representation, and that neural representation can potentially be decoded and recorded, so the information could be intercepted through your brain activity while you are entering it on the screen. It comes up as little black circles on your screen; it does not come up as little black circles in your brain. In your brain it is the actual numbers you intend to type. In the same way that some devices already enable you to type on a virtual keyboard, that is because your intention to type "2714" can be decoded.

The second is an experiment done a number of years ago now which used recognition memory and subliminal priming within a gaming environment. Gamers wore these neural-interface headsets. They knew they were part of a research experiment, so this was not literally stealing information from their brains, but they did not know when the priming happened. The idea was to see whether the researchers could flash numbers and, through the subliminal primes, figure out the participant's PIN and mailing address, and they were able to do that with a high degree of accuracy.

That needs to be replicated many, many times, across different brains, different people, and different subjects, to see the extent to which it is a problem. These studies were done as "proof of concept" to help people understand that there are cybersecurity and biosecurity risks in decoding information from the brain, whether by probing the brain for information stored in it or by picking up sensitive information you think you are conveying in secret, decoded from brain activity as you intentionally communicate or type it: one stored in your brain, one expressed through typing.

To your point, am I literally rifling through your brain, Wendell, to find your PIN in your brain activity? No. The question is, can you use access to brain activity to get at information stored in the brain? That is different, importantly different, from your full, robust thoughts. It is not as though you are sitting there daydreaming and I am decoding everything you are thinking about, but the idea that everything in your brain is safe and that only general states of mind can be decoded is also wrong, because you can probe the brain for specific pieces of information and even decode specific pieces of information as a person is thinking about them.

There is a gulf, a gap, between mind-reading as the ordinary person thinks of it, decoding the complex thoughts and images in your mind, and what the technology can do. But the technology is also much more advanced than people think, in that it is not just some general sense that I am getting from your brain. I can actually (not me, but researchers, scientists, and the people who sell the technology) decode information from brain activity with increasing precision.
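
For a sense of how the P300-style probing described in this exchange works mechanically, here is a minimal sketch: average EEG epochs time-locked to "probe" stimuli versus irrelevant stimuli, and compare amplitudes in the classic 300-500 ms window. The sampling rate, window, margin, and synthetic data are textbook oddball-paradigm assumptions, not the protocol of the specific studies mentioned.

```python
# A minimal sketch of oddball-style P300 recognition detection, assuming
# epochs of single-channel EEG (trials x samples) time-locked to stimulus
# onset at 256 Hz. Window and margin are textbook heuristics.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def p300_amplitude(epochs, fs=FS, window=(0.3, 0.5)):
    """Mean voltage of the trial-averaged ERP inside the P300 window."""
    erp = epochs.mean(axis=0)  # averaging trials suppresses background noise
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return erp[lo:hi].mean()

def shows_recognition(probe_epochs, irrelevant_epochs, margin_uv=1.0):
    """Flag recognition if probes evoke a reliably larger positivity."""
    return p300_amplitude(probe_epochs) - p300_amplitude(irrelevant_epochs) > margin_uv

# Toy usage: inject a fake ~5 microvolt positivity at 400 ms into the
# probe trials, standing in for a genuine recognition response.
rng = np.random.default_rng(2)
t = np.arange(FS) / FS
bump = 5 * np.exp(-((t - 0.4) ** 2) / 0.002)
probes = rng.standard_normal((40, FS)) + bump
irrelevants = rng.standard_normal((40, FS))
print(shows_recognition(probes, irrelevants))  # True for this synthetic data
```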

WENDELL WALLACH: I think this is probably what is most important for citizens to understand right now, that a lot of the science fiction-y things are truly science fiction at this point. Don't presume that we are getting involved in the intricacies of your thoughts.

On the other hand, what is not science fiction, what can be done, may on the surface seem to be somewhat superficial, but when you start looking at the applications of it, it is anything but superficial in terms of how it is already being applied.

NITA FARAHANY: That I think is an important point, Wendell. What I try to do in The Battle for Your Brain is show not only what we can do but how it is being applied and misapplied, because you think, Oh, it doesn't matter if you know X, Y, and Z from my brain, and then I show how it does matter and how it can be misapplied against you. The point is, modern-day science and technology already give us reason for concern that, if we do not put in place appropriate safeguards and appropriate rights for individuals, our brains actually are at risk, even if mindreading is impossible today.

WENDELL WALLACH: Let's make sure everyone has a sense of the narrative arc of the book, because I think you did a marvelous job of building from rather standard but surprising examples to a richer and richer appreciation of how minds perhaps are already being intruded upon in ways that require us to put in place some restraints, some extension of human rights. Before we talk about the actual extension, maybe you can take people through that narrative.

NITA FARAHANY: Sure. I lay the book out in two ways, I would say. The first is across the dimensions of tracking or decoding the brain and hacking or manipulating the brain. The first half of the book focuses on the remarkable advances that have been made across a number of different contexts and the ways in which individuals, corporations, and governments can already track and decode what is inside the human brain. The second half focuses on the ways in which the brain can be changed, which range from individuals enhancing or slowing down their own brains to their brains being changed, manipulated, or assaulted by others. That is one way I walk through it.

What I am really doing with the book, underneath all of that, is building an arc around the concept of the right to cognitive liberty. It is organized around helping us understand, in context-specific ways, the right to mental privacy that is at risk and needs to be recognized, the right to self-determination over our brains and mental experiences and how we ought to think about that, and the right to freedom of thought. Those pieces are the pillars of the book, building toward recognition of the right to cognitive liberty. Through each chapter, and through detailed examples of the ways neurotechnology, but also neuroscience, algorithms, and other modern-day advances, are being used, I show how those three rights, mental privacy, self-determination, and freedom of thought, are fundamentally at risk and have to be updated so that we are empowered by neuroscience, neurotechnology, and modern technologies, not oppressed or surveilled by them.

WENDELL WALLACH: None of those rights to my understanding exist in law yet. We are talking about them, but perhaps I am wrong about that.

NITA FARAHANY: You are not wrong, but I am hopeful we may both be proven wrong, and here is how: I argue for recognition of a right to cognitive liberty, which requires updating preexisting human rights. What that means is, there is a right to privacy under the United Nations' Universal Declaration of Human Rights (UDHR), there is a right to self-determination, including an already-recognized right to informational self-determination, and there is a right to freedom of thought. Because privacy, freedom of thought, and self-determination were all written and interpreted in a world in which our brains were not transparent to others, they were not written in a way that addresses these issues.

There was a wonderful report written by Ahmed Shaheed, the now-former special rapporteur on freedom of religion or belief, presented to the UN General Assembly in October of 2021, arguing for an updated understanding of freedom of thought: that it has been rather narrowly interpreted to apply to freedom of religion but is broadly written and could and should cover these advances in neurotechnologies, artificial intelligence (AI), and other ways in which the brain can be hacked and tracked.

That is what I am arguing for as well: that the right to cognitive liberty, as an umbrella right, directs us to update those existing rights. That does not require the recognition of brand-new rights. It requires updating the general comments attached to these rights, or the opinions issued by the Human Rights Committee that oversees the treaties implementing these rights, to direct our attention to saying: "Look, these rights exist. Now that they are under threat from these modern technologies, we have to update them to recognize that." Human rights law by design evolves as society and the risks to human rights evolve. This is a natural evolution as the technology evolves.

WENDELL WALLACH: As you know, there is considerable resistance to the evolution of human rights law and international humanitarian law. We don't have to get into the war side of that.

NITA FARAHANY: There is resistance to the recognition of new rights, Wendell.

WENDELL WALLACH: Won't this be seen as new rights?

NITA FARAHANY: Declaring through a general comment written by the Human Rights Committee that there is a right to cognitive liberty implicit within existing UDHR rights would be new. What would not be new is updating each of those three rights consistent with cognitive liberty. What it does not require, as a result, is all of the nations coming together and passing a new right. The existing bodies, like the Human Rights Committee that oversees the implementation of the International Covenant on Civil and Political Rights, have the power to issue opinions and write general comments that update our interpretation and understanding of those rights, and I think they would have the political will to do so.

I would dare say this is an issue that crosses all kinds of divides. When you tell people about the technology, when you tell people about the risks of greater brain transparency, I don't care what their political ideology or background is, there is rare unanimity of concern and of belief in the right to cognitive liberty. Maybe I am naively optimistic, but I think it is possible.

WENDELL WALLACH: Let's go with the worst-case scenario. Both you and I have had much more interaction with Chinese scholars than is common, even among our colleagues, and most of the scholars I interact with are supportive of human rights, but we do know that the government has been very resistant to the embrace of human rights, even though China is a signatory to the Universal Declaration of Human Rights, something I remind them of every time I am in conversation with them. Their resistance is to human rights being defined by the more liberal nations.

NITA FARAHANY: I think they would have a super, super-hard time coming out in opposition to the right to think freely. Speak freely maybe, but the right to think freely? They would have a super-hard time voicing opposition to that is what I think.

Would they take actions contrary to it? Would they violate the norms once they have been recognized? Will they violate the laws once they have been passed? Maybe. Look at their investments in cognitive warfare: the United States issued sanctions at the end of December 2021 because of China's reported development of brain-controlled weapons. They have not been all that secretive about their intention to focus on the brain as the next domain of battle.

WENDELL WALLACH: They have not been secretive at all.

NITA FARAHANY: Do I think that there will be actors who violate these norms? There always are. No matter what, no matter how many human rights we pass, no matter how many laws we pass, there are actors who violate them. I don't think the result is that we act in a defeatist way and don't try to do the best we can to identify and recognize rights that both create norms and protections.

If you look at the way Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) has gone, you could look at the fact that there was a rogue scientist out of China who acted contrary to global norms, using CRISPR on embryos and transferring them to a woman's uterus, or you could look at the fact that there was swift global condemnation of that and that there have been no additional reports since then—I say "reports" rather than "actions"—of the same conduct. I see that as not a failure. I see that as a success of the implementation of global norms, laws, and regulations about the progress of science and technology.

WENDELL WALLACH: One of the interesting things to me, for example, about China is that I see the government is actually amazingly responsive to a lot of the concerns that come out of the academic community and the public more generally, just as long as they are not threats to the government itself.

Let's get away from China for a minute. I have concerns about the Department of Defense, what it thinks is acceptable in neurotechnologies, and its desire to enhance soldiers. That raises questions about what enhancements it foresees, whether they will be reversible, and whether there will be truly informed consent. Can there be informed consent when you are dealing with soldiers who understand that resisting what their officers ask of them will not reflect well on their experience in the military? I have this concern, particularly for those aspects that might have military applications, about whether the U.S. government, let alone the European Union and NATO, will be willing to embrace some of these constraints.

NITA FARAHANY: I think that's fair. There are always military carve-outs and national security exceptions. Rights, including human rights, are context-specific. They are rarely absolute. The way I have described cognitive liberty, it is not an absolute right against any abusive neurotechnology by any third party. Like most rights, it is relative. Mental privacy is a relative right. Freedom of thought is an absolute right, but it is pretty narrow in what it would protect: that core of robust thought and the images in your mind. Self-determination is not absolute. It gives way to strong-enough societal interests.

Any right is constrained by national security, military, and other exceptions that are carved out, but again if we are talking about the average person and their ability to have self-determination over their brains and mental experiences, recognizing a right to cognitive liberty would change the terms of service. It would put the default rules in favor of individuals.

I think that is a good starting place. Does it solve all of the risks of misuse of neurotechnology and technologies designed to track and hack our brains? No. But is it better than throwing up our hands and saying: "This is our final frontier of privacy and freedom. Oh, well." No, that is not the right answer either.

This is I believe a hopeful pathway forward, but it is a pathway forward that I think we have to act on now. We cannot wait to have the sad story about how we had a moment to act and to define and create default rules in favor of individuals and we chose not to, and now brain data is commodified just as easily as our financial data, online activities, and everything else that has been used to quantify, discriminate, and make choices about people.

WENDELL WALLACH: Let me ask you about that now. You and I both know a woman, Wrye Sententia, who 20 years ago when I first met her was already pushing the right to cognitive liberty.

NITA FARAHANY: Indeed.

WENDELL WALLACH: It wasn't just her. There was a whole flock of transhumanists in those days who perceived all kinds of technologies that still have not arrived as being right around the corner. In fact I think many of them were surprised that a decade went by and very little progress was made relative to their expectations. But now is not 20 years ago. Why is the moment now?

NITA FARAHANY: I have been writing about this stuff for a very long time. The first time I wrote about cognitive liberty I thought I had cleverly come up with the term. I think the first article I wrote it in was "Incriminating Thoughts" in the Stanford Law Review. I then found the work by the Center for Cognitive Liberty & Ethics, where they had developed the concept and had some great insights about how the right should apply and how it should be used. I think they were well ahead of their time. I think I was ahead of my time when I was writing "Incriminating Thoughts."

I wrote my proposal for this book in 2012. It is not that I wrote it from 2012 to 2023. It is that I set it aside for a while. While I thought it was an issue that I was fascinated by, I did not see it being an issue of scale yet. I thought, These are real issues that we ought to work on, but the call to action was not urgent in my mind.

Then I heard the presentation by CTRL-labs in 2018 about the device they were developing and how they were embedding the sensors into a wristwatch. I realized immediately that of course that meant that it could be put into something like an Apple Watch. I was like: Oh, my gosh. That is going to get acquired in a heartbeat, and I am sure it's going to be Apple that acquires it, and that is what is going to take this mainstream. Then Facebook acquired it a year later.

That is when I said, "I am writing this book now," because you have half a billion dollars being invested by Facebook in a multifunctional device, people are finally getting past the form factor, and Big Tech is investing in this heavily. Now is the moment to do so. That is when I turned my attention to writing the book.

I had been working for years, in prior articles, on developing and building out what the concept of cognitive liberty meant to me, but this was the really deep dive for me, because it was not just about what is happening; building the philosophical case and the legal case for cognitive liberty was very hard work that I had not fully done, figuring out all of its contours across all of these different contexts. That is when I decided it was worth the investment of time, because now, as we sit at the moment when the technology has already grown to scale (literally this year major multifunctional devices from major companies are launching), I realized the call to action is now.

WENDELL WALLACH: So the ubiquity of the devices, and the fact that most consumers don't even know what they are taking on when they don these devices.

NITA FARAHANY: That plus one more piece. As you know as an expert in AI, the algorithms have gotten so much better. The sensors have gotten better, the multifunctional devices have gotten better, but the training data and the power of the algorithms to actually decode information—from 2012, when I first wrote the book proposal, until now, we are in a completely different world, and it is happening so much faster than people expected. With generative AI and what that is going to mean for the next generation of these devices I don't think we have time to wait anymore.

I think we are on that rapid-ascension curve now, whereas a decade ago, when I first wrote the proposal, the book might have been way too early for a call to action. I would have said, "It would be good in the next decade for us to recognize this right, so that we are ready when it goes mainstream," but I don't think I could have made the case then. If I had written the book in 2012, you would not hear me saying, "It is absolutely urgent that we do so today." It would have been more of an academic exercise.

WENDELL WALLACH: I appreciate not only that it is now but that you did the legwork of laying the philosophical and legal framework within which we can put some of these rights in place and build on them.

Let's pivot to a final topic. I think you alluded to it in terms of the right to self-determination, but I believe you also see a positive narrative in terms of how these technologies can be used for individual empowerment and self-determination. This is a topic I have also written about and discussed, but from a very different angle than the one you bring up. Tell us how you see the technologies being brought into this.

NITA FARAHANY: I believe cognitive liberty is not just a right from; it is a right to. That right to I think is very powerful when it comes to the ability to know thyself, know one's own brain, and change one's own brain.

One thing that is extraordinary as we sit here in 2023 is how little any of us know about our own brain activity and brain health. Most people can tell you their cholesterol numbers, they can tell you their heart rate, their blood pressure, and their sleep patterns. There is so much of the human body that has been quantified—the number of steps they take per day—but brain health, wellness, and activity is a giant black box.

Think about the idea that people could have real data to peer into their own brains: to learn about their own biases; to test against real, objective data whether they really do focus best in the morning, or whether they really are slow and sluggish today; to track their cognitive development, or cognitive decline, over time; to catch the earliest stages of glioblastoma while it is still operable, before it becomes a death sentence; to meditate more effectively; to decrease their stress levels; to enhance and speed up their brain activity and improve their focus and concentration; or even to work on erasing painful memories or treating PTSD, as I did. I think there is a powerful case for a right to that.

There is nothing more fundamental to the experience of being human than one's own brain and mind, and the right to access one's own brain data and the right to speed up, slow down, or modify it I think is as fundamental as it can be to what it means to be human, so I see the right to self-determination to be a critical component of the right to cognitive liberty. It is the right not just to free will or something like that; it is the right to self-determination over our brains and mental experiences, which is the right from but also the right to.

WENDELL WALLACH: So it is a right to have that knowledge, to have that information, to utilize those tools in terms of how they help you understand yourself better.

NITA FARAHANY: Yes. What is the context you wrote about it in, Wendell?

WENDELL WALLACH: For me it goes back many decades into meditative practices, self-knowledge, self-help, and self-therapy. I have always felt that that is central to our ability to function as something more than machines of our conditioning and to recognize when we are under psychological, internal, or social pressure to act in ways that we may not be fully conscious of, that our actions are more or less conditioned, more or less set for us.

I had always championed this need for self-knowledge, but it seems to me in this age that it has become more than a simple healthy pathway in your life. Your buttons are being pushed all the time by these technologies. Your buttons are being pushed all the time by the fundraising letters you get, the scam phone calls you are getting, and it is very important not to unconsciously walk through life in this age of new technologies. It is becoming central that you have enough self-knowledge to recognize when those buttons are being pushed. Whether or not you have the conscious knowledge, you can at least have the subtle attunement to when your body is telling you something is off.

NITA FARAHANY: Totally right. A more pedantic version of that is the Twitter button that asks you, "Would you like to read the article first before you send it?," engaging your brain to think more slowly. It attunes your awareness to the shortcuts that have just happened in your brain and gets you to slow down and be more thoughtful about your interaction with technology. I think it couldn't be better said. It has never been more important for people to actually slow down and think about what the technologies and environments they are interacting with do, and how they can make a difference in what that experience is.

WENDELL WALLACH: I think it is wonderful that some of the technologies are integrating that and saying, "Stop, take a look around." I think I have conditioned myself to stop quite often and reflect, so I don't always stop when my Apple Watch tells me it is nap-taking time, but I am quite thankful that at least some of the manufacturers have taken on facilitating self-knowledge and self-awareness of what is taking place in the moment as one of the functions they want to offer their users. I wish that were more ubiquitous than the collection of data about where our buttons lie and how we can subconsciously (I don't think it is unconsciously) be nudged into activity that serves those doing the nudging, for political and marketing purposes.

The other part of this is that I think you not only have the right to self-determination and not to be interfered with by others; I think you also have the right to the data, to the profile the algorithms build every month of what the marketers think I am, so I could peruse it and be a little more self-aware of when I am being manipulated.

NITA FARAHANY: I argue for a right to informational self-access. I think easily that right to informational self-access—which has already been recognized in international human rights law—should include that. It should include the data. It should include transparency into the algorithms: What is it that is being collected about you? What is it that is being inferred? I think we would make a huge difference in the world if we could get that recognized, Wendell.

WENDELL WALLACH: We would have accomplished something if that can be put in place, and luckily it's not just us. Luckily we have a whole community of legal scholars, tech ethicists, and others out there who have now come to recognize that these technologies cannot be just benignly ignored.

NITA FARAHANY: I think that's right, and that is true with neurotechnology too. I am not speaking into a void. There are so many important organizations, scholars, and academics doing the hard work of developing ethical frameworks and guidelines and trying to help make this technology empowering and valuable to individuals, and that includes many of the technologists themselves. So many of the companies are deeply invested and engaged in trying to find an ethical pathway forward.

That I think is encouraging. While this is a call to action, it is not one that happens without a lot of momentum already in the right direction. But it is a call to action that many people in the general public have not joined yet. That is part of why I wrote this book in a way that I hope is very accessible, concrete, and grounded: to help even people who are not yet in the debate, not yet part of the conversations, not yet part of the momentum toward recognizing the right to cognitive liberty, to join the conversation, understand the stakes, and understand what is already happening, to make that possible.

WENDELL WALLACH: I think you have achieved that. I can highly recommend the book. I know that many listeners of our podcast are looking just for a dilettante's level of understanding, to know what a book is about, but let me share with you, as somebody who has actually read the book in prepublication, that we have just scratched the surface. If this is a subject that intrigues you, I suggest you purchase a copy of The Battle for Your Brain. I don't think you will be at all disappointed as you make your way through it.

The second part about this is that this is a book that has made a contribution. People may not remember Nita Farahany in future years, but she has certainly done some original work here that is going to catalyze others in the field to pick up from pointers she has laid down and work to hopefully put in place the kinds of cognitive liberties that she proposes for us all.

NITA FARAHANY: Thank you, Wendell, both for your kind words but also for this rich and delightful conversation.

WENDELL WALLACH: Thank you ever so much, Nita, for sharing your time, your insights, and expertise with us. This has indeed been another rich and thought-provoking discussion.

Thank you to our listeners for tuning in and a special thanks to the team at the Carnegie Council for hosting and producing this podcast. For the latest content on ethics and international affairs be sure to follow us on social media at @carnegiecouncil. You can also go to carnegiecouncil.org for other podcasts and articles that we have published. My name is Wendell Wallach, and I hope we earned the privilege of your time. Much appreciated.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
