Ethics Matter: A Conversation on Bioethics with NASA's Paul Root Wolpe

Mar 8, 2013

TV Show

Highlights

In this eye-opening conversation, renowned bioethicist Dr. Wolpe grapples with the ethical issues raised by advances in biotechnology and neuroscience, including "brain fingerprinting" and eventual mind-reading.

Introduction

MARLENE SPOERRI: Hello, and welcome to Ethics Matter. I'm Marlene Spoerri, program officer for Ethics Matter here at the Carnegie Council for Ethics in International Affairs.

Glow-in-the-dark pets, insect drones, and brain reading are just some of the new directions that science is headed. Today's guest has spent his career examining the ethics underlying tomorrow's scientific breakthroughs. As one of the nation's most prominent bioethicists, Paul Root Wolpe encourages scientists to reconsider not only what they can do, but what they should do.

Dr. Wolpe is the Asa Griggs Candler Professor of Bioethics and the director of the Center for Ethics at Emory University. He serves as the first senior bioethicist for NASA and the first national bioethics advisor to Planned Parenthood.

As if that wasn't enough, Dr. Wolpe is also the co-editor of the world's premier journal of bioethics, the American Journal of Bioethics, and he sits on the editorial boards of over a dozen professional journals in medicine and ethics.

Dr. Wolpe, it's a real pleasure to welcome you to the Council.

PAUL ROOT WOLPE: Thank you.

Remarks

MARLENE SPOERRI: One of the first things that I'd like to discuss with you is what you tell your students at Emory University. You say that one of the first things you discuss is that science fiction no longer exists. Can you talk to us a little bit about that?

PAUL ROOT WOLPE: When I talk about biotechnology and some of the advances we've made in science recently, I think back to the science fiction I read as a kid. I was a real fanatic. I realized how many of those things that were seen as fictional, or as things that we might someday accomplish, are now being done almost routinely. There's very little that we imagined being able to do years ago that isn't being attempted.

We are synthesizing virtually every organ of the human body. We have artificial hearts and lungs and kidneys and bladders and artificial skin in development or already here. We have entered an era of brain imaging that is moving very quickly towards allowing us, perhaps, to apprehend subjective thought; in other words, mind-reading.

We are re-creating animals. We are genetically changing animals' very nature, creating hybrid animals.

We are interfacing technology and biology so that we now have all kinds of hybrids, whether we're talking about technologies that put electromechanical controls into brains or about taking nerve cells out of animals and using them to control robots.

The only possible area of science fiction that we haven't quite gotten to yet is being visited by aliens. That's why I'm at NASA. I'll let you know if that actually happens; not that I'll be privy to it, but I'm keeping my eyes open.

MARLENE SPOERRI: You gave a fascinating TED Talk in 2010 in which you described how we are moving towards a third phase in evolutionary history, in which we are actually intentionally designing and altering the physiological forms that inhabit our planet.

Now, to some extent, we have been engaged in that for quite some time, and you discuss dogs, for example. To what extent do you think the ethical quandaries that we are confronting right now are new to this 21st century?

PAUL ROOT WOLPE: They're very new.

The first stage of evolution, Darwinian evolution, was that we got changed over time by whatever ecological niche we happened to be in (and "we" means all organisms).

Then, human beings stepped out of that by creating civilization. What we did was we changed the ecological niche and, therefore, almost reflexively, changed ourselves. That wasn't a direct intervention in evolution; that was an indirect intervention in evolution.

Now, in this third stage, we are directly changing the nature of animal evolution and, perhaps very soon, human evolution. That raises all kinds of new problems and questions. I'll give you one example, of which there are many.

Yes, we have changed animals for a long time through selective breeding, using a natural process to create an unnatural result. That's true of plants and animals. You go into Whole Foods, and there is not a single vegetable you buy there that looks anything like it looked 1,000 or 2,000 or 10,000 years ago. We've created these "natural" foods that we eat, that we think of as being natural. They're not natural. They're human creations.

The difference, and I think where the first step of the ethical concern happens, is the ethics of speed, which no one really discusses. If I want to get from animal A to animal M through selective breeding, I have to go through B, C, D, E, F, G, and at each step I'm looking at my product and making natural corrections. I have time to think through the process. I have time to see mistakes.

But when I can go from A to M through genetic engineering in one step, virtually overnight, I've lost the perspective that leisure brings me. There are some wonderful aspects to doing things quickly, and we like it, and we want the products that we desire to happen more quickly.

But we've lost something. We have to be very careful about that loss, because in moving through all those steps in one step, what we've lost is perspective. So we try to guess ahead of time what all the negative consequences might be for any particular intervention, and we're not really that good at it. The law of unintended consequences has given us invasive species affecting our Great Lakes and our ecosystem; it has put pathogens into the environment that have caused disease.

So before we rush to create these new organisms and think everything is going to be okay, I think we have to put some serious thought into how we want to live in a world where new organisms can spring up overnight.

MARLENE SPOERRI: Let's talk a little bit about your role as a bioethicist. Where do you come in? Do you set boundaries on where you think science should and should not go, or are you raising questions?

PAUL ROOT WOLPE: There are different kinds of ethicists. I'm a sociologist by training; my Ph.D. is in sociology.

Many ethicists are philosophers or theologians. Philosophers and theologians generally believe they have some purchase from which to make ethical decisions, whether it comes from sacred texts or whether it comes from some philosophical strategy to reach an ethical conclusion.

As a sociologist, I don't actually feel like I have any particularly better purchase from which to make ethical decisions than anyone else has. I just know a lot more about it because I've made it my life's work to study it. So, in general, I raise more questions than I give answers. When people ask me for my answers, I give them, always careful to say, "I'm not sure this answer has any more validity than your answer. It's just that I've thought about it a lot. So let me explain why I think the conclusions I come to are valid." Or, in some cases, perhaps I don't know what the right answer is.

Ethics that has simple answers is uninteresting ethics. The question of ethics that's always so difficult is when you have competing values, both of which you think are good values. It's when they clash: speed, say, versus the advantage of having a new organism that might help in some way to create a new drug, or whatever it is. When you have two fundamental positive values in conflict is really when ethics becomes so challenging.

MARLENE SPOERRI: You've written that "scientists have an obligation to reflect on the ends, not just the means, of their scientific work, and that ethical conversation should be part of normal science in the everyday laboratory." To what extent is that the case today?

PAUL ROOT WOLPE: I think it's getting more that way. Part of my goal in my career is to get scientists to take more responsibility for thinking about the ethics of what they do. A lot of scientists do.

We are not very good at teaching that in graduate schools. Then scientists go off sometimes into industry, sometimes into academia, and can be in places where there is everything from lots of ethical conversation to none whatsoever.

I always tell young scientists I think there are three levels at which every scientist has responsibility.

At the biggest level, they have a responsibility to science itself. So when they find people engaging in anti-scientific polemics, or when scientifically valid ideas are not being entertained as they should be (when, despite enormous amounts of evidence, climate change is being denied, things like that), a scientist has a responsibility to stand up for science as an enterprise.

Then, at a slightly reduced level, an expert has a responsibility to their field. So when there are questions (and it's not just challenges), when any questions come up in a particular field, scientists need to speak for that field. They're the experts.

So when we have a question about synthetic biology, we need the synthetic biologists to speak up. Unfortunately, there has been some stigma attached in science to people who go out and speak a lot in public fora or talk to the media a lot. I've been trying to fight that and say, "No, it is a responsibility of scientists."

Then, the third level is your individual science, whatever it is. That means you have a responsibility to what you create. That's not just true of scientists, but it also means that there may be times when you have to stand up to your institution, to your boss, to the goals of your laboratory. It's a very tough thing to do. But I think it is a scientist's responsibility, just like it's a responsibility of someone in business, when they see something unethical happening in the workplace, to stand up for right behavior there as well.

MARLENE SPOERRI: You've also said that it's not simply the responsibility of scientists and ethicists such as yourself, but of all of us.

PAUL ROOT WOLPE: Right. I think the worst thing that could happen would be if ethicists got to make all the ethical decisions. I know them all. Believe me, they're not the people you want. [Laughter] The same thing is true of scientists.

Science is so fundamentally changing the world right now, and it's changing it in all different kinds of ways: not just through the biological sciences; the Internet, too, is a scientific creation. Science and technology, whether it's biotechnology or information technology or communications technology, is altering our world faster than we ever imagined, even when we thought things were going fast 30, 40, 50 years ago, when Alvin Toffler wrote Future Shock about how fast things were going. He didn't have a clue.

Things are going so much faster now, and it's not just the speed of any particular field, but this remarkable interaction that's happening between fields. So information technology is informing the biological sciences. That interchange is happening all over.

That's why Ray Kurzweil (although I don't think he's right about this) suggested that scientific progress is exponential, rather than linear, and that we've just crossed over the knee of that exponential curve. Things are going to happen so fast over the next 30 or 40 years, he suggested, that we're going to reach what he called a singularity: a moment in time when things change so profoundly that there is no way we can predict, on this side of the singularity, what human form or function will be like on the other side.

Now, I think that that is an exaggeration and I don't think we are going to reach a singularity. But I do agree with him that the pace of scientific change is accelerating. It's not just that change is happening fast; change itself is happening faster and faster.

So when you've got that exponential, rather than linear, change, you do have remarkable kinds of unexpected results. I think we just have to be very careful about how to think about them and how to handle them.

MARLENE SPOERRI: Ethics and genetics have long been closely intertwined in the public eye. But I think many of us don't necessarily think of ethics when we think of how we study the brain. You argue that we should. In fact, you're one of the founders of what's called neuroethics. Can you talk to us a little bit about that? Why is the study of the brain and how we use those studies so ethically contentious?

PAUL ROOT WOLPE: Back when genetics in its modern form was first becoming one of the central concerns of ethics, around the time of the Human Genome Project and right before that, why did we think ethics was so important?

One of the things you heard all the time was, "My DNA is uniquely me; it is something about my identity, something about who I am. I don't want people to have it. I want there to be genetic privacy." Well, if you think about it, my DNA isn't uniquely me. I could have (I don't have, but I could have) an identical twin brother who has exactly the same DNA as I have but is not me.

But my brain is uniquely me. What is it that makes us, as organisms, different from other organisms? My memories, my thoughts, my feelings: all of those things that we think of as characteristics of our brains.

So we should be much more concerned about our increasing ability to change the function of the brain. That's one side of it, and we can now do it in ways that really were science fiction just a short time ago. I'm not sure the public realizes where the neurosciences are right now compared to where they were a few years ago. We hear much too much about genetics, in a sense, and not enough about neuroscience.

And not only that, but brain scans are sitting in drawers all over the country, and not that many people worry about cerebral privacy. They're much more worried about genetic privacy. Let me tell you, with the increasing sophistication and technological specificity of brain scans, we should start being concerned about brain scans getting out, rather than our genetics. There's just not that much you can tell from my genome. There's a lot more you are going to be able to tell very soon from my brain scan.

MARLENE SPOERRI: President Obama recently announced that he will be requesting that we invest very heavily, $3 billion, in the mapping of the brain. What are your thoughts on that?

PAUL ROOT WOLPE: The project itself has merit. Whether that's where we should be putting this amount of money is a kind of policy judgment call. I'm not sure it is fundamentally an ethical issue, except insofar as any allocation of money for science is an issue of where we put our priorities.

One of the things I like about the brain map, as opposed to the Human Genome Project, is that you're not seeing a lot of hyperbole about it. Look at what was said about the Human Genome Project and what it would do: it did a lot, it was a wonderful project, and we needed to map the human genome. But in terms of what they promised Congress in order to get the money, it was ridiculous. Some of the things that they said the mapping of the human genome would do simply have not come to pass. And then they realized that they didn't really understand the role of RNA, and then there was proteomics, and there were all of these other associated things that made it so that just knowing the genome wasn't enough.

What I like about the brain map is that nobody has said, "As soon as we map the brain, we're going to be able to solve all the problems of human existence."

On the other hand, it is part of an enterprise of understanding the function of this most human part of us (and, by the way, the single most complicated thing we know about in the universe is the human brain), and we need to understand it better. But it raises a lot of ethical questions that the field of neuroethics has begun to ask.

As we begin to get more and more understanding of how the brain functions and more and more ability to alter that function, we get increasing abilities to do things like understand people's subjective thoughts and perhaps actually control things like religious experience. People are working with transcranial magnetic stimulation to induce this kind of euphoric state.

We're also talking about other kinds of possible technologies whose effects we don't really know. We're implanting deep-brain stimulators for Parkinson's, and now for psychiatric syndromes. We're not really 100 percent sure what the long-term side effects of those things are.

And then, what makes it worse is that when you're in this field, like I am, as soon as I say something like that, I get hundreds of emails from people who say, "I knew it, they're controlling my brain, I know they've been controlling my brain. You've got to stop them. Let me tell you what's happened to me," in great detail, thousands of words' worth: "Yesterday I got up and I had a pain right here and one right there." I get those all the time.

So you have to balance both scientific need and people's irrational fears about this, and then the reality is somewhere in between. Even more than genetics, it's a very contentious and difficult field.

MARLENE SPOERRI: Neuroscience, as you mentioned, has for the first time demonstrated that there may be ways to directly access human thought, perhaps even without the thinker's consent. So there's talk about the potential for mind reading and something called "brain fingerprinting."

You've spent a lot of your time thinking about this. Can you tell us where the science is today on the subject of mind reading and what the ethical risks associated with it are?

PAUL ROOT WOLPE: Mind reading is both much further along than people think and much less of a worry than people think. So how are both of those true?

Because in order for us to do anything even remotely like mind reading right now, despite the fears of so many people who write to me, you have to put someone in a functional magnetic resonance imaging machine [FMRI], which is a multimillion-dollar machine. It's donut-shaped, and their head has to be in the middle of it. So it's not like you and I sitting here and someone can be over there with some device beaming microwaves into our brain and knowing what we're thinking.

Outside of FMRI, we can't know what anyone is thinking. Even in FMRI, it's still extraordinarily crude. That being said, what we can now do with an FMRI machine is far more than I thought 10 years ago we would ever be able to do, period, because I always kept thinking we were going to hit this wall. And yet, every wall we've managed to push through.

So what can we do with FMRI right now? There have been a whole series of experiments that show that function in our brain reflects our thoughts, feelings, and movements. With functional magnetic resonance imaging, we can look at what parts of the brain are working at any particular time, which is all we can actually look at, despite what people think. That is, what we look at is which parts of the brain are using oxygen and which parts of the brain are activated at any particular time, and then we extrapolate from that what the person might be doing or thinking about.

That being said, we can do things like look at a brain and tell whether a person is looking at checks or stripes, or now landscapes or beings or a particular scene from a particular movie when we give the person 20 or 30 scenes. We can know very easily whether the person is thinking of an object or a face, because the facial recognition part of the brain is a very large specific area.
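To make that kind of decoding concrete, here is a minimal sketch of the general pattern-classification approach such studies rely on, run on synthetic data rather than real scans; the voxel counts, labels, and signal strengths are illustrative assumptions, not figures from any actual experiment. A classifier is trained on activation patterns recorded while a person views known categories, then asked to label new, unseen patterns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: 100 scans x 500 voxels. "Face" trials
# (label 1) weakly activate the first 50 voxels; "object" trials (label 0)
# do not. All numbers here are illustrative assumptions.
n_scans, n_voxels = 100, 500
labels = rng.integers(0, 2, size=n_scans)
face_pattern = np.r_[np.ones(50), np.zeros(n_voxels - 50)]
scans = np.outer(labels, face_pattern) + rng.normal(size=(n_scans, n_voxels))

# Train on the first 80 scans, then decode the held-out 20.
clf = LogisticRegression(max_iter=1000).fit(scans[:80], labels[:80])
print(f"decoding accuracy on unseen scans: {clf.score(scans[80:], labels[80:]):.0%}")
```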

But even more, at Carnegie Mellon they did a study where they took random nouns and asked someone to think about broccoli, and think about bicycle, and think about wrench. They did 58 nouns and watched how the person's brain responded as he thought of each of these 58 nouns. Then they took two other unrelated nouns and predicted what the constellation of activation would look like in that particular person when he thought of, say, lampshade. They were startlingly accurate.

So in those kinds of studies we are getting very close to actually being able to apprehend words. Are we at the point where they can say, "Ah, she was thinking of that time she was lying by the Seine drinking Mai Tais with her lover"? No, we're nowhere near that. But we are moving in that direction, and we are moving in that direction pretty rapidly.

I'll give you one other example. We know where the reading area of the brain is. It's a very distinct area, different from the other language parts. What that means theoretically (it has never been tested in the field) is that we could capture someone in a field of battle who refuses to communicate with us in any way and (I want to make this very clear: this experiment hasn't been done, to my knowledge) we could flash words up in front of them and watch their reading area. When we hit our target language, whatever it is, and we saw the reading area light up (which it would, because you can't look at a word you can read and not read it; it's instantaneous), you could say, "Aha, this person speaks" whatever the target language is.

There are a lot of little examples like that that aren't mind reading, but they give us an ability that we never had before, which is to know what's going on in the human brain absent the person's desire to communicate it to us.

Let me say that in a slightly different way. Throughout all of human history, without a single exception ever, not even one, since the time we crawled out of the trees, any information I got about you, I got through your peripheral nervous system. So expression, blushing, heart rate, spoken language, whatever it was, that's all peripheral nervous system activation. I could get no useful information at all from you from your central nervous system; that is, from your brain and spinal cord.

They tried it with phrenology; they tried it with craniometry. It always failed. For the first time in history, we can now get not just crude information but sometimes fairly detailed information directly from the brain. The question in front of us is: Does that make a difference? Does that pose an ethical challenge? I think it poses a great ethical challenge.

MARLENE SPOERRI: Can you talk to us a little bit more about that? What ethical challenge does it pose?

PAUL ROOT WOLPE: It poses a lot of different kinds of challenges.

First of all, let me start with what we are learning about the function of the brain in general, apart from any particular individual. For example, there have been a number of studies that have made the argument (and there are refutations of this, so it's not decided) that we actually don't have free will. These studies have looked at the process of making decisions.

It's a long story, but the bottom line was that a very famous experimenter did a study where he thought he discovered that if you ask someone to do something, the brain activates about 400 milliseconds, 400 thousandths of a second, before the person is consciously aware that they've made the decision. He interpreted that as saying, "We don't actually make any decisions. Our brain makes the decision, informs our conscious mind what the decision is, and we interpret that as having made a conscious decision."

So a big conversation began about whether we actually have free will. What some people have said is, "Well, actually, once we make that decision (so I make the decision I'm going to pick this up), then I can stop and say, 'No, I'm not going to pick this up.' So we don't have free will; we only have free won't."

That is a big argument going on right now. But imagine if someone could actually prove that: that we make no conscious decisions at all, that our decisions are made pre-consciously and inform our conscious mind, and that we interpret that as having made a conscious decision. What does that say about things like criminal responsibility? There's a whole project looking at the issue of jurisprudence and neuroscience. So that's one kind of set of concerns, and that's just one example of that set.

On the other side, there is the question of our being able to control the brain, and of our being able to look, for example, into people's brains. Imagine.

We know, for example, already that the government is looking into brain imaging for lie detection. There are now a number of people working on it, believing that we will be able to create a much better lie detector using brain imaging. There is some really convincing evidence that we are already much better at it than any other form of lie detection we have. It's still not perfect, it's still probably not good enough to use in courts, but much better than polygraphy.

Well, what if we can perfect that? Still, right now it means sticking someone in an FMRI. So you can't really do it yet, and may not be able to do it anytime soon, without the person's cooperation, because if you don't cooperate in FMRI, you can't do anything. You have to lie still; you have to think about what they want you to think about.

So it's not anytime soon. But what if we actually could perfect a lie detector? You know that it's going to be misused, if not by our government (no, our government would never do that), then imagine dictatorships and totalitarian governments.

Then, finally, we are actually now learning how to control the function of the brain. Here we might actually be able to do it more remotely, through something like transcranial magnetic stimulation, which is a very simple coil that sends an electromagnetic pulse into the brain. When you hold it over a particular part of the brain, it disrupts function in that part of the brain. Right now it is being used for a number of different, I think, positive things.

But it could theoretically be used, if it could be refined, in other ways. For example, imagine that we discover that there is one discrete part of the brain that is required for deception, a part that is integral to the act of deception. We don't know yet that that's true; there are some theories about this. Then, what if you could put a TMS [transcranial magnetic stimulation] wand over that part of the brain and disrupt its function? You might actually create not a lie detector, but a deception inhibitor.

Is that really what we want? I mean, do we want a world where the government or some other entity can have a really reliable deception detector or, perhaps even more problematic, a deception inhibitor?

These are questions we need to think about before we have the technology. So we have to engage in what I call prophylactic ethics. We need to think about what this means for us and then try to make some decisions on how we're going to handle it before they become a reality.

MARLENE SPOERRI: Many of the technological advances that we have discussed have a clear military application. In fact, much of the research on things like rat bots and insect drones and mind reading is being funded very generously by the Department of Defense. As you know, the military has long been a driving force behind some of the most significant technological breakthroughs of the 20th century, whether it's the atom bomb or the Internet. Does the close relationship between science and the military, the Department of Defense, raise any ethical questions for you?

PAUL ROOT WOLPE: You know, it's deeply disturbing, and it's not just disturbing because they're funding it. There is no more problematic and far-reaching ethical challenge in technology right now than the drone program itself, the existing unmanned drone program. That program is only going to build. More and more countries already have unmanned drones. The technology for it will be over-the-counter technology in another 10 years.

What happens when battles can always be fought remotely? What happens when you remove the human risk and the human encounter from battle? You create a very, very different kind of killing. That, in itself, is a deeply disturbing path we are already far down. We have already killed thousands of people in drone strikes. Not to mention that even just using them for surveillance raises ethical challenges.

So we don't need to go to rat bots and bug bots and other kinds of technologies to ask these questions. The biological side does have certain ethical challenges of its own. But there is a broader ethical challenge here, about the use of technology. It is a question about how we interact with each other as human beings as technology takes us further and further away from the human encounter, not just in military settings, but in one-on-one settings with our friends.

That's not to say these technologies don't have great use. But they are changing the nature of human interaction. Whenever you do that, you have a deep ethical question that you need to think about, and you need to create a response, I think.

It's not for the ethicists to think about this alone. We all need to think about it, and our philosophers need to think about it, and our humanists need to think about it, and our scientists need to think about it.

MARLENE SPOERRI: In 2010, Curtis Bell circulated a pledge asking neuroscientists to refuse to participate in the application of neuroscience to violations of basic human rights and international law. That was a deeply contentious pledge, and not many scientists signed on. Why do you think that's the case?

PAUL ROOT WOLPE: There are virtually no neuroscientists who think that they do something that violates international law. Here is where the great technological divide happens.

Very few tools have, in and of themselves, moral stature; maybe the nuclear bomb does. There may be one or two exceptions where things are so destructive that the tool itself might be unethical. But a hammer can drive in a nail or a hammer can bash in a head, right? So the question is not the tools we create; the question is what we do with the tools we create.

Certainly, there are some kinds of scientific questions or some kinds of scientific inquiry that we think should not be done. We have a right, as the taxpayers who usually fund most of this, to demand certain accountability for where our science dollars go.

But ultimately science itself is not the problem. The problem is the values by which we apply scientific findings, the values by which we use technology.

And there is a reciprocal relationship between science and culture. Culture asks questions; science answers them. Those answers change culture, which raises new questions. There is this cycle that has gone on since the first time (you know, as in 2001: A Space Odyssey) a proto-human started banging on something and realized tools could change the world. We've always had that question.

The creation of the plow changed human history. The creation of the car destroyed the inner cities for decades, as industry moved out and we didn't know what inner cities were for anymore. The personal computer and the PDA have changed human interaction and changed access to information.

When technology and society are in a cycle of rapid change, we need to ask the deep questions of what that means. We're not always that good at knowing, but it's really our responsibility to at least try. So I think the question goes really beyond the specifics of biotechnology to a deeper question of what kind of a society do we want to be and how are we going to use technology to achieve it.

MARLENE SPOERRI: Finally, we've discussed the ethics of life on earth. But you, of course, are the chief bioethicist at NASA. So can you tell us a little bit about what you do there and what is different about ethics in space versus on earth?

PAUL ROOT WOLPE: It's one of the more interesting things that I do. I started my career doing medical ethics and then eventually moved into more biotechnology, though I still do a lot of medical ethics.

The issues at NASA are primarily medical ethics issues, but they are different medical ethics issues, in that when you ask a medical ethics question here about care for average people, things like cost, access, our Byzantine and disgraceful reimbursement system in this country, and poverty almost always play into the equation.

When you ask a question about how do you medically outfit a craft going to Mars, none of those are issues. NASA will have the money it needs to outfit that craft.

Now there's a different, interesting question that comes in. Let's imagine that we are all on the committee whose job it is to decide how to outfit a craft going on what we call long-duration space flight beyond earth orbit, which is the long way of saying going to Mars or the moon.

Right now, on Space Station, if there is a medical emergency, the philosophy is: stabilize the person, do whatever you can up there (which isn't usually that much), and then get them back to earth quickly.

When you're 8 million miles on your way to Mars, you can't do that, which means we have to equip a craft with all of the medical necessities that it needs.

Now, for every ounce you put on a craft, you have to take an ounce off somewhere. Weight is one of the single greatest issues in space transportation. So then you have to start making decisions: Can you put an x-ray machine on there? That's a really heavy piece. So maybe you only use ultrasound.

What about the formulary? Are you really going to put a drug on board for every possible thing that someone could get on the way, or are you going to compromise by saying, "These five drugs treat these five different syndromes really well. This one drug treats all five okay, so we'll use this drug because it takes weight off"?

Are we going to have a surgical suite? How are we going to train the astronauts? We want redundancy, so we can't just have one doctor. What if the doctor gets sick, which has happened in Antarctica, which is the closest analog we have to being in space? When you winter over in Antarctica, you're actually far more isolated than the people on Space Station are.

There was a famous case, for those people who know Antarctic lore, of a Russian physician who took out his own appendix, with the engineer holding a mirror over his stomach. I'm not sure how much vodka was involved, but he managed it. [Laughter]

So those kinds of questions become really very interesting ethical challenges.

There are many ethical challenges on the Space Station right now around medical issues. What about medical privacy? Let's say an astronaut gets sick on Space Station and he talks to his doctor, which he can do privately through a secure channel. But his sickness might affect the mission. Do the other astronauts have a right to know? HIPAA [the Health Insurance Portability and Accountability Act], which protects medical privacy, says no, by the way. Astronauts have privacy also.

There are all those kinds of interesting questions around ethics of putting people into space that we are just beginning to really think about in a deep way.

Questions

QUESTION: I'm Ernestine Bradley from the New School.

I wonder whether you can elaborate a little bit on the patenting of DNA. As you know, the Supreme Court has decided to hear a case in which a lab owns all the patents in relation to a certain breast cancer gene, precluding further research on that particular chromosome, I think. You spoke about culture versus technology. To me this really is a case in which you have big business versus ethics.

PAUL ROOT WOLPE: Gene patenting itself is a complicated question. It boils down to the question of, first of all, whether genes should have been patented in the first place. Technically, you don't patent a gene. A gene is a naturally occurring part of the body. You can't patent a gene.

What they actually patent is the means to determine the order of base pairs in that gene. So BRCA1, for example, is a gene that we all have, in which particular mutations can occur, and the company controls the way to tell whether a mutation is in that gene.

There are many people who have argued that that was a misuse or misunderstanding of patenting. There are other people who argue that that was an appropriate use of patenting. I happen to think it was a misuse of patenting and that genes should never have been patented.

I find the argument that was made by industry at the time (if you don't allow this patenting, nobody is going to invest in finding these genes) silly. It was an argument that I don't really think has merit.

So I have a feeling that the Supreme Court is going to decide against Myriad Genetics, though I'm not 100 percent sure that it will.

But it is not as simple as people often make it. The idea behind gene patenting was that the company would have an enormous investment in making sure that the test is as widely available as possible and would, therefore, get its licensing fee for each test. That has generally been pretty much true, though there have been exceptions.

The issue hasn't been so much in the testing for clinical purposes. The big controversy has been in using that test for research, or trying to do research on that gene, where you have to pay a licensing fee just to do research in an academic institution. That's where a lot of the tension has happened.

But I generally agree with you. I don't think we should allow that kind of patenting.

QUESTION: James Starkman. Thank you for a very interesting talk.

I just would like your opinion on the current state of ethical practices with regard to two areas of medicine.

First, on diagnostics, the development of tests. In other words, if you had an Alzheimer's gene, would you want to know, should you know, who should know?

The second, on fast-track drug development, how do you feel that ethical practices are being pursued now with regard to the speed of development versus safety factors, et cetera?

PAUL ROOT WOLPE: Around the first question, of diagnostics: there are many different kinds of diagnostic tests. Genetic diagnostics is just one small part of a very large field (we do diagnostic tests by analyzing blood non-genetically; we do metabolic tests); there are a lot of different kinds.

The issue of genetic testing has been a difficult one since the day they released the first genetic tests. The main point is this: there's nothing particularly ethically different between my getting a genetic test and my getting any other kind of test if the information is limited to me.

So, for example, imagine that we could tell that you were susceptible to Alzheimer's because we found that you had the APO-E gene, which is an Alzheimer's susceptibility gene; or we had some other test (I'm making this up): we looked in your eyes, made you move your eyes back and forth, and said, "Aha, because of the way he moves his eyes, we see that he's in the class of people that is more susceptible to Alzheimer's."

I suspect that if we talked about that second, eye-based test, there aren't that many people who would say it's unethical or you can't do it. It is because it is a genetic test that we seem to have such problems with it. That is because we misunderstand the nature of genetics and think genetics are destiny.

With very few exceptions (and there are some exceptions), genetic tests are susceptibility tests; they are not deterministic tests. We can say, "You have an increased risk," which brings up a whole other issue that I've been writing about and thinking about a lot, and I'm going to tell you about it, even though you didn't ask, because I think it's so ethically fascinating.

Let's take susceptibility to schizophrenia. We have now gotten to the point where we have certain markers, both potentially genetic but certainly physiological and behavioral, that suggest that any particular individual boy, let's say, at the age of nine, ten, or eleven, is probably more likely to develop schizophrenia than another boy. We have certain indicators.

Let's imagine for a minute that we can prove that this kid has a 10 times greater chance than the average kid of developing schizophrenia. The average risk of developing schizophrenia is just under 1 percent. So now this kid has a 9 percent chance.

What do we do with that information? His chance is 10 times greater than the average kid's. He still has a 91 percent chance of not getting schizophrenia. Do we have some ethical obligation to act on that knowledge, even though the odds are still ten-to-one that he's not going to get it?
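For readers who want the arithmetic spelled out, here is the calculation behind those numbers, using the figures as stated in the talk (a baseline lifetime risk of roughly 0.9 percent and a tenfold elevation; both numbers are illustrative, not clinical data):

```python
# Relative vs. absolute risk, with the talk's illustrative figures.
baseline_risk = 0.009              # "just under 1 percent" average lifetime risk
relative_risk = 10                 # the hypothetical boy's tenfold elevation

absolute_risk = baseline_risk * relative_risk   # 0.09 -> a 9 percent chance
chance_unaffected = 1 - absolute_risk           # 0.91 -> a 91 percent chance he never develops it

print(f"absolute risk: {absolute_risk:.0%}, chance unaffected: {chance_unaffected:.0%}")
```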

Let's say my risk for colon cancer, because of genetic testing, was 30 percent and yours was 45 percent. Is that a meaningful distinction?

What we are doing is turning medicine into a risk assessment system, and we're beginning to get more and more information about what our risks are. But we don't know what to do with it, first of all, because there are ranges (so we say this test shows that you have a 30 to 50 percent risk, and sometimes the range is even greater) and, second of all, because human beings are extremely bad risk assessors.

That's why we have state lotteries: because we do not understand statistics. I tell my students, "If you bought a ticket for the state lottery every day since Homo sapiens appeared (every day, not every year, every day since Homo sapiens appeared), you still wouldn't have a 50/50 chance of winning the lottery." People don't understand what 400-million-to-1 odds means, which is what Powerball is. And yet, people go out and spend, which is fine: if you want to spend a dollar for the 24-hour fantasy of winning $100 million, it's worth the dollar. But for the people who don't understand what they are risking, or for the people who go and risk their paycheck, we're very bad at that.
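That claim checks out arithmetically. A quick sketch, taking the speaker's figures at face value (400-million-to-1 odds per ticket, one ticket per day, and, as an added assumption here, roughly 200,000 years of Homo sapiens):

```python
# Cumulative chance of ever winning at the stated odds: 1 - (1 - p)^n.
p_win = 1 / 400_000_000        # ~400-million-to-1 per ticket, as stated in the talk
days = 200_000 * 365           # one ticket a day for ~200,000 years (assumption)

p_at_least_once = 1 - (1 - p_win) ** days
print(f"{p_at_least_once:.1%}")  # ~16.7%, well short of a 50/50 chance
```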

There isn't good language to talk about risk. We don't even know how to speak about it, never mind how to assess it. We try, though. You ask 100 people out on the street, "What does a 40 percent risk mean?" and you will get 50 different answers. We don't know statistics either.

And then we try to create analogies: your risk of colon cancer is kind of like your risk of getting hit by a car. Is that more helpful? We don't even know how to talk about this.

So the issue of diagnostic tests and the kind of information they give us, and things like susceptibility and risk, is one of the greatest challenges I think in all of medicine right now.

The second question was drug development. We are not developing drugs a lot more quickly. So I assume the question you're asking is about fast-tracking a drug into use once it has been developed. Is that what you're asking?

QUESTIONER: Yes.

PAUL ROOT WOLPE: This is another ethical problem, and it really came to a head during the AIDS epidemic. The United States' way of developing drugs is cautious, more cautious than many other countries'. So we have a very high bar before we allow a drug to reach the market. Some other countries have lower bars. That is why the thalidomide tragedy happened in the UK and Europe and not, nearly to the same extent, in the United States. On the other hand, we've had our own tragedies. But the point is we have a fairly high bar.

What happens when you have a drug that seems as if it could really help people with a life-threatening illness and it has not yet reached that high bar? That's where the issue comes up.

The Food and Drug Administration, in response to ACT UP and other AIDS activists, decided to create this fast-track idea, where if a drug has a certain level of evidence of its efficacy and there are people who will die without it, they will allow it to be released on the market for very limited and targeted use early.

I think it's a reasonable compromise. I think it can get out of hand, and too many of these drugs can be released because of pressure rather than great need. We have created drugs and released them on the market that have killed people: not only Vioxx, but there was another drug that led to a whole series of liver transplants.

We have to be really cautious. Our system has been very good so far. So we have to find that balance between the two, and it's not an easy balance to strike. I don't think there is a perfect right answer.

QUESTION: Susan Ball.

I was just reading Brain Matters, the newspaper, this morning, and they were discussing how they have now gotten to the point where they can alter memory, and the ethics involved. For example, do you take somebody who has memories of shooting everybody in Iraq and everything and change their...

PAUL ROOT WOLPE: Let's talk about that.

There are both attempts to improve memory and to suppress memory.

On the suppression of memory side, we can do it in rats pretty well; we haven't yet been able to do it in human beings well. So, for example, one of the technologies that is being used is propranolol, which is a beta blocker.

The idea is this: when I'm walking through the forest and a tiger jumps out, I have a surge of adrenaline and fear, and I run and I get away. Evolution wants me to remember that: tiger bad, tiger lives there, don't go there. So I have this incredible fear. What's supposed to happen over time is that the fear is supposed to diminish and the memory is supposed to remain. So emotion cements memory. When we are in a state of high emotion, we remember more than when we are in a state of low emotion.

The brain doesn't need to remember the four hours you spent sitting in the Department of Motor Vehicles staring at the ceiling tiles waiting for your number to be called. We don't need that. In fact, the brain is very good at not remembering the details of that. But should the person sitting in front of you have an epileptic fit, you'll remember that for years, maybe for the rest of your life, because it's emotion, it's affectively exciting.

What happens in PTSD [post-traumatic stress disorder] is that the affective response and the memory never get decoupled, so the memory cannot remain while the affective response diminishes, which is what is supposed to happen.

What propranolol and other "memory-suppressing" drugs do is decouple those two things much more effectively. That means you can evoke the memory and, theoretically (there has been a lot of work to show we can do this; it's just not clear how effective it is), give someone propranolol and help them decouple the two, so that the memory remains but the emotional valence of it is gone. Then eventually they may forget the memory, because without that valence we forget certain memories.

That's one example of some research that is being done now that is being called "memory suppression." But, you see, it isn't actually. In fact, in human beings we don't have anything that suppresses memory yet. We seem to be able to suppress memory in rats. But we're not 100 percent sure; it's just behavioral.

Then, on the other side, there is memory enhancement, and there are people who are trying to create memory-enhancing pills, especially for dementia and other things.

But I'm going to tell you one real quick thing that happened. They created rats that had better memories (genetically, not through pharmaceuticals) and ran them through a maze, alongside normal rats. The enhanced rats learned to run that maze much more quickly than the non-enhanced rats. But when they changed the maze, the enhanced rats ran right into the wall where there used to be a passage. They were very good at remembering; they were bad at forgetting.

We have to selectively forget all the time. The idea that we are just going to increase our memory and keep exactly the same pattern of forgetting may not actually be true.

One other last thing: those rats were much, much more averse to pain, because another thing our brain does is forget pain. If it didn't forget pain, I'm not sure any woman would ever have a second child. [Laughter] We're supposed to remember that there was pain, but not the pain itself. What this seemed to have done was to so increase their memory of pain, in whatever way rats have it, that they were extraordinarily averse to painful stimuli, much more than normal rats.

So this idea that we will take a pill and exactly the kind of memory we want to enhance will become enhanced is, I think, another place where we need to be cautious.

QUESTION: Years ago I was a little bit familiar with a system of some ethicist at the Hastings Center (Bernard Gert, I think it was). He described his system as a sort of philosophically justified ethical system.

Early on, you said that you, as a sociologist, seemed to be grounded in sociology more than in ethics or in philosophy. I was wondering, don't you still need somehow to gravitate towards, or identify somewhat with, some sort of system (I'm guessing it might be a philosophically justified, rather than religiously justified, system in your case) in order to give your refereeing as a sociologist some authoritativeness?

PAUL ROOT WOLPE: I agree with you up until the last phrase. Yes, we all have that. Every single one of us makes ethical decisions based on either an explicit and consciously held set of values and systems or an implicit and unconscious set. I have my own system by which I make my ethical decisions.

What I'm saying is I don't think that that system has a professional grounding. Sociology is a descriptive, not a normative, field. That is, it doesn't believe that it has within its methodology a way to make normative (that is, value) decisions. It is descriptive.

So what I think I do as a sociologist is I understand all the details of the questions of, not what is the right ethical decision, but what are all the aspects that go into an ethical decision. Why now? What kind of social import goes into the kinds of ethical things that concern us?

Many of the ways in which we make ethical decisions evolve over time. One of the fundamental questions that we ask our freshmen to try to think about is the question: Is there one ethics for all time or does ethics change in time and place? The answer I think is both. That is, there are ethical principles that are universal, but how we think about them, how we express them, and, most importantly, how we balance them, in any particular time and place changes. That's what I, as a sociologist, try to think about.

Now, I have my own set of standards and philosophies by which I make my ethical decisions, and you do too. But, as a professional, I don't think that they have any particular authority over yours.

Now, a philosopher or a theologian might. Some don't, but some of them might. A theologian might say, "I believe in following Scripture. I believe it gives me an appropriate way to make ethical decisions. Because I believe that's the revealed word of God, I believe it has greater authority than Bernie Gert's philosophical system."

Or a philosopher might say, "I belong to the utilitarian school" or "I belong to the deontological school, and I believe that's the right way to make ethical decisions."

I don't claim professionally to have a system that I believe is necessarily better than other people's systems. So I separate out those two questions.

QUESTION: Joanna Smolenski.

Earlier, when you spoke about genetic privacy, you said that the main reason people worry about genetic privacy is that they have this sense that "my genetics are unique to me; therefore, they somehow constitute who I am." You said in fact that a lot more of those issues come up regarding memory or brain imaging.

It seems to me that a lot of concerns about genetic privacy stem from things like employer discrimination or insurance discrimination. Do you think that those concerns are trivial in light of the predictive power of genetics, or do you think there are analogous concerns in brain imaging?

PAUL ROOT WOLPE: I don't think that those concerns are trivial in genetics. I think that kind of discrimination is now illegal. We passed GINA [the Genetic Information Nondiscrimination Act], the genetic privacy act that makes insurance companies unable to discriminate against people based on genetics. And employers can't do it either.

It is a very legitimate concern, and was much more of a concern a decade ago. But right now, at least for the present time, it's not that much of a concern. Employers can't discriminate against you on the basis of genetic information.

That's why we put a lot of attention into that. We passed state laws and federal laws toward that end. That's why I say it's not that those things are unimportant or that there might not have been a really serious issue there; it's that we've handled it, for now at least.

We haven't handled issues of brain science. We haven't thought about it. When was the last time you heard people worry, not about brain privacy in the sense of the FBI beaming microwaves into my brain, but about real brain privacy; that is, what my brain scan is doing in a drawer without any protection other than normal medical protections? I think we should be a little more worried about that right now than we are about genetic privacy. That's all I meant to say.

About the second half of your question, is there an analogous thing in the brain like genetic privacy? Not really yet.

I'll give you one example. People who have been chronic cocaine users can have a particular morphological alteration of their brain; that is, their brain can show a trace of that chronic cocaine use that a really skilled technician can detect (not all of them, but there is a kind of characteristic brain change). So one could go in for a brain scan for something else (you're having chronic headaches, and they say, "Go get a brain scan"), and now scans are done everywhere; there are places where you can get body scans in malls. People might get what we call collateral information that you never wanted them to have. So that is a perfect analogy to the genetic privacy problem.

It is pretty minor right now in neuroscience. There aren't that many things we can tell. But consider this: they're doing a lot of work right now on the brain structures and functions involved in things like pedophilia. You might not want the guy in the mall to know that you have the brain structure for pedophilia. So there are issues there.

On the other side, genetics, yes, someone could get my genes and do some analysis. Why anyone would want to spend thousands of dollars to analyze my DNA to come and tell me, "You have a susceptibility to Alzheimer's" or something, I'm not sure.

But let me give you an example where that might happen. Imagine for a minute that Ronald Reagan were running now and released his medical records. The APO-E gene, which I mentioned before in relation to Alzheimer's, was originally thought to have a predictive value for certain kinds of heart disease. So let's do a thought experiment. President Reagan is running, and he releases his medical record, which shows his APO-E gene analysis, released in order to show us that he doesn't have the propensity towards heart disease that the APO-E gene was originally identified for. In the midst of his campaign, it is discovered that the APO-E gene also tells us something about susceptibility to Alzheimer's disease. We look at the medical record that he has released, and in fact he's got it.

Now, I made all of that up, except for the fact that the APO-E gene actually does have both those functions. But the story is made up. Imagine: that could have changed American history.

I'm not trying to say that there aren't real issues there in genetics. I'm just saying that right now we have spent an enormous amount of time attending to those and very little to the neuroscience issues.

QUESTION: Sonia Miller, attorney.

You mentioned brain fingerprinting briefly. Could you elaborate a little bit on that and on exactly what some of the legal implications of its advancement might be?

PAUL ROOT WOLPE: Brain fingerprinting doesn't mean what it sounds like it means. Brain fingerprinting is actually a patented process that a man named Lawrence Farwell created.

What brain fingerprinting is (and I'm not going to do it justice in this explanation, but we don't have time), basically, is this. It uses electroencephalography [EEG], not brain imaging. If I were to give you all a set of pictures of random female faces and I stuck in there the face of your mother, and you are going through and you've never seen any of these faces, and all of a sudden you come upon the face of your mother, your brain goes, "I know that face." There's this recognition reaction that your brain has that you don't have control over. If we had you hooked up to an EEG, you would see a particular spike of a particular waveform when that happens; it's called the P300.
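For the technically curious, the P300 analysis being described is, at its core, simple signal averaging. Here is a minimal, hypothetical sketch of the generic idea (the sampling rate, window, array shapes, and threshold are assumptions for illustration; this is the standard P300 approach brain fingerprinting builds on, not Farwell's patented procedure): average the EEG epochs time-locked to each picture and compare the amplitude around 300 milliseconds for recognized versus novel items.

```python
import numpy as np

def mean_p300_amplitude(epochs: np.ndarray, sfreq: int = 1000) -> float:
    """Average EEG epochs (trials x samples, time-locked to stimulus onset)
    and return the mean amplitude in the 250-400 ms window where the
    P300 deflection appears. Shapes and rate are illustrative assumptions."""
    erp = epochs.mean(axis=0)                        # event-related potential
    start, stop = int(0.25 * sfreq), int(0.40 * sfreq)
    return float(erp[start:stop].mean())

# A recognized item (the mother's face) should show a larger positive
# deflection than novel faces; the threshold below is hypothetical.
# familiar = mean_p300_amplitude(epochs_familiar)
# novel = mean_p300_amplitude(epochs_novel)
# recognized = (familiar - novel) > THRESHOLD
```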

What Farwell has argued is this (brain fingerprinting is called a kind of lie detection, but it's actually not; it's something else). Say I claim, "I didn't kill her. Not only didn't I kill her, I was never in her apartment." What if we could take pictures of 20 apartments, put them in a pile, hook you up to an EEG, and have you start looking through them, and when you hit her apartment your P300 spikes? I could say, "Your brain is telling me that you recognize this." Now, you might recognize it from a picture in the paper; who knows? What Farwell was saying is that it's a useful tool, and he has patented a process he calls brain fingerprinting in order to do that.

That doesn't do it justice, but it explains enough to say what's happened. There are some real serious problems with it on a number of different levels. Because it's proprietary, he won't allow independent scientists to test his particular way of doing it. He has been trying very hard for a long time to get it into the courtroom. He has succeeded in one case.

I think it is very premature. I think one of the things that is happening throughout neuroscience right now is premature use of these technologies for a whole host of things, a lot of them legal, including brain imaging for lie detection. So I have been a very strong advocate of the idea that these things are not ready yet to be used outside the clinic, that we need a very high bar for validation and reliability testing, and that it is a misuse of science to be doing these things out in the public realm. There are two commercial companies right now doing brain imaging for lie detection. There's Larry Farwell doing his brain fingerprinting. I'm very worried about these technologies.

MARLENE SPOERRI: That about wraps it up. Thank you so much for being with us.

PAUL ROOT WOLPE: My pleasure.
