
Picking Your Brain

Bioethicist Paul Wolpe explores the implications of wiring computers to the human brain.
November 1, 2004

Paul Root Wolpe

Position: Professor, Departments of Psychiatry, Medical Ethics, and Sociology, and senior fellow, Center for Bioethics, University of Pennsylvania; chief of bioethics, NASA

Issue: Brain-computer interfaces. Neuroscientists and engineers are developing technologies that allow the brain to interact directly with computers, from chips that could enable amputees to control prosthetic limbs to devices designed to enhance brain function. How will these new technologies influence daily life?

Personal Point of Impact: A founder of the field of neuroethics, which examines the implications of emerging neurotechnologies. Organized the first series of meetings on the topic in 1999 and 2000, bringing together leading brain scientists such as Steven Pinker, Steven Hyman, and Michael Gazzaniga.

Technology Review: A company called Cyberkinetics received U.S. Food and Drug Administration approval in April for a clinical trial of a brain implant designed to allow paralyzed patients to interact with a PC. Is the technology really advanced enough to make this sort of test ethical?

Paul Root Wolpe: There are issues with device testing of this kind in terms of human-research protections. The kinds of people these devices tend to be tested on are deeply coerced by the nature of their disabilities. I don’t think that’s insurmountable; all medical progress depends on somebody being the first to try a new technology. What is crucially important is really good oversight and really good informed consent. Given the history of oversight and informed-consent problems with medical devices, it does concern me that these technologies could be used without strong external review and monitoring.

TR: Isn’t this technology being tested at a much earlier stage of development than a drug would be?

Wolpe: Generally in bioinstrumentation, yes, that happens. For historical reasons, we are much, much more concerned about people ingesting drugs than we are about subjecting them to bioinstrumentation, and we have different regulations about how to test each and protect subjects. Pharmaceuticals alter the basic chemistry of our bodies; bioinstrumentation, until recently, was primarily external to our bodies. The problem is that the nature of bioinstrumentation is about to change, and emerging biotechnologies will be incorporated into our bodies much as pharmaceuticals are. Ten, 20, 30, or 50 years from now, perhaps, nanotechnology will give us little nanobots that are injected into our bodies to roto-rooter out our arteries. Would those be drugs or bioinstruments? We need to begin to change the way we think about bioinstrumentation in general, and to rethink our tendency to be less rigorous about applying bioinstrumentation to the human body than we are about drugs. Right now, even new technologies that may have profound effects on our brains do not get the degree of oversight that drugs do.

TR: But the payoff seems huge.

Wolpe: For people who are paralyzed, the Christopher Reeves of the world, the ability to manipulate things in the world with the mind is an extraordinarily desirable outcome. Implants for people who have locked-in syndrome – who can’t communicate with the outside world – are being tested right now; they allow subjects to translate brain impulses directly into computer responses, so that they can move a cursor around a screen and choose phrases through thought alone. That is certainly a wonderful thing. It would be churlish to say, “Let’s not allow this person to communicate because we’re not sure what the long-term effect of putting electrodes in his brain is.” You have to ask yourself the risk-benefit question. But those cases are different from neurotechnologies that might eventually become fairly common.

TR: What kinds of technologies are those?

Wolpe: A lot of the technologies we’re talking about are communication technologies; they take information from the brain and externalize it for one reason or another. We also have internalizing technologies – cochlear implants, optic-nerve implants – whose purpose is to take information from the outside and give us access to it. These two kinds of technologies will eventually come together, and then we’ll have interactive-chip technologies with full input-output interactions.

But in terms of what most people mean by brain-computer interfaces, there’s a lot of work being done to create noninvasive BCIs by putting electrodes on people’s scalps or having them wear caps studded with sensors. The goal is a system that could retrieve much more detailed and specific information from the brain, so that people could do sophisticated kinds of work through thought alone. It’s very promising for people who are paralyzed, but it also means that I could sit here at my computer with a cap on my head and answer the phone, “type” on my computer, be connected to my colleague in the office next door – through brain impulses alone. That’s one direction I think the technology may take us over the next 50 or 60 years. We’re going to be able to manipulate any system that has a sophisticated chip in it, everything from your wristwatch to your car.
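To make “retrieving information from the brain” a bit more concrete, here is a minimal, purely illustrative sketch of the decoding loop such noninvasive systems rely on: band-pass filter the scalp signal, measure power in a frequency band, and classify the result as an intended action. Everything below – the synthetic data, the eight-channel cap, the two imagined-movement classes, the 8–12 Hz mu band – is an assumption for demonstration, not a description of any system mentioned in the interview.

```python
# Toy EEG decoding loop: synthetic scalp signals -> band-pass filter ->
# band-power features -> linear classifier. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250            # sampling rate in Hz, typical for EEG amplifiers
n_channels = 8      # scalp electrodes in our hypothetical cap
n_epochs = 200      # one-second trials per class
t = np.arange(fs) / fs

def synth_epoch(mu_amplitude):
    """Fake EEG: broadband noise plus a 10 Hz mu rhythm whose strength
    differs between the two mental states we want to tell apart."""
    noise = rng.normal(0.0, 1.0, (n_channels, fs))
    rhythm = mu_amplitude * np.sin(2 * np.pi * 10 * t)
    return noise + rhythm

# Class 0 ("rest") keeps a strong mu rhythm; class 1 ("imagine moving")
# suppresses it -- a crude stand-in for event-related desynchronization.
X_raw = np.array([synth_epoch(2.0) for _ in range(n_epochs)] +
                 [synth_epoch(0.5) for _ in range(n_epochs)])
y = np.array([0] * n_epochs + [1] * n_epochs)

# Band-pass 8-12 Hz, then use log band power per channel as the
# feature vector describing each one-second epoch.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
X_filt = filtfilt(b, a, X_raw, axis=-1)
features = np.log(np.mean(X_filt ** 2, axis=-1))   # shape: (epochs, channels)

# Train on every other epoch, test on the rest; the prediction is the
# "intended action" a real BCI would turn into, say, a cursor movement.
clf = LinearDiscriminantAnalysis().fit(features[::2], y[::2])
print("held-out accuracy:", clf.score(features[1::2], y[1::2]))
```

A real system would replace the synthetic epochs with amplified scalp recordings and far richer features, but this record-filter-extract-classify loop is the basic mechanism that lets a user move a cursor, or answer a phone, by thought alone.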

TR: Will those kinds of devices raise ethical questions?

Wolpe: A key issue is what these technologies imply for personal privacy. If there are eventually technologies that externalize internal states, who has a right to access that information? And what about cases where that information could be taken against people’s will, or without their knowledge? Are we going to start implanting electrodes in the brains of the suspected terrorists in Guantánamo Bay? Certainly not yet – there’s nothing we could get out of that. But the Departments of Homeland Security and of Defense are funding research into things like lie detection using functional MRI or near-infrared light. These technologies can be used coercively in a way that polygraphs, for example, cannot. If you’re not willing to cooperate with a polygraph, there’s really nothing they can do. But you won’t necessarily need to cooperate with some of these technologies; they can, theoretically, be used covertly. They may be used on suspected criminals or enemies of the state, or on you and me when we’re going through airports. Near-infrared technology may someday work by shining an undetectable spot of light on your forehead. Research on ways to take what used to be private thoughts and make them accessible will challenge our laws and our thinking about what privacy means.

TR: How does the societal impact of brain-computer interfaces compare to other areas of biomedical research, such as genetics or stem cells?

Wolpe: Neurotechnology is way ahead of genetic technology. We’re not cloning anybody yet. We’re not creating genetically modified human beings. Yet we are already testing electrodes implanted in people’s brains. Unfortunately, neuroscientific advances get only a fraction of the scrutiny from policymakers, legal scholars, ethicists, and the religious community that genetic advances do. That’s in some ways very, very troubling.

If I had your genome in front of me and ran every test on it that I could think of, what could I really tell about you aside from your disease profile? Not much. We don’t know how to look at a genome and tell whether you’re happy or shy or funny or extraverted. But we are beginning to be able to tell those things from brain scans. Brain technology, well before genetics, is going to tell us things about people that they truly consider private.

Another big issue is intervention: is it ethical to change fundamental aspects of who people are by changing their genomes? We still can’t intervene in human beings genetically; even gene therapy has so far been largely unachievable. Our ability to manipulate the brain raises far more immediate questions about intervening in who we fundamentally are, and about which kinds of intervention are right and which are wrong. People involved in developing these technologies, and people like me who study them, need to spearhead a very open, public discussion so that society as a whole can begin to respond in ways that direct the research into productive and socially desirable avenues.
