
Big Brain Thinking

Stanford neuroscientist Bill Newsome wants to implant an electrode in his brain to better understand human consciousness.
February 10, 2006

Scientists are learning volumes about the brain – how it can make split-second decisions, how it learns from past mistakes, how it converts pulses of light into a complex visual scene. But, for some, deciphering the “language” of the electrical pulses that travel through our brains is only half the story. The second part, and one that is far more philosophical and complex, is how that brain activity translates into consciousness – a person’s self-awareness and perception of the world around them.

Bill Newsome, a neuroscientist at Stanford University in Palo Alto, CA, has spent the last twenty years studying how neurons encode information and how they use it to make decisions about the world. In the 1990s, he and collaborators were able to change the way a monkey responded to its environment by sending electric jolts to certain parts of its brain. The findings gave neuroscientists enormous insight into the inner workings of the brain.

But Newsome is obsessed with a lingering question: How does consciousness arise from brain function? He feels the best way to answer that question is by implanting an electrode into his own brain – and seeing how the electric current changes his perception of the world.

Newsome would not be the first person with a brain implant. Epilepsy patients undergo electrical stimulation prior to brain surgery. A paralyzed man in New England has an experimental implant that translates his brain activity into movements of a robotic arm. And, perhaps most famously, Kevin Warwick, a cybernetics professor at the University of Reading, U.K., first implanted a chip into nerve fibers in his arm in 2002, then implanted a chip in his wife’s arm, as part of his quest to become a cyborg.

It’s not certain that Newsome will get approval for such a radical undertaking. But, if he does, his experiment won’t be in the interest of curing a disease or becoming a human machine. He’s hoping to do something broader: understand consciousness.

Technology Review: Why is understanding consciousness so important to you?

Bill Newsome: I think that how consciousness arises out of brain function is one of the most fascinating and important questions in all of neurobiology. If we understand the system completely (from input to output) at a cellular level, but still do not know exactly what causes conscious mental phenomena, we will have failed.

TR: Most of your experiments have been done on monkeys. How did that begin to shape your view on the relationship between brain functions and human consciousness?

BN: We study motion perception. We train monkeys to look at a pattern of dots moving in a certain direction and to report the direction of the dots by moving their eyes in the same direction. If a monkey picks the correct answer, he gets a reward.

This simple behavior contains a world in terms of understanding how the nervous system performs intelligent behavior. Sensory information that comes into the brain through the eye must be coded into some neural language that represents the stimulus within the brain. Based on this neural representation, the monkey must then make a high-level judgment about what he is actually seeing. This “decision” in turn guides the selection of a motor response, to look to the left or the right.

TR: And you added a new level to this experimental setup by stimulating the monkey’s brain.

BN: We put an electrode in an area of the brain known as MT. The cells in this area respond selectively to a specific direction of motion. Some cells are active when the monkey looks at dots moving to the left, some cells are active when the monkey looks at dots moving to the right. People had suspected for a long time that MT was important for our ability to see motion. So we did an experiment where we stimulated these cells artificially with tiny pulses of electrical current – it changed what the monkeys reported seeing.

TR: So with the monkey experiments, you can stimulate the brain in very focused ways and change the way the monkey responds. But the monkey can’t tell you what he sees when you stimulate the brain.

BN: Yes. People can report what they see or hear or feel, but with monkeys, you can only look at their change in behavior. I can’t climb into a monkey’s head and see what the monkey really sees.

This gets to the core of the current debate about the study of consciousness. What is the conscious experience that accompanies the stimulation and the monkey’s decision? Even if you knew everything about how the neurons encode and transmit information, you may not know what the monkey experiences when we stimulate his MT.

TR: People have shown that stimulating the human brain can do similar things too, right?

BN: Electrical stimulation of the brain is not new. Wilder Penfield, a neurosurgeon in Canada in the 1930s and ’40s who pioneered the neurosurgical treatment of epilepsy, was the first to stimulate the brains of conscious humans. He wanted to identify the parts of the brain involved in speech and movement before he took out the piece of brain he thought was responsible for the disease, so he developed ways to make a hole in the skull and expose the brain in fully conscious humans.

While he was in there stimulating the brain for clinical purposes, he also stimulated other parts of the brain. He showed that by stimulating the visual cortex, you can get people to see stars or flashes of light. When he stimulated the auditory cortex, people could hear buzzing signals. When he went deeper into the brain, into the temporal cortex, he could elicit complex perceptions. A patient would say things like, ‘I’m sitting on the back porch of my mother’s house and she’s calling me to dinner.’

He did all of this in the 1930s, but the field never went anywhere because he knew nothing about the circuitry of the brain. Penfield was just stimulating neural tissue of an unknown nature. He could elicit conscious phenomena, but he gained no insight into how, exactly, the conscious phenomena are related to the [behavior] of the activated neurons.

Now we know about single cells, neural circuits, and their selective properties. So we can make better hypotheses about how cells might contribute to cognitive phenomena such as perception or memory or attention. We can tweak carefully targeted parts of the system and get a predictable response.

TR: So how do you plan to understand the link between activity in specific parts of the brain and consciousness?

BN: I don’t know how to figure it out, but it seems to me that stimulating a human brain such as my own would be a good place to start. If I could stimulate my MT, then presumably I would know, and could say, whether I really see the [actual] dots moving [as in the monkey experiments] or something else altogether. This would be a start toward identifying the [specific aspects of consciousness that accompany] neural activation at different points in the nervous system.

TR: Do you think you could really get regulatory approval? What are the major ethical issues?

BN: Getting approval to do something like this would be difficult. Any human experiments in this country are under rigorous scrutiny. Lawyers and administrators at institutions take a dim view of this kind of thing because of the liability issues. And there is a definite slippery slope argument. I might be able to make a case for my own experiment, but it could set precedent for others for whom it would be more risky.

For example, if I did this experiment, it would probably be a big deal and get in the newspapers. Some young graduate student might see it as a way to get ahead in his career and decide to do it. He might put himself at greater risk than I would. Maybe he would probe deeper into his brain, where there is more risk of damaging the vasculature. It would be uncomfortable to think that I was responsible in part for that, even if my own adventure turned out just fine.

TR: Do you really want to do this?

BN: Well, I’ve thought about it very carefully. I’ve talked to neurosurgeons, both in the United States and outside the country where the regulatory environment is less strict, about how practical and risky it is. If the risk of serious postsurgical complications were one in one hundred, I wouldn’t do it. If it were one in one thousand, I would seriously consider doing it. To my chagrin, most surgeons estimate the risk to be somewhere in between my benchmarks.
