
Seeing Is Believing

There’s a ways to go yet, but the artificial retina is poised to move out of academic labs and into corporate R&D.

“I could see it. Just a little light. That’s all it was,” recalls 71-year-old Harold Churchey. Not a very dramatic statement, until you realize that Churchey is completely blind.

The sudden spark of vision in Churchey’s brain was caused by a jolt of electricity coming from the tip of an electrode introduced into his eye minutes earlier by ophthalmologists Eugene de Juan and Mark Humayun. Churchey could not see the men’s faces. But if he had, he might have witnessed their masks of professional anxiety give way to twin grins of “Eureka!” After all, the 1992 experiment at Duke University was a landmark in the fast-accelerating quest to give artificial sight to the blind.

De Juan and Humayun, now professors at the Johns Hopkins University’s Wilmer Eye Institute, are among several teams of physicians, engineers and scientists moving to adapt advances in microelectronic technology to create implantable synthetic vision systems. The incredible prospect of bionic vision, says Terry Hambrecht, director of the National Institutes of Health’s Neural Prosthesis Program, was less than credible “even a few years ago. The technology wasn’t there, and neither was the neuroscience. But now a lot of basic research and device development are coming together to make it possible.”

Although it is tissue-paper thin, the natural retina contains a complex layering of neurons that work together to convert light into electrical nerve signals. Streaming through the pupil, incoming light hits the retina and passes first through a layer of transparent ganglion cells before running into a thicket of more than 100 million rods and cones. These photoreceptors soak up the light, which changes the rate at which they release neurotransmitter packets. The chemicals, in turn, set off a cascade of signaling, first in the bipolar cells (which help distinguish between light and dark) and then in the amacrine and ganglion cells. By the time it reaches the ganglion layer, the analog light signal has been completely digitized: it is now a series of nerve impulses, which the ganglion cells proceed to pump into the optic nerve. The optic nerve’s 1 million fibers carry the signal to the brain’s visual cortex, the place where we experience vision.

Humayun and de Juan weren’t worried about what was happening in the brain. What they needed to know was whether people blinded by a disease like retinitis pigmentosa (RP) retained enough intact retinal circuitry to permit them to get a signal into the optic nerve. To answer the question, they approached the eye bank of the Foundation Fighting Blindness, where they obtained eyes from deceased RP patients that had been preserved so that their cell structure would not deteriorate. Counting the retinal cells at 100-micron intervals, says Humayun, “We found a near-total absence of photoreceptors.” That was as expected. The important discovery was that 30 percent to 80 percent of the other retinal neurons were still intact.

A step in the right direction, but it remained to be seen whether the cells remaining in a blind person’s retina could function. “We had no idea what the effect of 50 or 60 years of degeneration would be on the response of these cells,” says Humayun. There were other unknowns. From animal experiments the team knew how strong an electrical stimulus was needed to elicit a response from the retinal cells, but they had no idea if this signal would produce anything like normal sight. Was it possible that when the current hit the vitreous gel inside the eye, which is 99 percent water, it would simply diffuse out and appear as one huge flash of light? Even if they produced an image, says Humayun, “Would it look like a dot, be blue or green? Would it be something that is appealing, or would the stimulus be so noxious that patients would rather be blind?”

Because these questions could be answered only by a live patient, seven years ago Humayun and de Juan put out the word that they were looking for a volunteer. A colleague put them in touch with Harold Churchey, a former welder who ran a snack bar in Maryland’s Washington County Courthouse after RP blinded him. Churchey was game, even though he would have to remain conscious as the physicians sank a probe through the wall of his eye, then electrified it. “I might be over the hill, [but] if I can help some young person, I am for it,” says Churchey. Later, his twin brother, Carroll, also blind, would join the experiments.

The first of the team’s 15 human experiments took place at Duke on September 17, 1992. Peering through Churchey’s pupil with a surgical microscope, Humayun pushed a hand-held probe through the white of his eye and back toward the retina. The probe held a single platinum wire, coated with Teflon and embedded in silicone rubber. De Juan and Humayun started applying small electrical pulses of a few hundred microamps, but for 20 minutes Churchey saw nothing. “You can imagine the level of anxiety as we checked every possible circuit,” says Humayun. When the physicians finally pushed the probe so it nearly touched his retina, Churchey announced that he was seeing…something.

“We were able to create a small dot of light exactly under the stimulating electrode,” says Humayun. Churchey told his interrogators the spot looked to be the size of a pea seen at arm’s length. Worried that it might be some kind of artifact (possibly of Churchey’s imagination), the doctors changed the frequency of the pulse, asking him to count out loud if he saw the light. He did, and also reported that the spot moved when the electrode did, proving there was some degree of spatial resolution.

Buoyed by the results of their first human test, de Juan and Humayun set out to determine whether a blind patient could be induced to see multiple spots of light, something that would be crucial if they were ultimately to create a useful image. In their second human test, another volunteer was able to see three spots of light produced by three probes with an edge-to-edge separation of about 300 microns, or the width of a few human hairs.

The next challenge was to answer what Humayun calls “the million-dollar question.” Namely, how many electrodes would they need to produce usable images?

When cochlear implants (which have given hearing to many deaf people and are the predecessor to visual implants) were being developed, some experts believed that at least 1,000 electrodes would be needed to create coherent sound. Yet six electrodes proved enough to help many patients. “This points to the fact that there is incredible plasticity in the ability of the human brain to take a somewhat crude sensory input generated by a man-made machine and make good use out of it,” says Humayun.

Evidence of the brain’s forgiving nature had already come out. The volunteers had reported that the electrodes were producing flickering dots of light. To create a steady image, the doctors simply turned up the frequency of the pulse; just as a movie appears continuous, even though it is made up of a series of still pictures, the brain was compensating by keeping an image in mind until the next pulse came along.

In a 1996 experiment, Churchey’s third, the two physicians, who had by then moved to Johns Hopkins, placed a 25-electrode array (a 5-by-5 square, with a slightly convex surface allowing it to match the contour of the retina) in Churchey’s eye and attempted to create an image of the letter “U” by stimulating the electrodes in a dot-matrix-like format. They had picked the wrong letter. They couldn’t round the edges of the “U,” and Churchey reported seeing an “H.” Since then, de Juan and Humayun have conducted one more experiment, stimulating the outermost electrodes of a square array; the patient reported seeing a matchbox shape.
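
To get a feel for how little information 25 electrodes can carry, here is a minimal sketch of a letter laid out dot-matrix style on a 5-by-5 grid, one electrode per dot. The dot pattern and the helper function are invented for illustration; they are not taken from the Hopkins team’s stimulation software.

```python
# Illustrative only: a made-up dot pattern for "U" on a 5-by-5 electrode grid.
# On a grid this coarse the corners of the "U" cannot be rounded; in the 1996
# test, Churchey read the stimulated pattern as an "H".
U_PATTERN = [
    "X...X",
    "X...X",
    "X...X",
    "X...X",
    "XXXXX",
]

def electrodes_to_fire(pattern):
    """Return (row, col) positions of electrodes to stimulate."""
    return [(r, c)
            for r, row in enumerate(pattern)
            for c, ch in enumerate(row)
            if ch == "X"]

if __name__ == "__main__":
    for row in U_PATTERN:
        print(row)
    print("electrodes on:", len(electrodes_to_fire(U_PATTERN)))
```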

Although they’ve been able to create only the crudest kind of image, Humayun says the initial successes have “really lit a fire” to move from hand-held electrodes to an actual implant. The know-how gleaned so far about how to stimulate retinal nerves is now being turned into a prototype device by a collaborating team led by Professor of Electrical Engineering Wentai Liu at North Carolina State University (NCSU).

All the researchers interviewed for this story emphasized that the results at Johns Hopkins, while exciting, do not mean blind people will be able to read newspapers, or even recognize a face anytime soon. But, says Ronald Carr, a New York University professor of ophthalmology and retina expert, a retinal implant that could allow some blind people to see light and dark now appears “feasible.” Ultimately, they may even be able to perceive enough of the shapes around them to walk without a dog or cane. “Obviously, this is never going to approach what one sees with the human eye,” says Carr. “But there’s a huge difference between seeing nothing, and being able to see outlines. Anything that could be done is a marked improvement.”

John Wyatt, a professor of electrical engineering at the Massachusetts Institute of Technology, says it is precisely because “the standards we need to be useful are quite low” that artificial vision is feasible at all. Wyatt’s MIT lab is home to another artificial retina project, which is taking a slightly different approach than the Hopkins/NCSU team.

The effort got under way in 1988, when Joseph Rizzo, a Harvard Medical School neuro-ophthalmologist, approached Wyatt to find out if the engineer could help him build a retinal implant. Wyatt, who had some experience in retinal neurophysiology from his doctoral studies at Berkeley, was initially skeptical. The retina looked like a pretty flimsy circuit board. But his fascination with the eye’s circuitry made the project too tempting to pass up, and he has since become more optimistic. Wyatt gives most of the credit to advances in microelectronic fabrication technology, which, he says, “open up the ability to make little delicate things that you might be able to put in the eye [and are] the only reason there’s any hope of doing this. Without that, it would just be a complete dream.”

Although Rizzo and Wyatt have conducted only two human tests so far (both with inconclusive results), the pair think they know what a retinal implant will ultimately look like. The system, says Wyatt, will start by taking digital pictures with a small camera that can mount on a pair of glasses. Off-the-shelf technology that could do the trick already exists in the form of the charge-coupled devices (CCDs) found in conventional camcorders, as well as the newer, smaller and more energy-efficient active pixel sensor (APS) technology that debuted with digital cameras.

A small computer would probably be needed to process the image, which would then have to be sent to the implant inside the eye. The wireless system on Wyatt’s drawing board uses a diode laser, also mounted in a pair of glasses, to flash the images captured by the camera onto an array of photovoltaic cells built into the front of the implant. The laser beam would also provide power.

The implant itself, according to Wyatt, will be a silicon chip, loaded with transistors, sitting on the surface of the retina. In this “epiretinal” configuration, the side covered with photovoltaic cells would face outward, while the other face, studded with 100 or more electrodes, would ride right on the retinal surface, close to the layer of ganglion cells. The Johns Hopkins/NCSU implant has a similar overall design, except that it uses radio frequencies instead of a laser to transmit data and power.

The researchers plan to treat each electrode as one picture element, or pixel, with which to build an image. To squeeze the most out of each pixel/electrode, Wyatt hopes that changing the electrical current to each electrode will control the intensity of each spot a patient sees. “The idea is to convey various shades of gray, rather than just light or dark,” he says. With just a ten-by-ten grid of electrodes, each providing four to six levels of gray, Wyatt says it should be possible to “start making sense of an image, especially if it moves.”
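
As a rough illustration of that idea, here is how a camera frame might be boiled down to the kind of ten-by-ten, few-shades-of-gray pattern Wyatt describes. This is a sketch, not code from either lab, and the function name, grid size, and gray-level count are assumptions chosen to match the figures he quotes.

```python
import numpy as np

def to_implant_pattern(frame, grid=10, gray_levels=4):
    """Reduce a grayscale camera frame (2-D array of 0-255 values) to a
    coarse grid of electrode intensities, one value per electrode."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    # Carve the frame into a grid x grid array of blocks and average each block.
    blocks = frame[:h - h % grid, :w - w % grid]
    pattern = blocks.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    # Quantize each block average down to the handful of gray levels the
    # implant could convey by varying the current on each electrode.
    return np.round(pattern / 255 * (gray_levels - 1)).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frame = rng.integers(0, 256, size=(120, 160))  # stand-in for a camera image
    print(to_implant_pattern(fake_frame))
```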

However, researchers still don’t know whether they can control the brightness of the spots a patient sees, or even their color. Volunteers in the Hopkins studies, says Humayun, saw “yellow, green, and blue, but we haven’t figured out what we’re doing to generate those colors.” Nor do they know whether they can stimulate vision over the long term, what the ideal current is, or how much spatial resolution they can realistically hope for. To answer these questions, more sophisticated arrays need to be tested in people.

While Wyatt won’t forecast when he and Rizzo will be ready to do that, NCSU’s Liu says his group already has the three key elements: camera, external video processor, and an implant with 100 electrodes. Integration is the next step, says Liu, and he predicts that “within a year or two we will definitely have a completed device.”

Although the epiretinal approach appears to be the most advanced, it is not the only retinal repair system. Alan Chow, a Wheaton, Ill., ophthalmologist (and an alum of Wilmer Eye Institute) is working on a “subretinal” implant that he says will require no external camera, power source or transistors.

The subretinal implant, Chow explains, is a collection of microphotodiodes (think of the conventional solar cells that convert sunlight to electricity, except much tinier) that will be implanted behind the retina. Chow’s idea is that as ambient light passes through the retina and strikes the microphotodiodes, they will generate enough electricity to activate healthy nerve cells. He figures that by electrifying cells upstream of the ganglion layer, his implant will take advantage of whatever signal processing capacity remains in the retina.

Chow’s prototype of what he calls the “Artificial Silicon Retina” is 3 millimeters in diameter, 25 micrometers thick, and contains more than 7,000 microphotodiodes. The results of tests in rabbits, says Chow, have been encouraging. “We were excited to find that the eye tolerated the chip very well, and it was able to stimulate the remaining cells to produce signals that seem to indicate that vision was being produced,” says Chow. But, he acknowledges, “while we know that some form of vision is being produced… we won’t know what is seen until we put this into a human.” He estimates that human tests could occur within two years.
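
A back-of-the-envelope calculation, using only the figures quoted above, gives a sense of how small each photodiode must be:

```python
import math

# Rough scale check on Chow's prototype, using only the quoted figures:
# a 3 mm disk carrying more than 7,000 microphotodiodes.
diameter_mm = 3.0
n_diodes = 7000

area_mm2 = math.pi * (diameter_mm / 2) ** 2   # ~7.1 mm^2 of chip area
density = n_diodes / area_mm2                 # ~990 diodes per mm^2
footprint_um2 = 1e6 / density                 # ~1,000 square micrometers each
pitch_um = math.sqrt(footprint_um2)           # ~32 micrometers on a side

print(f"{density:.0f} diodes/mm^2, roughly {pitch_um:.0f} micrometers apiece")
```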

Chow, who comes from a family of high-tech entrepreneurs, has raised $2.5 million from venture capitalists to fund his startup company, Optobionics Inc., which is developing his device. This success in the venture capital market may indicate that artificial eyes are ready to make the leap from academic research to corporate R&D. So far, the Johns Hopkins/NCSU and Harvard/MIT teams have been surviving on research grants, but Humayun says “all the groups” are now looking for corporate allies to help with development. In Germany, a startup company named Intelligent Implants is working on a visual prosthesis based on technology from the University of Bonn, one of two retinal implant groups that the German government has funded with $10 million over five years.

Commercial interest is one indication of the excitement that recent progress has generated. Yet researchers in the field are trying hard to balance that excitement with a deep reluctance to raise false hopes among the blind. Today, all the researchers interviewed by TR say, there is no way to restore to a blind person anything that even remotely approximates normal sight. But the NIH’s Hambrecht observes that the scientific and technological fundamentals have now come into place to change that. Noting that it took cochlear implants about a decade to move from the “crude electrode” stage to commercial availability, Hambrecht puts vision prostheses on the market as early as “10 years from now.” Wyatt and Rizzo take a more conservative stance, arguing that “We don’t know if it will ever work, and if it does work, we don’t know when. Other people may tell you differently, but they don’t know either.”

Yet it is impossible to ignore the advances made so far; just ask Harold Churchey. Thinking back to the first experiments at Duke in 1992, he remembers vividly how “from the time they put that probe in my eye, I knew they were on the right track. That’s the first I’d seen anything in that eye for hard to tell how long.” For Churchey, seeing was believing, as it may ultimately be for many others.
