Decoding the Human Eye

Superdense arrays of electrodes will bring scientists closer to an artificial retina that approximates normal vision.
October 24, 2007

Artificial retinas are already in human clinical trials at the University of Southern California, where they have helped blind patients distinguish walls from doorways and even watch soccer games, albeit as blurs of motion. But approximating normal vision, and possibly enabling people to read, will require devices that can deliver electrical current with much greater control and precision. A new chip densely packed with electrodes, developed by scientists at the University of California, Santa Cruz (UCSC), is the first step in that direction.

Test bed: A 512-electrode array (gold circle), modeled after detectors used to capture particles in high-energy physics, is helping to decipher the neural code of the retina. The findings will aid in the design of future retinal prostheses.

Currently being used in research, the chip can stimulate and record from individual cells in retinal samples. The technology will provide insight into how the retina codes information and how to mimic that coding: lessons that will be crucial in developing the next generation of retinal implants. Further down the road, some version of the technology might be used to send visual information down the optic nerve.

“The retina is a very sophisticated visual-information-processing device,” says Alan Litke, a physicist at UCSC who is applying his expertise to neurobiology. “To have a human patient someday approach normal visual functioning, such as reading, you need to have a very accurate level of control.”

The retina is a thin layer of cells at the back of the eye; photoreceptor cells in the retina detect light and send signals to the retinal ganglion cells, which then transmit the signals to the brain through the optic nerve. In macular degeneration and retinitis pigmentosa, two leading causes of blindness, photoreceptor cells are damaged, but the remaining retinal ganglion cells are left largely intact. Artificial retinas, which rely on an external camera to capture visual information, consist of a processor that translates that information into an electrical code intelligible to the nerve cells of the eye, and a chip dotted with tiny electrodes that transmit the electrical signals to the retinal ganglion cells.

Litke and his collaborators modeled their chip after the silicon microchip detectors that line supercolliders to capture signs of elusive, high-energy, subatomic particles, such as the Higgs boson. Using common integrated-circuit fabrication techniques, the researchers custom-built more than 500 electrodes and amplifiers onto a small glass strip. “There are other commercial, multi-electrode recording systems available, but the team at UCSC has really pushed the technology forward by coming up with a system with the capability to record many more neural responses,” says Matt McMahon, a scientist at Second Sight, the company based in Sylmar, CA, that’s developing the retinal prostheses used in the USC study. Second Sight is using Litke’s device to inform the design of future prostheses. The company’s first-generation device had 16 electrodes, the second-generation device currently in human trials has 60, and a 200-electrode version is under development. (See “Next-Generation Retinal Implant.”)

With the UCSC device, scientists can precisely control individual retinal ganglion cells, a capability that will be key in next-generation implants. One of the reasons the prostheses currently in human testing have limited resolution is that they stimulate hundreds of cells simultaneously. (The diameter of the electrodes is an order of magnitude larger than that of most cells.) The five-micrometer-diameter electrodes in Litke’s chip are on par with the size of retinal ganglion cells, allowing them to stimulate individual cells. The researchers previously showed that they could simultaneously control multiple cells with a 60-electrode version of the chip, and they are developing a version with 512 electrodes.

Now that scientists have created a technology with such a precise level of control, they are using it to study the language of the retina, a language they hope prostheses will ultimately be able to speak. While the retina is often likened to a camera, it is in reality much more complicated. Light signals are captured and processed in the retina; the sequences of electrical bursts sent to the brain by the various and distinct retinal ganglion cell types encode different aspects of the visual field, such as movement, spatial patterns, and color. Current prostheses use a simplified code and thus lose information, just as Morse code loses the nuanced intonations of the spoken word and the facial expressions of the speaker. “What are the patterns that really emulate what the healthy retina would be doing?” asks Alexander Sher, an assistant researcher at UCSC who is collaborating with Litke. “If you get to the point where you can stimulate individual cells, and you know how individual cells encode information, you can simulate that exactly, or nearly exactly.”

Scientists at Second Sight say that the lessons learned from these studies will be crucial to the development of next-generation prostheses. But turning the UCSC researchers’ device into an implant fit for the human eye will be challenging. “A lot of technical considerations are preventing us from jumping to really tiny electrodes,” says McMahon. “That will require further developments in electronics and packaging and software.”
