Decoding the Human Eye
Artificial retinas are already in human clinical trials at the University of Southern California, where they have helped blind patients distinguish walls from doorways and even watch soccer games, albeit as blurs of motion. But approximating normal vision, and possibly enabling people to read, will require devices that can deliver electrical current with much greater control and precision. A new chip densely packed with electrodes, developed by scientists at the University of California, Santa Cruz (UCSC), is the first step in that direction.
Currently being used in research, the chip can stimulate and record from individual cells in retinal samples. The technology will provide insight into how the retina codes information and how to mimic that coding, lessons that will be crucial in developing the next generation of retinal implants. Further down the road, some version of the technology might be used to send visual information down the optic nerve.
“The retina is a very sophisticated visual-information-processing device,” says Alan Litke, a physicist at UCSC who is applying his expertise to neurobiology. “To have a human patient someday approach normal visual functioning, such as reading, you need to have a very accurate level of control.”
The retina is a thin layer of cells at the back of the eye; photoreceptor cells in the retina detect light and send signals to the retinal ganglion cells, which then transmit the signals to the brain through the optic nerve. In macular degeneration and retinitis pigmentosa, two leading causes of blindness, photoreceptor cells are damaged, but the remaining retinal ganglion cells are left largely intact. Artificial retinas, which rely on an external camera to capture visual information, consist of a processor that translates that information into an electrical code intelligible to the nerve cells of the eye, and a chip dotted with tiny electrodes that transmit the electrical signals to the retinal ganglion cells.
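The signal path just described, camera to processor to electrode array, can be sketched in a few lines of code. This is a toy illustration under stated assumptions: the function name, electrode count, and linear intensity-to-current mapping are all hypothetical, not the actual Second Sight or UCSC design.

```python
# Toy sketch of an artificial-retina signal path:
# camera frame -> encoder -> per-electrode stimulation currents.
# Names and parameters are illustrative assumptions only.

def encode_frame(frame, n_electrodes=16, max_current_ua=100.0):
    """Downsample a grayscale frame (list of 0-255 values) to one
    intensity per electrode, then map intensity linearly to a
    stimulation current in microamps."""
    chunk = max(1, len(frame) // n_electrodes)
    currents = []
    for i in range(n_electrodes):
        # Average the patch of pixels this electrode covers.
        patch = frame[i * chunk:(i + 1) * chunk]
        mean_intensity = sum(patch) / len(patch) if patch else 0.0
        currents.append(max_current_ua * mean_intensity / 255.0)
    return currents

# Example: a 64-"pixel" frame that is dark on the left, bright on the right.
frame = [0] * 32 + [255] * 32
currents = encode_frame(frame)  # 16 currents: 0 uA on the left, 100 uA on the right
```

With only 16 electrodes, as in the first-generation device, the whole frame collapses into 16 coarse values, which is one way to see why early implants convey motion and large shapes but not fine detail.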
Litke and his collaborators modeled their chip after the silicon microchip detectors that line supercolliders to capture signs of elusive, high-energy, subatomic particles, such as the Higgs boson. Using common integrated-circuit fabrication techniques, the researchers custom-built more than 500 electrodes and amplifiers onto a small glass strip. “There are other commercial, multi-electrode recording systems available, but the team at UCSC has really pushed the technology forward by coming up with a system with the capability to record many more neural responses,” says Matt McMahon, a scientist at Second Sight, the company based in Sylmar, CA, that’s developing the retinal prostheses used in the USC study. Second Sight is using Litke’s device to inform the design of future prostheses. The company’s first-generation device had 16 electrodes, the second-generation device currently in human trials has 60, and a 200-electrode version is under development. (See “Next-Generation Retinal Implant.”)
With the UCSC device, scientists can precisely control individual retinal ganglion cells, a capability that will be key in next-generation implants. One of the reasons the prostheses currently in human testing have limited resolution is that they stimulate hundreds of cells simultaneously. (The diameter of the electrodes is an order of magnitude larger than that of most cells.) The five-micrometer-diameter electrodes in Litke’s chip are on par with the size of retinal ganglion cells, allowing them to stimulate individual cells. The researchers previously showed that they could simultaneously control multiple cells with a 60-electrode version of the chip, and they are developing a version with 512 electrodes.
Now that scientists have created a technology with such a precise level of control, they are using it to study the language of the retina, a language they hope prostheses will ultimately be able to speak. While the retina is often likened to a camera, it is in reality much more complicated. Light signals are captured and processed in the retina; the sequences of electrical bursts sent to the brain by the various and distinct retinal ganglion cell types encode different aspects of the visual field, such as movement, spatial patterns, and color. Current prostheses use a simplified code and thus lose information, just as Morse code loses the nuanced intonations of the spoken word and the facial expressions of the speaker. “What are the patterns that really emulate what the healthy retina would be doing?” asks Alexander Sher, an assistant researcher at UCSC who is collaborating with Litke. “If you get to the point where you can stimulate individual cells, and you know how individual cells encode information, you can simulate that exactly, or nearly exactly.”
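One simple piece of the retina's "language" is rate coding: a cell fires more often as its preferred stimulus gets stronger. The sketch below is a minimal toy model of that idea, assuming a Poisson-like spike process whose rate scales with brightness; real ganglion cell codes are far richer, with distinct cell types for motion, spatial pattern, and color, and none of the parameters here come from the UCSC work.

```python
# Toy rate-coding model: brighter input -> higher firing rate for one
# hypothetical ganglion cell. All parameters are illustrative assumptions.
import random

def spike_train(intensity, duration_ms=100, max_rate_hz=200, seed=0):
    """Return spike times (in ms) from a Bernoulli approximation of a
    Poisson process whose rate scales linearly with intensity in [0, 1]."""
    rng = random.Random(seed)
    rate_per_ms = max_rate_hz * intensity / 1000.0  # spikes per millisecond
    return [t for t in range(duration_ms) if rng.random() < rate_per_ms]

bright = spike_train(1.0)  # roughly 200 Hz: many spikes in 100 ms
dim = spike_train(0.1)     # roughly 20 Hz: few spikes in 100 ms
```

If a prosthesis can both stimulate single cells and reproduce patterns like these for each cell type, it gets closer to "speaking" what Sher calls the patterns a healthy retina would produce.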
Scientists at Second Sight say that the lessons learned from these studies will be crucial to the development of next-generation prostheses. But turning the UCSC researchers’ device into an implant fit for the human eye will be challenging. “A lot of technical considerations are preventing us from jumping to really tiny electrodes,” says McMahon. “That will require further developments in electronics and packaging and software.”