A desktop seeing machine created at MIT by a visually impaired artist could help people with poor vision view images, use the Internet, virtually “previsit” unfamiliar buildings, or see the faces of friends.
Its creator, Elizabeth Goldring, a senior fellow at MIT’s Center for Advanced Visual Studies, has no vision in one eye and little in the other. She got the idea for her invention more than 15 years ago, when her eyes were examined with a large, expensive machine called a scanning laser ophthalmoscope.
The machine projects an image directly onto a patient’s retina to help determine how much, if any, retinal function he or she has. Someone who still has some healthy retinal cells will be able to see the image when it’s projected onto them. Goldring did see images of stick figures, but as a poet as well as an artist, she very much wanted to see a word. At her request, she was shown the word “sun.” She was thrilled.
After that exam, Goldring used the ophthalmoscope (which costs more than $100,000) for a project of her own. She created a “visual language” with hundreds of symbols, representing both nouns and verbs, that could be projected onto the retina. Each symbol is a combination of letters and simple graphics; for example, the word “door” is spelled with a d, the outline of a door, and an r. The symbols, Goldring says, are more visually economical than their text equivalents.
Goldring’s next goal was to make a cheaper, more portable version of the costly device. With the collaboration of Rob Webb, the ophthalmoscope’s inventor, and dozens of scientists, engineers, and students, that’s what she did. The seeing machine, about the size of a bread box, has an eyepiece, a projector, a computer, a monitor, and a joystick. To cut expenses (the prototype cost about $4,000 to build), she used light-emitting diodes instead of a laser. When a person looks through the machine’s eyepiece, the LEDs project black-and-white images and words from Goldring’s visual language across the entire retina. If any part of the retina is healthy, the person may see the image.
Goldring conducted a pilot clinical trial with 10 visually impaired people. Six correctly interpreted every word-image they were presented with, and all could navigate the corridors of a simulated building using a joystick to move forward, backward, and sideways.
Although the device has been called a “seeing machine,” Goldring is not developing a wearable version to help visually impaired people get around. “It’s too much to expect someone who is visually challenged to see and walk at the same time,” she says. But she is now working on a smaller, even cheaper machine that will allow people to see in color.