A Rare Sight

MIT’s Best and Brightest
September 1, 1998

Poet and artist Elizabeth Goldring’s eyes fill with light when she works–laser light, that is. A degenerative eye disease damaged Goldring’s vision, so she must go to great lengths to see her own art; she uses a laser to project the video and computer images she creates directly onto a small, functional part of her retina.

The device that brings Goldring’s work back into her view is a modified version of a diagnostic tool called the scanning laser ophthalmoscope (SLO). By connecting the SLO to cameras, computers and even the Internet, Goldring can see friends’ faces, unfamiliar buildings and–for the first time in years–words. With help from the SLO’s inventor and other researchers, students and artists, Goldring hopes someday to use the machine to share visual experiences with others she believes have been encouraged unnecessarily to “turn off their eyes.”

But, as TR Associate Editor Rebecca Zacks discovered when she visited Goldring at MIT’s Center for Advanced Visual Studies, seeing with the SLO is an intense and tiring experience. And even with the device, Goldring’s vision is too poor to read traditional text. So Goldring is developing a terse visual language of “word-images,” hybrids of letters and graphics that make intuitive and immediate sense.

Without the SLO, Goldring sees faces only as “moons,” but her gaze is steady as she speaks:

For the last seven years, I’ve been working on creating visual experiences and digital language–poetry–for people who have very limited eyesight. Before that, for close to four years, I saw virtually nothing: some light, some shadow perception. When my eyesight began to deteriorate, I spent a lot of time writing about it–both poems and “eye journals” describing what I saw as I looked out of damaged eyes. I had to try to figure out how to write with a tape recorder, and it’s really difficult for a writer to write without seeing any words.

Goldring points out a poster on the wall that shows a word-image–a simple outline of a door flanked by the letters d and r–projected onto the veined surface of a human retina.

Images from a scanning laser ophthalmoscope have a really indelible quality. It’s almost as though your retina is a stone and the image is carved in by the laser. This is a healthy retina looking at a word image that I’ve created: door. I’ve worked a lot with this particular word because it’s so difficult for me to see–the two os and the d are all so similar, they get in the way of each other. But if I separate the d and the r with this image I can see the whole thing much faster. I’ve also tried door in another way, which I’m quite excited about because it is one of the first times that I have been able to get any sense of depth when using the scanning laser ophthalmoscope.

She demonstrates, holding her hands up next to one another, palms toward her face, and pivoting them apart like swinging saloon doors.

The word opened: d-o and o-r swung back like a door, and it worked because it was separating the os. It also worked spatially and it enhanced the meaning of the word so that you get it instantly. Bad seeing or significant visual impairment means, among other things, very slow seeing. So anything you can do to convey the meaning faster helps. When people with normal vision read words, they scan across the tops of the letters. Well, I don’t–I have to look around, up and down each letter. By the time I get to the end of a three-letter word, I have put in a great deal of effort. That’s why I’m interested in developing succinct symbols.

After years spent creating words, poems and video images for the SLO, Goldring believes she’s “on the brink” of being ready to show her work to other visually challenged people.

For people to want to use eyes that don’t work very well, they will have to have compelling visual images. That’s really what I’m working on, not only with language but also other kinds of visual experiences. I think the ways in which people see what’s being presented will need to be individualized: Some people may need the laser brighter, some people may need it dimmer. In the case of word images, some people may need curves, others may need hard edges. I don’t think this individualized tailoring is difficult. The hard part is getting something that is compelling enough and satisfying enough to warrant the extreme amount of energy and dedication it takes to look.
