
MIT Technology Review


In addition to guiding users, the system describes what’s around them. It aims for an easy, literal translation of the environment into sound, says Dellaert: surrounding objects sound like what they are. As a user passes a park, for instance, the sound of wind blowing through trees comes through the bone phones from the direction of the park. Indoors, knocking sounds announce doors. Objects that make no sound in real life are announced with compacted words, says Walker. “‘Mom’s house’ becomes ‘m’souse,’” for example, and the system can learn to compact new words.
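The directional cue described above can be illustrated with a minimal sketch. This is not the Georgia team's code; it assumes a simple equal-power stereo pan as a stand-in for true 3-D audio, with all function and parameter names invented for illustration.

```python
import math

def beacon_pan(user_heading_deg: float, object_bearing_deg: float) -> tuple[float, float]:
    """Illustrative sketch: map the direction of a sounding object
    into left/right channel gains, so a park to the user's right is
    heard mostly in the right ear. Angles in degrees, clockwise from north."""
    # Angle of the object relative to where the user faces, in (-180, 180].
    rel = (object_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    # Clamp to the frontal arc and pan with constant power:
    # straight ahead -> equal gains; 90 degrees right -> right channel only.
    pan = math.sin(math.radians(max(-90.0, min(90.0, rel))))  # -1 (left) .. 1 (right)
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right
```

A real system would use head-related transfer functions rather than simple panning, but the sketch shows the core idea: the sound's apparent direction tracks the object's bearing relative to the listener.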

The Georgia team’s device joins a number of other high-tech solutions designed to help blind people get around. The most common ones use a single GPS receiver, GIS maps, and spoken turn-by-turn directions to guide people along routes. But single GPS units, with an error radius of up to 30 feet, can be dangerously imprecise for pedestrians, says Walker, and provide no help indoors.
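The spoken guidance those GPS systems produce boils down to comparing the user's heading against the compass bearing to the next waypoint. A hedged sketch, with all names and the 15-degree threshold chosen here for illustration (not taken from any actual product):

```python
import math

def turn_instruction(user_heading_deg, user_lat, user_lon, wp_lat, wp_lon):
    """Illustrative turn-by-turn cue: compute the initial great-circle
    bearing from the user to the next waypoint and compare it with the
    direction the user is facing (degrees, clockwise from north)."""
    phi1, phi2 = math.radians(user_lat), math.radians(wp_lat)
    dlon = math.radians(wp_lon - user_lon)
    # Standard initial-bearing formula on a sphere.
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    # Signed difference between facing direction and goal direction.
    rel = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) < 15.0:
        return "continue straight"
    return "turn right" if rel > 0 else "turn left"
```

The 30-foot error radius the article cites matters precisely here: a waypoint placed 30 feet off can flip `rel` across the threshold and produce a wrong or premature instruction at walking scale.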

Experimental solutions, like the city of San Francisco’s Talking Signs and the University of Florida’s DRISHTI system, make cities smart by pasting information-carrying RFID tags on doors, exits, sidewalks, and street signs. When blind people walk by the tags with a reader, the objects announce themselves. This eliminates the need for GPS and maps, says Sumi Helal, professor of computer and information science and engineering at the University of Florida and head of the DRISHTI project. But it’s impractical and expensive to put tags everywhere.
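The tag-reading scheme amounts to a lookup: each tag carries an identifier, and the reader maps it to an announcement. A minimal sketch, with hypothetical tag IDs and labels that are not DRISHTI's or Talking Signs' actual data model:

```python
# Hypothetical tag database: tag ID -> spoken announcement.
TAG_ANNOUNCEMENTS = {
    "tag:door-214": "Door: Room 214",
    "tag:exit-n": "Exit: north stairwell",
    "tag:xwalk-5": "Crosswalk: Fifth Street",
}

def announce(tag_id: str) -> str:
    """Return the announcement for a tag the reader just passed.
    Unknown tags stay silent rather than guessing."""
    return TAG_ANNOUNCEMENTS.get(tag_id, "")
```

The cost problem the article notes falls out of this design: every door, exit, and sign needs its own physical tag and database entry before it can announce itself.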

The Georgia team’s sound beacon system is ideal for navigation, says Jack Loomis, professor of psychology at the University of California, Santa Barbara. His research shows that sounds projected in 3-D help visually impaired users navigate faster and more accurately than spoken, turn-by-turn directions. His group’s Personal Guidance System, a navigation system similar to the Georgia team’s, projects words in 3-D space for users to follow.

A smaller, smarter version of the current prototype is on the way, the Georgia researchers say. The computer vision cameras will be scaled down to fit on a pair of glasses, and a cell phone or PDA will replace the laptop. In the far future, says Dellaert, the computer vision system will draw its own maps of building interiors, solving the “biggest downside” of the system: currently, the researchers must manually make digital maps of buildings from building data or floor plans in advance of a user’s visit.

The team recently tested the sound beacons on blindfolded students who navigated by joystick around a computer maze. “In about four minutes, they got it,” says Walker. “After 20, they were moving quickly through complex paths.” Next month, the team plans to test the full hardware on blind users navigating around the Georgia Institute of Technology campus.
