
Blind people traversing a city face a formidable challenge: quickly and safely navigating a complex environment. Researchers at the Georgia Institute of Technology say their wearable computer provides the newest high-tech solution.

The system’s hardware includes two Global Positioning System (GPS) receivers, a laptop, head and body compasses, a gyroscope-based tracker that measures the head’s tilt, and four small cameras mounted on a helmet. Audio from the device’s speech interface comes through “bone phones,” which fit behind the ears and transmit sound by vibrating against the skull, leaving the user’s ears free to pick up important ambient noise, such as city traffic. The whole system weighs about three pounds, and most parts tuck neatly into a backpack.

The device uses GPS and digital maps to guide the wearer to a destination. Outdoors, GPS pinpoints a user’s location. Users verbally tell the device where they want to go, and the system wirelessly extracts an area map, which includes everything from businesses to bushes, from a remote Geographic Information System (GIS) database. Then, “sound beacons,” soft tones emanating in stereo through the bone phones, guide the person to a destination.
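
To make that flow concrete, here is a minimal Python sketch of the lookup step: a position fix, a tiny stand-in for the GIS feature set, and a match against the spoken request. The feature list, names, and lookup logic are illustrative assumptions; the article does not describe the actual database schema or wireless protocol.

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two latitude/longitude points."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    # Hypothetical slice of a GIS database: everything from businesses to bushes.
    AREA_MAP = [
        {"name": "coffee shop", "kind": "business", "lat": 33.7760, "lon": -84.3980},
        {"name": "bus stop",    "kind": "transit",  "lat": 33.7755, "lon": -84.3975},
        {"name": "hedge row",   "kind": "obstacle", "lat": 33.7757, "lon": -84.3978},
    ]

    def find_destination(spoken_request, user_lat, user_lon):
        """Return the nearest map feature whose name appears in the spoken request."""
        matches = [f for f in AREA_MAP if f["name"] in spoken_request.lower()]
        if not matches:
            return None
        return min(matches, key=lambda f: haversine_m(user_lat, user_lon, f["lat"], f["lon"]))

    print(find_destination("take me to the coffee shop", 33.7756, -84.3977))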

“Imagine there’s a ring around your head a meter away from your body,” explains Bruce Walker, assistant professor of psychology and designer of the auditory interface. “If you need to walk straight, the sound will come from straight ahead. If you need to turn a corner, the sound will seem to come from the right. Turn your body until the sound is in front of you again – and away you go.” And the tones speed up as users approach their destination.
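
Walker’s description implies a simple piece of geometry: compute the bearing to the next waypoint, render the tone from that direction relative to the head compass, and shorten the interval between tones as the distance shrinks. The Python sketch below shows one plausible mapping; the pan law and tempo curve are assumptions for illustration, not the published interface design.

    import math

    def beacon_parameters(user_lat, user_lon, head_heading_deg, dest_lat, dest_lon):
        """Return (pan, interval_s): stereo pan in [-1 left, +1 right] and seconds between tones."""
        # Bearing from the user to the destination, measured clockwise from north.
        dlon = math.radians(dest_lon - user_lon)
        lat1, lat2 = math.radians(user_lat), math.radians(dest_lat)
        y = math.sin(dlon) * math.cos(lat2)
        x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
        bearing = math.degrees(math.atan2(y, x)) % 360

        # Direction of the destination relative to where the head is pointing.
        relative = (bearing - head_heading_deg + 180) % 360 - 180   # degrees, 0 = straight ahead
        pan = math.sin(math.radians(relative))                      # -1 hard left, +1 hard right

        # Tones speed up as the destination nears: shorter interval when closer.
        cos_c = (math.sin(lat1) * math.sin(lat2)
                 + math.cos(lat1) * math.cos(lat2) * math.cos(dlon))
        distance_m = 6371000 * math.acos(max(-1.0, min(1.0, cos_c)))
        interval_s = 0.2 + min(distance_m, 100.0) / 100.0           # 0.2 s near, 1.2 s far away

        return pan, interval_s

    # Destination ahead of and to the right of a user whose head faces due north.
    print(beacon_parameters(33.7756, -84.3977, 0.0, 33.7758, -84.3973))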

The navigation is precise without requiring bulky antennas, the researchers say. By combining data from multiple GPS receivers and other location sensors, and accounting for the error in each device’s estimate, the system pinpoints a user’s location to within about a foot, much more accurately than GPS alone.
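
The article does not name the fusion method, but a standard way to combine several noisy position estimates while accounting for each device’s error is inverse-variance weighting, sketched here purely for illustration.

    def fuse_positions(estimates):
        """estimates: list of (lat, lon, sigma_m), where sigma_m is that source's
        estimated 1-sigma error in meters. Returns an error-weighted (lat, lon)."""
        weights = [1.0 / (sigma ** 2) for _, _, sigma in estimates]
        total = sum(weights)
        lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
        lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
        return lat, lon

    # Two GPS fixes with a few meters of error each, plus a tighter dead-reckoning
    # estimate (hypothetical numbers): the noisier sources are down-weighted.
    fixes = [
        (33.77561, -84.39772, 4.0),   # GPS receiver 1
        (33.77558, -84.39768, 5.0),   # GPS receiver 2
        (33.77560, -84.39770, 0.5),   # compass/gyroscope dead reckoning
    ]
    print(fuse_positions(fixes))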

Although GPS loses its signal indoors and between tall buildings, the cameras, which are part of a computer vision system, pick up the slack. “By having computer vision on board, we can go where GPS can’t,” says Frank Dellaert, assistant professor of computing at Georgia Tech. Indoors, the cameras “see” building interiors as lines and patches of color, and the computer pinpoints the user’s location by matching those shapes against stored, digitized floor plans. Sound beacons then guide users just as they do outside.
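
As a toy illustration of the matching idea (the actual vision pipeline is far more sophisticated and is not detailed in the article), a candidate location can be scored by how many of the features expected there show up in the current camera view.

    # Hypothetical floor plan: each location lists features the cameras should see from it.
    FLOOR_PLAN = {
        "lobby":      {"glass door", "red carpet", "columns"},
        "hallway 2F": {"blue wall", "door row", "ceiling lights"},
        "stairwell":  {"handrail", "concrete wall", "steps"},
    }

    def localize(observed_features):
        """Return the floor-plan location whose expected features best match the view."""
        def score(location):
            expected = FLOOR_PLAN[location]
            return len(expected & observed_features) / len(expected)
        return max(FLOOR_PLAN, key=score)

    print(localize({"blue wall", "ceiling lights", "handrail"}))   # -> "hallway 2F"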
