Bilingual dictionaries are usually a two-way street: you can look up a word in English and find, say, its Spanish equivalent, but you can also do the reverse. Sign-language dictionaries, however, translate only from written words to gestures. This can be hugely frustrating, particularly for parents of deaf children who want to understand unfamiliar gestures, or for deaf people who want to interact online using their primary language. So Boston University (BU) researchers are developing a searchable dictionary for sign language, in which any user can enter a gesture into the dictionary’s search engine from her own laptop by signing in front of a built-in camera.
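The pipeline implied here has two halves: capture a short video of the user signing, then match it against the dictionary. As a minimal sketch of the capture half, the snippet below grabs a clip from a laptop’s built-in camera using OpenCV; the function name, frame count, and camera index are illustrative choices, not the BU team’s code.

```python
import cv2  # OpenCV, a common choice for webcam capture

def record_gesture(num_frames: int = 60, camera_index: int = 0):
    """Record a short clip of the user signing in front of the built-in camera."""
    cap = cv2.VideoCapture(camera_index)  # index 0 is usually the built-in camera
    frames = []
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:  # camera unavailable or stream ended
                break
            frames.append(frame)
    finally:
        cap.release()
    return frames  # list of BGR frames handed to the matching stage
```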

“You might have a collection of sign language in YouTube, and now to search, you have to search in English,” says Stan Sclaroff, a professor of computer science at BU. It’s the equivalent, Sclaroff says, of searching for Spanish text using English translations. “It’s unnatural,” he says, “and it’s not fair.”

Sclaroff is developing the dictionary in collaboration with Carol Neidle, a professor of linguistics at BU, and Vassilis Athitsos, an assistant professor of computer science and engineering at the University of Texas at Arlington. Once the user performs a gesture, the dictionary will analyze it and pull up the top five possible matches and meanings.
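The “top five matches” step amounts to a nearest-neighbor lookup over a pre-indexed database of signs. The sketch below shows that idea in miniature, assuming each gesture video has already been reduced to a fixed-length feature vector; the feature dimensionality and the random database are hypothetical stand-ins for whatever representation the BU system actually uses.

```python
import numpy as np

def top_matches(query: np.ndarray,
                sign_features: np.ndarray,
                sign_labels: list[str],
                k: int = 5) -> list[str]:
    """Return the labels of the k indexed signs closest to the query vector."""
    # Euclidean distance from the query to every sign in the database.
    distances = np.linalg.norm(sign_features - query, axis=1)
    nearest = np.argsort(distances)[:k]  # indices of the k smallest distances
    return [sign_labels[i] for i in nearest]

# Hypothetical usage: 1,000 indexed signs, each summarized as a
# 64-dimensional descriptor of hand shape and motion.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 64))
labels = [f"sign_{i}" for i in range(1000)]
print(top_matches(rng.normal(size=64), database, labels))  # five candidate signs
```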

“Today’s sign-language recognition is [at] about the stage where speech recognition was 20 years ago,” says Thad Starner, head of the Contextual Computing Group at the Georgia Institute of Technology. Starner’s group has been developing sign-language recognition software for children, using sensor-laden gloves to track hand movements. He and his students have designed educational games in which hearing-impaired children, wearing the gloves, learn sign language. A computer evaluates hand shape and moves on to the next exercise if a child has signed correctly.
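As a rough illustration of that exercise loop, the sketch below scores a glove reading against a target hand shape and advances only on a close enough match. The glove interface and the distance threshold are invented for illustration and are not Starner’s actual recognizer.

```python
import numpy as np

def read_glove_features() -> np.ndarray:
    """Hypothetical glove interface: return a vector describing the current hand shape."""
    # A real system would read this from the sensor-laden gloves.
    return np.random.default_rng().normal(size=16)

def run_exercise(target: np.ndarray, threshold: float = 4.0,
                 max_attempts: int = 10) -> bool:
    """Let the child retry one sign; return True once an attempt matches the target."""
    for _ in range(max_attempts):
        attempt = read_glove_features()
        # Smaller distance means the signed hand shape is closer to the target.
        if np.linalg.norm(attempt - target) < threshold:
            print("Correct! Moving to the next exercise.")
            return True
        print("Not quite; try the sign again.")
    return False
```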

Unlike Starner’s work, Sclaroff and Neidle’s project aims for a sensorless system in which anyone with a camera and an Internet connection can learn sign language and interact. The approach, according to Starner, is unique in the field of sign-language recognition, as well as in computer vision more broadly.

“This takes a lot of processing power, and trying to deal with sign language in different video qualities is very hard,” says Starner. “So if they’re successful, it would be very cool to actually be able to search the Web in sign language.”


Credit: Devin Hahn, BU Productions; video by Stan Sclaroff, Boston University

