
An App to Turn Sign Language to Text

Could portable computing transform the ways the deaf communicate?
March 14, 2012

Scientists are working on an app that they say could act as a sort of translator for the deaf. Specifically, the app would leverage the video camera on a portable device to capture sign language and render it as text. The technology, developed by Technabling, a spin-out of the University of Aberdeen, is being called the portable sign language translator, or PSLT.

Said Ernesto Compatangelo, one of the technology’s developers, in a statement: “The user signs into a standard camera integrated into a laptop, netbook, smart phone or other portable device such as a tablet. Their signs are immediately translated into text, which can be read by the person they are conversing with … The intent is to develop an application—an ‘app’ in smart phone terms—that is easily accessible and could be used on different devices, including smart phones, laptops, and PCs.”

I have to confess that I have minimal personal experience communicating with the deaf, so it’s hard for me to get my head around how exactly this app would be more effective than, say, using pen and paper. Would the deaf user have to position the device at a distance, sign, and then hand the device over to his interlocutor to read? It seems to me that the best way for this app to enable a seamless, Babelfish-like experience would be for the hearing person to load the app on his own smart phone, train its camera on the person signing, and wear headphones that immediately render the text as speech. Technabling, for now, appears to be focused on the gesture-recognition technology and less concerned with exactly how it would be packaged as an app.
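To make that imagined pipeline concrete, here is a minimal Python sketch of the flow just described: camera frames go to a recognizer, the recognizer emits text, and the text is rendered for the hearing party (on screen or via text-to-speech). Technabling has not published the PSLT’s internals, so every name here (the Frame stand-in, the SignRecognizer class, its gloss table) is a hypothetical illustration, not the actual system.

```python
# Purely illustrative sketch of the flow described above: camera frames
# go to a recognizer, which emits text for the hearing party to read
# (or for a text-to-speech engine to voice). Technabling has not
# published the PSLT's internals, so Frame, SignRecognizer, and the
# gloss table are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Frame:
    """Stand-in for one video frame; a real frame would be pixel data."""
    gesture_id: str

class SignRecognizer:
    """Hypothetical recognizer mapping detected gestures to English glosses."""

    GLOSSES = {
        "g_hello": "hello",
        "g_where": "where",
        "g_bus": "bus",
    }

    def transcribe(self, frames):
        return " ".join(self.GLOSSES.get(f.gesture_id, "[unknown]")
                        for f in frames)

def render_for_interlocutor(text):
    # On a device this might be drawn on a screen facing the hearing
    # person, or handed to a text-to-speech engine on their phone.
    print(f"Caption: {text}")

signed = [Frame("g_hello"), Frame("g_where"), Frame("g_bus")]
render_for_interlocutor(SignRecognizer().transcribe(signed))
# -> Caption: hello where bus
```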

Setting aside the exact manner of implementation, there are other, less intuitive reasons to get excited about such an app. One of the coolest is that it would let users create their own private, or semi-private, languages. Writes Technabling:

“This means that any signer can create her/his own set of signs … and associate to them their own words and concepts. In this way, signers can bridge the current communication gap with the wider community around them, being able to use whatever jargon they need in whatever situation they may find themselves (e.g., in education, in training, at work, at home, on the go).”

In other words, say you’re hard of hearing, and you’re also extremely interested in computer science. New terminology emerges in computer science constantly, and American Sign Language is unlikely to keep up with all of it. You could use the app to invent signs for those terms, saving you the trouble of having to spell out every word letter by letter.
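A user-defined lexicon like the one the quote describes could be as simple as a lookup table mapping a learned gesture to a term, with fingerspelling as the fallback. The sketch below assumes a recognizer that emits an ID for each custom gesture the user has trained; the IDs, the terms, and the fallback behavior are all illustrative assumptions, not anything Technabling has described.

```python
# Hypothetical sketch of a user-defined sign lexicon. Assumes the
# recognizer emits an ID for each custom gesture the user has trained;
# the IDs and terms below are illustrative, not part of the real PSLT.

CUSTOM_LEXICON = {
    "g_user_001": "polymorphism",
    "g_user_002": "garbage collection",
    "g_user_003": "hash table",
}

def expand(gesture_id):
    """Return the user's jargon term, or fall back to fingerspelling."""
    term = CUSTOM_LEXICON.get(gesture_id)
    if term is not None:
        return term
    # Without a custom sign, the user is back to spelling the word out
    # one letter at a time, exactly the tedium described above.
    return "[fingerspell it letter by letter]"

print(expand("g_user_003"))  # -> hash table
print(expand("g_user_999"))  # -> [fingerspell it letter by letter]
```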

The scientists also see a possible application that doesn’t involve being deaf or hard of hearing at all. They think that people with “reduced mobility and speech difficulties due to ill health or accidents” might also use the technology to issue commands to smart appliances around the house, “using a simple but effective set of hand gestures tailored to their physical capabilities.” In other words, the app might eventually become something like a much smarter version of those clap-on, clap-off lights whose commercials were a parody staple on playgrounds some 30 years ago.
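For that smart-home scenario, the natural structure is a small dispatch table from a user’s tailored gestures to appliance actions. Again, this is a speculative sketch: the gesture IDs and appliance hooks below are invented for illustration, not an API Technabling has announced.

```python
# Speculative sketch of the smart-home scenario: a small, user-tailored
# set of gestures dispatched to appliance actions. The gesture IDs and
# appliance hooks are invented for illustration only.

def lights_on():
    print("lights: on")

def lights_off():
    print("lights: off")

def thermostat_up():
    print("thermostat: up 1 degree")

# Each user maps whatever gestures suit their physical capabilities.
GESTURE_COMMANDS = {
    "g_palm_up": lights_on,
    "g_palm_down": lights_off,
    "g_circle": thermostat_up,
}

def handle(gesture_id):
    action = GESTURE_COMMANDS.get(gesture_id)
    if action is not None:
        action()

handle("g_palm_up")   # -> lights: on
handle("g_circle")    # -> thermostat: up 1 degree
```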
