A View from David Zax
An App to Turn Sign Language to Text
Could portable computing transform the ways the deaf communicate?
Scientists are working on an app that they say could act as a sort of translator for the deaf. Specifically, the app would leverage the video camera on a portable device to capture sign language and render it as text. The technology, developed by Technabling, a spin-out of the University of Aberdeen, is being called the portable sign language translator, or PSLT.
Said Ernesto Compatangelo, one of the technology’s developers, in a statement: “The user signs into a standard camera integrated into a laptop, netbook, smart phone or other portable device such as a tablet. Their signs are immediately translated into text, which can be read by the person they are conversing with … The intent is to develop an application—an ‘app’ in smart phone terms—that is easily accessible and could be used on different devices, including smart phones, laptops, and PCs.”
I have to confess that I have minimal personal experience communicating with the deaf, so it’s hard for me to get my head around how exactly this app would be more effective than, say, using pad and paper to communicate. Would the deaf user have to position the device at a distance, sign, and then hand the device over to his interlocutor for reading? It seems to me that the best way for this app to enable a seamless, Babelfish-like experience would be for the hearing person to have his smart phone loaded with the app, have his camera trained on the person who is signing, and furthermore have headphones immediately rendering the text into speech. Technabling, for now, appears to be focusing on the gesture recognition tech, and is less concerned with exactly how the tech would be implemented in app form.
Setting aside the exact manner of implementation, there are actually other, less intuitive reasons to get excited about such an app. One of the coolest things about the app is that it would let users create their own private, or semi-private, languages. Writes Technabling:
“This means that any signer can create her/his own set of signs and … associate to them their own words and concepts. In this way, signers can bridge the current communication gap with the wider community around them, being able to use whatever jargon they need in whatever situation they may find themselves (e.g., in education, in training, at work, at home, on the go).”
In other words, say you’re hard of hearing, and you’re also extremely interested in computer science. Plenty of new jargon and terminology is constantly emerging in computer science, but American Sign Language is unlikely to keep up with all that jargon. You could use the app to invent signs to express these bits of jargon, saving you the trouble of having to spell out every word letter by letter.
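At its core, that custom-vocabulary idea is just a lookup table from recognized signs to user-chosen phrases. Here is a minimal sketch of the concept in Python; every name in it is invented for illustration, and nothing here reflects how Technabling’s PSLT actually works under the hood:

```python
# Hypothetical sketch of a user-defined sign vocabulary: the recognizer
# emits labels for detected signs, and the user maps those labels to
# whatever jargon they like. All names are invented for illustration.

class SignVocabulary:
    """Maps recognizer output labels to user-chosen words or phrases."""

    def __init__(self):
        self._signs = {}

    def define(self, sign_label, phrase):
        # The user records a new sign and ties it to a phrase --
        # say, a computer-science term with no standard ASL sign.
        self._signs[sign_label] = phrase

    def translate(self, sign_labels):
        # Fall back to the raw label (e.g. a fingerspelled word)
        # when a sign has no custom definition.
        return " ".join(self._signs.get(label, label) for label in sign_labels)


vocab = SignVocabulary()
vocab.define("sign_042", "machine learning")
vocab.define("sign_017", "compiler")

print(vocab.translate(["sign_042", "is", "sign_017"]))
# prints "machine learning is compiler"
```

The point of the sketch is the fallback: anything the user has not personally defined still comes through as-is, so a private jargon layer can sit on top of ordinary signing rather than replacing it.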
The scientists also see a possible application that doesn’t involve being deaf or hard of hearing at all. They think that people with “reduced mobility and speech difficulties due to ill health or accidents” might also use the technology to issue commands to smart appliances around the house, “using a simple but effective set of hand gestures tailored to their physical capabilities.” In other words, the app might eventually become something like a much smarter version of those clap-on-clap-off lights whose commercials were a parody staple on playgrounds some 30 years ago.