How Armbands Can Translate Sign Language

A research project looks at how gesture-recognition armbands can help the hearing impaired communicate more easily with those who don’t understand sign language.
February 17, 2016

A pair of Myo gesture-control armbands and a computer or smartphone may make it faster and easier for the hearing impaired to communicate using sign language with those who don’t understand it.

That’s what researchers at Arizona State University say they can do with a project called Sceptre. They use the armbands to teach software a range of American Sign Language gestures; then, when a person wearing the bands makes one of these signs, it is matched to its corresponding word or phrase in Sceptre’s database and displayed as text on a screen.
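The article doesn’t say how Sceptre compares a new sign against its stored examples. One common way to match gesture recordings of different lengths is dynamic time warping; the sketch below illustrates that general approach, not Sceptre’s actual code, and the names (`match_sign`, `templates`) are hypothetical. Each recording is assumed to be a NumPy array of sensor samples over time.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two multichannel
    gesture recordings, each shaped (time, channels)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local cost: distance between one sample from each recording.
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]

def match_sign(recording, templates):
    """Return the label of the stored example closest to a new recording.
    `templates` maps labels like "pizza" to recordings captured in training."""
    return min(templates, key=lambda label: dtw_distance(recording, templates[label]))
```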

The hope is to facilitate communication particularly in urgent situations, such as a doctor’s office or hospital visit, without relying on written notes or, as some other sign-language-recognition research has done, on cameras to recognize the gestures. A paper on the work will be presented at an intelligent user interfaces conference in March.

Researchers relied on the Myo armbands to make Sceptre work because they include both an inertial measurement unit for tracking motion and electromyography sensors for muscle sensing, which can be used to help determine finger configurations. They trained software to recognize a variety of ASL gestures, as well as the signs for individual letters and the numbers one through 10, all performed by someone wearing the Myo armbands.
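The article doesn’t detail what features Sceptre derives from the two sensor streams, but the division of labor it describes can be sketched: the inertial unit tracks how the arm moves, while the EMG channels hint at what the fingers are doing. A minimal, hypothetical feature extractor along those lines (the Myo band does carry eight EMG pods; everything else here is an assumption):

```python
import numpy as np

def gesture_features(imu: np.ndarray, emg: np.ndarray) -> np.ndarray:
    """Summarize one gesture recording as a fixed-length feature vector.

    imu: (time, channels) inertial data -- orientation, accelerometer,
         and gyroscope readings capture how the arm moves.
    emg: (time, 8) muscle data -- one channel per EMG pod on the band,
         a rough proxy for finger configuration.
    """
    # Motion features: mean and spread of each inertial channel.
    motion = np.concatenate([imu.mean(axis=0), imu.std(axis=0)])
    # Muscle-activity features: mean rectified amplitude per EMG channel.
    muscle = np.abs(emg).mean(axis=0)
    return np.concatenate([motion, muscle])
```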

Sceptre, a research project that translates sign language as text using Myo armbands, could be made to work with a computer or a smartphone.

After having users train the software on 20 different ASL gestures, like “pizza,” “happy,” and “orange,” by repeating each one three times, the researchers found that Sceptre deciphered signs correctly nearly 98 percent of the time.
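The paper’s exact evaluation protocol isn’t described here. One simple way to measure per-user accuracy with only three repetitions per sign is a leave-one-out check, sketched below using the DTW distance from the earlier example; this illustrates the protocol, not the study’s actual method.

```python
def evaluate(recordings, distance):
    """Leave-one-out accuracy over one user's recordings.
    `recordings` maps each sign label to its repetitions (three per
    sign in the study); `distance` compares two recordings, e.g.
    dtw_distance from the sketch above."""
    correct = total = 0
    for label, reps in recordings.items():
        for i, held_out in enumerate(reps):
            # Candidate templates: every repetition except the held-out one.
            candidates = [(l, r) for l, rs in recordings.items()
                          for j, r in enumerate(rs)
                          if (l, j) != (label, i)]
            predicted, _ = min(candidates,
                               key=lambda c: distance(held_out, c[1]))
            correct += predicted == label
            total += 1
    return correct / total
```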

In a video demonstrating how it works, Prajwal Paudyal, a graduate student at Arizona State University who coauthored the paper, wears the armbands and signs several different things that are then illustrated on a computer display, like “all morning,” “headache,” and “can’t sleep.”

Though the signs were shown as text in the study, they could also be spoken aloud by an app to facilitate a conversation, Paudyal says. And while the researchers’ demo showed the text on a computer’s display, which the Myo armbands connected to via Bluetooth, Sceptre could also be used with just a smartphone—something the researchers are also working on (Myo supports streaming data from two wristbands to one smartphone, but the researchers say this wasn’t possible when they conducted their initial work). 

“Ideally, the person can use this anywhere they go,” Paudyal says.

Sceptre, an Arizona State University research project, uses two Myo armbands to translate sign language as text.

Roozbeh Jafari, an associate professor at Texas A&M’s Center for Remote Healthcare Technologies and Systems, has done similar work, though it involved building the sensors rather than using off-the-shelf devices as the ASU group did.

He says a number of issues would have to be solved to make something like Sceptre work for consumers. Typically, when electromyography sensors are placed on the body, the system using them has to be recalibrated unless they sit in exactly the same spot as before, he says. There’s also a need to account for the natural variation in how different people sign the same things. Despite these obstacles, he says, “I think we are moving in the right direction.”

