How Armbands Can Translate Sign Language

A research project looks at how gesture-recognition armbands can help the hearing impaired communicate more easily with those who don’t understand sign language.
February 17, 2016

A pair of Myo gesture-control armbands and a computer or smartphone may make it faster and easier for the hearing impaired to communicate using sign language with those who don’t understand it.

That’s what researchers at Arizona State University say they can do with a project called Sceptre. They use the armbands to teach software a range of American Sign Language gestures; then, when a person wearing the bands makes one of these signs, it is matched with its corresponding word or phrase in Sceptre’s database and displayed as text on a screen.

The hope is to facilitate communication, particularly in emergency situations such as at a doctor’s office or hospital, without relying on written notes or, as some other sign-language-recognition research has done, on cameras to recognize gestures. A paper on the work will be presented at an intelligent user interfaces conference in March.

Researchers relied on the Myo armbands to make Sceptre work because they include both an inertial measurement unit for tracking motion and electromyography sensors for muscle sensing, which can be used to help determine finger configurations. They trained software to recognize a variety of ASL gestures, as well as the signs for individual letters and the numbers one through 10, all performed by someone wearing the Myo armbands.
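The article doesn’t describe how Sceptre turns the raw sensor streams into something a classifier can use, but the general idea of combining motion (IMU) and muscle (EMG) signals can be sketched roughly as follows. Everything here — the window length, the choice of summary statistics, the function name — is a hypothetical illustration, not the researchers’ actual pipeline; the only grounded details are that the Myo provides an inertial measurement unit and eight EMG channels.

```python
import numpy as np

def extract_features(imu_window, emg_window):
    """Summarize one gesture's sensor streams as a fixed-length feature vector.

    imu_window: (T, 6) array of accelerometer + gyroscope samples
    emg_window: (T, 8) array of readings from the Myo's 8 EMG channels
    (The features Sceptre actually computes are not described in the article;
    these are illustrative stand-ins.)
    """
    # Motion features: mean and standard deviation of each IMU axis
    imu_feats = np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])
    # Muscle features: mean absolute value per EMG channel, a common
    # rough proxy for muscle activation that hints at finger configuration
    emg_feats = np.abs(emg_window).mean(axis=0)
    return np.concatenate([imu_feats, emg_feats])

# Example with synthetic data: 100 samples of 6-axis IMU and 8-channel EMG
rng = np.random.default_rng(0)
vec = extract_features(rng.normal(size=(100, 6)), rng.normal(size=(100, 8)))
print(vec.shape)  # (20,): 12 IMU features + 8 EMG features
```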

Sceptre, a research project that translates sign language as text using Myo armbands, could be made to work with a computer or a smartphone.

After having users train the software on 20 different ASL gestures, such as “pizza,” “happy,” and “orange,” by repeating each of them three times, the researchers found Sceptre could then decipher signs correctly nearly 98 percent of the time.
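With only three repetitions per gesture, one simple way such a system could recognize a new performance is nearest-neighbor matching against the stored examples. The sketch below is a hypothetical illustration of that idea — the article does not say what classifier Sceptre actually uses, and the synthetic “feature vectors” stand in for real sensor data.

```python
import numpy as np

# Hypothetical training set: for each of 20 signs, three repetitions,
# each already reduced to a 20-dimensional feature vector (mirroring the
# three repetitions per gesture described in the study). Classes are
# spread apart artificially so the toy example separates cleanly.
rng = np.random.default_rng(1)
signs = [f"sign_{i}" for i in range(20)]
templates = {s: [rng.normal(loc=3.0 * i, size=20) for _ in range(3)]
             for i, s in enumerate(signs)}

def classify(query):
    """Return the sign whose closest stored repetition matches the query."""
    best_sign, best_dist = None, float("inf")
    for sign, reps in templates.items():
        for rep in reps:
            d = np.linalg.norm(query - rep)
            if d < best_dist:
                best_sign, best_dist = sign, d
    return best_sign

# A query near the stored examples for sign_5 (class mean 15.0)
print(classify(np.full(20, 15.0)))  # prints sign_5
```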

In a video demonstrating how it works, Prajwal Paudyal, a graduate student at Arizona State University who coauthored the paper, wears the armbands and signs several different things that are then illustrated on a computer display, like “all morning,” “headache,” and “can’t sleep.”

Though the signs were shown as text in the study, they could also be spoken aloud by an app to facilitate a conversation, Paudyal says. And while the researchers’ demo showed the text on a computer’s display, which the Myo armbands connected to via Bluetooth, Sceptre could also be used with just a smartphone—something the researchers are also working on (Myo supports streaming data from two wristbands to one smartphone, but the researchers say this wasn’t possible when they conducted their initial work). 

“Ideally, the person can use this anywhere they go,” Paudyal says.

Sceptre, an Arizona State University research project, uses two Myo armbands to translate sign language as text.

Roozbeh Jafari, an associate professor at Texas A&M’s Center for Remote Healthcare Technologies and Systems, has done similar work, though it involved building the sensors rather than using off-the-shelf devices as the ASU group did.

He says there are a number of issues that would have to be solved to make something like Sceptre work for consumers. Typically, when you place electromyography sensors on the body, the system using them has to be calibrated unless they’re in the exact same location they were in previously, he says. There’s also a need to account for variations that naturally occur in the ways people sign the same things. Despite these obstacles, he says, “I think we are moving in the right direction.”
