Smart watches aren’t exactly models of efficiency—they require one hand to operate while being worn on the other. A team of Carnegie Mellon University researchers figured there had to be a better way to pinch, swipe, and click using just the hand that wears the watch.
They tapped into existing smart-watch sensors such as gyroscopes and accelerometers and, using machine learning, taught an off-the-shelf Samsung Galaxy Gear smart watch to recognize five gestures performed by the hand wearing it. These included relatively fine gestures like pinch and tap, plus larger hand movements such as rub, squeeze, and wave.
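The article doesn’t describe the team’s actual pipeline, but the general approach—summarizing windows of motion-sensor readings into features and matching them against learned gesture profiles—can be illustrated with a minimal sketch. The feature choices, the nearest-centroid classifier, and the sample values below are all illustrative assumptions, not the researchers’ method:

```python
import math

# The five gestures reported in the study.
GESTURES = ["pinch", "tap", "rub", "squeeze", "wave"]

def features(window):
    """Summarize one window of accelerometer magnitudes as (mean, variance)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var)

def train_centroids(labeled_windows):
    """Average the feature vectors per gesture label (nearest-centroid 'training')."""
    sums = {}
    for label, window in labeled_windows:
        f = features(window)
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += f[0]
        s[1] += f[1]
        s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def classify(window, centroids):
    """Assign a window to the gesture whose feature centroid is closest."""
    f = features(window)
    return min(centroids, key=lambda g: math.dist(f, centroids[g]))
```

In practice a system like this would use many more features (per-axis statistics, frequency-domain energy) and a stronger classifier, but the shape of the problem—turning raw gyroscope and accelerometer streams into a small set of labeled gestures—is the same.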
"We wanted to do gestures that are not awkward for people to do while they’re on their smart watch,” says Julian Andres Ramos Rojas, a PhD student in Carnegie Mellon’s Human-Computer Interaction Institute.
The research team turned to 10 volunteers to test the accuracy of its system. Overall, it was about 87 percent accurate. Rojas estimates that a commercial version would have to be more than 95 percent accurate.
Smart watches in their modern form are still relatively new, and their makers are experimenting with different input methods. These often involve bulky buttons or lots of tapping and scrolling with the user’s finger.
Gesture recognition would dramatically improve the functionality of smart watches, which have struggled to take off. Tying up both hands to interact with a smart watch often doesn’t make much sense. Someone with dirty hands might not want to touch his screen just to change the music.
“[There] are times when speed, subtlety, hands-free interaction, or pure enjoyment call for gesture control,” according to Stephen Lake, CEO of gesture control armband maker Thalmic Labs. “When we have the opportunity to further immerse ourselves into an experience or provide a shortcut to the alternative, gesture control can prevail.”
Smart-watch makers are in the early stages of adding gesture control. Late last year, Google made it possible to add a few gesture controls to Android smart watches. Users can flick, shake, or lift their wrist to interact. The Carnegie Mellon team’s approach differs in that users can actually make relatively fine finger movements and still have their watch pick them up.
The researchers believe their work could be especially useful in health care, where it could be used by patients with motor neuron diseases. It would also be possible to use the technology to control more than a smart watch; phones, laptops, and virtual- or augmented-reality headsets can also benefit from gesture control.
Rojas says researchers are already looking beyond smart watches for interacting with our devices. Another Carnegie Mellon team is working on turning skin into a touch pad (see “Use Your Arm as a Smart-Watch Touch Pad”). But we aren’t likely to all immediately jump to the most sci-fi solution.
"I don’t think we’re going to settle into a single mode of interaction. Instead, it’s going to be this really nice combination of techniques,” Rojas says.