MIT Technology Review

It’s a good time to be communicating with computers. No longer are we constrained by the mouse and keyboard: touch screens and gesture-based controllers are becoming increasingly common. A startup called Emotiv Systems even sells a cap that reads brain activity, allowing the wearer to control a computer game with her thoughts.

Now, researchers at Microsoft, the University of Washington in Seattle, and the University of Toronto in Canada have come up with another way to interact with computers: a muscle-controlled interface that allows for hands-free, gestural interaction.

A band of electrodes attaches to a person’s forearm and reads electrical activity from different arm muscles. These signals are then correlated with specific hand gestures, such as touching a finger and thumb together or gripping an object more tightly than normal. The researchers envision using the technology to change songs on an MP3 player while running, or to play a game like Guitar Hero without the usual plastic controller.
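In application terms, the uses envisioned above amount to mapping each recognized gesture to a command. A minimal dispatch-table sketch; the gesture and command names here are invented for illustration and do not come from the research:

```python
# Hypothetical mapping from recognized gestures to MP3-player commands.
ACTIONS = {
    "pinch_index_thumb": "next_track",
    "pinch_middle_thumb": "previous_track",
    "squeeze": "play_pause",
}

def on_gesture(gesture):
    """Return the player command for a recognized gesture,
    ignoring anything the recognizer can't name."""
    return ACTIONS.get(gesture, "ignore")
```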

Muscle-based computer interaction isn’t new. In fact, the muscles near an amputated or missing limb are sometimes used to control mechanical prosthetics. But while researchers have explored muscle-computer interaction for nondisabled users before, the approach has had limited practicality: inferring gestures reliably from muscle activity is difficult, so such interfaces have often been restricted to sensing a narrow range of gestures or movements.

The new muscle-sensing project is “going after healthy consumers who want richer input modalities,” says Desney Tan, a researcher at Microsoft. As a result, he and his colleagues had to come up with a system that was inexpensive and unobtrusive and that reliably sensed a range of gestures.

The group’s most recent interface, presented at the User Interface Software and Technology conference earlier this month in Victoria, British Columbia, uses six electromyography (EMG) sensors and two ground electrodes arranged in a ring around a person’s upper right forearm to sense finger movement, plus two sensors on the upper left forearm to recognize hand squeezes. While these sensors are wired and individually placed, their placement isn’t precise; specific muscles aren’t targeted. This means the results should be similar for a thin EMG armband that an untrained person could slip on without assistance, Tan says. The research builds on previous work that used a more expensive EMG system to sense finger gestures when a hand is laid on a flat surface.
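The article doesn’t describe the signal processing, but a common first step in EMG gesture recognition is to summarize each sensor channel over a short time window, for example as a root-mean-square (RMS) amplitude, yielding one feature per channel. A minimal sketch, with the window size and all names being illustrative assumptions rather than details from the paper:

```python
import math

def rms_features(window):
    """Compute one RMS amplitude per EMG channel.

    window: a list of channels, each a list of raw samples from a
    short time slice (e.g. a few tens of milliseconds).
    Returns one feature value per channel.
    """
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

# The prototype would produce eight channels per slice: six EMG
# sensors in a ring on the right forearm plus two on the left.
example = [[0.1, -0.2, 0.15], [0.4, -0.5, 0.45]]  # two channels shown
features = rms_features(example)
```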

The sensors cannot accurately interpret muscle activity straight away; software must first be trained to associate the electrical signals with different gestures. The researchers used standard machine-learning algorithms, which improve their accuracy over time (the approach is similar to the one Tan uses for his brain-computer interfaces).
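The article says only that “standard machine-learning algorithms” are trained on labeled signals. As a stand-in for whatever classifier the researchers actually used, a nearest-centroid classifier shows the train-then-predict shape: average the feature vectors recorded for each gesture during a calibration session, then label each new window by its closest centroid. All names and values below are illustrative:

```python
def train(examples):
    """examples: {gesture_name: [feature_vector, ...]} recorded during
    calibration. Returns one mean (centroid) vector per gesture."""
    centroids = {}
    for gesture, vectors in examples.items():
        n = len(vectors)
        centroids[gesture] = [sum(col) / n for col in zip(*vectors)]
    return centroids

def classify(centroids, features):
    """Label a new feature vector by its nearest centroid
    (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda g: dist(centroids[g]))

# Toy calibration data: two 2-dimensional feature vectors per gesture.
calib = {
    "pinch":   [[0.9, 0.1], [1.1, 0.2]],
    "squeeze": [[0.1, 0.9], [0.2, 1.1]],
}
model = train(calib)
label = classify(model, [1.0, 0.15])  # nearest the "pinch" centroid
```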

Credit: Microsoft
Video by Microsoft

Tagged: Computing, software, sensors, machine learning, muscle, gesture recognition, computer interfaces
