MIT Technology Review



“We spent a lot of time trying to figure out how to get the user to calibrate the device in an appropriate way,” says Tan. The software learns to recognize EMG signals produced as the user performs gestures in a specific, controlled way.

The algorithms focus on three specific features from the EMG data: the magnitude of muscle activity, the rate of muscle activity, and the wave-like patterns of activity that occur across several sensors at once. These three features, says Tan, provide a fairly accurate way to identify certain types of gesture. After training, the software could accurately determine many of the participants’ gestures more than 85 percent of the time, and some gestures more than 90 percent.
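The article does not give the researchers' exact feature definitions, but the three feature types it names can be illustrated with a minimal sketch. Assuming a windowed, multichannel EMG recording as a NumPy array (the sampling rate, channel count, and the specific formulas below are illustrative assumptions, not the team's method):

```python
import numpy as np

def emg_features(window, fs=1000.0):
    """Sketch of the three feature types described in the article:
    per-channel magnitude, rate of muscle activity, and wave-like
    patterns across sensors. `window` is an (n_samples, n_channels)
    array holding one short slice of multichannel EMG data."""
    # Magnitude of muscle activity: root-mean-square per channel.
    rms = np.sqrt(np.mean(window ** 2, axis=0))

    # Rate of muscle activity: mean absolute first difference per
    # channel, scaled by the sampling rate (a crude rate proxy).
    rate = np.mean(np.abs(np.diff(window, axis=0)), axis=0) * fs

    # Wave-like activity across sensors: pairwise correlations
    # between channels, taken from the upper triangle of the
    # channel-by-channel correlation matrix.
    corr = np.corrcoef(window.T)
    upper = np.triu_indices_from(corr, k=1)
    cross = corr[upper]

    return np.concatenate([rms, rate, cross])

# Example: a 200 ms window from 8 forearm sensors sampled at 1 kHz.
rng = np.random.default_rng(0)
feats = emg_features(rng.standard_normal((200, 8)))
print(feats.shape)  # 8 + 8 + 28 = (44,) features per window
```

A feature vector like this, computed per window, is the kind of input a standard classifier could be trained on to map muscle activity to gesture labels.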

Especially in the early stages of training, a participant’s gestures need to be carefully guided to ensure that the machine-learning algorithms are trained correctly. But Tan says that even with a small amount of feedback, test subjects “would fairly naturally adapt and change postures and gestures to get drastically improved performance.” He says that this feedback loop, in which users adjust their movements until they trigger the appropriate response from the system, became an important part of the training process.

“Most of today’s computer interfaces require the user’s complete attention,” says Pattie Maes, professor of media arts and sciences at MIT. “We desperately need novel interfaces such as the one developed by the Microsoft team to enable a more seamless integration of digital information and applications into our busy daily lives.”

Tan and colleagues are now working on a prototype that uses a wireless band that can easily be slipped onto a person’s arm, as well as a “very quick training system.” The researchers are also testing how well the system works when people walk and run while wearing it.

Ultimately, says Tan, full-body control will lead to fundamentally new ways of using computers. “We know it has something to do with gestures being mobile, always available, and natural, but we’re still working on the exact paradigm,” he says.


Credit: Microsoft
Video by Microsoft

Tagged: Computing, software, sensors, machine learning, muscle, gesture recognition, computer interfaces
