Gestural Interfaces Go Mainstream

Taking control of computers with our hands and bodies is set to become commonplace.
November 8, 2011

Starting with the handheld controllers introduced by the Nintendo Wii console in 2006, gamers have been able to control computers by making gestures in the air rather than with joysticks, game pads, or keyboards. Microsoft brought the technology to the next level in 2010 with the release of the Kinect, allowing Xbox consoles to be operated without any controllers at all: arm and body motions suffice. Now gestural interfaces are beginning to spread to other areas. In particular, they have the potential to change the way consumers interact with their televisions.

Depth of View: The Kinect uses a camera system that can tell how far away objects are, allowing it to identify an arm against a more distant background, for example. This information can be used to recognize gestures.
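For illustration only, here is a minimal sketch of the kind of depth-based segmentation the caption describes: given a per-pixel depth map from a Kinect-style camera, pixels within an assumed "arm-range" band are separated from the more distant background. The function name, threshold values, and synthetic data are hypothetical; a real gesture pipeline would go on to track the segmented region over time.

```python
import numpy as np

def segment_foreground(depth_mm, near_mm=500, far_mm=1200):
    """Return a boolean mask of pixels whose depth falls within an
    assumed arm-range band, separating them from the farther background.
    depth_mm is a 2-D array of per-pixel distances in millimetres."""
    valid = depth_mm > 0  # a value of 0 typically means "no depth reading"
    return valid & (depth_mm >= near_mm) & (depth_mm <= far_mm)

# Example: a synthetic 4x4 depth map with a "hand" at about 0.8 m
# in front of a wall at about 2.5 m.
depth = np.full((4, 4), 2500)
depth[1:3, 1:3] = 800
mask = segment_foreground(depth)
print(mask.astype(int))
```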

The first demonstrations of what gestural interfaces could offer beyond gaming came from enterprising hackers, who used a Wii controller to steer a Roomba robotic vacuum, and from academic researchers, including those in Microsoft’s labs, who adapted the Kinect to create a 3-D model of a user’s whole body. Analyst firm Markets & Markets estimates that the market for the hardware and software components needed to enable gesture recognition in products such as the Kinect was worth $200 million in 2010 and will be worth $625 million by 2015.

Aviad Maizels, founder of PrimeSense, the Israeli company that supplies the Kinect’s gesture-sensing hardware, says he is most excited about the potential for controlling nongaming technology in the living room. “We’re really focused on the living room because it really needs to change,” he says. Maizels points out that previous attempts to integrate computers into television watching, such as Google TV, have been hamstrung by the need for complicated remote controls that often incorporate a keyboard.

Early this year PrimeSense announced a partnership with the Chinese computer manufacturer Asus to make a product called WAVI Xtion, a device similar to the Kinect that’s intended to control a PC serving up multimedia content to a TV. Maizels says that PrimeSense is also working on the next generation of its hardware, which is being developed with nongaming digital applications in mind and will support new kinds of gestural controls specifically suited for that purpose.


Daniel Simpkins, founder and CEO of Hillcrest Labs, which develops motion-sensing technology used by companies including LG, Broadcom, and Logitech, cites LG as the manufacturer making the greatest strides toward bringing gesture control to the living room. LG’s Magic Motion remote control is compatible with LG’s latest televisions and, thanks to Hillcrest’s sensor technology, has only a fraction as many buttons as most other remotes. A user can control the TV by moving the remote around like a Wii controller, using gestures to interact with an on-screen interface. Simpkins claims his technology provides an easier introduction to gestural control both for consumers and for television manufacturers trying to incorporate the technology: “It gives familiarity to people as they move from a world where they just push buttons on a remote,” he says, “and it also allows you to pass the baton so that one person is in control.” No one has yet designed an intuitive way for a PrimeSense-style system to know which person’s movements to follow when, say, a family watches TV together.

Looking further ahead, the controller-free approach has the potential to take gesture control far beyond the living room. The Belgian company SoftKinetic offers 3-D cameras with capabilities similar to those of the Kinect; Disney and other companies have used them to create interactive billboard ads that let passers-by explore video clips and play games. Israeli startup EyeSight makes apps that bring simple gesture recognition to smartphones and tablets with front-facing cameras, making it possible to dismiss an unwanted call with a hand wave.

Maizels says that PrimeSense’s technology could find uses in cars, too, providing a simple way to control entertainment or deal with incoming phone calls. Improvements to the software that processes the data from the gesture-sensing hardware will make it possible for very precise, or even subconscious, body language to be tracked. “There’s a lot more that can be extracted from the data we collect,” he says.
