Gestural Interfaces Go Mainstream
Taking control of computers with our hands and bodies is set to become commonplace.
Starting with the handheld controllers introduced by the Nintendo Wii console in 2006, gamers have been able to control computers by making gestures in the air rather than with joysticks, game pads, or keyboards. Microsoft brought the technology to the next level in 2010 with the release of the Kinect, allowing Xbox consoles to be operated without any controllers at all: arm and body motions suffice. Now gestural interfaces are beginning to spread to other areas. In particular, they have the potential to change the way consumers interact with their televisions.
The first demonstrations of what gestural interfaces could offer beyond gaming came from enterprising hackers, who used a Wii controller to steer a Roomba robotic vacuum, and from academic researchers, including those in Microsoft's labs who adapted the Kinect to create a 3-D model of a user's whole body. Analyst firm Markets & Markets estimates that the market for the hardware and software components needed to enable gesture recognition in products such as the Kinect was worth $200 million in 2010 and will be worth $625 million by 2015.
Aviad Maizels, founder of PrimeSense, the Israeli company that supplies the Kinect’s gesture-sensing hardware, says he is most excited about the potential for controlling nongaming technology in the living room. “We’re really focused on the living room because it really needs to change,” he says. Maizels points out that previous attempts to integrate computers into television watching, such as Google TV, have been hamstrung by the need for complicated remote controls that often incorporate a keyboard.
Early this year PrimeSense announced a partnership with the Chinese computer manufacturer Asus to make a product called WAVI Xtion, a device similar to the Kinect that’s intended to control a PC serving up multimedia content to a TV. Maizels says that PrimeSense is also working on the next generation of its hardware, which is being developed with nongaming digital applications in mind and will support new kinds of gestural controls specifically suited for that purpose.
Daniel Simpkins, founder and CEO of Hillcrest Labs, which develops motion-sensing technology used by companies including LG, Broadcom, and Logitech, cites LG as the manufacturer making the greatest strides toward bringing gesture control to the living room. LG's Magic Motion remote control is compatible with LG's latest televisions and, thanks to Hillcrest's sensor technology, has only a fraction as many buttons as most other remotes. A user can control the TV with gestures, moving the remote around like a Wii controller to interact with an on-screen interface. Simpkins claims his technology provides an easier introduction to gestural control both for consumers and for television manufacturers trying to incorporate the technology: "It gives familiarity to people as they move from a world where they just push buttons on a remote," he says, "and it also allows you to pass the baton so that one person is in control." No one has yet designed an intuitive way for a PrimeSense-style system to know which person's movements to follow when, say, a family watches TV together.
Looking further ahead, the controllerless approach has the potential to take gesture control far beyond the living room. The Belgian company SoftKinetic offers 3-D cameras with capabilities similar to those of the Kinect; Disney and other companies have used them to create interactive billboard ads that let passers-by explore video clips and play games. Israeli startup EyeSight makes apps that bring simple gesture recognition to smartphones and tablets with front-facing cameras, making it possible to dismiss an unwanted call with a wave of the hand.
Maizels says that PrimeSense’s technology could find uses in cars, too, providing a simple way to control entertainment or deal with incoming phone calls. Improvements to the software that processes the data from the gesture-sensing hardware will make it possible for very precise, or even subconscious, body language to be tracked. “There’s a lot more that can be extracted from the data we collect,” he says.