Leaping Into the Gesture-Control Era
Technology that accurately tracks finger motions could revolutionize desktop and mobile computing.
A trip to any big electronics store this fall will tell you that computer makers from Samsung to Microsoft think laptop and desktop computers need touch screens. But that notion could seem outdated by early next year—thanks to the launch of a matchbox-sized device that adds intuitive gesture control to any computer. The technology, which is also being adapted for mobile devices, could even leave the beloved pocket touch screen looking outmoded.
Leap Motion has racked up millions of views with a demo video of its gesture-control technology and is taking orders for the $70 device, due to ship in early 2013. A demo of the technology at the startup’s offices last week showed how mid-air swipes, pokes, and grabs could control 3-D environments and existing software such as the game Fruit Ninja.
The black glass on the Leap’s upper side hides two small cameras and a handful of infrared LEDs, which track the motion of a person’s fingers to an accuracy of a hundredth of a millimeter, says the company’s cofounder and CEO, Michael Buckwald.
Buckwald says that Leap provides the solution to “gorilla arm,” a term used to describe the dubious ergonomics of repeatedly lifting one’s hands from the keyboard or mouse and reaching out to operate a computer’s touch screen. Users of Leap’s device can lift their hands just slightly off the keyboard and make more economical gestures with their fingers.
“If you’re controlling a cursor [with Leap], you don’t have to move one-to-one with the screen, like you do with touch,” says Buckwald, so a small finger motion can traverse a much larger distance on screen. This usually makes it significantly faster than using a mouse and keyboard, he says.
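Buckwald is describing what interface designers call relative cursor mapping with gain: rather than tying finger position one-to-one to screen position, each small finger displacement is multiplied by a gain factor. The sketch below illustrates the idea; the function name and the gain value are hypothetical, not taken from Leap Motion’s software.

```python
GAIN = 8.0  # screen pixels moved per millimeter of finger travel (illustrative)

def move_cursor(cursor, finger_delta_mm, gain=GAIN):
    """Translate a finger displacement (mm) into a larger cursor displacement (px)."""
    dx, dy = finger_delta_mm
    x, y = cursor
    return (x + dx * gain, y + dy * gain)

# A 25 mm flick of the finger traverses 200 px of screen:
print(move_cursor((100.0, 100.0), (25.0, 0.0)))  # (300.0, 100.0)
```

Because the gain can be tuned in software, the same small motion can cross an entire display, which is why Buckwald argues the approach can outpace a mouse.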
The value of the approach is already apparent to some computer manufacturers. “We’re working with lots of consumer OEMs and for laptops,” says Buckwald, “but also automotive and medical companies.”
Leap Motion’s chief technology officer and cofounder David Holz adds that mobile devices will also get the technology. “We’re trying to integrate into smartphones and tiny things like that,” he says. “There will also be new devices—you can’t put a keyboard on a head-mounted display.”
Holz says touch screens soared in popularity because they are more intuitive to use than keyboards and mice, but believes they are limited in a way the Leap is not. “The fact is that you can’t really do anything with a tablet, with tap and swipe, but it feels natural,” he says, meaning that people love touch screens but can’t easily create content using them. “We have that same natural experience but we have more power.”
Independent verification of Leap Motion’s potential has begun to trickle out in recent days, as the company has sent thousands of preproduction units to software developers interested in creating applications for the technology. Many developers have posted videos of their experiments, and although Leap’s founders stress that these are the results of only short-term work, the videos are impressive. One engineer at National Instruments, for example, took less than 24 hours to create software that allows a small quadrotor aircraft to be controlled by the position and angle at which a person holds a hand.
“It was incredibly easy to take the code that was provided, interface with it, and start creating demos,” says Milan Raj, who built that demo. “What’s even more exciting is that since the [developer’s kit] is still in the preview stage, more features are being added that make capturing specific motions even easier.”
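The kind of mapping Raj’s demo implies, from a tracked hand’s height and tilt to flight commands, can be sketched in a few lines. Everything here is hypothetical: none of the names come from the Leap SDK or from National Instruments’ code, and the clamping ranges are illustrative.

```python
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def hand_to_setpoints(palm_height_mm, pitch_deg, roll_deg):
    """Map a tracked hand's height and tilt to normalized quadrotor commands."""
    thrust = clamp(palm_height_mm / 400.0, 0.0, 1.0)  # raise hand -> climb
    pitch = clamp(pitch_deg / 45.0, -1.0, 1.0)        # tilt forward/back
    roll = clamp(roll_deg / 45.0, -1.0, 1.0)          # tilt left/right
    return {"thrust": thrust, "pitch": pitch, "roll": roll}

# A hand held 200 mm above the sensor, tilted 22.5 degrees forward:
print(hand_to_setpoints(200.0, 22.5, 0.0))
# {'thrust': 0.5, 'pitch': 0.5, 'roll': 0.0}
```

The appeal of the device for developers is that the hard part, recovering the hand’s position and angle, is handled by the tracking software; the application-side logic reduces to simple mappings like this one.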
Leap may rely heavily on ideas from third-party developers to make its novel interface compelling. And the company will operate an app store to provide a central resource for Leap-capable software.
Holz says that while the first apps will be relatively simple, like a photo-browsing demo created by another developer, over the longer term the Leap will be used for much more complex interactions. “You’ll be reaching into a 3-D world and grabbing hold of and moving things.”
One of the most impressive parts of an in-person demo by Holz was when he started working on a chunk of simulated clay. He reached out and pushed and pulled with his fingers in the air in front of the monitor, creating a stylized human head in about a minute.
Juan Wachs, an assistant professor at Purdue University who builds gestural interfaces to help surgeons work with robots in the operating room, says gestures can help remove a barrier between people and their technology. “If gestures are well designed, they can be intuitive [and] easy to remember and perform,” he says, adding that demos of the Leap interface look “amazing.”
Wachs says if Leap Motion makes it easy to recognize custom gestures, it would speed up the development time of projects like his own. “We are looking forward to the release to purchase a few for my lab.”
Leap Motion’s closest competitor may be Kinect, Microsoft’s body-tracking sensor for the Xbox games console. However, although software developers quickly showed how it could be used for more than gaming (see “Hackers Take the Kinect to New Levels”), Microsoft waited almost a year before releasing tools to encourage such experimentation. A version of Kinect designed to bring gesture control to Windows desktop and laptop computers is now available for developers but not consumers (“Microsoft’s Plan to Bring About an Era of Gesture Control”).
Leap’s founders won’t share exact details of their technology, but Holz says that unlike the Kinect, the Leap doesn’t project a grid of infrared points onto the world and track them to figure out what is moving and where.
Despite having two cameras, the Leap does not use stereovision techniques to determine depth, says Holz. Instead, the second camera provides an extra source of information, preventing errors when one part of a person’s hand obscures another, or obscures the other hand.
The $70 device contains two relatively simple circuit boards, with the largest chip being the one that handles the USB connection. All of the processing needed for gesture tracking is done by driver software installed on a user’s computer. “The hardware does almost no work,” says Holz. “The goal was to use the least amount of hardware possible.”