New software makes it easier to build games controlled by a user’s body position.
The massive success of the Nintendo Wii proved the appeal of motion-controlled gaming. Now Softkinetic, a company based in Belgium, is working to let video-game players use a wider range of more-natural movements to control the on-screen action. Softkinetic’s software is meant to work with depth-sensing cameras, which can be used to determine a player’s body position and motions. “You don’t need a controller in your hand,” says CEO Michel Tombroff. “You don’t need to wear a special outfit. You just come in front of the camera in your living room, and you start playing by moving your entire body.”
Attempts to commercialize gestural interfaces date back to at least the late 1980s and the Power Glove, an accessory for the original Nintendo Entertainment System. Many such systems, however, have been defeated by the need for awkward, bulky accessories; others just didn’t work that well.
The Wii controller was the field’s first success. But the motions it requires can sometimes feel stiff and unnatural, and it’s sensitive only to gestures made by the hand in which it’s held. Depth-sensing cameras, on the other hand, can pick up gestures made by a variety of body parts, Tombroff says. They can also be tuned to pick up motions more precisely. Designing programs that work with the cameras, however, is difficult: translating depth measurements into a map of a human figure, and determining what motions that figure is making, are computationally daunting tasks. This is where Softkinetic comes in.
Softkinetic’s technology started out as research at the University of Brussels, in Belgium, aimed at exploring the user interfaces made possible by stereoscopic cameras, which sense depth by using two input sources, in much the way that the human brain perceives depth by comparing data from two eyes. The group created Softkinetic in mid-2007 and has adapted its research to work with newer depth-sensing cameras as well. Tombroff explains that the newer cameras have better commercial prospects because they’ve done away with the need for two input sources. As a consequence, they’re smaller, with cheaper parts, and easier to incorporate into existing devices such as laptops.
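The stereoscopic principle the research began with can be reduced to a single triangulation formula: depth is the camera's focal length times the baseline between the two lenses, divided by the disparity (how far a point shifts between the two views). The sketch below is purely illustrative, not Softkinetic's code, and the camera parameters are made-up values.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate depth the way a stereo camera does: a nearby point
    shifts more between the two views than a distant one."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 50 px between the views of a camera with a 700 px
# focal length and a 10 cm baseline sits 1.4 m away.
print(depth_from_disparity(700, 0.10, 50))  # → 1.4
```

Note the dependence on two input sources: finding the disparity requires matching each point across both views, which is part of what made the newer single-source cameras attractive.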
Tombroff says that the new cameras sense depth by using infrared light in one of two ways. In the first, the camera emits infrared light and receives its reflections off objects in the room; comparing the emitted and returned signals yields the position and depth of objects around the camera. Alternatively, the camera projects a grid of infrared light onto the room and calculates the positions of objects from how the grid is distorted.
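The first of those approaches, commonly called time-of-flight, comes down to simple arithmetic: light travels to the object and back, so depth is half the round-trip distance. A minimal sketch, with an assumed round-trip time rather than real sensor data:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds):
    """Time-of-flight depth: the infrared pulse travels out to the
    object and back, so depth is half the round-trip distance."""
    return C * round_trip_seconds / 2

# A reflection arriving 20 nanoseconds after emission puts the
# object roughly 3 m away -- living-room scale.
print(tof_depth(20e-9))
```

The nanosecond timescales involved are why this measurement happens in dedicated sensor hardware rather than in software.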
Whatever the specific depth-sensing tactics of a given camera, Tombroff says, Softkinetic aims to be “a bridge between the end product and the hardware, which is the camera.” To that end, the company’s software is built to work with the four major depth-sensing cameras on the market, including the ZCam from 3DV Systems, a finalist in the Best of CES awards given earlier this year at the Consumer Electronics Show in Las Vegas. With Softkinetic’s software, game designers can avoid retooling their applications for each of those cameras.
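A hardware-abstraction bridge of this kind is a common software pattern: each camera driver is wrapped behind one shared interface, so game code never touches vendor-specific APIs. The sketch below is a generic illustration of that pattern, with hypothetical class and method names, not Softkinetic's actual API:

```python
from abc import ABC, abstractmethod

class DepthCamera(ABC):
    """Common interface every camera driver implements, so games are
    written against this class rather than any one vendor's hardware."""
    @abstractmethod
    def read_depth_frame(self):
        """Return a 2-D grid of depths in metres."""

class FakeTofCamera(DepthCamera):
    """Stand-in driver returning a canned frame, for illustration."""
    def read_depth_frame(self):
        return [[2.0, 2.1], [1.9, 2.0]]

def nearest_point_m(camera):
    """Game-side code: works with any DepthCamera, unchanged."""
    frame = camera.read_depth_frame()
    return min(min(row) for row in frame)

print(nearest_point_m(FakeTofCamera()))  # → 1.9
```

Swapping in a different camera then means writing one new driver class, not retooling the game.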
But Tombroff adds that interpreting data from different types of hardware isn’t the heaviest lifting that Softkinetic does. The software’s chief value, he says, is that it can “classify the scene so we know how to find the player and remove the rest, and reconstruct the person’s structure.” The first half of that task means filtering a great deal of noise out of the signal: “We need to zoom in on the important thing, which, for video games, is you, the player, and not the person next to you sitting on the couch and making fun of you.” The second half means building a 3-D volume from the fuzzy cloud of points the camera detects and identifying the body parts that matter to a given application. So instead of interacting directly with the depth map produced by the camera, designers get information from Softkinetic’s software about which body parts are moving and how quickly. The company has also identified sets of gestures people commonly make when trying to control a program in a particular way.
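The “find the player and remove the rest” step can be caricatured in a few lines: assume the player is the nearest thing in the scene and keep only depths within a band of that nearest point, masking out the couch and bystanders behind. This is a toy sketch of the idea, nothing like the company's actual classifier:

```python
def segment_player(depth_map, band_m=0.8):
    """Crude player segmentation: keep depths within band_m of the
    nearest point (assumed to be the player); mask the rest as None."""
    nearest = min(d for row in depth_map for d in row)
    return [[d if d - nearest <= band_m else None for d in row]
            for row in depth_map]

scene = [
    [1.6, 1.7, 3.2],   # player up front, couch at ~3 m
    [1.8, 1.9, 3.1],
]
print(segment_player(scene))  # → [[1.6, 1.7, None], [1.8, 1.9, None]]
```

The real problem is far harder: the nearest object isn't always the player, and fitting a skeleton to the surviving point cloud is the computationally daunting part the article describes.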
Anind Dey, an assistant professor at the Human-Computer Interaction Institute, at Carnegie Mellon University, says that Softkinetic’s technology is particularly exciting because of the potential for full-body interaction. While he notes that all software of this type must strike a balance between doing too much for developers, which can stifle their creativity, and doing too little, which leaves them to reinvent the wheel, he is enthusiastic about its prospects. “If the technology works as they’re claiming it works,” says Dey, “I think it’s a really exciting thing for the field, and not just for gaming.” For example, Dey says, knowing a person’s body position could help with applications such as health-care monitoring in the home, or other applications in the field of ubiquitous computing.
Tombroff says that Softkinetic has built 12 to 15 sample games in-house and is now working with game developers to help them understand the technology and what they can do with it.