MIT Technology Review

The Microsoft Kinect, a sensor that works with the Xbox 360 game console, offers the first experience most people will have with a “natural” user interface. A player controls the $150 device with voice and gestures; there’s no need to hold any sort of controller or wear any special gloves or clothing. In a recent talk at MIT, Microsoft’s chief research and strategy officer, Craig Mundie, described the Kinect as a preview of what’s to come for user interfaces, suggesting that what works in gaming now will soon be used for shopping, design, and many other common computing tasks. Instead of thinking about controllers, keyboards, and other “application-specific prosthetics,” Mundie said, people could focus on the task at hand, making software much more appealing and easy to use.

But while using the Kinect for gaming is a fun and interesting experience, the device also illustrates that natural user interfaces have a long way to go before they are suited to most everyday applications.

The Kinect uses both software and hardware to pick up a person’s position, gestures, and voice. To measure position, it emits an infrared beam and measures how long that light takes to bounce back from objects it encounters. Four microphones can receive voice commands, and software filters out background noise and even conversation from other people in the room.
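The round-trip measurement described above can be sketched in a few lines: if the sensor knows how long the emitted infrared light took to return, the distance to the object is half the round-trip path. The timing value below is illustrative, not an actual Kinect specification.

```python
# Hedged sketch of time-of-flight depth sensing, as described above:
# depth follows from the round-trip travel time of emitted infrared
# light. The example timing is illustrative, not a Kinect spec.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(seconds):
    """The light travels to the object and back, so the one-way
    distance is half the round-trip path."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A return after ~13.3 nanoseconds puts the object about 2 m away.
print(round(depth_from_round_trip(13.34e-9), 2))  # prints 2.0
```

The nanosecond-scale timings involved are why this kind of sensing requires dedicated hardware rather than ordinary camera electronics.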

Since all these systems need to be calibrated, setting up the Kinect takes some time. After you connect the sensor to an Xbox 360 and position it near the center line of a television, the Kinect’s motors automatically adjust its angle so that it can get a complete picture of the user.

The Kinect also needs a lot of space. It needs to be able to see the floor as a reference point for objects in the room, and the user has to stand at least six feet from the device (eight feet if two people plan to use it).
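The distance requirement falls out of simple camera geometry: the user must stand far enough back for the sensor's vertical field of view to take in a full body. The field-of-view angle below is an assumption for illustration, not a published Kinect figure.

```python
import math

# Hedged sketch of why the sensor needs standing room: to fit a full
# body in frame, the user must stand back far enough for the camera's
# vertical field of view to span their height. The ~43-degree angle
# is an illustrative assumption, not a published Kinect figure.

def min_distance(subject_height_m, vertical_fov_deg):
    """Distance at which the vertical field of view just spans the
    subject: height = 2 * d * tan(fov / 2), solved for d."""
    half_fov = math.radians(vertical_fov_deg) / 2.0
    return subject_height_m / (2.0 * math.tan(half_fov))

# A 1.8 m (about six-foot) player with a 43-degree vertical field
# of view needs to stand roughly 2.3 m back -- on the order of the
# six-to-eight-foot distances described above.
print(round(min_distance(1.8, 43.0), 2))
```

A wider-angle lens would shorten this distance, but at the cost of optical distortion and lower per-pixel resolution on the player's body.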

The Kinect also tests the sound levels in the room and adjusts for noise coming from the television's speakers. If anything changes in the room—if furniture is moved, for instance, or the sound environment changes significantly—the device is thrown off and needs to be recalibrated.
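Subtracting the television's output from what the microphones hear is a classic adaptive-filtering problem. The sketch below uses a generic least-mean-squares (LMS) filter for illustration; it is not Microsoft's actual algorithm, and the signals are toy data.

```python
# Hedged sketch of the kind of adaptive filtering a sensor can use to
# subtract a known speaker signal from what its microphones pick up.
# Generic least-mean-squares (LMS) filtering, shown for illustration
# only -- not Microsoft's actual algorithm.

def lms_cancel(reference, observed, taps=4, mu=0.05):
    """Adapt filter weights so the filtered reference signal tracks
    the observed signal; the residual is the cleaned-up estimate of
    everything that is NOT the speaker output (e.g. voice commands)."""
    weights = [0.0] * taps
    residual = []
    for n in range(len(observed)):
        # Most recent `taps` samples of the reference (speaker) signal.
        window = [reference[n - i] if n - i >= 0 else 0.0
                  for i in range(taps)]
        estimate = sum(w * x for w, x in zip(weights, window))
        error = observed[n] - estimate
        residual.append(error)
        # Nudge the weights to reduce the error next time.
        for i in range(taps):
            weights[i] += 2.0 * mu * error * window[i]
    return residual

# Toy check: if the microphone hears only a scaled copy of the speaker
# signal, the residual shrinks toward zero as the filter adapts.
ref = [1.0, -1.0] * 50
obs = [0.5 * x for x in ref]
out = lms_cancel(ref, obs)
print(abs(out[-1]) < abs(out[0]))  # prints True
```

The catch, as the article notes, is that such filters adapt to a particular acoustic environment; move the furniture and the learned echo path no longer matches the room.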

All this means that as an everyday interface, the Kinect would make little practical sense. Its space requirements strain the capacity of a typical urban apartment. If Microsoft wants to make natural user interfaces accessible to everyone, it will have to consider the needs of the dorm room and the cubicle. The calibration process is also too finicky to make the Kinect useful for any critical application. Users would never tolerate needing to recalibrate in order to check e-mail.


Credit: Microsoft

Tagged: Computing, Microsoft, video games, voice, gesture interfaces, Human-Computer Interaction
