PC Makers Bet on Gaze, Gesture, Voice, and Touch

PC makers hope that new ways of interacting with computers will boost sales.
January 10, 2013

Products that could make it common to control a computer, TV, or something else using eye gaze, gesture, voice, and even facial expression were launched at the Consumer Electronics Show in Las Vegas this week. The technology promises to make computers and other devices easier to use, let devices do new things, and perhaps boost the prospects of companies reliant on PC sales. Industry figures suggest that interest in laptop and desktop computers is waning as consumers’ heads are turned by smartphones and tablets.

Hands off: Visitors to Intel’s stand at CES could try its gesture-control technology.

Intel led the charge, using its press briefing Monday to announce a new webcam-like device and supporting software intended to bring gesture, voice control, and facial expression recognition to PCs.

“This will be available as a low-cost peripheral this year,” said Kirk Skaugen, vice president for Intel’s PC client group. “Rest assured that Intel’s working to integrate this with all-in-ones and Ultrabooks, too.”

Intel also announced that, before the end of the year, it would release software that adds a voice-activated assistant to PCs, powered by technology from voice-recognition company Nuance.

Intel’s new gesture-sensing hardware device, made in partnership with the software company SoftKinetic and webcam maker Creative, has a combination of conventional and infrared cameras, and several microphones. The supporting software enables applications on a computer to track each of a person’s 10 fingers, recognize faces, and interpret words spoken in nine languages.

At Intel’s booth it was possible to try the technology out with a version of the game Portal modified to be controlled by hand gestures. Objects in the game could be manipulated using grasping motions in space and moving a hand relative to the screen. The experience was smooth and easy even the first time. Researchers from Intel’s perceptual computing research group say that the facial tracking functions could be used to infer emotions from a person’s expression, and that its infrared capabilities can measure heart rate by observing blood flow in a person’s face.

Software developers can already download Intel’s enabling software and ask the company to send one of the prototype devices, a move intended to encourage the development of applications that support new forms of interaction.

But even if Intel delivers on Skaugen’s promise to release a consumer version of the hardware “this year,” the company will lag behind the startup company Leap Motion (see “Leaping Into the Gesture-Control Era”). That company says its $70 device will begin shipping in the first quarter of this year, and it will also be bundled with some Asus laptops (see “Asus Laptops to Ship With Gesture Control”).

Neither Asus nor Leap Motion showed the technology at CES. But PrimeSense, the company that provides the hardware for the Xbox Kinect, the gadget that introduced gesture control to consumers, hinted at its own ambitions to empower PC users by announcing a new, smaller version of its 3-D gesture-sensing hardware, called Capri.

“Capri is about 10 times smaller than PrimeSense’s current generation of 3-D sensors,” said Inon Beracha, the company’s CEO, in a statement. “Capri is small enough to fit into today’s most popular smartphones while still providing the highest depth performance at short and long range.”

PrimeSense made no announcements of deals to include the technology in any PCs or mobile devices, but it has a close relationship with Microsoft and has helped the company make a version of Kinect for PCs (see “Microsoft’s Plan to Bring About an Era of Gesture Control”). If that partnership extends to the new hardware, Microsoft could help introduce it to PCs and mobile devices.

Intel made no mention of its own gesture-sensing technology appearing in mobile devices, and Leap Motion will say only that its technology could be scaled down enough for that in the future.

Two companies at CES touted gaze control as a crucial feature of future PCs and other gadgets.

Tobii, a Swedish company, introduced a standalone USB device called the Rex that allows any Windows 8 PC to track eye movement. The small black box is initially being made available to software developers, but will go on general sale late in 2013. Tobii’s eye-tracking technology shines infrared light at a PC user and tracks its reflection in the user’s pupils.

EyeTech, a smaller company based in the U.S. that has previously focused on users unable to operate mice and keyboards, showed similar technology, touting a new sensor that can be integrated into PC peripherals, large desktop computers, and TVs.

“The front runners are games and gaming,” said Peter Tiberg, who leads business development for Tobii, when asked about the use cases for the technology. It might also be applied to complex programs for creating and editing content, such as computer-aided design packages, he said. “Adding gaze for complex tasks could make everyday usage become easier,” said Tiberg.

Intel owns a stake in Tobii and said this week it was counting on the company to make eye tracking a standard feature of laptop computers. “Intel want us to make it smaller, to make sure that we can put it into future Ultrabooks,” said Tiberg. A more compact version of the technology that would make that possible will be ready in around a year, he said. Shrinking it enough to use in smartphones is a more distant prospect.

Tobii showed laptops connected to Rex devices at CES, and let visitors try selecting icons and switching between applications with their eyes, and watch text scroll automatically when their gaze reached the bottom of a page.

One of the most intuitive demonstrations involved selecting an on-screen object by looking at the target and tapping the spacebar, which allowed for speedier selection than pointing a mouse cursor and double-clicking. A maps application provided another example: the user focused his gaze on a spot, and scrolling the mouse wheel zoomed the map in on that location.
