
Intel’s New Interface Idea Is a Mash-up of All the Others

The approach could help keep laptops relevant—if intuitive applications can be found.
January 23, 2013

At this year’s Consumer Electronics Show, chipmaker Intel demoed its latest big idea: “perceptual computing.”

Free form: The writer tries one of the demo applications included with Intel’s Perceptual Computing technology.

What is perceptual computing, exactly? At first glance, it looks like little more than a me-too version of Microsoft’s Kinect: clip a camera-like peripheral onto your Ultrabook, and presto, instant gestural interface!

But unlike Kinect, or competitors like Leap Motion (see “Leap 3-D Out-Kinects Kinect”), perceptual computing isn’t a specific product or platform. Instead, like “cloud computing,” it’s an open-ended vision for what computers should be able to do. With perceptual computing, Intel envisions a new kind of interface for devices that will let users switch fluently between keyboards, trackpads, touch screens, voice commands, and gestures—or use several modes of interaction at once.

“We’re not trying to replace anything. We’re just trying to augment existing modes of interaction,” says Barry Solomon, product planner and strategist at Intel. “We’re adding senses to the computer’s brain so it can perceive its surroundings, who’s interacting with it, and make those interactions more intuitive.”

If a gestural interface is essentially a graphical pointing device on steroids, as seen in Minority Report, a perceptual computing UI would ideally enable interactions more like the ones seen in Star Trek: talking to your computer one moment, tapping a touch screen the next, and so on.

If that sounds ambitious, it has to be: in a practical sense, perceptual computing is Intel’s attempt to keep laptops relevant in a consumer-tech landscape increasingly overtaken by phones, tablets, motion-controlled gaming consoles, and other post-Wintel devices offering novel, intuitive user experiences (see “The Pressure’s On for Intel”).

The Lenovo ThinkPad that Intel sent me already combines a touch screen interface with a laptop form factor; with the additional interactions made possible by perceptual computing, an Intel Ultrabook should be the ultimate do-everything device.

“We want to go beyond simply delivering technology,” Solomon says. “The tech world has morphed into delivering experiences.” In other words, “Intel inside” is no longer enough, and the company knows it.

The hardware Intel lent me for this review consisted of the aforementioned ThinkPad running Windows 8; a small peripheral device by Creative Labs containing an infrared depth sensor, HD webcam, and dual-array microphones; and a software development kit (SDK)—a package of coding tools that will let programmers build their own perceptual computing apps. The SDK also included a quartet of demo apps designed to show off some of the Creative camera’s basic functions to nondevelopers like me.
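To give a flavor of what building against the kit involves: the SDK’s C++ samples follow an enable-and-callback pattern, in which an app switches on the gesture-recognition module and then receives recognized gestures as events. Below is a minimal sketch of that pattern, reconstructed from memory of the 2013 SDK samples; the names UtilPipeline, EnableGesture, OnGesture, LoopFrames, and LABEL_NAV_SWIPE_LEFT are assumptions and may not match the shipping headers exactly.

    // Minimal gesture-listening sketch in the style of Intel's 2013
    // Perceptual Computing SDK samples. All SDK names below are
    // assumptions recalled from those samples, not a verified API.
    #include "util_pipeline.h"  // assumed SDK utility header
    #include <cstdio>

    class SwipePipeline : public UtilPipeline {
    public:
        SwipePipeline() : UtilPipeline() {
            EnableGesture();  // turn on the depth camera's gesture module
        }
        // Invoked by the pipeline when a gesture is recognized in a frame
        virtual void PXCAPI OnGesture(PXCGesture::Gesture *data) {
            if (data->label == PXCGesture::Gesture::LABEL_NAV_SWIPE_LEFT)
                std::printf("swipe left detected\n");
        }
    };

    int main() {
        SwipePipeline pipeline;
        pipeline.LoopFrames();  // blocks, feeding camera frames to callbacks
        return 0;
    }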

Setting up my personal perceptual computing rig was as easy as attaching a webcam: the Creative Labs camera plugs into a standard USB 2.0 port and clips snugly to the top of the laptop screen. Each of the four demo apps offers a simple, game-like interaction focused on gestures. “Kung Pao Kevin” displays a cartoon beaver who invites you to mimic his clapping and high-fiving gestures onscreen while keeping to a musical beat; “Lightning” and “Solar System” cause crackles of electricity and a 3-D planetary model, respectively, to burst into being between your outstretched hands; and “Ballista” lets you fling cannonballs at a distant castle by pulling and pinching a virtual catapult.

These primitive not-quite-games didn’t strike me as harbingers of anything that will change the landscape of computing. Instead, they felt like little examples of how truly difficult it is—and will be—to design intuitive gestural interfaces.

Perceptual computing, at least for now, focuses on a close-range use case—“between six inches and three feet” from the camera, says Solomon. On paper this makes complete sense, since that’s the distance you already are from your laptop screen. But in practice, awkwardness abounds.

All of the demos required me to make broad, full-handed gestures (such as grasping, twisting, or waving) within a relatively limited “capture volume” between the laptop screen and my face. This meant that my gestures tended to obscure my own view of what I was doing onscreen, and felt less precise than using a trackpad or simply reaching out to directly manipulate the laptop’s touch screen.

Combining four very different input methods—keyboard, trackpad, touch screen, and gesture—within this same close range will require a subtlety of interface design that Intel’s apps completely fail to demonstrate. If my gestures get in my own way, what good are they?

“I find it gratifying that a company like Intel is turning its attention to this, because the future is about creating a harmonious plurality of [human-computer] interactions,” says gestural computing expert John Underkoffler, chief scientist of Oblong Industries and inventor of the so-called “Minority Report interface.” “The trick is, you have to be very careful about how you juxtapose these interactions—otherwise it’s just a jumble. Suppose Logitech invented the mouse before Windows existed to provide a useful context for that kind of input?”

Instead of building whole apps around the peculiarities of gestural input, Intel’s close-range interactions seem more useful as a kind of “glue” that could better connect the jumble of keyboard, mouse, and touch screen modes that its Ultrabooks currently offer. For example, Windows 8’s tiled Start screen must be invoked with a keystroke, then manipulated with a mouse pointer or by touching the screen. I found myself wishing that I could bring up the Start screen with a keystroke, then swipe through the apps with a quick waving motion while moving my hand from the keyboard up to the screen to touch the tile I wanted.

This kind of casual, “in between” interaction seems ideally suited to close-range gestures on a laptop: it doesn’t require precision, and it’s over before you really have to think about it. It also aligns with Intel’s own Human Interface Guidelines for perceptual computing, which state that interactions should be “reality-inspired, but not a clone of reality.” At the same time, Intel’s initial hardware assumptions—that the camera should be above the screen, and the capture volume directly in front of it—would make implementing this interaction difficult, if not impossible. (The camera would need to be mounted near the bottom edge of the screen with its capture volume hovering about six inches above the keyboard, below the user’s eye line.)

To its credit, Intel does recognize the importance of UI design in perceptual computing, as well as the “unknown unknowns” that developers will inevitably encounter when building multi-modal apps. “Do we establish best practices, or do we just let the apps evolve organically?” Solomon says. “We don’t know all the answers, but we’re thinking about it. It would be a bad thing if adding these capabilities became a burden, or confusing to users.”

Perceptual computing is the first mainstream attempt by a major tech company to usher in a Star Trek-like “harmonious plurality” of user interfaces. With its open-ended SDK, Intel has made a great start on the plurality. The harmonious part, however, still has a long way to go.
