Omek’s Gestural Interface Makes Perceptual Computing Human-Friendly

By studying human factors (shocking!), an Israeli company makes close-range gestural input make sense.
February 6, 2013

I recently reviewed Intel’s prototype “perceptual computing” interface, and while the product vision was compelling, the user experience needed a lot of work. Just because you can plop a depth camera on top of your laptop and wave your hands in front of the screen, does that mean you should? Luckily, an Israeli company called Omek Interactive has applied some actual thought and research toward answering this question. Their “Arc Menu” is the first close-range, consumer-grade gestural UI I’ve seen that takes comfort and basic ergonomics into account. Here’s a demo the company did at CES for LazyTechGuys:

The basic idea behind the Arc Menu is the kind of thing that sounds obvious only in retrospect: human beings tend to move their limbs in arcing motions, not vertical or horizontal lines. This means that gridlike interfaces, like the Windows 8 Start screen, feel subtly cumbersome and tiring in a gestural context. Omek did an enormous amount of user testing and iteration to arrive at this insight, but once they had it, the solution was clear: redesign the gestural menu into an arc that curves from the upper right part of the screen downwards. If you’re typing at your computer and raise your right hand to make a swiping movement, it will probably follow this arcing path through the air. 

Consider, though, the other assumptions that Omek employs to make this design work. For one, their depth camera isn’t mounted on top of the computer screen like a webcam; it’s positioned between the screen and the keyboard, so that its “capture volume” hovers just above the keys, but below the user’s eyeline to the screen. As I said in my Intel review, what good are close-range gestures if they obscure your own view of what you’re doing?

Omek’s designers clearly understand this constraint, and what it implies about how close-range gestures might actually be useful. Quick, casual movements in between touching the screen or typing on the keyboard, not precise, complicated, Minority Report-style “gestural commands,” are what make sense for perceptual computing, at least for now. Further iterations will reveal even more subtleties about how real, live humans want to use gestures to interact with their technology. But Omek’s work is a welcome reminder that we have to focus on the humans first, not the technology, to make that kind of progress.
