
Gestural interactions may be the new hotness in interface design, but there's something about these hand-wavey children of Minority Report that's hard to get exactly right. It can feel like learning a pidgin sign language: instead of pressing, clicking, swiping or tapping a physical surface, you're “speaking” to the computer by making (often arbitrary-seeming) symbols with your hands. Computer scientists at the University of St. Andrews (who've been making a lot of interesting moves in UI design lately) think that letting users design their own gestures will make it much easier to remember them.

The idea certainly makes intuitive sense. Every day I watch my two-year-old protest “I do it!” when I try to show her some new physical skill. She wants to do it in her own way, so it sticks. And when it comes to learning unfamiliar gestural UIs—with no obvious physical affordances to exploit, and few instances of useful previous experience to call upon for reference—we adults aren’t much better off than toddlers. So why not design these systems to let each user “speak” to them in their own way, and streamline the learning process?

The researchers discovered something interesting about the pain-in-the-ass-ness of gestural UIs: the difficulty wasn't in remembering exactly how to do the gestures (“do I swipe with two fingers or three?”) but in so-called “association errors” (“what does swiping with two fingers do, again?”). Letting users define the gestures themselves reduced these association errors. If I decide, “OK, swiping upwards with two fingers means ‘undo’”, I’m more likely to remember the interaction and correctly perform it.
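The St. Andrews prototype isn't described in code in the paper I'm summarizing, but the core idea, a mapping from user-invented gestures to commands that the user rather than the designer fills in, is easy to sketch. Everything below (the gesture names, the Command type, the GestureRegistry class) is a hypothetical illustration, not the researchers' actual system.

```typescript
// A minimal sketch, assuming the recognizer reports each gesture as a
// string key such as "two-finger-swipe-up". All names here are made up.

type Command = "undo" | "redo" | "zoomIn" | "zoomOut";

class GestureRegistry {
  private bindings = new Map<string, Command>();

  // The user records a gesture and chooses what it should mean.
  bind(gesture: string, command: Command): void {
    this.bindings.set(gesture, command);
  }

  // When the recognizer reports a gesture, look up the user's own mapping.
  // An unbound gesture (undefined) is exactly the "association error" case:
  // the gesture was performed fine, but what it means has been forgotten.
  resolve(gesture: string): Command | undefined {
    return this.bindings.get(gesture);
  }
}

// Usage: the user decides that swiping up with two fingers means "undo".
const registry = new GestureRegistry();
registry.bind("two-finger-swipe-up", "undo");
console.log(registry.resolve("two-finger-swipe-up")); // "undo"
```

Keeping the association in a single user-editable table makes the study's finding concrete: the hard part isn't producing the gesture, it's remembering which entry in that table it points to, and people remember entries they wrote themselves.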

Still, not everyone is a professional interaction designer. Supposedly “simple” interactions like “pinch to zoom” on iOS emerged only after intense research into many gestural variations. DIY gestures, created on the fly by amateurs, could feel simple and easy the first few times and then turn annoying or even physically harmful with repeated use. If you give yourself some gestural-UI version of carpal tunnel, who are you going to sue when you’re the person who created the harmful pattern in the first place?

OK, I kid. And you could always just change any gestural pattern that started to bug you. Asking users to assume the cognitive load not just of learning these new interactions but also of designing them seems like it could backfire. But it might also have unintended benefits: perhaps by “crowdsourcing” gestural-interaction patterns in this way, we’d converge on useful conventions for gestural UIs in general, just as Twitter users invented the “RT” and “@”-messaging conventions in a bottom-up manner.

In any case, gestural interfaces have a long way to go before they feel as legible and evident as touchscreens, keyboards and mice. That old cliché—“I hear and I forget; I see and I remember; I do and I understand”—could help point the way forward.
