The best essay on human-computer interaction I’ve read this year was a satirical piece in The Onion, headlined “Internet Users Demand Less Interactivity.” What if people just “want to visit websites and look at them”? What if “using” a piece of software is simply not what we want to do with it, most of the time?
I couldn’t help but think of that Onion article when I came across the news that Samsung’s latest phablet will use eye-tracking software to scroll its display for you. It’s a fine idea: after all, how much of our interaction with our smartphones consists of merely dragging the next tiny “page” of content up into view? Not that it’s a terribly taxing thing to do. But it’s not a very high-value physical interaction to repeat hundreds of times a day, either. Why not automate it?
Bret Victor, an ex-Apple interface designer, wrote a serious treatise on the idea of post-interactive software interfaces way back in 2006 – before glass-slab smartphones even existed. Its central argument is, essentially, exactly the same as that Onion headline: interaction is not what most software is actually for. Nine times out of ten, we engage with a piece of software primarily because we want to “read” it like a text, not manipulate it like an object. The latter is something we’re forced to do in order to achieve the former. But why should it be this way? The beauty of software, according to Victor, is that it’s really just graphic (or typographic) design rendered in “magic ink”: it can rearrange itself into exactly the right pattern for exactly the right context, from moment to moment.
Of course, in order for software to fully exploit its “magic ink” potential, it has to be able to accurately sense our intent. Cheap, ubiquitous sensors and machine-learning algorithms (like the eye-tracking technology in Samsung’s new phone) make this possible. To be fair, it’s not exactly removing the interaction from software. Instead, it’s submerging it: treating it as noise and complication best abstracted away from the user’s direct attention.
That’s the best-case scenario, of course. In practice, “interaction-less” software interfaces are likely to introduce their own annoying cognitive loads, simply because they won’t be smart enough to accurately anticipate our intent 100% of the time. Take Samsung’s phone: how is the eye-tracking software going to know, perfectly, when certain eye movements near the bottom of the screen mean “advance the page now, please,” while others are just semi-random saccades, or some other, subtler sort of attentional behavior? Perhaps you’re re-reading a word or phrase to savor or study it, and you don’t want the page to advance at all.
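To make the ambiguity concrete, here is a minimal sketch of one naive way such a feature might decide when to scroll: a dwell-time heuristic. Everything here is invented for illustration (the function name, the thresholds, the input format); Samsung’s actual algorithm is not public, and this is exactly the kind of crude rule that would misfire on a slow, savoring re-read near the bottom of the page.

```python
def should_scroll(gaze_samples, bottom_zone=0.85, dwell_threshold=0.5):
    """Hypothetical gaze-scroll heuristic (not Samsung's real algorithm).

    gaze_samples: list of (timestamp_sec, y) pairs, with y normalized to
    [0, 1], where 1.0 is the bottom edge of the screen.

    Returns True only if the gaze has stayed in the bottom zone for at
    least dwell_threshold seconds -- a crude attempt to ignore stray
    saccades that briefly flick downward.
    """
    dwell_start = None
    for t, y in gaze_samples:
        if y >= bottom_zone:
            if dwell_start is None:
                dwell_start = t  # gaze entered the bottom zone
            if t - dwell_start >= dwell_threshold:
                return True  # sustained attention: advance the page
        else:
            dwell_start = None  # gaze left the zone; reset the timer
    return False
```

Note the failure mode baked in: a reader lingering on the last line to study it trips the same dwell timer as a reader who wants the next page, which is precisely the intent-sensing gap the paragraph above describes.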
What Samsung’s eye-tracking feature sounds like isn’t really “post-interactive” software behavior at all – instead, it’s simply replacing one kind of manipulation with another. Instead of dragging your finger (or pressing a button) to advance the page, you direct your gaze to a specific place in a specific way. The software isn’t really acting like “magic ink” that can anticipate your intent; it’s just making you issue the same old UI-manipulation commands with your eyes instead of your hand (or mouse).
But your eyes are not hands. You use them to sense, not act. Software will have to get a heck of a lot more magical before it can really act like Victor’s magic ink. Until then, jabbing, pushing and poking at our software – er, interacting with it – will probably still be a necessary evil.