Samsung’s “Eye Scroll” Hints at Post-Interactive Interfaces

What if the future of human-computer interaction had a lot less interaction in it?
March 4, 2013

The best essay on human-computer interaction I’ve read this year was a fake news piece in The Onion. Its title: “Internet Users Demand Less Interactivity.” What if people just “want to visit websites and look at them”? What if “using” a piece of software is simply not what we want to do with it, most of the time? 

I couldn’t help but think of that Onion article when I came across the news that Samsung’s latest phablet will use eye-tracking software to scroll its display for you. It’s a fine idea: after all, how much of our interaction with our smartphones consists of merely dragging the next tiny “page” of content up into view? Not that it’s a terribly taxing thing to do. But it’s not a very high-value physical interaction to repeat hundreds of times a day, either. Why not automate it?

Bret Victor, an ex-Apple interface designer, wrote a serious treatise on the idea of post-interactive software interfaces, "Magic Ink," way back in 2006 – before glass-slab smartphones even existed. Its central argument is essentially the same as that Onion headline: interaction is not, for the most part, what software is actually for. Nine times out of ten, we engage with a piece of software because we want to "read" it like a text, not manipulate it like an object. The latter is something we're forced to do in order to achieve the former. But why should it be this way? The beauty of software, according to Victor, is that it's really just graphic (or typographic) design rendered in "magic ink": it can rearrange itself into exactly the right pattern for exactly the right context, from moment to moment. 

Of course, in order for software to fully exploit its “magic ink” potential, it has to be able to accurately sense our intent. Cheap, ubiquitous sensors and machine-learning algorithms (like the eye-tracking technology in Samsung’s new phone) make this possible. To be fair, it’s not exactly removing the interaction from software. Instead, it’s submerging it: treating it as noise and complication best abstracted away from the user’s direct attention. 

That’s the best-case scenario, of course. In practice, “interaction-less” software interfaces are likely to introduce their own annoying cognitive loads, simply because they won’t be smart enough to accurately anticipate our intent 100% of the time. Take Samsung’s phone: how is the eye-tracking software going to know, perfectly, when certain eye movements near the bottom of the screen mean “advance the page now, please,” and when they’re just semi-random saccades, or some subtler sort of attentional behavior (perhaps you’re re-reading a word or phrase to savor or study it, and you don’t want the page to move)?
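To make the ambiguity concrete, here is a minimal sketch of the kind of heuristic such a feature might use. Everything in it (the `GazeSample` type, the dwell window, the thresholds) is a hypothetical illustration, not Samsung's actual algorithm: the idea is to scroll only when the gaze has *lingered* near the bottom of the screen, so that a single stray saccade into that zone doesn't trigger a page advance.

```python
# Hypothetical dwell-time heuristic for gaze-driven scrolling.
# This is an illustrative sketch, NOT Samsung's actual algorithm.
from dataclasses import dataclass

@dataclass
class GazeSample:
    y: float          # vertical gaze position, 0.0 (top) to 1.0 (bottom)
    timestamp: float  # seconds

def should_scroll(samples, bottom_zone=0.85, dwell_secs=0.5):
    """Return True only if the gaze stayed in the bottom zone of the
    screen for (roughly) the whole dwell window ending now."""
    if not samples:
        return False
    now = samples[-1].timestamp
    window = [s for s in samples if now - s.timestamp <= dwell_secs]
    # A single saccade into the zone won't trigger a scroll: every
    # sample in the window must be in the zone, and the window must
    # actually span (most of) the required dwell time.
    return all(s.y >= bottom_zone for s in window) and \
           (now - window[0].timestamp) >= dwell_secs * 0.9
```

Even this toy version exposes the trade-off: a short dwell window misfires on re-reading, while a long one makes the "automatic" scroll feel sluggish enough that you'd rather just swipe.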

What Samsung’s eye-tracking feature sounds like isn’t really “post-interactive” software behavior at all – instead, it’s simply replacing one kind of manipulation with another. Instead of dragging your finger (or pressing a button) to advance the page, you direct your gaze to a specific place in a specific way. The software isn’t really acting like “magic ink” that can anticipate your intent; it’s just making you issue the same old UI-manipulation commands with your eyes instead of your hand (or mouse). 

But your eyes are not hands. You use them to sense, not act. Software will have to get a heck of a lot more magical before it can really act like Victor’s magic ink. Until then, jabbing, pushing and poking at our software – er, interacting with it – will probably still be a necessary evil. 
