
What’s Next for Computer Interfaces?

Touch tricks for small and large displays could be the next big thing.
December 11, 2008

Earlier this week, the humble computer mouse celebrated its 40th birthday. Surprisingly little has changed since Doug Engelbart, an engineer at the Stanford Research Institute, in Menlo Park, CA, first demonstrated the mouse to a skeptical crowd in San Francisco in 1968. Even so, we may have already seen a few glimpses of the future of computer interfaces, and over the next few years that future will likely revolve around touch.

Tiny touch: A device called nanoTouch moves touch input to the back of the device, so fingers don't obscure the front-side display. Here, a credit-card-size gadget shows an image of the user's finger, as if seen through the device, to help position a cursor on the screen.

Thanks to the popularity of the iPhone, the touch screen has gained recognition as a practical interface for computers. In the coming years, we may see increasingly useful variations on the same theme. A couple of projects, in particular, point the way toward interacting more easily with miniature touch screens, as well as with displays the size of walls.

One problem with devices like the iPhone is that users’ fingers tend to cover up important information on the screen. Yet making touch screens much larger would make a device too bulky to slip discreetly into a pocket.

A project called nanoTouch, developed at Microsoft Research, tackles the challenges of adding touch sensitivity to ever-shrinking displays. Patrick Baudisch and his colleagues have added touch interaction to the back of devices that range in size from an iPod nano to a watch or a pendant. The researchers’ concept is for a gadget to have a front that is entirely a display, a back that is entirely touch sensitive, and a side that features buttons.


To make the back of a gadget touch sensitive, the researchers added a capacitive surface similar to those used in laptop touch pads. In one demonstration, the team shows that the interface can be used to play a first-person video game on a screen the size of a credit card. In another, the screen overlays a semitransparent image of the user's finger, as if the device were completely see-through.

When a transparent finger or a cursor is shown onscreen, people can still operate the device reliably, says Baudisch, a part-time researcher at Microsoft Research and a professor of computer science and human-computer interaction at the Hasso Plattner Institute at the University of Potsdam, in Germany.
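The core trick behind this pseudo-transparency is a simple coordinate mapping: the rear panel is viewed from the opposite side of the device, so its horizontal axis must be mirrored before a touch can be drawn under the screen where the finger would appear. Below is a minimal Python sketch of that mapping, assuming a hypothetical rear panel that reports coordinates in its own frame; the function name and pixel values are illustrative, not taken from the nanoTouch prototype.

def back_to_front(x_back, y_back, display_width):
    """Map a rear-touch point onto the front display.

    The rear panel faces away from the user, so its horizontal axis is
    mirrored relative to the screen: flipping x puts the cursor exactly
    where the finger would show through if the device were transparent.
    The vertical axis needs no change.
    """
    return display_width - x_back, y_back

# Example: on a 240-pixel-wide display, touching the far left of the
# back panel (x = 0) lands the cursor at the display's far right edge.
print(back_to_front(0, 120, 240))  # -> (240, 120)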

Details of the device will be presented at the Computer Human Interaction conference in Boston next April. The researchers tested four sizes of square displays, measuring 2.4 inches, 1.2 inches, 0.6 inches, and 0.3 inches across. They found that people could complete tasks at roughly the same speed even on the smallest display, and that they made about the same number of errors at every size. Furthermore, the back-of-screen prototypes outperformed an equivalent front-touch device at the smallest size.

Baudisch is encouraged by the results and is in the process of establishing guidelines for building rear-touch interfaces into tiny devices. “Envision the future where you buy a video game that’s the size of a quarter … and you wear electronic pendants,” he says.

Jeff Han, founder of a startup called Perceptive Pixel, based in New York, says that Baudisch’s concepts are impressive, but he’s more interested in using touch technology on large displays. He has already had some success: he has supplied wall-size touch screens to a number of U.S. government agencies and several news outlets. In fact, his company’s touch screens were used by news anchors during the November presidential election to show viewers electoral progress across the country.

Traditionally, large touch screens have been built in much the same way as smaller ones, making them very expensive. Han's displays instead exploit a physical phenomenon called total internal reflection: light shone into the edge of an acrylic panel, which doubles as the display, stays trapped inside the material. When a finger or another object touches the surface, it frustrates the reflection at that point, and the light that scatters out is detected by cameras positioned just behind the display. Because a thin, compliant layer of material covers the acrylic, the amount of scattered light also depends on how much pressure is applied.
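In software, the camera side of such a screen boils down to spotting the bright blobs where fingers have let light escape. The Python sketch below shows a generic version of that pipeline, built on NumPy and SciPy; it is a textbook approach under stated assumptions, not Perceptive Pixel's actual code, and the frame, background image, and threshold value are all placeholders.

import numpy as np
from scipy import ndimage

def detect_touches(frame, background, thresh=30.0):
    """Find touch points in one infrared camera frame.

    frame and background are grayscale images of the same shape; the
    background is a frame captured with nothing touching the screen.
    Returns a list of (x, y, intensity) tuples, one per touch blob.
    """
    # Subtract the static scene so only light scattered by touches remains.
    diff = frame.astype(float) - background.astype(float)
    mask = diff > thresh                  # bright spots mark contact points
    labels, n = ndimage.label(mask)       # group adjacent pixels into blobs
    touches = []
    for i in range(1, n + 1):
        cy, cx = ndimage.center_of_mass(mask, labels, i)
        # With a compliant layer over the acrylic, harder presses couple
        # more light out, so total blob intensity doubles as a rough
        # pressure estimate.
        intensity = diff[labels == i].sum()
        touches.append((cx, cy, intensity))
    return touches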

In a paper presented in October at the User Interface Software and Technology Symposium, in Monterey, CA, Han’s colleague Philip Davidson describes software that takes touch beyond the surface, using pressure to add another dimension to a screen.

Davidson created software that recognizes how hard a person is pressing on the surface. If a user presses hard enough on an image of, say, a playing card and slides it along the display toward another card, it slides underneath that card rather than over it. Likewise, if a person presses hard on one corner of an onscreen object, the opposite corner pops up, letting the user slide other things underneath. This provides a way to keep displays from becoming too cluttered, Davidson says.
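One way to picture the interaction is to treat pressure as a switch on stacking order: a hard press sends the dragged object beneath its neighbors, while a light touch keeps it on top. The Python toy below sketches that logic; the normalized pressure scale, the threshold, and the Card type are assumptions for illustration, not Davidson's implementation.

from dataclasses import dataclass

HARD_PRESS = 0.7  # assumed threshold on a normalized 0-to-1 pressure scale

@dataclass
class Card:
    name: str
    z: int = 0  # larger z draws later, i.e. on top

def drag(card, stack, pressure):
    """Reorder the dragged card according to how hard the user presses."""
    others = [c.z for c in stack if c is not card] or [card.z]
    if pressure >= HARD_PRESS:
        card.z = min(others) - 1  # hard press: slide underneath the rest
    else:
        card.z = max(others) + 1  # light press: the card rides over them

# Example: pressing hard while dragging the ace sends it under the king.
ace, king = Card("ace"), Card("king", z=1)
drag(ace, [ace, king], pressure=0.9)
print(ace.z < king.z)  # -> True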

However, Davidson also notes that pressure sensitivity should not make the device uncomfortable to use, and he has studied the natural fatigue that a person feels when she presses on a display and drags an object from one side to the other. The new pressure-sensitive features are expected to ship by the middle of next year, Davidson says.
