What’s Next for Computer Interfaces?
Earlier this week, the humble computer mouse celebrated its 40th birthday. Surprisingly little about the device has changed since Doug Engelbart, an engineer at the Stanford Research Institute, in Menlo Park, CA, first demonstrated it to a skeptical crowd in San Francisco in 1968. But we may have already seen a few glimpses of the future of computer interfaces, and over the next few years, that future will likely revolve around touch.
Thanks to the popularity of the iPhone, the touch screen has gained recognition as a practical interface for computers. In the coming years, we may see increasingly useful variations on the same theme. A couple of projects, in particular, point the way toward interacting more easily with miniature touch screens, as well as with displays the size of walls.
One problem with devices like the iPhone is that users’ fingers tend to cover up important information on the screen. Yet making touch screens much larger would make a device too bulky to slip discreetly into a pocket.
A project called nanoTouch, developed at Microsoft Research, tackles the challenges of adding touch sensitivity to ever-shrinking displays. Patrick Baudisch and his colleagues have added touch interaction to the back of devices that range in size from an iPod nano to a watch or a pendant. The researchers’ concept is for a gadget to have a front that is entirely a display, a back that is entirely touch sensitive, and a side that features buttons.
To make the back of a gadget touch sensitive, the researchers added a capacitive surface, similar to those used on laptop touch pads. In one demonstration, the team shows that the interface can be used to play a first-person video game on a screen the size of a credit card. In another demo, the device produces a semitransparent image of a finger as if the device were completely see-through.
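One subtlety of rear-touch input is that the sensor and the display face opposite directions, so a raw touch coordinate must be mirrored horizontally before the "see-through" finger image can be drawn in the right place. The details of nanoTouch's implementation aren't public; the sketch below, with an assumed pixel coordinate system, just illustrates the mapping.

```python
# Illustrative sketch (not nanoTouch's actual code): a touch on the
# back of the device is reported in the rear sensor's coordinates,
# which are mirrored left-right relative to the front display.

def rear_to_front(x, y, display_width):
    """Map a rear-touch point (pixels) to front-display coordinates.

    Only the horizontal axis flips; the vertical axis is shared by
    the front and back surfaces.
    """
    return (display_width - 1 - x, y)
```

With a 100-pixel-wide display, a touch at the rear sensor's left edge (x = 0) lands at the display's right edge (x = 99), which is where the fingertip actually sits from the user's point of view.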
When a transparent finger or a cursor is shown onscreen, people can still operate the device reliably, says Baudisch, who is a part-time researcher at Microsoft Research and a professor of computer science and human-computer interaction at the Hasso Plattner Institute at the University of Potsdam, in Germany.
Details of the device will be presented at the Computer-Human Interaction (CHI) conference in Boston next April. The researchers tested four sizes of square displays, measuring 2.4, 1.2, 0.6, and 0.3 inches on a side. They found that people could complete tasks at roughly the same speed even on the smallest display, and that they made about the same number of errors at every size. Furthermore, the back-of-the-screen prototypes performed better than the smallest front-touch device.
Baudisch is encouraged by the results and is in the process of establishing guidelines for building rear-touch interfaces into tiny devices. “Envision the future where you buy a video game that’s the size of a quarter … and you wear electronic pendants,” he says.
Jeff Han, founder of a startup called Perceptive Pixel, based in New York, says that Baudisch’s concepts are impressive, but he’s more interested in using touch technology on large displays. He has already had some success: he has supplied wall-size touch screens to a number of U.S. government agencies and several news outlets. In fact, his company’s touch screens were used by news anchors during the November presidential election to show viewers electoral progress across the country.
Traditionally, large touch screens have been built in the same way as smaller ones, making them very expensive to create. Han's displays instead exploit a physical phenomenon called total internal reflection: light shone into the edge of an acrylic panel, which doubles as the display, is completely contained within the material. When a finger or another object comes in contact with the surface, light scatters out of the panel and is detected by cameras positioned just behind the display. Because a thin, compliant layer covers the acrylic, the amount of scattered light also depends on how much pressure is applied to the display.
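On the camera side, this kind of system reduces to a simple image-processing problem: bright regions in the camera frame are touches, and their total brightness tracks contact pressure. Perceptive Pixel's actual pipeline isn't described here; the following is a minimal sketch under that assumption, using a plain flood fill to group bright pixels into touch points.

```python
# Illustrative sketch of camera-side touch detection for an
# FTIR-style screen (not Perceptive Pixel's actual code). Pixels
# brighter than a threshold are treated as scattered light from a
# touch; connected bright regions become touch points, and the
# summed brightness of a region stands in for contact pressure.

def detect_touches(frame, threshold=128):
    """Return (centroid, total_brightness) for each bright blob.

    frame: 2-D list of grayscale camera pixels (0-255).
    """
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill the connected bright region.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    pr, pc = stack.pop()
                    pixels.append((pr, pc))
                    for nr, nc in ((pr - 1, pc), (pr + 1, pc),
                                   (pr, pc - 1), (pr, pc + 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and frame[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                brightness = sum(frame[pr][pc] for pr, pc in pixels)
                cx = sum(pc for _, pc in pixels) / len(pixels)
                cy = sum(pr for pr, _ in pixels) / len(pixels)
                touches.append(((cx, cy), brightness))
    return touches
```

A harder press deforms the compliant layer over a larger area and scatters more light, so the same blob grows in both size and summed brightness, which is what makes pressure sensing fall out of the optics almost for free.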
In a paper presented in October at the User Interface Software and Technology Symposium, in Monterey, CA, Han’s colleague Philip Davidson describes software that takes touch beyond the surface, using pressure to add another dimension to a screen.
Davidson created software that recognizes how hard a person is pressing a surface. If a user presses hard enough on an image of, say, a playing card and slides it along the display to another card, it will slide underneath. Additionally, if a person presses hard on one corner of an object on the screen, the opposite corner pops up, enabling the user to slide things underneath it. This provides a way to prevent displays from getting too cluttered, Davidson says.
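The interaction Davidson describes amounts to using pressure to pick a stacking order during a drag. His software isn't public; the toy sketch below assumes a normalized pressure value and an arbitrary threshold purely to make the idea concrete.

```python
# Illustrative sketch (not Davidson's actual software): a hard press
# sends the dragged card underneath the card it crosses, while a
# light press drags it over the top. The threshold is an arbitrary
# stand-in for whatever calibration a real system would use.

HARD_PRESS = 0.7  # assumed normalized pressure cutoff (0.0-1.0)

def drag_order(moving_card, other_card, pressure):
    """Return the bottom-to-top draw order after a drag gesture."""
    if pressure >= HARD_PRESS:
        return [moving_card, other_card]   # hard press: slides underneath
    return [other_card, moving_card]       # light press: slides on top
```

In effect, pressure becomes a third input axis alongside the two screen coordinates, which is what lets a flat display distinguish "over" from "under" without extra buttons or modes.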
However, Davidson also notes that pressure sensitivity should not make the device uncomfortable to use, and he has studied the natural fatigue that a person feels when she presses on a display and drags an object from one side to the other. The new pressure-sensitive features are expected to ship by the middle of next year, Davidson says.