Taking Touch beyond the Touch Screen
A prototype tablet can sense gestures and objects placed next to it.
A tablet computer developed collaboratively by researchers at Intel, Microsoft, and the University of Washington can be controlled not only by swiping and pinching at the screen, but by touching any surface on which it is placed.
Finding new ways to interact with computers has become an important area of research among computer scientists, especially now that touch-screen smart phones and tablets have grown so popular. The project that produced the new device, called Portico, could eventually result in smart phones or tablets that take touch beyond the physical confines of the device.
“The idea is to allow the interactive space to go beyond the display space or screen space,” says Jacob Wobbrock, an associate professor at the University of Washington’s Information School, in Seattle, who helped develop the system. This is achieved with two foldout cameras that sit above the display on either side, detecting and tracking motion around the screen. The system detects the height of objects and determines whether they are touching the surrounding surface by comparing the two views captured by the cameras. The approach makes it possible to detect hand gestures as well as physical objects so that they can interact with the display, says Wobbrock.
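The comparison of the two camera views amounts to stereo triangulation: a point close to the cameras appears at more widely separated positions in the two images than a point resting on the tabletop. A minimal sketch of that idea, with entirely hypothetical camera parameters (the article does not describe Portico's actual calibration or code), might look like this:

```python
# Illustrative stereo-touch sketch, NOT Portico's actual implementation.
# All numeric parameters below are assumed for the example.

FOCAL_LENGTH_PX = 700.0    # camera focal length, in pixels (assumed)
BASELINE_M = 0.25          # separation between the two cameras (assumed)
SURFACE_DEPTH_M = 0.40     # depth of the tabletop from the cameras (assumed)
TOUCH_TOLERANCE_M = 0.005  # how close to the surface counts as "touching"

def depth_from_disparity(x_left_px: float, x_right_px: float) -> float:
    """Standard stereo triangulation: depth = focal * baseline / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must appear further left in the left view")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

def is_touching(x_left_px: float, x_right_px: float) -> bool:
    """A tracked point 'touches' if its depth is within tolerance of the surface."""
    depth = depth_from_disparity(x_left_px, x_right_px)
    return abs(depth - SURFACE_DEPTH_M) <= TOUCH_TOLERANCE_M

# A fingertip resting on the tabletop triangulates to the surface depth;
# one hovering above it is closer to the cameras (larger disparity).
print(is_touching(500.0, 62.5))  # depth = 0.40 m -> True (on the surface)
print(is_touching(500.0, 0.0))   # depth = 0.35 m -> False (hovering 5 cm up)
```

Real systems would first rectify the images and match features between the two views, but the height test itself reduces to this depth-versus-surface comparison.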
In one demonstration, software tracks a small ball as it moves across the surface the tablet sits on. As the ball strikes the side of the tablet, a virtual ball appears on-screen following the same trajectory, as if the physical ball had entered the device. In this way the ball can be used to score on-screen goals. In another demonstration, the angle of a toy spaceship placed on the table next to the tablet controls the angle of a virtual spaceship onscreen, allowing the user to shoot down “asteroids.”
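The hand-off in the ball demonstration comes down to a coordinate transform: the tracked ball's position and velocity on the table are re-expressed in screen pixels at the moment it crosses the tablet's edge, so the virtual ball continues the same trajectory. A hypothetical sketch of that mapping (the function names and scale factor are assumptions, not Portico's API):

```python
# Hypothetical hand-off for the ball demo: map a tracked ball's table-space
# position/velocity (meters, meters per second) into screen pixels relative
# to the tablet's top-left corner. The scale factor is assumed.

PIXELS_PER_METER = 4000.0  # assumed table-space to screen-space scale

def hand_off(pos_m, vel_m_s, tablet_origin_m):
    """Return the virtual ball's screen position and velocity in pixels."""
    x = (pos_m[0] - tablet_origin_m[0]) * PIXELS_PER_METER
    y = (pos_m[1] - tablet_origin_m[1]) * PIXELS_PER_METER
    vx = vel_m_s[0] * PIXELS_PER_METER
    vy = vel_m_s[1] * PIXELS_PER_METER
    return (x, y), (vx, vy)

# Ball strikes the tablet's left edge moving right and slightly downward:
pos, vel = hand_off((0.105, 0.02), (0.2, 0.05), (0.105, 0.0))
print(pos)  # (0.0, 80.0) -- spawns at the left edge, 80 px down
print(vel)  # (800.0, 200.0) -- same direction of travel, in px/s
```

Because only the frame of reference changes, the virtual ball appears to enter the device along exactly the path the physical ball was following.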
Wobbrock says the same approach would work on smart phones and other pocket-sized devices. “As devices continue to shrink, they compromise the screen space. But with Portico you can reclaim the surrounding area for interactivity,” he says.
With the tablet, Portico increases the usable area sixfold, says Daniel Avrahami, a senior researcher at Intel Labs Seattle, who came up with the idea for Portico and led its development, with help from Shahram Izadi at Microsoft Research in Cambridge, UK. For a 12-inch tablet, “that’s the equivalent of a 26-inch screen,” says Avrahami, who will present the work in October at the ACM User Interface Software and Technology Symposium in Santa Barbara, California.
Eventually, says Wobbrock, it may be more practical, especially from a commercial standpoint, to use clip-on cameras instead of foldout ones, which tend to break more easily. But he also notes that the entire display might be replaced with a fold-up frame containing both cameras and a pico projector to produce the image on the surface below.
Eva Hornecker, a lecturer specializing in human-computer interaction at the University of Strathclyde, in Glasgow, Scotland, says there is growing interest among researchers in using cameras to detect hand gestures and objects.
“The problem with touch screens is you can’t detect anything that’s happening over the surface,” Hornecker says. However, she notes that allowing interaction beyond the screen could introduce new challenges such as how to provide feedback so the user knows where the interactive area starts and ends.