Tabletop touch screens such as Microsoft’s Surface are designed for sharing and collaboration, but it’s difficult for them to tell one person from another. Researchers in the U.K. have developed a new way to identify different users: via mobile phones.
The prototype system, called PhoneTouch, lets users manipulate onscreen objects, such as photos, or select buttons, by touching any part of their phone to the screen. This also makes it possible to personalize interactions, says Hans Gellersen, a professor of interactive systems at Lancaster University, who developed the system with his student Dominik Schmidt.
PhoneTouch also makes it possible to transfer files between the phone and the surface. “Surfaces in general are good for working together in parallel,” says Gellersen. “But when people work together they also want to bring information into the group.”
PhoneTouch uses a camera positioned beneath the surface to recognize finger contact. The system can also discern the pattern made when the edge of a phone touches the surface. “The phone gives a different visual blob than the finger,” says Gellersen.
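The distinction Gellersen describes could be made from simple blob geometry: a fingertip leaves a small, roughly round contact, while a phone edge leaves a long, thin one. The sketch below is purely illustrative; the thresholds and the elongation heuristic are assumptions, not details of the researchers' vision pipeline.

```python
def classify_blob(width_mm, height_mm):
    """Classify a contact blob as 'finger', 'phone', or 'unknown'.

    Thresholds are illustrative assumptions, not values from the paper.
    """
    area = width_mm * height_mm
    elongation = max(width_mm, height_mm) / max(min(width_mm, height_mm), 1e-6)
    if area < 200 and elongation < 2.0:
        return "finger"   # small, roughly round contact
    if elongation > 3.0:
        return "phone"    # long, thin contact left by a phone edge
    return "unknown"

print(classify_blob(12, 14))  # small round blob -> "finger"
print(classify_blob(8, 60))   # long thin blob  -> "phone"
```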
To identify which phone is in contact with the surface, PhoneTouch interrogates the accelerometers built into connected phones to see which of them experienced a slight bump at precisely the moment of contact. “These two events are correlated in time,” he says. This is an approach known as separate event detection.
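The time-correlation step Gellersen describes can be sketched as a simple matching problem: the surface logs the timestamp of each phone-shaped contact, each connected phone reports the timestamp of any sharp accelerometer spike, and a touch is attributed to the phone whose bump landed closest in time. All names and the 50 ms tolerance below are illustrative assumptions, not the researchers' implementation.

```python
TOLERANCE_S = 0.05  # assumed maximum gap between touch and bump, in seconds

def match_touch_to_phone(touch_time, bump_times_by_phone, tolerance=TOLERANCE_S):
    """Return the id of the phone whose bump is nearest the touch, or None."""
    best_phone, best_gap = None, tolerance
    for phone_id, bump_times in bump_times_by_phone.items():
        for bump_time in bump_times:
            gap = abs(bump_time - touch_time)
            if gap <= best_gap:
                best_phone, best_gap = phone_id, gap
    return best_phone

# Two phones report accelerometer spikes; a contact at t = 12.303 s is
# attributed to the phone that bumped at almost the same instant.
bumps = {"alice": [12.301, 15.870], "bob": [12.450]}
print(match_touch_to_phone(12.303, bumps))  # -> alice
```

This also shows why simultaneous touches are the hard case: if two phones bump within the tolerance window of the same contact, the timestamps alone cannot disambiguate them.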
“It’s very clever,” says Eva Hornecker, who studies the usability of touch surfaces at Strathclyde University. “Normally surfaces don’t know who’s who.” PhoneTouch could, perhaps, ensure that files taken from a phone can be shared with others, but without allowing anyone else to alter or save them, Hornecker notes.
Separate event detection is already used by the popular smartphone app Bump, which lets users exchange information by shaking, or “bumping,” two phones close together. PhoneTouch differs in that it allows the pairing of a personal device with a shared device, says Gellersen. “PhoneTouch not only establishes a connection but allows the phone to be used as a stylus on the surface, to select specific widgets.”
Schmidt says that there is a slim chance that the system will be confused when two phones touch it simultaneously. But a user study has shown that it identifies the correct device 99.99 percent of the time. The researchers will present the work at the User Interface Software and Technology symposium in New York this week.
“It’s a great idea,” says Rob Miller, head of the User Interface Design Group at MIT. “The traditional desktop approach of username and password entry doesn’t make much sense on a multitouch tabletop,” because text entry is less natural.
Mitsubishi Electric has demonstrated a touch surface called DiamondTouch that uses sensors in chairs to match touches to different people. But using a phone may be more convenient. “Mobile phones are already ubiquitous; people carry them anyway,” Miller agrees. “Phones are very personal. We almost always have them with us.”