Tabletop Computer Knows You by Your Shoes

A system with foot-level cameras aims to solve the problem of telling multiple people apart when they share one touch screen.
January 23, 2012

New research from the Hasso Plattner Institute in Potsdam, Germany, aims to quell the frustration and strife that can come when multiple people use a single touch screen. The project, called Bootstrapper, uses cameras below a table to identify different users by their shoes. Each set of shoes is linked to an account that keeps track of a person’s actions and preferences.

Sneaker reader: Bootstrapper consists of lights and cameras that reside in a box below a touch-screen table.

Unlike other approaches to differentiating between users, Bootstrapper uses low-cost hardware and allows a person’s hands to freely interact with the surface. As an added benefit, a user’s preferences can be stored according to her shoes, so when she leaves the table, it’s easier to resume an activity when she returns.

Previous approaches to the problem have involved affixing sensors to chairs, or using cameras positioned above a table. One approach required users to wear a ring that emitted infrared light, which the touch table's cameras then tracked.

Patrick Baudisch, professor of computer science at the Hasso Plattner Institute, who developed the prototype system with graduate students Stephan Richter and Christian Holz, says shoes are ideal to track because they offer distinct features such as colors, seams, laces, logos, or stripes. They also typically maintain contact with the ground, unlike hands on a tabletop or bottoms in chairs, so they’re easier to track.

Baudisch stresses that Bootstrapper is not intended as a security feature. “People can always spoof the system by buying the same shoes as someone else,” he notes. The goal is to make collaboration easier and to log different people’s usage over many sessions. The researchers, for example, used it to summarize users’ achievements in a mathematics software program.

Bootstrapper collects video of shoes using cameras positioned below the surface of the table. Software extracts texture features from each shoe and links them to touch-screen actions whose hands and arms line up with those shoes. In a test with 18 users wearing 18 different pairs of shoes, the system recognized users with 89 percent accuracy.
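The article doesn't describe the researchers' matching pipeline in detail, but the basic idea of comparing a shoe's texture against a gallery of enrolled users can be sketched with off-the-shelf computer vision tools. The snippet below is a rough illustration in Python with OpenCV, using ORB descriptors and brute-force matching as stand-ins for whatever features Bootstrapper actually extracts; the function names, thresholds, and enrollment setup are hypothetical, not the authors' implementation.

    # Rough sketch of texture-based shoe matching (not the Bootstrapper code).
    # Assumes one cropped grayscale shoe image per enrolled user and one
    # query image of the shoe currently seen under the table.
    import cv2

    def build_gallery(enrolled_images):
        """Extract ORB texture descriptors for each enrolled user's shoe."""
        orb = cv2.ORB_create(nfeatures=500)
        gallery = {}
        for user_id, img in enrolled_images.items():
            _, descriptors = orb.detectAndCompute(img, None)
            gallery[user_id] = descriptors
        return gallery

    def identify(query_img, gallery, min_matches=10):
        """Return the enrolled user whose shoe texture best matches the query."""
        orb = cv2.ORB_create(nfeatures=500)
        _, query_desc = orb.detectAndCompute(query_img, None)
        if query_desc is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        best_user, best_score = None, 0
        for user_id, desc in gallery.items():
            if desc is None:
                continue
            matches = matcher.match(query_desc, desc)
            # Count only reasonably close descriptor matches.
            good = [m for m in matches if m.distance < 40]
            if len(good) > best_score:
                best_user, best_score = user_id, len(good)
        return best_user if best_score >= min_matches else None

A real system would also have to segment shoes out of the under-table video and associate the matched user with the touch point whose arm lines up with those shoes, as described above.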

“Everyone who does development for large touch screens knows that [user differentiation] is a problem,” says Daniel Wigdor, a professor of computer science at the University of Toronto. Wigdor was not involved in the research. “Bootstrapper’s technique is elegant because it’s all contained in a particular box,” he says, referring to the prototype’s housing for the cameras and lights.

Still, Bootstrapper isn’t perfect. Baudisch notes that if a person contorts his or her arm in a way that makes it appear to align with someone else’s feet, the system might attribute a gesture to the wrong user. The current system also requires that at least one foot maintain direct contact with the floor. And if different users wear the same type of shoe, as they might in the military, for instance, the system can no longer tell them apart.

The best way to identify users around a touch table is probably to combine several approaches, says Wigdor. For instance, a Bootstrapper-like system could be paired with sensors in a chair. “I can see it as one of three or four techniques,” he says.

Baudisch thinks elements of Bootstrapper could find a home in open spaces like department stores. Cameras could track whether a person paused at the sweaters or purses, for example, and then suggest a sale via a digital advertisement.
