
A Simple Way to Turn Any LCD into a Touch Screen

Electromagnetic interference can turn a plain LCD into a touch screen on the cheap.
April 24, 2013

Electromagnetic interference can screw up cell phone and radio reception. But it may also be the key to cheaply transforming regular LCD screens into touch- and gesture-sensing displays, according to recent research.

A group of researchers from the University of Washington’s Ubiquitous Computing Lab developed a method called uTouch that uses a simple sensor and software to turn an ordinary LCD into a touch screen display. The system takes advantage of the low levels of electromagnetic interference produced by many consumer electronics, harnessing it to do things like control video playback with pokes and motions on an otherwise noninteractive screen.

“All these devices around you have all these signals coming out of them, and we ignore them because we think they’re noise,” says Sidhant Gupta, a PhD candidate at the University of Washington’s Ubiquitous Computing Lab and one of the co-authors of the paper.

While touch screens are the norm on smartphones and tablets, they’re still not common on TVs, computer monitors, and other big displays. Existing methods that turn passive LCDs into touch screens typically rely on cameras or other sensors, but they’re not always practical. The group’s approach, described in a paper that will be presented in May at the Computer-Human Interaction conference in Paris, could eventually be used to cheaply add touch and gesture interactions to TVs, computers, and much larger displays, too.

Gupta says his group’s method works by measuring signals that are normally given off by an LCD display and how they change when a user brings a hand near the screen. These signals show up as electromagnetic interference, and can be measured with a $5 sensor that plugs into a wall outlet.

In the study, users’ gestures and touches controlled an on-screen video player. The sensor gathered information about how the user’s actions changed the LCD’s electromagnetic interference and sent it to a connected PC, where software isolated the display’s signal and tracked how it changed over time. The software used machine learning to determine whether a change was simply “noise” or one of the five touches and gestures it had been trained to recognize. Once a touch or gesture was identified, it triggered an appropriate on-screen response, like pausing or resizing a video.
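The paper itself spells out the sensing and classification details; as a rough illustration of the kind of pipeline described above, the sketch below turns windows of EMI amplitude readings into a few summary features and classifies each window as noise or a gesture. The feature choices, gesture labels, and classifier here are assumptions made for illustration, not the uTouch authors’ actual implementation.

# Illustrative sketch only: classify windows of EMI amplitude samples as
# "noise" or a gesture, in the spirit of the pipeline described above.
# Feature choices, gesture labels, and the classifier are assumptions,
# not the uTouch authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GESTURES = ["noise", "touch", "swipe_left", "swipe_right", "push", "pull"]  # hypothetical labels

def features(window):
    """Summarize one window of EMI amplitude readings from the plug-in sensor."""
    w = np.asarray(window, dtype=float)
    diffs = np.diff(w)
    return [
        w.mean(),              # overall signal intensity
        w.std(),               # how much the intensity fluctuates
        w.max() - w.min(),     # peak-to-peak swing
        np.abs(diffs).mean(),  # average rate of change
    ]

def train(labeled_windows):
    """labeled_windows: list of (window, label) pairs recorded during training sessions."""
    X = [features(w) for w, _ in labeled_windows]
    y = [label for _, label in labeled_windows]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def classify(clf, window):
    """Return the predicted label ("noise" or a gesture) for a new window."""
    return clf.predict([features(window)])[0]

In the system the researchers describe, the classified gesture would then be mapped to an on-screen action, such as pausing playback.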

“What we’re trying to find out is how that signal changes, and in particular we’re looking for changes in the intensity of that signal,” Gupta says.

The system can tell the difference between different displays, since each has its own electromagnetic interference “fingerprint,” and a single sensor can be used to track interactions on numerous displays. Eventually, Gupta says, the sensing and processing could be done in a single unit that’s plugged into a wall socket.

The technology won’t make a noninteractive display as touch-sensitive as an iPhone or Android smartphone. The gestures are much simpler than the complex swipes and pinches you can make on those gadgets.

Still, Gupta can imagine it being used to do things like make large screens at museums interactive. It could also be used to add interactivity to other devices that emit electromagnetic interference—something Gupta and some of his uTouch colleagues explored in an earlier project called LightWave that uses a plug-in sensor to enable compact fluorescent lightbulbs to sense human proximity.

“The more things we can make interactive that already exist, the better,” says Chris Harrison, a PhD candidate at Carnegie Mellon University’s Human-Computer Interaction Institute and cofounder of a startup whose touch-screen technology can tell the difference between fingernail and knuckle taps. “It’s very expensive to just put touch screens everywhere.”

The researchers aren’t planning to commercialize the technology, but Gupta says the sensor uses off-the-shelf parts, and the algorithms are included in the paper, so any motivated person could put together the same system.

The challenge in building interest, Harrison thinks, will be refining the gestures that uTouch can understand, which are currently quite coarse, and finding the right applications. “You could never write an e-mail with this system, but you could do some cool gestural interactions,” he says.
