Our lives are awash with ambient electromagnetic radiation, from the fields generated by power lines to the signals used to send data between Wi-Fi transmitters. Researchers at Microsoft and the University of Washington have found a way to harness this radiation for a computer interface that turns any wall in a building into a touch-sensitive surface.
The technology could allow light switches, thermostats, stereos, televisions, and security systems to be controlled from anywhere in the house, and could lead to new interfaces for games.
“There’s all this electromagnetic radiation in the air,” says Desney Tan, senior researcher at Microsoft (and a TR35 honoree in 2007). Radio antennas pick up some of the signals, Tan explains, but people can do this too. “It turns out that the body is a relatively good antenna,” he says.
The ambient electromagnetic radiation emitted by home appliances, mobile phones, computers, and the electrical wiring within walls is usually considered noise. But the researchers chose to put it at the core of their new interface.
When a person touches a wall with electrical wiring behind it, she becomes an antenna that tunes the background radiation, producing a distinct electrical signal that depends on her body position, her proximity to the wall, and where on the wall she touches. This unique electrical signal can be collected and interpreted by a device in contact with or close to her body. When a person touches a spot on the wall behind her couch, the gesture can be recognized, and it could be used, for example, to turn down the volume on the stereo.
So far, the researchers have demonstrated only that a body can turn electromagnetic noise into a usable signal for a gesture-based interface. A paper outlining the work will be presented next week at the CHI Conference on Human Factors in Computing Systems in Vancouver, BC.
In an experiment, test subjects wore a grounding strap on their wrist—a bracelet that is normally used to prevent the buildup of static electricity in the body. A wire from the strap was connected to an analog-to-digital converter, which fed data from the strap to a laptop worn in a backpack. Machine-learning algorithms then processed the data to identify characteristic changes in the electrical signals corresponding to a person’s proximity to a wall, the position of her hand on the wall, and her location within the house.
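The paper does not specify which learning algorithm the researchers used, so the following is only a minimal sketch of the kind of pipeline described above: digitized voltage from the wrist strap is windowed, reduced to frequency-domain features, and fed to a classifier that maps each window to a touch position. The sample rate, the class labels, the position-dependent harmonics in the synthetic signals, and the nearest-centroid classifier are all assumptions for illustration, not details from the work.

```python
import numpy as np

SAMPLE_RATE = 4096  # Hz (assumed)
WINDOW = 1024       # samples per analysis window (assumed)

def features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one window of body-antenna voltage samples."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)  # scale-invariant

class NearestCentroid:
    """Tiny stand-in for the paper's (unspecified) learning algorithm."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, X):
        # Distance from each feature vector to each class centroid.
        d = np.linalg.norm(np.asarray(X)[:, None, :] - self.centroids_, axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic demo: every signal contains strong 60 Hz mains hum picked up
# by the body; each touch position adds a different higher-frequency
# component (a stand-in for how the spectrum shifts with hand position).
rng = np.random.default_rng(0)

def synth(pos_hz: float) -> np.ndarray:
    t = np.arange(WINDOW) / SAMPLE_RATE
    return (np.sin(2 * np.pi * 60 * t)               # mains hum
            + 0.5 * np.sin(2 * np.pi * pos_hz * t)   # position-dependent part
            + 0.1 * rng.standard_normal(WINDOW))     # noise

positions = {"no-touch": 0.0, "wall-left": 180.0, "wall-right": 300.0}
X, y = [], []
for label, hz in positions.items():
    for _ in range(20):
        X.append(features(synth(hz)))
        y.append(label)

clf = NearestCentroid().fit(X, y)
print(clf.predict([features(synth(180.0)), features(synth(300.0))]))
```

In a real deployment the classifier would be trained per user and per room, since (as Paradiso notes later in the piece) the signal varies with how the device is worn; the frequency features here are simply one plausible way to make the classification robust to overall signal amplitude.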
“Now we can turn any arbitrary wall surface into a touch-input surface,” says Shwetak Patel, professor of computer science and engineering and electrical engineering at the University of Washington (and a TR35 honoree in 2009), who was involved with the work. The next step, he says, is to make the data analysis real-time and to make the system even smaller—with a phone or a watch instead of a laptop collecting and analyzing data.
“With Nintendo Wii and Microsoft’s Kinect, people are starting to realize that these gesture interfaces can be quite compelling and useful,” says Thad Starner, professor in Georgia Tech’s College of Computing. “This is the sort of paper that says here is a new direction, an interesting idea; now can we refine it and make it better over time.”
Refining the system to make it more user-friendly will be important, says Pattie Maes, a professor in MIT’s Media Lab who specializes in computer interfaces. “Many interfaces require some visual, tangible, or auditory feedback so the user knows where to touch.” While the researchers suggest using stickers or other marks to denote wall-based controls, this approach might not appeal to everyone. “I think it is intriguing,” says Maes, “but may only have limited-use cases.”
Joe Paradiso, another professor in MIT’s Media Lab, says, “The idea is wild and different enough to attract attention,” but he notes that the signal produced could vary depending on the way a person wears the device that collects the signal.
Patel has previously used a building’s electrical, water, and ventilation systems to locate people indoors. Tan has worked with sensors that use human brain power for computing and muscle activity to control electronics wirelessly. The two researchers share an interest in pulling useful information out of noisy signals. With the recent joint project, Tan says, the researchers are “taking junk and making sense of it.”