Speakers and microphones can be used for a lot more than just playing and recording sounds.
Researchers at Carnegie Mellon University are exploring what else they can do with a project called SweepSense. It takes advantage of the speakers and microphones already common in gadgets like smartphones, laptops, and earbuds, sending out ultrasonic frequencies and measuring the strength of the reflected sound. That measurement can then be used to trigger actions, such as halting tunes when you remove your earbuds from your ears.
Gierad Laput, a graduate student at Carnegie Mellon and the leader of the project, says the idea is to use speaker-and-microphone pairs to add functionality beyond what they’re designed to do, without having to add any more hardware.
“The infrastructure is already there, so you’re just riding on top of it,” he says.
The project is the latest sign that ultrasound technology could become increasingly useful in a range of gadgets, and perhaps also in settings like cars or even subway stations. A number of companies are already working to bring ultrasound to electronics: Elliptic Labs makes software that employs ultrasound for gesture recognition and proximity sensing on phones, and Chirp Microsystems uses ultrasound for gesture recognition, though the technology has not yet become widespread.
The researchers came up with a couple of ways to use the SweepSense approach with a smartphone and a laptop. On the phone, with a pair of earbuds plugged in and each bud emitting a different ultrasonic frequency that was picked up by the phone's built-in microphone, they were able to analyze the sounds to determine whether a user was wearing the left earbud, the right earbud, both, or neither. Software could then use this information to, say, pause music if both earbuds are taken out of your ears—an application the researchers actually tried out. A user might also be able to answer an incoming call by removing just one of the earbuds.
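The core signal-processing idea here can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the researchers' actual code): it assumes each bud emits a distinct ultrasonic tone, that an in-ear bud produces a stronger reflection of its tone at the microphone, and it uses an FFT to measure each tone's strength in a recorded snippet. The frequencies and threshold are made-up values for the demo.

```python
import numpy as np

FS = 48_000                        # sample rate (Hz)
LEFT_HZ, RIGHT_HZ = 18_000, 20_000  # hypothetical per-bud ultrasonic tones

def tone_amplitude(signal, freq, fs=FS):
    """Estimate the amplitude of a single tone in the recording via FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    bin_idx = int(round(freq * len(signal) / fs))
    return spectrum[bin_idx]

def earbud_state(signal, threshold=0.1):
    """Classify which buds are in-ear from reflected tone strength.

    Assumption for this sketch: an in-ear bud reflects its tone strongly,
    while a dangling bud's tone arrives only weakly at the microphone.
    """
    left_in = tone_amplitude(signal, LEFT_HZ) > threshold
    right_in = tone_amplitude(signal, RIGHT_HZ) > threshold
    return left_in, right_in

# Synthetic one-second "recording": only the left bud's tone is strong.
t = np.arange(FS) / FS
recording = (0.5 * np.sin(2 * np.pi * LEFT_HZ * t)
             + 0.01 * np.sin(2 * np.pi * RIGHT_HZ * t))
print(earbud_state(recording))  # (True, False)
```

A real system would have to calibrate the threshold per device and cope with noise and room acoustics, but the principle — compare reflected energy at known sweep frequencies against a baseline — is the same.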
On a laptop, the researchers used its built-in speakers and microphone with SweepSense software to figure out when the computer display’s angle was changed. Ultrasound frequencies should reflect differently according to the screen’s angle, Laput says; spotting this change would make it possible to do things like pull up a utility dashboard when someone tilts the display forward, for example.
Researchers have also started testing out SweepSense with a car, Laput says, since a number of vehicles have speakers and microphones, too. The idea there is to see if ultrasound sensing can determine whether a door is open (and if it’s open, how far open) and whether or not there are passengers in the car, he says.
One problem with using ultrasound frequencies, as the researchers note, is that low-frequency ultrasound may be audible to some people (such as kids and the elderly) and to animals, which could annoy them. Laput says using a different range of ultrasound that isn’t audible is possible, but not all speakers that are already embedded in devices can emit such sounds; he hopes that future hardware improvements will make this more common.