Future noise-canceling headphones could let users opt back in to certain sounds they’d like to hear, such as babies crying, birds tweeting, or alarms ringing.
The technology that makes it possible, called semantic hearing, could pave the way for smarter hearing aids and earphones, allowing the wearer to filter out some sounds while boosting others.
The system, which is still a prototype, works by connecting off-the-shelf noise-canceling headphones to a smartphone app. The microphones embedded in these headphones, normally used to cancel out noise, are repurposed to also detect the sounds in the world around the wearer. The captured audio is then fed to a neural network running on the smartphone, which boosts or suppresses certain sounds in real time, depending on the user’s preferences. The system was developed by researchers at the University of Washington, who presented the work at the ACM Symposium on User Interface Software and Technology (UIST) last week.
The team trained the network on thousands of audio samples from online data sets and sounds collected from various noisy environments. Then they taught it to recognize 20 everyday sounds, such as a thunderstorm, a toilet flushing, or glass breaking.
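In broad strokes, the pipeline described above amounts to separating incoming audio into sound classes and re-mixing them with per-class gains chosen by the user. The sketch below illustrates that idea only; the class names, the `separate` stub, and the gain values are all hypothetical stand-ins for the researchers' trained network, not their actual implementation.

```python
import numpy as np

# Hypothetical sound classes -- the real system recognizes 20.
CLASSES = ["baby_crying", "birdsong", "alarm", "speech", "traffic"]

def separate(frame: np.ndarray) -> dict:
    """Stub separator: splits the frame evenly across classes.
    In the real system, a neural network running on the smartphone
    would estimate each class's actual contribution to the frame."""
    return {c: frame / len(CLASSES) for c in CLASSES}

def semantic_filter(frame: np.ndarray, gains: dict) -> np.ndarray:
    """Re-mix the frame, scaling each sound class by the user's
    chosen gain (0.0 = suppress, 1.0 = pass through, >1.0 = boost)."""
    sources = separate(frame)
    return sum(gains.get(c, 1.0) * s for c, s in sources.items())

# Example preference: opt back in to alarms and a crying baby,
# mute speech and traffic.
prefs = {"alarm": 1.5, "baby_crying": 1.0, "birdsong": 1.0,
         "speech": 0.0, "traffic": 0.0}
frame = np.random.randn(480)  # 10 ms of audio at 48 kHz
out = semantic_filter(frame, prefs)
```

The hard part, of course, is the separator: producing clean per-class source estimates fast enough for real-time playback is what the UIST work contributes, and this stub glosses over it entirely.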
The system was tested with nine participants, who wandered around offices, parks, and streets. The researchers found that it performed well at muffling and boosting sounds, even in situations it hadn’t been trained for. However, it struggled slightly to separate human speech from background music, especially rap music.
Mimicking human ability
Researchers have long tried to solve the “cocktail party problem”—that is, to get a computer to focus on a single voice in a crowded room, as humans are able to do. This new method represents a significant step forward and demonstrates the technology’s potential, says Marc Delcroix, a senior research scientist at NTT Communication Science Laboratories, Kyoto, who studies speech enhancement and recognition and was not involved in the project.
“This kind of achievement is very helpful for the field,” he says. “Similar ideas have been around, especially in the field of speech separation, but they are the first to propose a complete real-time binaural target sound extraction system.”
“Noise-canceling headsets today have this capability where you can still play music even when the noise canceling is turned on,” says Shyam Gollakota, an assistant professor at the University of Washington, who worked on the project. “Instead of playing music, we are playing back the actual sounds of interest from the environment, which we extracted from our machine-learning algorithms.”
Gollakota is excited by the technology’s potential for helping people with hearing loss, as hearing aids can be of limited use in noisy environments. “It’s a unique opportunity to create the future of intelligent hearables through enhanced hearing,” he says.
The ability to be more selective about what we can and can’t hear could also benefit people who require focused listening for their job, such as health-care, military, and engineering professionals, or for factory or construction workers who want to protect their hearing while still being able to communicate.
Filtering out the world
This type of system could for the first time give us a degree of control over the sounds that surround us—for better or worse, says Mack Hagood, an associate professor of media and communication at Miami University in Ohio, and author of Hush: Media and Sonic Self-Control, who did not work on the project.
“This is the dream—I’ve seen people fantasizing about this for a long time,” he says. “We’re basically getting to tick a box whether we want to hear those sounds or not, and there could be times where this narrowing of experience is really beneficial—something we really should do that might actually help promote better communication.”
But whenever we opt for control and choice, we’re pushing aside serendipity and happy accidents, he says. “We’re deciding in advance what we do and don’t want to hear,” he adds. “And that doesn’t give us the opportunity to know whether we really would have enjoyed hearing something.”