MIT Technology Review

From left to right: Traditional representation of white noise; white noise analyzed using the researchers’ high-resolution method; a composite image, showing that the new approach faithfully captures the features of the traditional representation. (Courtesy of Timothy J. Gardner and Marcelo O. Magnasco)

Ultrahigh-Resolution Signal Analysis
An obscure algorithm could lead to more precise radar and a better understanding of human hearing

Source: “Sparse Time-Frequency Representations”
Timothy J. Gardner et al.
Proceedings of the National Academy of Sciences 103(16): 6094-6099

Results: Timothy Gardner of MIT and Marcelo Magnasco of Rockefeller University have proved that a method of high-resolution signal analysis produces faithful representations of sounds.

Why it matters: Signal analysis algorithms have been used for decades in speech recognition software, radar, and geological imaging. A method called “reassigned time-frequency representation” can capture sound data at a theoretically unlimited level of resolution. If used in radar, for example, it could help measure the speed of a helicopter’s blades, whereas radar using traditional methods could identify only the basic shape of the helicopter. However, no one had mathematically demonstrated that the method, for all its precision, produced faithful representations, in part because it is relatively obscure in the signal-processing field. With its fidelity proven, the method could be incorporated into radar systems and signal-processing software.
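The idea behind reassignment can be sketched in a few lines of NumPy. This is not the authors’ code, only a minimal illustration of the standard reassignment operators (due to Auger and Flandrin): instead of leaving each spectrogram value at the center of its time-frequency bin, the method moves it to a corrected time and frequency computed from two auxiliary transforms, one taken with the derivative of the analysis window and one with a time-weighted window. The window size and hop below are arbitrary illustrative choices.

```python
import numpy as np

def reassigned_spectrogram(x, fs, n_fft=256, hop=64):
    """Minimal sketch of spectrogram reassignment (illustrative, not the
    authors' implementation). Each power value is moved from its bin
    center to a corrected (time, frequency) estimate derived from two
    auxiliary short-time Fourier transforms."""
    j = np.arange(n_fft) - n_fft / 2
    sigma = n_fft / 6.0
    h = np.exp(-0.5 * (j / sigma) ** 2)   # Gaussian analysis window
    dh = -(j / sigma**2) * h              # its derivative with respect to j
    th = j * h                            # time-weighted window

    times, freqs, power = [], [], []
    for i in range(0, len(x) - n_fft, hop):
        seg = x[i:i + n_fft]
        Xh = np.fft.rfft(seg * h)
        Xdh = np.fft.rfft(seg * dh)
        Xth = np.fft.rfft(seg * th)
        # Avoid dividing by (near-)zeros of the transform.
        Xh_safe = np.where(np.abs(Xh) < 1e-12, 1.0, Xh)
        omega = 2 * np.pi * np.arange(len(Xh)) / n_fft  # rad/sample
        # Reassignment operators:
        #   corrected frequency = bin frequency - Im(Xdh / Xh)
        #   corrected time      = frame center  + Re(Xth / Xh)
        freqs.append((omega - np.imag(Xdh / Xh_safe)) * fs / (2 * np.pi))
        times.append((i + n_fft / 2 + np.real(Xth / Xh_safe)) / fs)
        power.append(np.abs(Xh) ** 2)
    return np.array(times), np.array(freqs), np.array(power)
```

For a pure tone, the corrected frequencies of the energetic bins collapse onto the tone’s true frequency regardless of the FFT bin spacing, which is the sense in which the method’s resolution is not limited by the analysis grid.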

Methods: To illustrate their proof, the researchers analyzed white noise, which is similar to the static of a radio tuned between stations. They represented the noise as a two-dimensional picture with time displayed on the horizontal axis and frequency on the vertical axis. They then compared their white-noise image with one produced using traditional methods. The traditional image was blurry, with black dots that represented the complete absence of sound. The researchers found that their algorithm faithfully represented the shape of the noise pattern, including the positions of the black dots, but at a higher resolution.
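The traditional half of that comparison is easy to reproduce in outline. The sketch below (NumPy only; the noise length, window, and hop are illustrative choices, not taken from the paper) computes a conventional Gaussian-windowed spectrogram of Gaussian white noise. Its deep minima correspond to the black dots in the figure: isolated time-frequency points where the transform nearly vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(16384)      # Gaussian white noise

# Conventional (blurry) picture: magnitude-squared STFT with a Gaussian
# window; time runs along axis 0, frequency along axis 1.
n_fft, hop = 256, 64
j = np.arange(n_fft)
win = np.exp(-0.5 * ((j - n_fft / 2) / (n_fft / 6)) ** 2)
frames = [noise[i:i + n_fft] * win
          for i in range(0, len(noise) - n_fft, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2

# The "black dots" are near-zeros of the transform: points where the
# power dips far below its typical (median) level.
deepest = spec.min() / np.median(spec)
```

The paper’s point is that these zeros are structural features of the signal itself, so a faithful high-resolution method must place them where the traditional picture does, only more sharply.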

Next steps: The researchers plan to apply their understanding of reassigned time-frequency representation in an investigation of human hearing. By testing their algorithms against artificial neural networks that represent auditory nerves, they will try to create better neurological models of the way the brain makes sense of sound.


Tagged: Computing
