From left to right: Traditional representation of white noise; white noise analyzed using the researchers’ high-resolution method; a composite image, showing that the new approach faithfully captures the features of the traditional representation. (Courtesy of Timothy J. Gardner and Marcelo O. Magnasco)
Ultrahigh-Resolution Signal Analysis
An obscure algorithm could lead to more precise radar and a better understanding of human hearing
Source: “Sparse Time-Frequency Representations”
Timothy J. Gardner et al.
Proceedings of the National Academy of Sciences 103(16): 6094-6099
Results: Timothy Gardner of MIT and Marcelo Magnasco of Rockefeller University have proved that a method of high-resolution signal analysis produces faithful representations of sounds.
Why it matters: Signal analysis algorithms have been used for decades in speech recognition software, radar, and geological imaging. A method called “reassigned time-frequency representation” can capture sound data at a theoretically unlimited level of resolution. If used in radar, for example, it could help measure the speed of a helicopter’s blades, whereas radar using traditional methods could identify only the helicopter’s basic shape. However, no one had mathematically demonstrated that the method, for all its precision, produced faithful representations, in part because it remains relatively obscure in the signal-processing field. With its fidelity now proven, the method could be incorporated into radar systems and signal-processing software.
Methods: To illustrate their proof, the researchers analyzed white noise, which is similar to the static of a radio tuned between stations. They represented the noise as a two-dimensional picture with time displayed on the horizontal axis and frequency on the vertical axis. They then compared their white-noise image with one produced using traditional methods. The traditional image was blurry, with black dots that represented the complete absence of sound. The researchers found that their algorithm faithfully represented the shape of the noise pattern, including the positions of the black dots, but at a higher resolution.
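The traditional picture the researchers compared against is a spectrogram: white noise is cut into short windowed frames, and the Fourier transform of each frame fills one column of a time-frequency image. The sketch below illustrates that baseline analysis only (not the researchers’ reassignment method); the window length, hop size, and darkness threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generate white noise: independent Gaussian samples, like radio static.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)

# Short-time Fourier transform: slide a tapered window along the signal
# and take the FFT of each frame. Window/hop sizes are illustrative.
win_len, hop = 256, 64
window = np.hanning(win_len)
frames = [
    noise[start:start + win_len] * window
    for start in range(0, len(noise) - win_len + 1, hop)
]

# Rows = frequency bins, columns = time frames: the two-dimensional
# picture with time horizontal and frequency vertical.
power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
power = power.T  # shape: (frequency_bins, time_frames)

# The "black dots" in the image are cells where the transform nearly
# vanishes; here we flag cells far below the mean power (arbitrary cutoff).
n_dark = int((power < power.mean() * 1e-3).sum())
print(power.shape, n_dark)
```

The blur the researchers observed in this traditional image comes from the fixed window: a longer `win_len` sharpens frequency at the cost of time resolution, and vice versa, whereas the reassigned representation relocates each cell’s energy to sharpen both.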
Next steps: The researchers plan to apply their understanding of reassigned time-frequency representation in an investigation of human hearing. By testing their algorithms against artificial neural networks that represent auditory nerves, they will try to create better neurological models of the way the brain makes sense of sound.