MIT News magazine

Perfecting Pitch

Dennis Freeman, SM ’76, PhD ’86, helps explain why the ear is so good at telling one sound from another.

Jack Freeman worked for four decades in a noisy brick-making factory, but for years his wife found it hard to believe that he had a hearing loss. He would often stay up late watching TV–always with the volume turned low. “How can he claim not to hear me, too, when I come in to talk to him?” she would ask.

The Freemans’ son, Dennis, SM ’76, PhD ’86, an MIT professor of electrical engineering, has been studying the inner ear for more than 30 years. But only recently has he gotten to the bottom of his mother’s question. Freeman’s lab, in the Research Laboratory of Electronics’ Auditory Physiology Group, has made a fundamental discovery about the inner ear, one that helps explain why Freeman’s father has trouble with sounds from different sources.

Scientists have long known that people lose the ability to discriminate between sounds when exposure to excessive noise damages the delicate structures of the inner ear. (The problem can also be congenital.) But they have yet to uncover why the inner ear is normally such an extraordinary sensor–allowing us to hear everything from a low whisper to the roar of a jet engine, and to distinguish up to 30 tones between the frequencies of adjacent keys on a piano.
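The piano claim is easy to put in numbers with a little arithmetic. (The middle-C frequency below is the standard 261.63 Hz, not a figure from the article.)

```python
# Adjacent piano keys are a semitone apart: a frequency ratio of 2**(1/12).
# Distinguishing 30 steps within that interval means resolving ratios of
# 2**(1/(12*30)) -- about 0.19%, or roughly half a hertz near middle C.
semitone = 2 ** (1 / 12)          # ~1.0595, the ratio between adjacent keys
step = 2 ** (1 / (12 * 30))       # one of 30 distinguishable steps per semitone
delta_hz = 261.63 * (step - 1)    # that step expressed in Hz near middle C
```

In other words, a healthy ear resolves frequency differences of a fraction of a percent.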

These remarkable abilities are thought to arise from cochlear amplification, a process by which the inner ear’s response to sounds is amplified as much as a thousandfold by the collective action of 12,000 sensory receptor cells. Many researchers have studied how individual sensory cells–particularly those known as outer hair cells–work to magnify sounds, either making them loud enough to hear or enabling detection of minute changes in frequency. But scientists are just beginning to understand how different parts of the ear interact with those hair cells.

“There are 12,000 sensory cells in each ear, and they’re talking to each other in a feedback system,” Freeman says. “And that system is what we’re trying to understand.”

Freeman’s interest is personal as well as academic: when he got rheumatic fever in fourth grade, the streptomycin used to treat it weakened his hearing. Then, after his freshman year at Penn State, his hearing was further damaged by a summer job in the same thundering factory where his father worked. Even so, Freeman didn’t come to MIT in the 1970s to study the ear. He came to build computers. Then he met Professor Campbell Searle–author of his first circuitry textbook–and realized that he could apply electrical engineering to the study of hearing. Freeman worked with Searle and others to try to develop hearing aids that made speech sounds easier to understand by using signal processing to do some of the ear’s work for it. But that approach, Freeman says, “just didn’t work.”

By the early 1980s, Freeman had concluded that existing models of the ear were incomplete. So instead of trying to build a better hearing aid using those models, he embarked on a crash course in neurophysiology and cell physiology, so he could do his doctoral research on cochlear hydrodynamics. Over the last two decades, Freeman has refined his models to reflect new evidence, such as the discovery, by William Brownell of the Baylor College of Medicine, that sensory receptor cells act as mechanical amplifiers, actually generating motion in inner-ear structures in response to sound instead of simply reporting sound-induced motions to the brain.

Now Freeman’s lab has uncovered a key role played by a little-understood part of the inner ear. Using a clever experimental setup designed by graduate student Roozbeh Ghaffari ’01, MEng ’03, Freeman’s team demonstrated that the tectorial membrane, a structure traditionally thought to be inert, in fact moves, transmitting waves that travel at a precise speed, and in a direction perpendicular to that of other wave motion in the ear. Interaction between the two kinds of waves appears to make the hair cells more sensitive.

“It’s a very fundamental piece of work,” says Rahul Sarpeshkar ’90, an MIT associate professor of electrical engineering who works on bionic ears and cochlear implants. “People have suspected that the tectorial membrane could be part of a resonant system. But until now, no one has ever shown it experimentally.”

For about 60 years, inner-ear studies have focused on the sensory cells and their interaction with the basilar membrane, a group of thin elastic fibers. When a sound enters the ear, it causes the basilar membrane to move up and down, propagating a wave. The wave travels quickly along the membrane and down the spiral-shaped portion of the inner ear known as the cochlea, which is tuned to different frequencies along its length. When a wave reaches the part of the cochlea tuned to its frequency, it slows down. And as waves travel, they stimulate the hair cells located above the basilar membrane, which convert the waves into nerve impulses and also vibrate in a way that amplifies the wave motion.
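The frequency-to-place tuning described above is often summarized by Greenwood’s classic frequency-position function. The sketch below uses the standard human constants from the hearing literature; they are textbook values, not figures from Freeman’s work.

```python
import math

# Greenwood's frequency-position function for the human cochlea:
#   f(x) = A * (10**(a*x) - k)
# where x is the fractional distance along the basilar membrane from
# the apex (x = 0, low frequencies) to the base (x = 1, high frequencies).
# A = 165.4 Hz, a = 2.1, k = 0.88 are the standard human constants.
A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x):
    """Best frequency (Hz) at fractional position x along the cochlea."""
    return A * (10 ** (a * x) - k)

def frequency_to_place(f):
    """Fractional cochlear position tuned to frequency f (Hz)."""
    return math.log10(f / A + k) / a

# Example: middle C (261.63 Hz) maps to a spot about a fifth of the
# way up from the apex.
x_middle_c = frequency_to_place(261.63)
```

A wave launched at the base travels toward the position this map assigns to its frequency, slowing as it arrives.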

Individual sensory cells can’t produce cochlear amplification by themselves. To figure out how they collaborate, Freeman’s team looked to the tectorial membrane, which lies above the hair cells and in which they’re embedded.

But the tectorial membrane isn’t easy to study. “It’s like a slab of Jell-O,” says Alexander Aranyosi, PhD ’02, a research scientist who worked on the study. Roughly two centimeters long, less than half a millimeter wide, and thinner than a human hair, the membrane is hard to manipulate–and nearly transparent. If exposed to air, it shrivels up, since it’s 97 percent water.

The contents of the remaining 3 percent, however, are intriguing. In addition to sugar, the membrane contains alpha-tectorin and beta-tectorin, two proteins found nowhere else; mammals lacking the genes that make them have congenital hearing impairments. So Freeman encouraged Ghaffari to think about how to simulate natural stimulation of the tectorial membrane in the lab.

Ghaffari suspended a half-millimeter piece of a mouse’s tectorial membrane across two tiny supports, each 300 micrometers thick, which he built on a glass slide and placed in a saline solution that simulates the cochlear environment. One support is glued to the slide; the other is attached to a piezoelectric actuator and loosely coupled to the slide. When an oscillating voltage is applied to the actuator, it vibrates at a corresponding audio frequency and moves the attached support, causing a wave to travel down the suspended membrane. Using a stroboscopic imaging system developed earlier in Freeman’s lab and built by Aranyosi, Ghaffari measured nanometer-scale displacements of the membrane at up to several thousand cycles per second–frequencies perfect for hearing.
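The stroboscopic trick at the heart of that imaging system can be illustrated in a few lines: sample a fast vibration with flashes at a slightly different rate, and the motion appears to replay in slow motion at the beat frequency. The frequencies and amplitude below are made up for illustration, not taken from the actual apparatus.

```python
import math

# Stroboscopic imaging in miniature: flashing at f_strobe, slightly
# offset from the vibration frequency f_stim, makes a 4 kHz motion
# appear to repeat once per second (|f_stim - f_strobe| = 1 Hz).
f_stim = 4000.0      # membrane vibration frequency, Hz (illustrative)
f_strobe = 3999.0    # strobe/camera flash rate, Hz (illustrative)
amplitude_nm = 50.0  # peak displacement, nanometers (illustrative)

def displacement(t):
    """True membrane displacement (nm) at time t (seconds)."""
    return amplitude_nm * math.sin(2 * math.pi * f_stim * t)

# The positions recorded at successive flashes trace out one slow
# apparent cycle, even though the true motion is at 4 kHz.
samples = [displacement(n / f_strobe) for n in range(int(f_strobe))]
apparent_peak = max(samples)
```

The slowed-down apparent motion is what lets a camera resolve nanometer displacements at audio frequencies.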

The team observed that waves move side to side along the tectorial membrane (waves traveling along the basilar membrane move up and down). The researchers also discovered that waves move along the tectorial membrane at about the same speed as basilar-membrane waves that have reached the part of the cochlea tuned to their frequency. “When you’ve got two waves moving at the same speed, that gives them the possibility to interact,” Aranyosi says. “They can trade energy back and forth.” The two kinds of waves travel at the same speed at only one spot–where the cochlea is tuned to a sound’s frequency. Here, the ear is able to selectively amplify, and thus distinguish, a specific frequency.
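The “trading energy back and forth” picture has a familiar textbook analogue: two identical resonators with a weak coupling pass their energy back and forth at a slow beat rate. Here is a toy sketch of that analogue, with illustrative numbers rather than cochlear parameters.

```python
import math

# Toy model of energy exchange between two matched resonators. When
# two oscillators share the same resonant frequency and are weakly
# coupled, all the energy started in one of them flows to the other
# and back -- the same picture Aranyosi describes for speed-matched
# cochlear waves. All numbers below are illustrative.
omega = 2 * math.pi * 1000.0  # shared resonant frequency (rad/s)
coupling = 0.01               # weak coupling, as a fraction of omega

def envelopes(t):
    """Amplitude envelopes (|a1|, |a2|) with all energy in 1 at t=0."""
    half = 0.5 * coupling * omega * t
    return abs(math.cos(half)), abs(math.sin(half))

# At this time the envelope of oscillator 1 reaches zero:
# the energy has fully transferred to oscillator 2.
t_transfer = math.pi / (coupling * omega)
a1, a2 = envelopes(t_transfer)
```

If the resonant frequencies (or, for waves, the speeds) don’t match, the transfer is never complete, which is why the amplification is confined to the one spot where the two waves move together.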

The group’s next step is to measure these interactions in vivo. “Once we have a better understanding of how those wave interactions take place, then we can build hearing aids that actually correct for the real problem rather than simply trying to make everything sound louder,” Aranyosi says. The researchers also plan to study the genes that produce the tectorial membrane’s two unique proteins for more clues about how cochlear amplification works.

In the nonhierarchical Freeman lab, discussion topics range from Eastern philosophies to new methodologies for probing the cochlea. “We all treat each other as colleagues and coworkers, as opposed to professor and student or research scientist and student,” Aranyosi says. “Everybody has something to contribute, and everyone is given an equal voice in how we do things.”

“A lot of subtle ideas come out of these meetings where we’re all just hanging out with Denny,” Ghaffari says. “That’s just the way Denny is.”
