Most hearing aids on the market today are designed to mimic what happens in our inner ear, specifically the amplifying role of the outer hair cells.
However, the lab of Laurel Carney, Professor of Biomedical Engineering, is studying what happens beyond the inner ear: in the complex network of auditory nerve fibers that transmit the inner ear's electrical signals to the brain, and in the auditory center of the midbrain, which processes those signals.
Therein lies the key, Carney believes, to creating hearing aids that make human speech not only louder but also clearer.
An important focus of her research combines physiological and behavioral studies with computer modeling to study the 30,000 auditory nerve fibers on each side of our brain that transmit electrical signals from the inner ear. Critical to this is the initial transduction of mechanical energy into electrical signals, which occurs in the inner hair cells of the inner ear's organ of Corti.
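Models of this transduction step often use a saturating nonlinearity. The sketch below is a generic, illustrative version, not the Carney lab's published model: a sigmoid maps mechanical displacement to a bounded "receptor potential", and every parameter value is an arbitrary assumption for demonstration.

```python
import numpy as np

def ihc_transduction(displacement, slope=1.0, x0=0.0, sat=1.0):
    """Toy saturating (sigmoid) transduction nonlinearity: maps mechanical
    input (e.g. stereocilia displacement) to a bounded receptor potential.
    Parameters are illustrative, not physiological."""
    return sat / (1.0 + np.exp(-slope * (displacement - x0)))

# Because the function saturates, loud peaks are compressed while the timing
# of the waveform's fluctuations is preserved, which shapes the firing
# patterns passed on to the auditory nerve.
t = np.arange(0, 0.01, 1 / 20_000)
soft = ihc_transduction(0.5 * np.sin(2 * np.pi * 500 * t))
loud = ihc_transduction(5.0 * np.sin(2 * np.pi * 500 * t))
print(f"soft swing: {soft.max() - soft.min():.2f}, "
      f"loud swing: {loud.max() - loud.min():.2f}")
```

A tenfold increase in input level produces far less than a tenfold increase in output swing, which is the compressive behavior the saturation is meant to capture.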
"This is critical for shaping the patterning of responses in the auditory nerves, and the patterning of those responses at this first level, where the signal comes into the brain, has a big effect on the way the midbrain responds to the relatively low frequencies of the human voice," Carney explained.
In people with healthy hearing, the initial transduction results in a wide contrast in how various auditory nerve fibers transmit this information. "The responses of some fibers are dominated by a single tone, or harmonic, within the sound; others respond to fluctuations that are set up by the beating of multiple harmonics," Carney said. In the midbrain, neurons are capable of assimilating this contrast of fluctuating and nonfluctuating inputs across varying frequencies. They begin the process of parsing out the sounds of speech and any other vocalizations that involve low frequencies. A better understanding of how this process works in the midbrain, Carney believes, could yield new strategies for designing hearing aids.
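The "beating" Carney describes is straightforward to demonstrate. The sketch below, a simplified illustration rather than anything from the lab's models, sums two neighboring harmonics of a hypothetical 200 Hz voice fundamental and compares the resulting envelope fluctuation against that of a single harmonic; the sample rate, durations, and smoothing window are arbitrary choices.

```python
import numpy as np

fs = 20_000                          # sample rate in Hz (arbitrary for this sketch)
t = np.arange(0, 0.1, 1 / fs)        # 100 ms of signal
f0 = 200.0                           # hypothetical voice fundamental frequency

# A fiber tuned to one harmonic effectively receives a single tone,
# whose amplitude envelope is nearly flat:
single = np.cos(2 * np.pi * 4 * f0 * t)

# A fiber tuned between harmonics receives two of them at once; their sum
# "beats", i.e. its envelope fluctuates at the difference frequency, f0:
pair = np.cos(2 * np.pi * 4 * f0 * t) + np.cos(2 * np.pi * 5 * f0 * t)

def fluctuation_depth(x, smooth_s=0.0015):
    """Peak-to-trough depth of the smoothed amplitude envelope."""
    win = int(smooth_s * fs)                     # average out the fast carrier,
    kernel = np.ones(win) / win                  # keep the slow envelope
    env = np.convolve(np.abs(x), kernel, mode="valid")
    return env.max() - env.min()

print(f"single harmonic, envelope depth ~ {fluctuation_depth(single):.2f}")
print(f"two beating harmonics, depth    ~ {fluctuation_depth(pair):.2f}")
```

The single harmonic yields a small depth while the beating pair fluctuates strongly at f0, the kind of contrast across fibers that, on this account, the midbrain exploits.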
"A lot of people have tried to design hearing aids based just on what is going on in the inner ear, but there's a lot of redundancy in the information generated there. We argue that you need to step back and, from the viewpoint of the midbrain, focus on what really matters. It's the pattern of fluctuations in the auditory nerve fibers that the midbrain responds to. The sort of strategies we're suggesting are not intuitive. The idea of trying to restore the contrast in the fluctuations across different frequency channels has not been tried before. The burden is on us to prove that it works," she added.
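To make the general idea concrete, here is a toy sketch of contrast restoration across frequency channels. It is emphatically not Carney's algorithm: the filter-bank layout, the envelope-depth metric, and the expand parameter are all assumptions made for illustration. Channels whose envelopes fluctuate more than average receive a gain above one; flatter channels receive less, so the across-channel contrast is exaggerated rather than smoothed away.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16_000  # sample rate in Hz (arbitrary for this sketch)

def channel_fluctuation_gains(x, centers, env_cut=400.0, expand=0.5):
    """Toy contrast-enhancement sketch: choose per-channel gains so that
    differences in envelope-fluctuation depth across frequency channels
    are exaggerated rather than flattened. Purely illustrative."""
    depths, bands = [], []
    for fc in centers:
        # crude filter bank: one bandpass channel per center frequency
        sos = butter(4, [fc * 0.85, fc * 1.15], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        # crude envelope: rectify, then low-pass below the channel center
        env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(band))
        depths.append(env.std() / (env.mean() + 1e-12))  # relative depth
        bands.append(band)
    depths = np.asarray(depths)
    # expand the across-channel contrast in fluctuation depth
    spread = depths.max() - depths.min() + 1e-12
    gains = 1.0 + expand * (depths - depths.mean()) / spread
    y = sum(g * b for g, b in zip(gains, bands))
    return y, gains

# Usage on a synthetic vowel-like harmonic complex (f0 = 200 Hz): channels
# centered on a harmonic stay flat, channels between harmonics beat.
t = np.arange(0, 0.5, 1 / fs)
x = sum(np.cos(2 * np.pi * k * 200.0 * t) for k in range(1, 11))
y, gains = channel_fluctuation_gains(x, centers=[400.0, 500.0, 800.0, 900.0])
print(np.round(gains, 2))
```

In this toy setup, the between-harmonic channels (500 and 900 Hz), whose envelopes beat at the fundamental, are boosted relative to the on-harmonic channels, a crude stand-in for restoring the fluctuation contrast that impaired ears lose.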
To that end, Carney works closely with Joyce McDonough, Professor of Linguistics, in exploring how auditory nerve fiber transmissions play a role in coding speech sounds. Her lab also works closely with that of Jong-Hoon Nam, Assistant Professor of Mechanical Engineering and of Biomedical Engineering, whose inner-ear studies were described in this newsletter last week. Carney shares what her lab is learning about the interface of auditory nerve fiber signaling with the brain, "and in return, we try to include in our models a lot of the nonlinear properties of the inner ear that he (Nam) has been working on. By interacting with his lab, we hope to continue to modernize our model as he discovers more," Carney said.