Computer Models of Neuronal Sound Processing in the Brain Lead to Cochlear Implant Improvements

Children learning to speak depend on functional hearing. So-called cochlear implants allow deaf people to hear again by stimulating the auditory nerve directly. Researchers at the Technische Universitaet Muenchen (TUM) are working to overcome the current limits of this technology by investigating how sound signals are encoded in the auditory nerve and subsequently processed by neurons in the brain. Using the computer models developed at TUM, manufacturers of cochlear implants can improve their devices.

Intact hearing is a prerequisite for learning to speak. This is why children who are born deaf are fitted with cochlear implants as early as possible. A cochlear implant system consists of a speech processor and a transmitter coil worn behind the ear, together with the actual implant: an encapsulated microprocessor placed under the skin that stimulates the auditory nerve directly via an electrode with up to 22 contacts.
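
To make this signal chain concrete, the following Python sketch implements a CIS-style ("continuous interleaved sampling") processing strategy of the kind widely used in cochlear implant speech processors: the sound is split into frequency bands, and each band's envelope determines the stimulation delivered on one electrode contact. The sample rate, band edges and channel count here are illustrative assumptions, not the parameters of any particular device.

```python
# Minimal sketch of a CIS-style processing chain: bandpass filterbank,
# envelope extraction per band, one band per electrode contact.
# All parameters (sample rate, band edges, channel count) are assumptions
# chosen for illustration only.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000        # audio sample rate in Hz (assumed)
N_CHANNELS = 12    # fewer than the 22 contacts, for brevity

def band_edges(lo=200.0, hi=7_000.0, n=N_CHANNELS):
    """Logarithmically spaced band edges, roughly mimicking the cochlea."""
    return np.geomspace(lo, hi, n + 1)

def channel_envelopes(signal, fs=FS):
    """Bandpass-filter each channel, then take the magnitude of the
    analytic signal as a simple envelope estimate."""
    edges = band_edges()
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))
    return np.array(envelopes)  # shape: (N_CHANNELS, len(signal))

# A 440 Hz tone should excite mainly the low-frequency channels.
t = np.arange(0, 0.1, 1 / FS)
tone = np.sin(2 * np.pi * 440 * t)
print(np.round(channel_envelopes(tone).mean(axis=1), 3))
```

In an actual processor the envelopes would modulate interleaved current pulses on the electrode contacts rather than being read out directly.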

Adults who have lost their hearing can also benefit from cochlear implants. The devices have become the most successful neuroprostheses to date, allowing patients to understand spoken language quite well again. But the technology reaches its limits when listening to music, for example, or when many people speak at once. Fitting cochlear implants in both ears brings initial improvements.

A further major leap forward would ensue if spatial hearing could be restored. Since our ears are located a few centimeters apart, sound waves from a given source generally reach one ear before the other. The difference is at most a fraction of a millisecond, and the brain can resolve arrival-time differences of just a few millionths of a second, enough to localize the sound source. Modern microprocessors can react sufficiently fast, but a nerve impulse takes around one hundred times longer. To achieve a perfect interplay, new coding strategies need to be developed.
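
The geometry behind this can be checked in a few lines of Python. Assuming an ear spacing of about 20 cm and a speed of sound of 343 m/s, the arrival-time difference for a distant source follows from simple trigonometry; head shadowing and diffraction are ignored in this sketch.

```python
# Back-of-the-envelope interaural time difference (ITD) for a far-field
# source: the extra path to the farther ear is d * sin(azimuth).
# Ear spacing is an assumed round number, not a measured head size.
import math

EAR_SPACING = 0.20       # metres (assumed)
SPEED_OF_SOUND = 343.0   # metres per second at room temperature

def itd_seconds(azimuth_deg):
    """ITD for a far-field source; azimuth 0 means straight ahead."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for az in (0, 15, 45, 90):
    print(f"azimuth {az:3d} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")
```

Even the maximum ITD, for a source directly to one side, comes out at only around 0.6 milliseconds, which is why implant processors must control stimulation timing so precisely.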

Modeling the auditory system
The perception of sound information begins in the inner ear. There, hair cells translate the mechanical vibrations into so-called action potentials, the language of nerve cells. Neural circuitry in the brain stem, mesencephalon and diencephalon transmits the signals to the auditory cortex, where around 100 million nerve cells are responsible for creating our perception of sound. Unfortunately, this "coding" is still poorly understood by science.
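
As a didactic caricature of this first coding step (not the TUM group's model), the sketch below half-wave rectifies a tone, since hair cells respond to deflection in one direction only, and converts the result into a rate-modulated spike train. The spikes cluster on the positive half-cycles of the stimulus, the "phase locking" that carries the timing information binaural hearing depends on. Sample rate and peak firing rate are assumed values.

```python
# Toy hair-cell-to-spikes conversion: half-wave rectification followed by
# rate-modulated (approximately Poisson) spiking. Didactic only.
import numpy as np

rng = np.random.default_rng(0)
FS = 20_000                                # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / FS)              # 100 ms of signal
stimulus = np.sin(2 * np.pi * 250 * t)     # 250 Hz tone

drive = np.maximum(stimulus, 0.0)          # half-wave rectification
rate = 400.0 * drive                       # peak rate in spikes/s (assumed)
spikes = rng.random(t.size) < rate / FS    # Bernoulli approximation

spike_times_ms = t[spikes] * 1e3
print(f"{spike_times_ms.size} spikes; first few (ms):",
      np.round(spike_times_ms[:5], 2))
# The spike times fall on the positive half-cycles of the 250 Hz tone,
# i.e. the spike train is phase-locked to the stimulus.
```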

"Getting implants to operate more precisely will require strategies that are better geared to the information processing of the neuronal circuits in the brain. The prerequisite for this is a better understanding of the auditory system," explains Professor Werner Hemmert, director of the Department for Bio-Inspired Information Processing, at the TUM Institute of Medical Engineering (IMETUM).

Based on physiological measurements of neurons, his working group successfully built a computer model of the acoustic coding in the inner ear and the subsequent neuronal information processing in the brain stem. This model allows the researchers to develop coding strategies further and test them in experiments with normal-hearing listeners as well as implant users.
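
One way such a model chain can be probed is with a Jeffress-style read-out, sketched below: cross-correlate the left-ear and right-ear signals and take the lag of strongest coincidence as the estimated ITD. In a real evaluation the two signals would first pass through the inner-ear and brain-stem models; here a delayed sine tone stands in, and the 300-microsecond delay is purely illustrative.

```python
# Jeffress-style ITD read-out: find the internal delay at which the left
# and right inputs coincide best. Signals and delay are illustrative.
import numpy as np

FS = 44_100
t = np.arange(0, 0.02, 1 / FS)
left = np.sin(2 * np.pi * 500 * t)

true_itd = 300e-6                      # 300 microseconds (assumed)
shift = int(round(true_itd * FS))      # about 13 samples
right = np.roll(left, shift)           # right ear receives the sound later

lags = np.arange(-2 * shift, 2 * shift + 1)
corr = [np.dot(left, np.roll(right, -k)) for k in lags]
best_lag = lags[int(np.argmax(corr))]
print(f"estimated ITD: {best_lag / FS * 1e6:.0f} microseconds")
# Prints roughly 295 microseconds: the estimate is quantized to the
# sample grid, so it cannot hit 300 microseconds exactly at this rate.
```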

The fast track to better hearing aids
For the manufacturers of cochlear implants collaborating with the TUM researchers, these models are valuable evaluation tools. Preliminary testing on the computer translates into enormous time and cost savings. "Many ideas can now be tested significantly faster. Then only the most promising processes need to be evaluated in cumbersome patient trials," says Werner Hemmert. The new models thus have the potential to shorten development cycles significantly. "In this way, patients will benefit from better devices sooner."

The working group reports on its research in the newly published book "The Technology of Binaural Listening," which will be presented at the 166th meeting of the Acoustical Society of America in San Francisco (2-6 December 2013).

M. Nicoletti, C. Wirtz, W. Hemmert: Modeling Sound Localization with Cochlear Implants. In: The Technology of Binaural Listening. Springer-Verlag, Berlin Heidelberg, 2013.
