It could mean good news for the hard of hearing who want to focus on an entertaining guest at a party. Or possibly bad news for those who would really prefer not to know what’s going on at a hellish dinner.
Either way, scientists have invented a mind-controlled hearing aid that can filter out background noise in loud places and focus on a single strand of conversation.
Engineers at Columbia University in New York are developing technology that constantly monitors the brain activity of the wearer to determine if they are conversing with a specific person and then amplifies that voice.
Existing hearing aids can suppress background noise but are unable to pick out which person a user is listening to in a noisy environment.
However, a breakthrough in auditory attention decoding (AAD) — how humans sift sounds — means that researchers are closer to developing a hearing aid that can cut through multiple conversations and background noise.
The team of engineers at Columbia developed a system that receives a single audio channel containing a mixture of speakers.
The system then automatically separates out the individual speakers and uses the listener’s neural signals to determine which one is being listened to, and amplifies it.
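The pipeline described, once the speakers have been separated, amounts to matching each candidate voice against an envelope decoded from the listener's brain activity and boosting the best match. The sketch below is a minimal illustration of that matching-and-amplifying step, not the Columbia team's actual method: the `envelope` function, the correlation-based scoring, and the simulated "neural" signal are all simplifying assumptions standing in for the deep-network separation and neural decoding the researchers used.

```python
import numpy as np

def envelope(signal, win=160):
    # Crude amplitude envelope: moving average of |signal|.
    # (Assumption: the real system uses a learned representation.)
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def amplify_attended(separated, neural_envelope, gain=4.0):
    # Score each separated speaker by how well its envelope correlates
    # with the envelope decoded from the listener's neural signals,
    # then remix with the best-matching speaker amplified.
    scores = [np.corrcoef(envelope(s), neural_envelope)[0, 1]
              for s in separated]
    attended = int(np.argmax(scores))
    mix = sum(s * (gain if i == attended else 1.0)
              for i, s in enumerate(separated))
    return attended, mix

# Toy demo: two "speakers" with different modulation rates.
fs = 16000
t = np.arange(fs) / fs
spk_a = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
spk_b = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 7 * t))

# Pretend the brain decoder noisily reconstructed speaker A's envelope.
rng = np.random.default_rng(0)
neural = envelope(spk_a) + 0.05 * rng.standard_normal(fs)

attended, output = amplify_attended([spk_a, spk_b], neural)
```

In this toy case the decoder picks speaker A, because A's 3 Hz modulation pattern correlates far better with the simulated neural envelope than B's 7 Hz pattern does; the real challenge, as the article notes, is doing the separation and decoding fast enough to be usable.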
Nima Mesgarani, associate professor of electrical engineering, said that the entire process was achieved within ten seconds.
“This work combines the state of the art from two disciplines: speech engineering and auditory attention decoding,” he said.
“We were able to develop this system once we made the breakthrough in using deep neural network models to separate speech.”
The project builds on earlier research by Professor Mesgarani’s team that discovered it was possible to tell a listener’s target by tracking the nerves’ responses in the brain.
“Translating these findings to real-world applications poses many challenges,” James O’Sullivan, a research scientist working with Professor Mesgarani, said. “Our study takes a significant step towards automatically separating an attended speaker from the mixture.”
Professor Mesgarani added: “Our system demonstrates a significant improvement in both subjective and objective speech quality measures — almost all of our subjects said they wanted to continue to use it.
“Our novel framework for AAD bridges the gap between the most recent advancements in speech-processing technologies and speech prosthesis research and moves us closer to the development of realistic hearing aid devices that can automatically and dynamically track a user’s direction of attention and amplify an attended speaker.”
The research, carried out in collaboration with Columbia University Medical Center’s department of neurosurgery, Hofstra-Northwell School of Medicine, and the Feinstein Institute for Medical Research, is published in the Journal of Neural Engineering.
Mr Alan Hopkirk, clinical director of The Invisible Hearing Clinic, said: “Exciting, but not actually new. As the article states, at present there is a ten-second delay, which would be unacceptable to most users. There is also the question of size: research subjects by their very nature tend to be quite happy trundling a shopping trolley or backpack around with them, but today’s patients want something the size of a pea!”