Tech Gives Sound Directionality to Hearing Aids

System identifies faces and links them to speech

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Hearing aids can be critical for people with hearing loss, yet these devices still have shortcomings, including a lack of adequate sound directionality. To overcome this issue, one research team in Taiwan has combined several technologies, including computer vision, specialized algorithms, and microphone arrays, to give users a better ear for where sound is coming from. The new device, designed primarily for people with mild to moderate hearing loss, is described in a study published 5 December in IEEE Sensors Journal.

The proposed design includes an innovative dual-layer microphone array worn on the ears and a necklace-style wearable device that incorporates a camera with computer-vision AI. An algorithm helps the computer-vision component find faces in the scene and predict which face the sound is coming from. When the speaker is out of range of the computer-vision system, an algorithm that predicts the sound's origin from its angle and time of arrival at the microphones takes over.
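The article doesn't detail the microphone-array algorithm itself, so here is a minimal sketch of how a time-of-arrival estimate for a two-microphone pair might work; the `estimate_doa` function, the 18-centimeter spacing, and the 16-kilohertz sampling rate are illustrative assumptions rather than details from the study.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second in air


def estimate_doa(left: np.ndarray, right: np.ndarray,
                 fs: int, mic_spacing: float) -> float:
    """Estimate a source's azimuth (degrees) from a stereo frame.

    Cross-correlates the left and right microphone signals to find the
    time difference of arrival (TDOA), then converts that delay into an
    angle. 0 degrees is straight ahead; positive is toward the right.
    """
    # Peak of the full cross-correlation gives the delay (in samples)
    # of the left signal relative to the right one.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    tdoa = lag / fs  # seconds

    # Clamp to the physically possible range before taking the arcsine.
    max_tdoa = mic_spacing / SPEED_OF_SOUND
    tdoa = np.clip(tdoa, -max_tdoa, max_tdoa)
    return float(np.degrees(np.arcsin(tdoa / max_tdoa)))


# Example: a noise burst reaching the right microphone 4 samples earlier,
# which corresponds to roughly 28 degrees at this (assumed) ear spacing.
fs, spacing = 16_000, 0.18
rng = np.random.default_rng(0)
source = rng.standard_normal(800)
right_sig = source
left_sig = np.concatenate([np.zeros(4), source[:-4]])
print(f"estimated azimuth: {estimate_doa(left_sig, right_sig, fs, spacing):.1f} degrees")
```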

In the last step, a mixing algorithm modifies the sound that users hear so they can better detect its direction, adjusting the volume of each channel to create an immersive auditory experience.
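The study's mixing algorithm isn't specified in this article; the sketch below shows one common way to rebalance the left and right channels by direction, constant-power panning, with the `directional_mix` function and its gain law assumed for illustration rather than taken from the paper.

```python
import numpy as np


def directional_mix(mono: np.ndarray, azimuth_deg: float,
                    base_gain: float = 1.0) -> tuple[np.ndarray, np.ndarray]:
    """Rebalance a mono signal across the left and right channels.

    Uses constant-power panning: the channel nearer the estimated source
    gets more level, so the listener perceives the direction, while the
    overall loudness stays roughly constant.
    """
    # Map azimuth (-90..+90 degrees, positive = right) to a pan angle of 0..90 degrees.
    pan = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * (np.pi / 2)
    left_gain = base_gain * np.cos(pan)
    right_gain = base_gain * np.sin(pan)
    return mono * left_gain, mono * right_gain


# A source 30 degrees to the right gets boosted in the right ear.
left, right = directional_mix(np.ones(4), azimuth_deg=30.0)
print(left[0], right[0])  # ~0.5 vs ~0.87
```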

Yi-Chun “Terry” Du, an associate professor of biomedical engineering at National Cheng Kung University, in Taiwan, says sound directionality is very important for the quality of life and safety of people with hearing loss. “We hope to apply this module in the daily life of the hearing [impaired] elderly patients, further improving the life qualities of the mild-to-moderate hearing-loss patients,” he says.

Directional Hearing Aids

In their study, Du’s team tested the hearing aid in a group of 30 patients. They found that study participants were able to correctly identify the source of sounds using the computer-vision component of the hearing aid with 94 percent or higher accuracy, at distances at which people typically hold conversations (160 centimeters or less). When the sound originated from an area detectable by the microphones but not by the computer-vision device, users were still able to identify its source with more than 90 percent accuracy.

“Lastly, [we found that] the mixing algorithm effectively adjusts the volume of the left and right channels, enabling users to determine the sound-source location,” says Du. In a separate study of elderly patients, the combined technology allowed users to achieve a 100 percent success rate on a clinical directional test.

He notes that the recognition range of the computer-vision camera is limited to 75 degrees, so it cannot match the breadth of human binocular vision, which extends to 120 degrees. “In the future, the use of wide-angle lenses or dual cameras will be considered to achieve a similar angle as the human eye, making it more suitable for daily use,” says Du.
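To picture how the system might hand off between the camera and the microphone array around that limit, here is a hypothetical selector; only the 75-degree field of view comes from the article, while `choose_localizer` and its threshold logic are assumptions for illustration.

```python
def choose_localizer(face_azimuth_deg: float | None,
                     camera_fov_deg: float = 75.0) -> str:
    """Pick which cue drives directionality for the current frame.

    If a detected face lies within the camera's field of view, trust the
    vision estimate; otherwise fall back to the microphone-array estimate.
    The 75-degree field of view is the figure reported for the prototype.
    """
    if face_azimuth_deg is not None and abs(face_azimuth_deg) <= camera_fov_deg / 2:
        return "vision"
    return "microphone_array"


print(choose_localizer(20.0))   # within the camera's view -> "vision"
print(choose_localizer(60.0))   # outside the 75-degree cone -> "microphone_array"
print(choose_localizer(None))   # no face detected -> "microphone_array"
```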

While users in the study reported a notable improvement in their ability to hear sounds and determine their direction, many of them said they would still be hesitant to use the device. Such reluctance has long been common among people who qualify for hearing aids. Du says that future research could focus on exploring the underlying reasons and on finding ways to boost hearing-aid adoption among those who could benefit.

Nevertheless, the device developed by Du and his colleagues shows advantages over existing aids, and the researchers are in early collaborations with companies potentially interested in commercializing the product.

Du says his team is also interested in expanding this technology to help users recognize who is speaking. “We plan to integrate [a] computer-vision-based smart reminder function for facial recognition which reminds the user who they are speaking to,” he explains. “This allows a smoother transition for conversation and allows the potential to create closer bonds between the user and the target speaker.”

The Conversation (2)

Esther Lumsdon · 13 Jan, 2024

The most important part of this for my hearing loss is boosting the volume of people who are speaking to me while not looking at me (looking at the object they are talking about). If I can see the back of your head while you are speaking to me, my comprehension of your speech is low.

John Woodgate · 28 Dec, 2023

Apart from the not-sensible aversion to admitting deafness, in contrast to the ready acceptance of spectacles, people with mild to moderate hearing loss can 'get by' in most situations, so don't see a need to spend $$$$ on hearing aids. The availability of OTC aids at lower cost may slowly affect acceptance.