Technology Update Session (Session 3C)

In recent years, artificial intelligence (AI) has dramatically advanced the noise suppression capabilities of hearing aids. Until recently, the performance of single-channel noise reduction algorithms remained very limited, with signal-to-noise ratio improvements often insufficient to yield meaningful clinical benefits. Help-in-noise systems in hearing aids therefore had to rely mainly on spatial sound processing via advanced beamforming techniques to provide sufficient support to users. In 2020, the first deep neural network (DNN) trained to make speech stand out in noisy sound scenes, and able to run directly on a commercial hearing aid chip, was introduced. This first implementation was shown to provide significant speech intelligibility benefits even without spatial sound processing, outperforming traditional noise reduction approaches.

Recently, a second-generation DNN has enabled even better separation of speech from noise thanks to more diverse and precise training. Coupled with new strategies that prevent wind noise from contaminating the input signal path, and with advanced processing of potentially disturbing sounds such as sudden sounds, this system has become very effective at providing users with enhanced speech clarity. DNN-based noise suppression has the advantage that it can handle noise sources from any direction around the user, and it is now also becoming available in some of the smallest hearing devices on the market, allowing an even wider range of users to benefit from this type of technology.

As these innovations evolve quickly, it becomes increasingly important to strike the right balance between providing sufficient support in difficult situations and keeping users connected to their surroundings, while also ensuring good sound quality and a pleasant listening experience in more common, less complex situations. Achieving this requires careful consideration of user needs, which can take different forms. In-the-moment listening needs are defined by the acoustic environment at a given time and the user's intentions in that environment. By integrating input from acoustic and motion sensors, hearing aids can now seamlessly adjust, in real time, how much processing AI-based advanced features apply, so that the sound processing optimally supports the situation the user is in and their inferred intentions in that situation.

In addition, general audiological needs must be considered to provide an appropriate processing baseline for individual users. These are the needs that the hearing care professional identifies in the clinic via subjective or objective diagnostic tools, enabling personalized hearing care beyond audibility. The Audible Contrast Threshold (ACT) test is an example of such a tool: it now allows automatic adjustment of DNN-based noise suppression settings in hearing aids, based on a more objective assessment of how much the user struggles to hear in noise. As the evolution of AI technology and its applicability to small hearing devices opens new avenues for sound processing, it remains crucial to tailor such solutions to the needs of users to ensure they benefit from them in their everyday lives.
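The session does not disclose the network design, but on-device DNN noise suppressors of this kind are commonly described as spectral-masking systems: a network estimates a per-frequency gain mask that is applied to the noisy short-time spectrum before resynthesis. The sketch below illustrates only that general idea; MaskNet, its layer sizes, and the STFT parameters are illustrative assumptions, and a real on-chip network would be trained on large, diverse speech-in-noise corpora and heavily optimized for the hearing aid hardware.

```python
# Minimal sketch of mask-based DNN noise suppression (assumed design;
# not the actual architecture running on the hearing aid chip).
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Tiny illustrative network: log-magnitude spectrum in, gain mask out."""
    def __init__(self, n_bins: int = 257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 128), nn.ReLU(),
            nn.Linear(128, n_bins), nn.Sigmoid(),  # per-bin gain in [0, 1]
        )

    def forward(self, logmag: torch.Tensor) -> torch.Tensor:
        return self.net(logmag)

def enhance(noisy: torch.Tensor, model: nn.Module,
            n_fft: int = 512, hop: int = 128) -> torch.Tensor:
    """Estimate a per-bin mask in the STFT domain and resynthesize."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy, n_fft, hop, window=window, return_complex=True)
    logmag = torch.log1p(spec.abs()).T      # (frames, bins)
    mask = model(logmag).T                  # (bins, frames)
    return torch.istft(spec * mask, n_fft, hop, window=window)

model = MaskNet()           # untrained here; shown for structure only
noisy = torch.randn(16000)  # stand-in for one second of noisy audio
clean_estimate = enhance(noisy, model)
```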
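The wind-handling strategies mentioned above are likewise not detailed. A classic ingredient in wind management, shown here purely as an assumed illustration, is detection via inter-microphone coherence: wind turbulence excites the two microphones largely independently, whereas acoustic sound arrives correlated, so low coherence at low frequencies indicates wind. The 500 Hz band and the 0.4 threshold below are arbitrary example values.

```python
# Sketch of coherence-based wind detection between two hearing-aid mics.
# This is a textbook technique used for illustration, not the actual
# wind strategy described in the session.
import numpy as np
from scipy.signal import coherence

def wind_detected(mic_front: np.ndarray, mic_rear: np.ndarray,
                  fs: int = 16000, threshold: float = 0.4) -> bool:
    f, coh = coherence(mic_front, mic_rear, fs=fs, nperseg=256)
    low = coh[f < 500.0]   # wind energy dominates at low frequencies
    return float(np.mean(low)) < threshold

rng = np.random.default_rng(0)
wind = rng.standard_normal((2, 16000))   # uncorrelated channels: "wind"
t = np.arange(16000) / 16000
tone = np.sin(2 * np.pi * 200 * t)       # identical channels: acoustic sound
print(wind_detected(wind[0], wind[1]))   # True: low inter-mic coherence
print(wind_detected(tone, tone))         # False: fully coherent
```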
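The real-time steering driven by acoustic and motion sensors can be pictured as a small controller that maps sensor estimates to a processing strength. The toy sketch below is a hypothetical illustration: SensorFrame, its fields, and the specific mapping are invented for this example and do not represent any manufacturer's actual fusion logic.

```python
# Hypothetical sensor-driven steering of noise-suppression strength.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    snr_db: float         # estimated input SNR from the acoustic analysis
    speech_present: bool  # voice-activity estimate
    user_moving: bool     # from the motion (accelerometer) sensor

def suppression_strength(frame: SensorFrame) -> float:
    """Return a noise-suppression strength in [0, 1].

    Intuition encoded here: apply more help when speech is present at
    poor SNR and the user is stationary (likely a focused conversation),
    and back off when the user is moving, to keep them connected to
    their surroundings.
    """
    if not frame.speech_present:
        return 0.2  # light comfort processing only
    # Scale help inversely with SNR between +10 dB (easy) and -5 dB (hard).
    difficulty = min(max((10.0 - frame.snr_db) / 15.0, 0.0), 1.0)
    if frame.user_moving:
        difficulty *= 0.6  # preserve surroundings awareness on the move
    return difficulty

frame = SensorFrame(snr_db=0.0, speech_present=True, user_moving=False)
print(suppression_strength(frame))  # -> ~0.67: strong help in a hard scene
```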
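Finally, ACT results are reported in dB nCL (normalized contrast loss), with higher values indicating greater difficulty hearing in noise. How fitting software translates a score into DNN noise-suppression settings is not described here; the function below is only a hypothetical linear mapping onto discrete help levels, with an assumed clamp range, to make the idea concrete.

```python
# Illustrative (not prescriptive) mapping from an ACT result to a
# discrete noise-suppression level. Range and mapping are assumptions.
def act_to_suppression_setting(act_db_ncl: float,
                               act_min: float = -4.0,
                               act_max: float = 16.0,
                               levels: int = 5) -> int:
    """Map an ACT value onto discrete suppression levels 1..levels.

    Higher dB nCL means more contrast loss, i.e. more difficulty
    hearing in noise, so the prescribed level rises with the score.
    """
    clamped = min(max(act_db_ncl, act_min), act_max)
    fraction = (clamped - act_min) / (act_max - act_min)
    return 1 + round(fraction * (levels - 1))

print(act_to_suppression_setting(7.0))   # mid-range loss -> level 3
print(act_to_suppression_setting(-2.0))  # near-normal contrast -> level 1
```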