Technology Update Session
Session 2C

Machine learning techniques have been used in hearing aids to identify different types of acoustic environments, especially when speech is present. These algorithms are trained on an external computer, and a scaled-down version, constrained by the device's processing power and memory, is implemented in the hearing aid. Deep neural networks (DNNs), a subset of machine learning, open the possibility of creating more sophisticated algorithms with better accuracy. A DNN mimics aspects of how the brain processes information by passing data through layers of interconnected nodes that, after extensive training, can decode that information.

Noise reduction is one of many hearing aid features that can benefit from DNN processing. Starkey's advanced processor incorporates an on-chip DNN accelerator capable of real-time processing without sacrificing processing power or battery life. The output of DNN processing creates a more accurate snapshot of the listener's acoustic environment and allows for a better listening experience while navigating a changing sound scene. With Edge AI, DNN processing is incorporated into the hearing aid's signal path to better discern between speech and noise signals.

Conventional noise reduction algorithms rely primarily on the modulation of sound. Speech exhibits distinctive temporal fluctuations, or modulations, which help differentiate it from many types of noise that typically display far fewer modulations. However, modulation-based features have difficulty distinguishing speech from non-stationary noise, which can easily be misclassified as speech. Edge AI builds on breakthroughs in DNN processing by pairing a speech presence predictor with a proprietary sound management system that is better able to separate speech from non-stationary noise components, an important distinction when deciding when to apply the appropriate noise reduction scheme.

Benchtop measurements show that the DNN-based noise reduction algorithm can provide more gain reduction than a prior generation of hearing aids, particularly when the competing background is speech or highly modulated noise, such as the babble commonly encountered in restaurants, crowds, and transportation. Furthermore, perceptual data show that hearing-impaired listeners prefer the DNN-based noise reduction algorithm over the prior-generation algorithm in common, everyday noise and speech-in-noise environments.
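
To make the limitation of conventional, modulation-based noise reduction concrete, the sketch below (an illustrative simplification, not Starkey's implementation; all function names, frame sizes, and thresholds are assumptions) estimates the modulation depth of a band-limited signal's envelope and attenuates the band only when it looks unmodulated. A strongly amplitude-modulated signal passes through even if it is noise, which is exactly the misclassification described above.

    # Illustrative modulation-based noise detection (Python/NumPy).
    # Speech envelopes fluctuate strongly at roughly 2-10 Hz; steady noise does not.
    import numpy as np

    def band_envelope(x, fs, frame_len=0.008):
        """Full-wave-rectified, frame-averaged envelope of a band-limited signal."""
        hop = int(fs * frame_len)
        n_frames = len(x) // hop
        frames = np.abs(x[:n_frames * hop]).reshape(n_frames, hop)
        return frames.mean(axis=1)           # envelope sampled every frame_len seconds

    def modulation_depth(env):
        """Normalized envelope variation: high for speech-like signals, low for steady noise."""
        return env.std() / (env.mean() + 1e-12)

    def noise_reduction_gain(x, fs, depth_threshold=0.4, max_atten_db=10.0):
        """Attenuate the band only when it looks unmodulated (noise-like)."""
        depth = modulation_depth(band_envelope(x, fs))
        if depth >= depth_threshold:          # speech-like: leave the band alone
            return 1.0
        return 10 ** (-max_atten_db / 20.0)   # noise-like: apply fixed attenuation

    # Example: steady noise is attenuated, but a 4 Hz amplitude-modulated noise
    # ("speech-like" babble surrogate) is passed through untouched.
    fs = 16000
    t = np.arange(fs) / fs
    rng = np.random.default_rng(0)
    steady = rng.standard_normal(fs)
    modulated = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(fs)
    print(noise_reduction_gain(steady, fs))      # reduced gain
    print(noise_reduction_gain(modulated, fs))   # gain of 1.0 despite being noise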
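
The abstract describes a DNN-based speech presence predictor but does not disclose its architecture or features. The following minimal sketch shows the general idea only: a small network maps per-frame spectral features to a per-band speech presence probability, which then scales the noise-reduction gain. The layer sizes, band count, and placeholder weights are hypothetical; a deployed system would train the network offline and run a quantized copy on the on-chip accelerator.

    # Hypothetical frame-level speech presence predictor driving band gains (Python/NumPy).
    import numpy as np

    rng = np.random.default_rng(1)
    N_BANDS = 16    # assumed number of analysis bands
    HIDDEN = 32     # assumed hidden-layer size

    # Untrained placeholder weights; real weights would come from offline training
    # on labeled speech and noise recordings.
    W1 = rng.standard_normal((HIDDEN, N_BANDS)) * 0.1
    b1 = np.zeros(HIDDEN)
    W2 = rng.standard_normal((N_BANDS, HIDDEN)) * 0.1
    b2 = np.zeros(N_BANDS)

    def speech_presence(log_band_energies):
        """Per-band speech presence probability in [0, 1] for one frame."""
        h = np.maximum(0.0, W1 @ log_band_energies + b1)   # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))        # sigmoid output

    def band_gains(log_band_energies, max_atten_db=12.0):
        """Attenuate each band in proportion to how unlikely speech is in it."""
        p = speech_presence(log_band_energies)
        atten_db = (1.0 - p) * max_atten_db
        return 10 ** (-atten_db / 20.0)

    # One frame of synthetic log band energies
    frame = rng.standard_normal(N_BANDS)
    print(band_gains(frame))

Because the decision is driven by a learned speech presence estimate rather than by modulation depth alone, non-stationary noise that happens to be modulated need not be treated as speech.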