Auditory brainstem response (ABR) interpretation in clinical practice often relies on visual inspection by audiologists, which is prone to inter-practitioner variability. While deep learning (DL) algorithms have shown promise in objectifying ABR detection in controlled settings, their applicability to real-world clinical data is hindered by small datasets and insufficient heterogeneity. This study evaluates the generalizability of nine DL models for ABR detection using large, multicenter datasets. The primary dataset analyzed, Clinical Dataset I, comprises 128,123 labeled ABRs from 13,813 participants across a wide range of ages and hearing levels, and was divided into a training set (90%) and a held-out test set (10%). The models included convolutional neural networks (CNNs; AlexNet, VGG, ResNet), transformer-based architectures (Transformer, Patch Time Series Transformer [PatchTST], Differential Transformer, and Differential PatchTST), and hybrid CNN-transformer models (ResTransformer, ResPatchTST). Performance was assessed on the held-out test set and four external datasets (Clinical II, Southampton, PhysioNet, Mendeley) using accuracy and area under the receiver operating characteristic curve (AUC). ResPatchTST achieved the highest performance on the held-out test set (accuracy: 91.90%, AUC: 0.976). Transformer-based models, particularly PatchTST, showed superior generalization to external datasets, maintaining robust accuracy across diverse clinical settings. Additional experiments highlighted the critical role of dataset size and diversity in enhancing model robustness. We also observed that incorporating acquisition parameters and demographic features as auxiliary inputs yielded performance gains in cross-center generalization. These findings underscore the potential of DL models, especially transformer-based architectures, for accurate and generalizable ABR detection, and highlight the necessity of large, diverse datasets in developing clinically reliable systems.
{"title":"Comparison of Deep Learning Models for Objective Auditory Brainstem Response Detection: A Multicenter Validation Study.","authors":"Yin Liu, Lingjie Xiang, Qiang Li, Kangkang Li, Yihan Yang, Tiantian Wang, Yuting Qin, Xinxing Fu, Yu Zhao, Chenqiang Gao","doi":"10.1177/23312165251347773","DOIUrl":"10.1177/23312165251347773","url":null,"abstract":"<p><p>Auditory brainstem response (ABR) interpretation in clinical practice often relies on visual inspection by audiologists, which is prone to inter-practitioner variability. While deep learning (DL) algorithms have shown promise in objectifying ABR detection in controlled settings, their applicability to real-world clinical data is hindered by small datasets and insufficient heterogeneity. This study evaluates the generalizability of nine DL models for ABR detection using large, multicenter datasets. The primary dataset analyzed, Clinical Dataset I, comprises 128,123 labeled ABRs from 13,813 participants across a wide range of ages and hearing levels, and was divided into a training set (90%) and a held-out test set (10%). The models included convolutional neural networks (CNNs; AlexNet, VGG, ResNet), transformer-based architectures (Transformer, Patch Time Series Transformer [PatchTST], Differential Transformer, and Differential PatchTST), and hybrid CNN-transformer models (ResTransformer, ResPatchTST). Performance was assessed on the held-out test set and four external datasets (Clinical II, Southampton, PhysioNet, Mendeley) using accuracy and area under the receiver operating characteristic curve (AUC). ResPatchTST achieved the highest performance on the held-out test set (accuracy: 91.90%, AUC: 0.976). Transformer-based models, particularly PatchTST, showed superior generalization to external datasets, maintaining robust accuracy across diverse clinical settings. Additional experiments highlighted the critical role of dataset size and diversity in enhancing model robustness. We also observed that incorporating acquisition parameters and demographic features as auxiliary inputs yielded performance gains in cross-center generalization. These findings underscore the potential of DL models-especially transformer-based architectures-for accurate and generalizable ABR detection, and highlight the necessity of large, diverse datasets in developing clinically reliable systems.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251347773"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144209976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-08-25; DOI: 10.1177/23312165251372462
Heidi B Borges, Emina Alickovic, Christian B Christensen, Preben Kidmose, Johannes Zaar
Previous studies have demonstrated the feasibility of estimating the speech reception threshold (SRT) based on electroencephalography (EEG), termed SRTneuro, in younger normal-hearing (YNH) participants. This method may support speech perception in hearing-aid users through continuous adaptation of noise-reduction algorithms. The prevalence of hearing impairment and thereby hearing-aid use increases with age. The SRTneuro estimation is based on envelope reconstruction accuracy, which has also been shown to increase with age, possibly due to excitatory/inhibitory imbalance or recruitment of additional cortical regions. This could affect the estimated SRTneuro. This study investigated the age-related changes in the temporal response function (TRF) and the feasibility of SRTneuro estimation across age. Twenty YNH and 22 older normal-hearing (ONH) participants listened to audiobook excerpts at various signal-to-noise ratios (SNRs) while EEG was recorded using 66 scalp electrodes and 12 in-ear-EEG electrodes. A linear decoder reconstructed the speech envelope, and the Pearson's correlation was calculated between the reconstructed and speech-stimulus envelopes. A sigmoid function was fitted to the reconstruction-accuracy-versus-SNR data points, and the midpoint was used as the estimated SRTneuro. The results show that the SRTneuro can be estimated with similar precision in both age groups, whether using all scalp electrodes or only those in and around the ear. This consistency across age groups was observed despite physiological differences, with the ONH participants showing higher reconstruction accuracies and greater TRF amplitudes. Overall, these findings demonstrate the robustness of the SRTneuro method in older individuals and highlight its potential for applications in age-related hearing loss and hearing-aid technology.
{"title":"Age-Related Differences in EEG-Based Speech Reception Threshold Estimation Using Scalp and Ear-EEG.","authors":"Heidi B Borges, Emina Alickovic, Christian B Christensen, Preben Kidmose, Johannes Zaar","doi":"10.1177/23312165251372462","DOIUrl":"https://doi.org/10.1177/23312165251372462","url":null,"abstract":"<p><p>Previous studies have demonstrated the feasibility of estimating the speech reception threshold (SRT) based on electroencephalography (EEG), termed SRT<sub>neuro</sub>, in younger normal-hearing (YNH) participants. This method may support speech perception in hearing-aid users through continuous adaptation of noise-reduction algorithms. The prevalence of hearing impairment and thereby hearing-aid use increases with age. The SRT<sub>neuro</sub> estimation is based on envelope reconstruction accuracy, which has also been shown to increase with age, possibly due to excitatory/inhibitory imbalance or recruitment of additional cortical regions. This could affect the estimated SRT<sub>neuro</sub>. This study investigated the age-related changes in the temporal response function (TRF) and the feasibility of SRT<sub>neuro</sub> estimation across age. Twenty YNH and 22 older normal-hearing (ONH) participants listened to audiobook excerpts at various signal-to-noise ratios (SNRs) while EEG was recorded using 66 scalp electrodes and 12 in-ear-EEG electrodes. A linear decoder reconstructed the speech envelope, and the Pearson's correlation was calculated between the reconstructed and speech-stimulus envelopes. A sigmoid function was fitted to the reconstruction-accuracy-versus-SNR data points, and the midpoint was used as the estimated SRT<sub>neuro</sub>. The results show that the SRT<sub>neuro</sub> can be estimated with similar precision in both age groups, whether using all scalp electrodes or only those in and around the ear. This consistency across age groups was observed despite physiological differences, with the ONH participants showing higher reconstruction accuracies and greater TRF amplitudes. Overall, these findings demonstrate the robustness of the SRT<sub>neuro</sub> method in older individuals and highlight its potential for applications in age-related hearing loss and hearing-aid technology.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251372462"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378310/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144975106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-09-05; DOI: 10.1177/23312165251375891
Maxime Perron, Andrew Dimitrijevic, Claude Alain
Understanding speech in noise is a common challenge for older adults, often requiring increased listening effort that can deplete cognitive resources and impair higher-order functions. Hearing aids are the gold standard intervention for hearing loss, but cost and accessibility barriers have driven interest in alternatives such as Personal Sound Amplification Products (PSAPs). While PSAPs are not medical devices, they may help reduce listening effort in certain contexts, though supporting evidence remains limited. This study examined the short-term effects of bilateral PSAP use on listening effort using self-report measures and electroencephalography (EEG) recordings of alpha-band activity (8-12 Hz) in older adults with and without hearing loss. Twenty-five participants aged 60 to 87 years completed a hearing assessment and a phonological discrimination task under three signal-to-noise ratio (SNR) conditions during two counterbalanced sessions (unaided and aided). Results showed that PSAPs significantly reduced self-reported effort. Alpha activity in the left parietotemporal regions showed event-related desynchronization (ERD) during the task, reflecting brain engagement in response to speech in noise. In the unaided condition, alpha ERD weakened as SNR decreased, with activity approaching baseline. PSAP use moderated this effect, maintaining stronger ERD under the most challenging SNR condition. Reduced alpha ERD was associated with greater self-reported effort, suggesting neural and subjective measures reflect related dimensions of listening demand. These results suggest that even brief PSAP use can reduce perceived and neural markers of listening effort. While not a replacement for hearing aids, PSAPs may offer a means for easing cognitive load during effortful listening. ClinicalTrials.gov, NCT05076045, https://clinicaltrials.gov/study/NCT05076045.
{"title":"Rapid Brain Adaptation to Hearing Amplification: A Randomized Crossover Trial of Personal Sound Amplification Products.","authors":"Maxime Perron, Andrew Dimitrijevic, Claude Alain","doi":"10.1177/23312165251375891","DOIUrl":"10.1177/23312165251375891","url":null,"abstract":"<p><p>Understanding speech in noise is a common challenge for older adults, often requiring increased listening effort that can deplete cognitive resources and impair higher-order functions. Hearing aids are the gold standard intervention for hearing loss, but cost and accessibility barriers have driven interest in alternatives such as Personal Sound Amplification Products (PSAPs). While PSAPs are not medical devices, they may help reduce listening effort in certain contexts, though supporting evidence remains limited. This study examined the short-term effects of bilateral PSAP use on listening effort using self-report measures and electroencephalography (EEG) recordings of alpha-band activity (8-12 Hz) in older adults with and without hearing loss. Twenty-five participants aged 60 to 87 years completed a hearing assessment and a phonological discrimination task under three signal-to-noise ratio (SNR) conditions during two counterbalanced sessions (unaided and aided). Results showed that PSAPs significantly reduced self-reported effort. Alpha activity in the left parietotemporal regions showed event-related desynchronization (ERD) during the task, reflecting brain engagement in response to speech in noise. In the unaided condition, alpha ERD weakened as SNR decreased, with activity approaching baseline. PSAP use moderated this effect, maintaining stronger ERD under the most challenging SNR condition. Reduced alpha ERD was associated with greater self-reported effort, suggesting neural and subjective measures reflect related dimensions of listening demand. These results suggest that even brief PSAP use can reduce perceived and neural markers of listening effort. While not a replacement for hearing aids, PSAPs may offer a means for easing cognitive load during effortful listening. ClinicalTrials.gov, NCT05076045, https://clinicaltrials.gov/study/NCT05076045.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251375891"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413528/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-11-19; DOI: 10.1177/23312165251398130
Carina J Sabourin, Stephen G Lomber, Jaina Negandhi, Sharon L Cushing, Blake C Papsin, Karen A Gordon
The long-term stability of neural responses to cochlear implant (CI) stimulation and programmed stimulation levels remains unclear. Although smaller cohort studies suggest stabilization within months postimplant, reprogramming still consumes significant clinical time. The aim of this study was to investigate the resilience of the auditory nerve to prolonged stimulation from CIs and identify changes in the clinically provided stimulation levels over time. Stimulation parameters (n = 14,072 MAPs), electrophysiological auditory nerve thresholds (n = 23,215), and slopes of amplitude growth functions (n = 17,849) were obtained from 664 bilaterally implanted children (n = 1,291 devices) followed between September 2003 and July 2022. Stimulation parameters stabilized within 12 months following implantation for most, but not all, devices (75.3% and 75.4% of devices for C-levels and T-levels, respectively). Electrophysiological measures demonstrated very minor changes per year postimplant (slopes: mean [SE] = 0.03 [0.002] μV/CU/year [95% CI: 0.02-0.03]; thresholds: mean [SE] = 0.35 [0.06] CU/year [95% CI: 0.24-0.47]). While age at implantation did not relate to clinically meaningful changes in electrophysiological measures (slopes: mean [SE] = 0.02 [0.002] μV/CU/year [95% CI: 0.01-0.02]; thresholds: mean [SE] = 0.07 [0.08] CU/year [95% CI: -0.08 to 0.23]), stimulation levels decreased for children implanted at older ages (T-levels before plateau: mean [SE] = -0.47 [0.03] CU/year [95% CI: -0.53 to -0.42]; C-levels before plateau: mean [SE] = -0.78 [0.03] CU/year [95% CI: -0.85 to -0.72]). These findings indicate long-term neural and CI programming stability, suggesting potential for directing clinical time to care in areas other than reprogramming after the first year of implant use.
{"title":"Long-Term Stability of Electrical Stimulation in Children with Bilateral Cochlear Implants.","authors":"Carina J Sabourin, Stephen G Lomber, Jaina Negandhi, Sharon L Cushing, Blake C Papsin, Karen A Gordon","doi":"10.1177/23312165251398130","DOIUrl":"10.1177/23312165251398130","url":null,"abstract":"<p><p>The long-term stability of neural responses to cochlear implant (CI) stimulation and programmed stimulation levels remains unclear. Although smaller cohort studies suggest stabilization within months postimplant, reprogramming still consumes significant clinical time. The aim of this study was to investigate the resilience of the auditory nerve to prolonged stimulation from CIs and identify changes in the clinically provided stimulation levels over time. Stimulation parameters (<i>n</i> = 14,072 MAPs), electrophysiological auditory nerve thresholds (<i>n</i> = 23,215), and slopes of amplitude growth functions (<i>n</i> = 17,849) were obtained from 664 bilaterally implanted children (<i>n</i> = 1,291 devices) followed between September 2003 and July 2022. Stimulation parameters stabilized within 12 months following implantation for most, but not all, devices (75.3% and 75.4% of devices for C-levels and T-levels, respectively). Electrophysiological measures demonstrated very minor changes per year postimplant (slopes: mean [SE] = 0.03 [0.002] μV/CU/year [95% CI: 0.02-0.03]; thresholds: mean [SE] = 0.35 [0.06] CU/year [95% CI: 0.24-0.47]). While age at implantation did not relate to clinically meaningful changes in electrophysiological measures (slopes: mean [SE] = 0.02 [0.002] μV/CU/year [95% CI: 0.01-0.02]; thresholds: mean [SE] = 0.07 [0.08] CU/year [95% CI: -0.08 to 0.23]), stimulation levels decreased for children implanted at older ages (T-levels before plateau: mean [SE] = -0.47 [0.03] CU/year [95% CI: -0.53 to -0.42]; C-levels before plateau: mean [SE] = -0.78 [0.03] CU/year [95% CI: -0.85 to -0.72]). These findings indicate long-term neural and CI programming stability, suggesting potential for directing clinical time to care in areas other than reprogramming after the first year of implant use.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251398130"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12635042/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145558037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-05-06; DOI: 10.1177/23312165251336625
S Theo Goverts, Virginia Best, Julia Bouwmeester, Cas Smits, H Steven Colburn
Speech-in-noise testing is a valuable component of audiological examination that can provide estimates of a listener's ability to communicate in their everyday life. It has long been recognized, however, that the acoustics of real-world environments are complex and variable and not well represented by a typical clinical test setup. The first aim of this study was to quantify real-world environments in terms of several acoustic parameters that may be relevant for speech understanding (namely speech-likeness, interaural coherence, and interaural time and level differences). Earlier acoustic analyses of binaural recordings in natural environments were extended to binaural re-creations of natural environments that included conversational speech embedded in recorded backgrounds and allowed a systematic manipulation of signal-to-noise ratio. The second aim of the study was to examine these same parameters in typical clinical speech-in-noise tests and consider the "acoustic realism" of such tests. We confirmed that the parameter spaces of natural environments are poorly covered by those of the most commonly used clinical test with one frontal loudspeaker. We also demonstrated that a simple variation of the clinical test, which uses two spatially separated loudspeakers to present speech and noise, leads to better coverage of the parameter spaces of natural environments. Overall, the results provide a framework for characterizing different listening environments that may guide future efforts to increase the real-world relevance of clinical speech-in-noise testing.
{"title":"Acoustic Realism of Clinical Speech-in-Noise Testing: Parameter Ranges of Speech-Likeness, Interaural Coherence, and Interaural Differences.","authors":"S Theo Goverts, Virginia Best, Julia Bouwmeester, Cas Smits, H Steven Colburn","doi":"10.1177/23312165251336625","DOIUrl":"https://doi.org/10.1177/23312165251336625","url":null,"abstract":"<p><p>Speech-in-noise testing is a valuable component of audiological examination that can provide estimates of a listener's ability to communicate in their everyday life. It has long been recognized, however, that the acoustics of real-world environments are complex and variable and not well represented by a typical clinical test setup. The first aim of this study was to quantify real-world environments in terms of several acoustic parameters that may be relevant for speech understanding (namely speech-likeness, interaural coherence, and interaural time and level differences). Earlier acoustic analyses of binaural recordings in natural environments were extended to binaural re-creations of natural environments that included conversational speech embedded in recorded backgrounds and allowed a systematic manipulation of signal-to-noise ratio. The second aim of the study was to examine these same parameters in typical clinical speech-in-noise tests and consider the \"acoustic realism\" of such tests. We confirmed that the parameter spaces of natural environments are poorly covered by those of the most commonly used clinical test with one frontal loudspeaker. We also demonstrated that a simple variation of the clinical test, which uses two spatially separated loudspeakers to present speech and noise, leads to better coverage of the parameter spaces of natural environments. Overall, the results provide a framework for characterizing different listening environments that may guide future efforts to increase the real-world relevance of clinical speech-in-noise testing.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251336625"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12059433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-04-29; DOI: 10.1177/23312165251336652
Jessica Herrmann, Lorenz Fiedler, Dorothea Wendt, Sébastien Santurette, Hendrik Husstedt, Tim Jürgens
The combination of directional microphones and noise reduction (DIR + NR) in hearing aids offers substantial improvement in speech intelligibility and reduction in listening effort in spatial acoustic scenarios. Pupil dilation can be used to infer ocular markers of listening effort. However, pupillometry is also known to crucially depend on luminance. The present study investigates the effects of a state-of-the-art DIR + NR algorithm (implemented in commercial hearing aids) on pupil dilation of hearing aid users both in darkness and ambient light conditions. Speech intelligibility and peak pupil dilations (PPDs) of 29 experienced hearing aid users were measured during a spatial speech-in-noise-task at a signal-to-noise ratio (SNR) matching the individual's speech reception threshold. While speech intelligibility improvements due to DIR + NR were substantial (about 35 percentage points) and independent of luminance, PPDs were only significantly reduced due to DIR + NR in ambient light, but not in darkness. This finding suggests that the reduction in PPD due to DIR + NR (most likely through improvement in SNR) is dependent on luminance and should be interpreted with caution as a marker for listening effort. Relations of reduction in PPD due to DIR + NR in ambient light to subjectively reported long-term fatigue, age, and pure-tone average were not statistically significant, which indicates that all patients benefitted similarly in listening effort from DIR + NR, irrespective of these patient-specific factors. In conclusion, careful control of luminance needs to be taken in hearing aid studies inferring listening effort from pupillometry data.
{"title":"Influence of Noise Reduction on Ocular Markers of Listening Effort in Hearing Aid Users in Darkness and Ambient Light.","authors":"Jessica Herrmann, Lorenz Fiedler, Dorothea Wendt, Sébastien Santurette, Hendrik Husstedt, Tim Jürgens","doi":"10.1177/23312165251336652","DOIUrl":"https://doi.org/10.1177/23312165251336652","url":null,"abstract":"<p><p>The combination of directional microphones and noise reduction (DIR + NR) in hearing aids offers substantial improvement in speech intelligibility and reduction in listening effort in spatial acoustic scenarios. Pupil dilation can be used to infer ocular markers of listening effort. However, pupillometry is also known to crucially depend on luminance. The present study investigates the effects of a state-of-the-art DIR + NR algorithm (implemented in commercial hearing aids) on pupil dilation of hearing aid users both in darkness and ambient light conditions. Speech intelligibility and peak pupil dilations (PPDs) of 29 experienced hearing aid users were measured during a spatial speech-in-noise-task at a signal-to-noise ratio (SNR) matching the individual's speech reception threshold. While speech intelligibility improvements due to DIR + NR were substantial (about 35 percentage points) and independent of luminance, PPDs were only significantly reduced due to DIR + NR in ambient light, but not in darkness. This finding suggests that the reduction in PPD due to DIR + NR (most likely through improvement in SNR) is dependent on luminance and should be interpreted with caution as a marker for listening effort. Relations of reduction in PPD due to DIR + NR in ambient light to subjectively reported long-term fatigue, age, and pure-tone average were not statistically significant, which indicates that all patients benefitted similarly in listening effort from DIR + NR, irrespective of these patient-specific factors. In conclusion, careful control of luminance needs to be taken in hearing aid studies inferring listening effort from pupillometry data.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251336652"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041677/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144043707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-03-17; DOI: 10.1177/23312165251317006
Robert P Carlyon, John M Deeks, Bertrand Delgutte, Yoojin Chung, Maike Vollmer, Frank W Ohl, Andrej Kral, Jochen Tillein, Ruth Y Litovsky, Jan Schnupp, Nicole Rosskothen-Kuhl, Raymond L Goldsworthy
Cochlear implant (CI) users are usually poor at using timing information to detect changes in either pitch or sound location. This deficit occurs even for listeners with good speech perception and even when the speech processor is bypassed to present simple, idealized stimuli to one or more electrodes. The present article presents seven expert opinion pieces on the likely neural bases for these limitations, the extent to which they are modifiable by sensory experience and training, and the most promising ways to overcome them in future. The article combines insights from physiology and psychophysics in cochlear-implanted humans and animals, highlights areas of agreement and controversy, and proposes new experiments that could resolve areas of disagreement.
{"title":"Limitations on Temporal Processing by Cochlear Implant Users: A Compilation of Viewpoints.","authors":"Robert P Carlyon, John M Deeks, Bertrand Delgutte, Yoojin Chung, Maike Vollmer, Frank W Ohl, Andrej Kral, Jochen Tillein, Ruth Y Litovsky, Jan Schnupp, Nicole Rosskothen-Kuhl, Raymond L Goldsworthy","doi":"10.1177/23312165251317006","DOIUrl":"10.1177/23312165251317006","url":null,"abstract":"<p><p>Cochlear implant (CI) users are usually poor at using timing information to detect changes in either pitch or sound location. This deficit occurs even for listeners with good speech perception and even when the speech processor is bypassed to present simple, idealized stimuli to one or more electrodes. The present article presents seven expert opinion pieces on the likely neural bases for these limitations, the extent to which they are modifiable by sensory experience and training, and the most promising ways to overcome them in future. The article combines insights from physiology and psychophysics in cochlear-implanted humans and animals, highlights areas of agreement and controversy, and proposes new experiments that could resolve areas of disagreement.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317006"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076235/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-03-29; DOI: 10.1177/23312165251325983
Tim Jürgens, Peter Ihly, Jürgen Tchorz, Takanori Nishiyama, Chiemi Tanaka, Daisuke Suzuki, Seiichi Shinden, Tsubasa Kitama, Kaoru Ogawa, Johannes Zaar, Søren Laugesen, Gary Jones, Marianna Vatti, Sébastien Santurette
The combination of directional microphones (DIR) and spectral noise reduction (NR) is a common technique in hearing aid signal processing for improving speech intelligibility in spatial acoustic scenarios. The benefit from DIR + NR varies considerably across individuals, which impedes prescribing the optimal strength of such processing during hearing aid fitting. The goal of this study was to investigate the correlation of four audiological factors with the benefit of speech reception thresholds (SRTs) from DIR + NR: the closedness of the acoustic coupling in the ear canal, audible contrast thresholds test (ACT™), the audiogram, and age. As part of a larger field study, 123 experienced hearing aid users in two centers in Germany and Japan were fitted bilaterally with the same hearing aids. SRTs were obtained with and without strong DIR + NR in a spatial speech-in-noise scenario before and after the field trials. Closedness of acoustic coupling was found to have the strongest correlation with SRT benefit from DIR + NR (most likely dominated by DIR rather than NR processing), followed by audible contrast thresholds (ACT) and the audiogram, both with the same significantly weaker correlation. Age was not correlated with the benefit from DIR + NR. The results suggest fitting hearing aid users irrespective of age with as-closed-as-possible acoustic coupling to maximize the benefit of DIR + NR. Furthermore, the closedness of acoustic coupling in combination with ACT or the audiogram may serve audiologists in predicting individual speech intelligibility benefits from strong DIR + NR for better guidance to set its strength during hearing aid fitting.
{"title":"Closedness of Acoustic Coupling and Audiological Measures Are Associated with Individual Speech-in-Noise Benefit From Noise Reduction in Hearing Aids.","authors":"Tim Jürgens, Peter Ihly, Jürgen Tchorz, Takanori Nishiyama, Chiemi Tanaka, Daisuke Suzuki, Seiichi Shinden, Tsubasa Kitama, Kaoru Ogawa, Johannes Zaar, Søren Laugesen, Gary Jones, Marianna Vatti, Sébastien Santurette","doi":"10.1177/23312165251325983","DOIUrl":"10.1177/23312165251325983","url":null,"abstract":"<p><p>The combination of directional microphones (DIR) and spectral noise reduction (NR) is a common technique in hearing aid signal processing, for improving speech intelligibility in spatial acoustic scenarios. The benefit from DIR + NR varies considerably across individuals, which impedes prescribing the optimal strength of such processing during hearing aid fitting. The goal of this study was to investigate the correlation of four audiological factors with the benefit of speech reception thresholds (SRTs) from DIR + NR: the closedness of the acoustic coupling in the ear canal, audible contrast thresholds test (ACT™), the audiogram, and age. As part of a larger field study, 123 experienced hearing aid users in two centers in Germany and Japan were fitted bilaterally with the same hearing aids. SRTs were obtained with and without strong DIR + NR in a spatial speech-in-noise scenario before and after the field trials. Closedness of acoustic coupling was found to have the strongest correlation with SRT benefit from DIR + NR (most likely dominated by DIR rather than NR processing), followed by audible contrast thresholds (ACT) and the audiogram, both with the same significantly weaker correlation. Age was not correlated with the benefit from DIR + NR. The results suggest fitting hearing aid users irrespective of age with as-closed-as-possible acoustic coupling to maximize the benefit of DIR + NR. Furthermore, the closedness of acoustic coupling in combination with ACT or the audiogram may serve audiologists in predicting individual speech intelligibility benefits from strong DIR + NR for better guidance to set its strength during hearing aid fitting.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251325983"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11954453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143744130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01; Epub Date: 2025-07-21; DOI: 10.1177/23312165251359755
Larry E Humes, Sumitrajit Dhar, Jasleen Singh
The Abbreviated Profile of Hearing Aid Benefit (APHAB) has been one of the most frequently used patient-reported outcome measures (PROMs) since its inception 30 years ago. For the APHAB, single-valued 95% critical differences have been presented for the identification and interpretation of meaningful benefits in research and in the clinic. A narrative literature review of studies that used the global APHAB score as a hearing-aid outcome measure showed that the average benefit varied directly with the average unaided baseline score for each measure. Next, data from 584 older adults enrolled in our recently completed randomized controlled hearing-aid trial were examined. The same dependence of benefit scores on unaided baseline scores was observed in these data. Regression to the mean made relatively minor contributions to the observed dependence of APHAB scores on baseline unaided scores. These results indicate that the application of a single value for the 95% critical difference is not valid for the interpretation of APHAB scores. Rather, baseline-specific benefit criteria are needed. Based on these results, baseline-specific Minimal Detectable Differences (MDDs; or 95% critical differences) and Minimal Clinically Important Differences (MCIDs) using both distribution-based and anchor-based approaches were generated for the APHAB-global score.
{"title":"Some Considerations for the Use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) as a Hearing-Aid Outcome Measure.","authors":"Larry E Humes, Sumitrajit Dhar, Jasleen Singh","doi":"10.1177/23312165251359755","DOIUrl":"10.1177/23312165251359755","url":null,"abstract":"<p><p>The Abbreviated Profile of Hearing Aid Benefit (APHAB) has been one of the most frequently used patient-reported outcome measures (PROMs) since its inception 30 years ago. For the APHAB, single-valued 95% critical differences have been presented for the identification and interpretation of meaningful benefits in research and in the clinic. A narrative literature review of studies that used the global APHAB score as a hearing-aid outcome measure showed that the average benefit varied directly with the average unaided baseline score for each measure. Next, data from 584 older adults enrolled in our recently completed randomized controlled hearing-aid trial were examined. The same dependence of benefit scores on unaided baseline scores was observed in these data. Regression to the mean made relatively minor contributions to the observed dependence of APHAB scores on baseline unaided scores. These results indicate that the application of a single value for the 95% critical difference is not valid for the interpretation of APHAB scores. Rather, baseline-specific benefit criteria are needed. Based on these results, baseline-specific Minimal Detectable Differences (MDDs; or 95% critical differences) and Minimal Clinically Important Differences (MCIDs) using both distribution-based and anchor-based approaches were generated for the APHAB-global score.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251359755"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12290275/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiological datasets contain valuable knowledge about hearing loss in patients, which can be uncovered using data-driven techniques. Our previous approach summarized patient information from one audiological dataset into distinct Auditory Profiles (APs). To obtain a better estimate of the audiological patient population, however, patient patterns must be analyzed across multiple, separated datasets, and finally, be integrated into a combined set of APs. This study aimed at extending the existing profile generation pipeline with an AP merging step, enabling the combination of APs from different datasets based on their similarity across audiological measures. The 13 previously generated APs (N_A = 595) were merged with 31 newly generated APs from a second dataset (N_B = 1,272) using a similarity score derived from the overlapping densities of common features across the two datasets. To ensure clinical applicability, random forest models were created for various scenarios, encompassing different combinations of audiological measures. A new set with 13 combined APs is proposed, providing separable profiles, which still capture detailed patient information from various test outcome combinations. The classification performance across these profiles is satisfactory. The best performance was achieved using a combination of loudness scaling, audiogram, and speech test information, while single measures performed worst. The enhanced profile generation pipeline demonstrates the feasibility of combining APs across datasets, which should generalize to all datasets and could lead to an interpretable global profile set in the future. The classification models maintain clinical applicability.
{"title":"Integrating Audiological Datasets via Federated Merging of Auditory Profiles.","authors":"Samira Saak, Dirk Oetting, Birger Kollmeier, Mareike Buhl","doi":"10.1177/23312165251349617","DOIUrl":"10.1177/23312165251349617","url":null,"abstract":"<p><p>Audiological datasets contain valuable knowledge about hearing loss in patients, which can be uncovered using data-driven techniques. Our previous approach summarized patient information from one audiological dataset into distinct Auditory Profiles (APs). To obtain a better estimate of the audiological patient population, however, patient patterns must be analyzed across multiple, separated datasets, and finally, be integrated into a combined set of APs. This study aimed at extending the existing profile generation pipeline with an AP merging step, enabling the combination of APs from different datasets based on their similarity across audiological measures. The 13 previously generated APs (<i>N<sub>A</sub></i> = 595) were merged with 31 newly generated APs from a second dataset (<i>N<sub>B</sub></i> = 1,272) using a similarity score derived from the overlapping densities of common features across the two datasets. To ensure clinical applicability, random forest models were created for various scenarios, encompassing different combinations of audiological measures. A new set with 13 combined APs is proposed, providing separable profiles, which still capture detailed patient information from various test outcome combinations. The classification performance across these profiles is satisfactory. The best performance was achieved using a combination of loudness scaling, audiogram, and speech test information, while single measures performed worst. The enhanced profile generation pipeline demonstrates the feasibility of combining APs across datasets, which should generalize to all datasets and could lead to an interpretable global profile set in the future. The classification models maintain clinical applicability.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251349617"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12209579/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144530531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}