Pub Date: 2026-01-01 | Epub Date: 2026-01-22 | DOI: 10.1177/23312165251396517
Alinka E Greasley, Amy V Beeston, Robert J Fulford, Harriet Crook, Jackie M Salter, Robin Hake, Brian C J Moore
Hearing aids, which are primarily designed to improve the intelligibility of speech, can negatively affect the perception and enjoyment of music. This large-scale survey study, conducted between 2016 and 2018, explored hearing aid use and preference behavior in both recorded and live music listening settings, aiming to understand the challenges and strategies used by listeners to improve their experiences, and how these may be affected by level of hearing loss (HL). One thousand five hundred and seven hearing aid users (mean age = 60 years) completed an online survey about their music listening behavior and use of hearing aids. Results showed that whilst hearing aids support engagement in music listening, they also present many issues, and overall helpfulness is mixed. The most commonly reported issue was distortion and poor sound quality, particularly in loud or live contexts. The most frequently reported strategy for reducing distortion was to remove hearing aids altogether. Only a third of the sample reported using a music program, and effectiveness was mixed, suggesting that manufacturer music programs do not currently provide significant benefits for music listening and that further research into the use, uptake, and efficacy of music programs is needed. We call for further research into signal processing strategies for music, especially at the high sound levels encountered in live music or concert settings. The positive impact of mindsets supporting proactive behaviors, perseverance, adaptation, and experimentation with different technologies, genres, and listening environments was highlighted, strengthening the evidence base for audiologists to provide music listening guidance in the clinic.
{"title":"Using Hearing Aids for Music: A UK Survey of Challenges and Strategies.","authors":"Alinka E Greasley, Amy V Beeston, Robert J Fulford, Harriet Crook, Jackie M Salter, Robin Hake, Brian C J Moore","doi":"10.1177/23312165251396517","DOIUrl":"10.1177/23312165251396517","url":null,"abstract":"<p><p>Hearing aids, which are primarily designed to improve the intelligibility of speech, can negatively affect the perception and enjoyment of music. This large-scale survey study, conducted between 2016 and 2018, explored hearing aid use and preference behavior in both recorded and live music listening settings, aiming to understand the challenges and strategies used by listeners to improve their experiences, and how these may be affected by level of hearing loss (HL). One thousand five hundred and seven hearing aid users (mean age = 60 years) completed an online survey about their music listening behavior and use of hearing aids. Results showed that whilst hearing aids support engagement in music listening, they also present many issues and overall helpfulness is mixed. The most commonly reported issue was distortion and poor sound quality, particularly in loud or live contexts. The most frequently reported strategy for reducing distortion was to remove hearing aids altogether. Only a third of the sample reported using a music program and effectiveness was mixed, suggesting that manufacturer music programs do not currently provide significant benefits for music listening, and further research into the use, uptake and efficacy of music programs is needed. We call for further research into signal processing strategies for music especially for high sound levels such as live music or concert settings. The positive impact of mindsets supporting proactive behaviors, perseverance, adaptation, and experimentation with different technologies, genres, and listening environments was highlighted, strengthening the evidence base for audiologists to provide music listening guidance in the clinic.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251396517"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12833179/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1177/23312165251413850
Qiaoyu Liu, Yufei Qiao, Min Zhu, Jiayan Yang, Wen Sun, Yaohan Chen, Saiyi Jiao, Hang Shen, Yingying Shang
Single-sided deafness (SSD) is a typical condition of partial auditory deprivation. Total auditory deprivation triggers cross-modal neural reorganization, but in patients with partial hearing deprivation, how residual auditory function is balanced with the compensatory plasticity of other sensory modalities remains unclear. Previous studies have reported conflicting findings, potentially due to differences in study populations or task designs. Here, we investigated hierarchical neural processing in a homogeneous cohort of 37 congenital SSD patients (31.6 ± 6.5 years, 18 males) and 32 normal-hearing (NH) controls (30.6 ± 7.3 years, 14 males) using both auditory and visual oddball tasks with electroencephalography (EEG). In the auditory task, SSD patients presented reduced amplitudes of early exogenous components (N1, P2) and mismatch negativity (MMN), but preserved late endogenous components (N2, P3), compared with NH controls. Conversely, in the visual task, SSD patients presented increased early visual N1 amplitudes with intact visual mismatch negativity (vMMN) and endogenous components (N2, P3). No latency differences in the above components were observed. These results reveal a difference in plasticity between lower- and higher-level processing. Our findings indicate that functional plasticity in SSD patients occurs predominantly at sensory stages and is characterized by diminished auditory and compensatory elevated visual neural activity, whereas higher-level discrimination processing in either modality is largely unaffected. These findings clarify prior discrepancies, establish a hierarchical framework for understanding neuroplasticity in partial sensory deprivation, and have implications for rehabilitation strategies for SSD patients.
{"title":"Functional Plasticity in Auditory and Visual Discrimination Processing in Patients with Single-Sided Deafness: An EEG Study.","authors":"Qiaoyu Liu, Yufei Qiao, Min Zhu, Jiayan Yang, Wen Sun, Yaohan Chen, Saiyi Jiao, Hang Shen, Yingying Shang","doi":"10.1177/23312165251413850","DOIUrl":"10.1177/23312165251413850","url":null,"abstract":"<p><p>Single-sided deafness (SSD) is a typical condition of partial auditory deprivation. Total auditory deprivation triggers cross-modal neural reorganization, but in patients with partial hearing deprivation, how residual auditory function is balanced with the compensatory plasticity of other sensory modalities remains unclear. Previous studies have reported conflicting findings, potentially due to differences in study populations or task designs. Here, we investigated hierarchical neural processing in a homogeneous cohort of 37 congenital SSD patients (31.6 ± 6.5 years, 18 males) and 32 normal-hearing (NH) controls (30.6 ± 7.3 years, 14 males) using both auditory and visual oddball tasks with electroencephalography (EEG). In the auditory task, SSD patients presented reduced amplitudes of early exogenous components (N1, P2) and mismatch negativity (MMN), but preserved late endogenous components (N2, P3), compared with NH controls. Conversely, in the visual task, SSD patients presented increased early visual N1 amplitudes with intact visual mismatch negativity (vMMN) and endogenous components (N2, P3). No latency differences in the above components were observed. These results reveal a difference in plasticity between lower- and higher-level processing. Our findings indicate that functional plasticity in SSD patients occurs predominantly at sensory stages and is characterized by diminished auditory and compensatory elevated visual neural activity, whereas higher-level discrimination processing in either modality is largely unaffected. These findings clarify prior discrepancies, establish a hierarchical framework for understanding neuroplasticity in partial sensory deprivation, and have implications for rehabilitation strategies for SSD patients.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251413850"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145999400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251410988
Hannah Guest, Paul Elliott, Martie van Tongeren, Joseph Laycock, Steven Thorley-Lawson, Michael A Stone, Michael T Loughran, Christopher J Plack
Research into the long-term effects of noise on hearing is often confounded by health and lifestyle differences between individuals. UK police radio ear-pieces are capable of emitting high sound levels and, crucially, are worn in one ear, allowing between-ear comparisons which control for individual-level confounding factors. Low volume-control settings are recommended to reduce risk to police hearing, yet actual usage patterns and auditory effects remain unexamined. This study used a large-scale survey (N = 4,498) to assess ear-piece noise exposure and the associated hearing health. Most participants reported using high volume-control settings and 45.2% reported experiencing signs of temporary threshold shift (TTS) in the exposed ear. Estimated weekly-averaged noise exposures frequently exceeded the UK's 85 dBA Upper Exposure Action Value. Ear-piece use was associated with 73% (95% confidence interval [CI] 46-106%) increased risk of persistent tinnitus, which on mediation analysis appeared to be driven by a subset of users who experienced signs of TTS. Importantly, tinnitus location was associated with the side of exposure, suggesting tinnitus related to device use rather than to other factors. In contrast, Digits-In-Noise thresholds showed no relation with noise exposure; potential explanations include compensatory auditory training effects, but limitations of Digits-In-Noise data must also be considered. Findings highlight a need for further investigation into hearing risks in police personnel, including in-person auditory testing. Risk mitigation strategies might involve improved device design, training on safe use, and expanded hearing health surveillance. Given the potential for cumulative auditory damage, TTS may serve as an early warning sign, warranting attention in broader noise-exposed populations.
{"title":"Leveraging Monaural Exposures to Reveal Early Effects of Noise: Evidence from Police Radio Ear-Piece Use.","authors":"Hannah Guest, Paul Elliott, Martie van Tongeren, Joseph Laycock, Steven Thorley-Lawson, Michael A Stone, Michael T Loughran, Christopher J Plack","doi":"10.1177/23312165251410988","DOIUrl":"10.1177/23312165251410988","url":null,"abstract":"<p><p>Research into the long-term effects of noise on hearing is often confounded by health and lifestyle differences between individuals. UK police radio ear-pieces are capable of emitting high sound levels and, crucially, are worn in one ear, allowing between-ear comparisons which control for individual-level confounding factors. Low volume-control settings are recommended to reduce risk to police hearing, yet actual usage patterns and auditory effects remain unexamined. This study used a large-scale survey (<i>N</i> = 4,498) to assess ear-piece noise exposure and the associated hearing health. Most participants reported using high volume-control settings and 45.2% reported experiencing signs of temporary threshold shift (TTS) in the exposed ear. Estimated weekly-averaged noise exposures frequently exceeded the UK's 85 dBA Upper Exposure Action Value. Ear-piece use was associated with 73% (95% confidence interval [CI] 46-106%) increased risk of persistent tinnitus, which on mediation analysis appeared to be driven by a subset of users who experienced signs of TTS. Importantly, tinnitus location was associated with the side of exposure, suggesting tinnitus related to device use rather than to other factors. In contrast, Digits-In-Noise thresholds showed no relation with noise exposure; potential explanations include compensatory auditory training effects, but limitations of Digits-In-Noise data must also be considered. Findings highlight a need for further investigation into hearing risks in police personnel, including in-person auditory testing. Risk mitigation strategies might involve improved device design, training on safe use, and expanded hearing health surveillance. Given the potential for cumulative auditory damage, TTS may serve as an early warning sign, warranting attention in broader noise-exposed populations.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251410988"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251408761
Scott Bannister, Jennifer Firth, Gerardo Roa-Dabike, Rebecca Vos, William Whitmer, Alinka E Greasley, Simone Graetzer, Bruno Fazenda, Trevor Cox, Jon Barker, Michael A Akeroyd
Music is central to many people's lives, and hearing loss (HL) is often a barrier to musical engagement. Hearing aids (HAs) help, but their efficacy in improving speech does not consistently translate to music. This research evaluated systems submitted to the 1st Cadenza Machine Learning Challenge, where entrants aimed to improve music audio quality for HA users through source separation and remixing. The HA users (N = 53, ranging from "mild" to "moderately severe" HL) assessed eight challenge systems (including one baseline using the HDemucs source separation algorithm, remixing to original mixes of music samples, and applying National Acoustic Laboratories Revised amplification) and rated 200 music samples processed for their HL. Participants rated samples on basic audio quality, clarity, harshness, distortion, frequency balance, and liking. Results suggest no entrant system surpassed the baseline for audio quality, although differences emerged in system efficacy across HL severities. Clarity and distortion ratings were most predictive of audio quality. Finally, some systems produced signals with higher objective loudness, spectral flux and clipping with increasing HL severity; these received lower audio quality ratings by listeners with moderately severe HL. Findings highlight how music enhancement requires varied solutions and tests across a range of HL severities. This challenge provided a first application of source separation to music listening with HL. However, state-of-the-art source separation algorithms limited the diversity of entrant solutions, resulting in no improvements over the baseline; to promote development of innovative processing strategies, future work should increase complexity of music listening scenarios to be addressed through source separation.
{"title":"The First Cadenza Challenge: Perceptual Evaluation of Machine Learning Systems to Improve Audio Quality of Popular Music for Those with Hearing Loss.","authors":"Scott Bannister, Jennifer Firth, Gerardo Roa-Dabike, Rebecca Vos, William Whitmer, Alinka E Greasley, Simone Graetzer, Bruno Fazenda, Trevor Cox, Jon Barker, Michael A Akeroyd","doi":"10.1177/23312165251408761","DOIUrl":"10.1177/23312165251408761","url":null,"abstract":"<p><p>Music is central to many people's lives, and hearing loss (HL) is often a barrier to musical engagement. Hearing aids (HAs) help, but their efficacy in improving speech does not consistently translate to music. This research evaluated systems submitted to the 1<sup>st</sup> Cadenza Machine Learning Challenge, where entrants aimed to improve music audio quality for HA users through source separation and remixing. The HA users (<i>N</i> = 53, ranging from \"mild\" to \"moderately severe\" HL) assessed eight challenge systems (including one baseline using the HDemucs source separation algorithm, remixing to original mixes of music samples, and applying National Acoustic Laboratories Revised amplification) and rated 200 music samples processed for their HL. Participants rated samples on <i>basic audio quality, clarity, harshness, distortion, frequency balance</i>, and <i>liking</i>. Results suggest no entrant system surpassed the baseline for audio quality, although differences emerged in system efficacy across HL severities. <i>Clarity</i> and <i>distortion</i> ratings were most predictive of audio quality. Finally, some systems produced signals with higher objective loudness, spectral flux and clipping with increasing HL severity; these received lower audio quality ratings by listeners with moderately severe HL. Findings highlight how music enhancement requires varied solutions and tests across a range of HL severities. This challenge provided a first application of source separation to music listening with HL. However, state-of-the-art source separation algorithms limited the diversity of entrant solutions, resulting in no improvements over the baseline; to promote development of innovative processing strategies, future work should increase complexity of music listening scenarios to be addressed through source separation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251408761"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1177/23312165251408752
Simon E Lansbergen, Gertjan Dingemanse, Niek J Versfeld, Wouter A Dreschler, André Goedegebure
The quality of hearing-aid (HA) fitting is typically evaluated using speech intelligibility tests and/or Real-Ear Measurements (REMs). Although it is assumed that a better fit improves daily outcomes, supporting evidence is inconclusive. This study examined whether deviations from National Acoustic Laboratories Non-Linear (NAL-NL2) real-ear targets (real-ear-to-target difference, RTD) predicted changes in Speech, Spatial, and Qualities of Hearing Scale (SSQ) scores, and whether they related to aided speech recognition in quiet. The effects of hearing loss and patient characteristics were also considered. Data from 298 adults (mean age 65 years) fitted with new or replacement HAs (66%) were analyzed. Baseline measures included unaided speech recognition in quiet and a 17-item SSQ; follow-up measures included aided speech recognition in quiet, RTDs, and the SSQ. Principal Components Analysis summarized RTDs into overall gain (RTD1) and high-frequency gain (RTD2). The effects of treatment, RTD, pure-tone average (PTA), audiogram slope, asymmetry, age, gender, and HA experience on SSQ scores were investigated with mixed-effects models. Hearing-aid use improved both the SSQ score (by 1.4 points) and speech recognition in quiet. RTD1 predicted neither SSQ nor speech scores. Underamplification above 2 kHz (RTD2) did not influence speech scores significantly, but reduced SSQ improvement. Higher PTA and steeper slopes were associated with lower aided speech scores, while higher PTA and age reduced SSQ improvement. Hearing-aid experience showed modest SSQ-domain effects. About half of SSQ variance reflected between-subject differences. HAs provide substantial benefit, despite moderate NAL-NL2 mismatches. Accurate fitting at 4-8 kHz maximizes outcomes as measured by the SSQ, supporting REM-guided fitting practices.
{"title":"The Effect of Real Ear Target Deviations on SSQ and Speech Intelligibility in a Clinical Population.","authors":"Simon E Lansbergen, Gertjan Dingemanse, Niek J Versfeld, Wouter A Dreschler, André Goedegebure","doi":"10.1177/23312165251408752","DOIUrl":"10.1177/23312165251408752","url":null,"abstract":"<p><p>The quality of hearing-aid (HA) fitting is typically evaluated using speech intelligibility tests and/or Real-Ear Measurements (REMs). Although it is assumed that a better fit improves daily outcomes, supporting evidence is inconclusive. This study examined whether deviations from National Acoustic Laboratories Non-Linear (NAL-NL2) real-ear targets (real-ear-to-target difference, RTD) predicted changes in Speech, Spatial, and Qualities of Hearing Scale (SSQ) scores, and whether they related to aided speech recognition in quiet. The effects of hearing loss and patient characteristics were also considered. Data from 298 adults (mean age 65 years) fitted with new or replacement HAs (66%) were analyzed. Baseline measures included unaided speech recognition in quiet and a 17-item SSQ; follow-up measures included aided speech recognition in quiet, RTDs, and the SSQ. Principal Components Analysis summarized RTDs into overall gain (RTD<sub>1</sub>) and high-frequency gain (RTD<sub>2</sub>). The effects of treatment, RTD, pure-tone average (PTA), audiogram slope, asymmetry, age, gender, and HA experience on SSQ scores were investigated with mixed-effects models. Hearing-aid use improved both SSQ score (by 1.4 points) and speech in quiet. The RTD<sub>1</sub> predicted neither SSQ nor speech scores. Underamplification above 2 kHz (RTD<sub>2</sub>) did not influence speech scores significantly, but reduced SSQ improvement. Higher PTA and steeper slopes were associated with lower aided speech scores, while higher PTA and age reduced SSQ improvement. Hearing-aid experience showed modest SSQ-domain effects. About half of SSQ variance reflected between-subject differences. HAs provide substantial benefit, despite moderate NAL-NL2 mismatches. Accurate 4-8 kHz fittings maximize outcomes by the SSQ, supporting REM-guided fitting practices.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251408752"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816552/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | Epub Date: 2026-01-12 | DOI: 10.1177/23312165251408983
Martin J Lindenbeck, Piotr Majdak, Bernhard Laback
Cochlear-implant listeners show impaired pitch perception compared to normal-hearing listeners. One of the factors limiting pitch sensitivity in multi-electrode as compared to single-electrode stimulation can be intracochlear interactions of electrode signals (i.e., channels). We measured temporal-pitch discrimination sensitivity for loudness-balanced dual-electrode stimuli with various spatio-temporal configurations in listeners with MED-EL implants. We hypothesized a link between pitch sensitivity and tonotopic separation as well as (monaural) temporal electrode asynchrony, the latter resulting in various combinations of inter-pulse intervals in the compound stimuli received by the auditory nerve. Per-electrode stimuli were high-rate (i.e., 1,000-pps) pulse trains with 100-Hz amplitude modulation, presented both with and without additional pulses inserted at short inter-pulse intervals at the modulation peaks. The temporal asynchrony had a detrimental effect for tonotopic separations below 2.2 mm but not for separations of 7.1 mm and more. This pattern was largely consistent across stimulus types and can be attributed to spectro-temporal channel interactions. When compared with sensitivity to unmodulated 100-pps pulse trains [Lindenbeck et al., Trends in Hearing, 28, Article 23312165241271340 (2024)], stimuli without short inter-pulse interval pulses yielded lower sensitivity, while stimuli with short inter-pulse interval pulses approached low-rate sensitivity for some tonotopic separations. Despite lower sensitivity overall, high-rate pitch cues seemed to be integrated (i.e., improved) more across the two electrodes than low-rate pitch cues when compared to single-electrode stimulation. These results suggest that short inter-pulse interval pulses are beneficial for temporal-pitch sensitivity in dual-electrode configurations.
{"title":"Effects of Dual-Electrode Asynchrony on Temporal Pitch Discrimination With Amplitude Modulation and Short Inter-Pulse Intervals in Cochlear Implant Listeners.","authors":"Martin J Lindenbeck, Piotr Majdak, Bernhard Laback","doi":"10.1177/23312165251408983","DOIUrl":"10.1177/23312165251408983","url":null,"abstract":"<p><p>Cochlear-implant listeners show impaired pitch perception compared to normal-hearing listeners. One of the factors limiting pitch sensitivity in multi-electrode as compared to single-electrode stimulation can be intracochlear interactions of electrode signals (i.e., channels). We measured temporal-pitch discrimination sensitivity for loudness-balanced dual-electrode stimuli with various spatio-temporal configurations in listeners with MED-EL implants. We hypothesized a link between pitch sensitivity and tonotopic separation as well as (monaural) temporal electrode asynchrony, the latter resulting in various combinations of inter-pulse intervals in the compound stimuli received by the auditory nerve. Per-electrode stimulus types were high-rate (i.e., 1,000-pps) pulse trains with a 100-Hz amplitude modulation and both with and without additional pulses inserted with short inter-pulse intervals at modulation peaks. The temporal asynchrony had a detrimental effect for tonotopic separations below 2.2 mm but not for separations of 7.1 mm and more. This pattern was largely consistent across stimulus types and can be attributed to spectro-temporal channel interactions. When compared with sensitivity to unmodulated 100-pps pulse trains [Lindenbeck et al., <i>Trends in Hearing</i>, <i>28</i>, Article 23312165241271340 (2024)], stimuli without short inter-pulse interval pulses yielded lower sensitivity while stimuli with short inter-pulse interval pulses approached low-rate sensitivity for some tonotopic separations. Despite lower sensitivity overall, high-rate pitch cues seemed to be integrated (i.e., improved) more across the two electrodes than low-rate pitch cues when compared to single-electrode stimulation. These results suggest that short inter-pulse interval pulses are beneficial for temporal-pitch sensitivity in dual-electrode configurations.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"30 ","pages":"23312165251408983"},"PeriodicalIF":3.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12796140/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-03-18 | DOI: 10.1177/23312165251317923
Charlotte Vercammen, Olaf Strelcyk
We describe the development and validation of a self-administered online hearing test, which screens for hearing loss and provides an estimated audiogram. The hearing test computes test results from age, self-reported hearing abilities, and self-assessed pure-tone thresholds. It relies on regression, Bayesian and binary classification, leveraging probabilistic effects of age as well as interfrequency and interaural relationships in audiograms. The test was devised based on development data, collected prospectively in an online experiment from a purposive convenience sample of 251 adult American, Australian, Canadian, and Swiss participants, 58% of whom had hearing loss. Later, we externally validated the hearing test. Validation data were collected prospectively from a representative sample of 156 adult Belgian participants, 15% of whom had hearing loss. Participants completed the hearing test and audiometric assessments at home. The results for the primary screening outcome showed that the hearing test screened for mild hearing losses with a sensitivity of 0.83 [95%-confidence interval (CI): 0.65, 0.96], specificity of 0.94 [CI: 0.89, 0.98], positive predictive value of 0.70 [CI: 0.57, 0.87], and negative predictive value of 0.97 [CI: 0.94, 0.99]. Results for the secondary audiogram estimation outcome showed mean differences between estimated and gold standard hearing thresholds ranging from 2.1 to 12.4 dB, with an average standard deviation of the differences of 14.8 dB. In conclusion, the hearing test performed comparably to state-of-the-art hearing screeners. This test, therefore, is a validated alternative to existing screening tools, and, additionally, it provides an estimated audiogram.
{"title":"Development and Validation of a Self-Administered Online Hearing Test.","authors":"Charlotte Vercammen, Olaf Strelcyk","doi":"10.1177/23312165251317923","DOIUrl":"10.1177/23312165251317923","url":null,"abstract":"<p><p>We describe the development and validation of a self-administered online hearing test, which screens for hearing loss and provides an estimated audiogram. The hearing test computes test results from age, self-reported hearing abilities, and self-assessed pure-tone thresholds. It relies on regression, Bayesian and binary classification, leveraging probabilistic effects of age as well as interfrequency and interaural relationships in audiograms. The test was devised based on development data, collected prospectively in an online experiment from a purposive convenience sample of 251 adult American, Australian, Canadian, and Swiss participants, 58% of whom had hearing loss. Later, we externally validated the hearing test. Validation data were collected prospectively from a representative sample of 156 adult Belgian participants, 15% of whom had hearing loss. Participants completed the hearing test and audiometric assessments at home. The results for the primary screening outcome showed that the hearing test screened for mild hearing losses with a sensitivity of 0.83 [95%-confidence interval (CI): 0.65, 0.96], specificity of 0.94 [CI: 0.89, 0.98], positive predictive value of 0.70 [CI: 0.57, 0.87], and negative predictive value of 0.97 [CI: 0.94, 0.99]. Results for the secondary audiogram estimation outcome showed mean differences between estimated and gold standard hearing thresholds ranging from 2.1 to 12.4 dB, with an average standard deviation of the differences of 14.8 dB. In conclusion, the hearing test performed comparably to state-of-the-art hearing screeners. This test, therefore, is a validated alternative to existing screening tools, and, additionally, it provides an estimated audiogram.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317923"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920986/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143659046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-04-03 | DOI: 10.1177/23312165251333225
Markus Kemper, Florian Denk, Hendrik Husstedt, Jonas Obleser
While hearing aids are beneficial in compensating for hearing loss and suppressing ambient noise, they may also introduce an unwanted processing burden to the listener's sensory and cognitive system. To investigate such adverse side effects, hearing aids may be set to a 'transparent mode', aiming to replicate natural hearing through the open ear as closely as possible. Such transparent hearing aids have previously been demonstrated to exhibit a small but significant disadvantage in speech intelligibility, with less conclusive effects on self-rated listening effort. Here we aimed to reproduce these findings and expand them with neurophysiological measures of invested listening effort, including parietal alpha power and pupil size. Invested listening effort was measured across five task difficulties, ranging from nearly impossible to easy, with normal-hearing participants in both aided and unaided conditions. The results reproduced a hearing aid disadvantage for both speech intelligibility and subjective listening effort ratings. As expected, pupil size and parietal alpha power followed an inverted U-shape, peaking at moderate task difficulties (around SRT50). However, the transparent hearing aid increased pupil size and parietal alpha power at medium task demand (between SRT20 and SRT80). These neurophysiological effects were larger than those observed in speech intelligibility and subjective listening effort, respectively. The plausibility of these results is strengthened by a substantial association between individual pupil size and individual parietal alpha power. In sum, our findings suggest that key neurophysiological measures of invested listening effort are sensitive to the individual additional burden on speech intelligibility that hearing aid processing can introduce.
{"title":"Acoustically Transparent Hearing Aids Increase Physiological Markers of Listening Effort.","authors":"Markus Kemper, Florian Denk, Hendrik Husstedt, Jonas Obleser","doi":"10.1177/23312165251333225","DOIUrl":"10.1177/23312165251333225","url":null,"abstract":"<p><p>While hearing aids are beneficial in compensating for hearing loss and suppressing ambient noise, they may also introduce an unwanted processing burden to the listener's sensory and cognitive system. To investigate such adverse side effects, hearing aids may be set to a 'transparent mode', aiming to replicate natural hearing through the open ear as best as possible. Such transparent hearing aids have previously been demonstrated to exhibit a small but significant disadvantage in speech intelligibility, with less conclusive effects on self-rated listening effort. Here we aimed to reproduce these findings and expand them with neurophysiological measures of invested listening effort, including parietal alpha power and pupil size. Invested listening effort was measured across five task difficulties, ranging from nearly impossible to easy, with normal-hearing participants in both aided and unaided conditions. Results well reproduced a hearing aid disadvantage for speech intelligibility and subjective listening effort ratings. As to be expected, pupil size and parietal alpha power followed an inverted u-shape, peaking at moderate task difficulties (around SRT50). However, the transparent hearing aid increased pupil size and parietal alpha power at medium task demand (between SRT20 and SRT80). These neurophysiological effects were larger than those observed in speech intelligibility and subjective listening effort, respectively. The results gain plausibility by yielding a substantial association of individual pupil size and individual parietal alpha power. In sum, our findings suggest that key neurophysiological measures of invested listening effort are sensitive to the individual additional burden on speech intelligibility that hearing aid processing can introduce.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251333225"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11970058/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143781706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-05-15 | DOI: 10.1177/23312165251343457
Miriam I Marrufo-Pérez, Enrique A Lopez-Poveda
The recognition of isolated words in noise improves as words are delayed from the noise onset. This phenomenon, known as adaptation to noise, has been mostly investigated using synthetic noises. The aim here was to investigate whether adaptation occurs for realistic noises and to what extent it depends on the spectrum and level fluctuations of the noise. Forty-nine different realistic and synthetic noises were analyzed and classified according to how much they fluctuated in level over time and how much their spectra differed from the speech spectrum. Six representative noises were chosen that covered the observed range of level fluctuations and spectral differences but could still mask speech. For the six noises, speech reception thresholds (SRTs) were measured for natural and tone-vocoded words delayed 50 (early condition) and 800 ms (late condition) from the noise onset. Adaptation was calculated as the SRT improvement in the late relative to the early condition. Twenty-two adults with normal hearing participated in the experiments. For natural words, adaptation was small overall (mean = 0.5 dB) and similar across the six noises. For vocoded words, significant adaptation occurred for all six noises (mean = 1.3 dB) and was not statistically different across noises. For the tested noises, the amount of adaptation was independent of the spectrum and level fluctuations of the noise. The results suggest that adaptation in speech recognition can occur in realistic noisy environments.
{"title":"Speech Recognition and Noise Adaptation in Realistic Noises.","authors":"Miriam I Marrufo-Pérez, Enrique A Lopez-Poveda","doi":"10.1177/23312165251343457","DOIUrl":"10.1177/23312165251343457","url":null,"abstract":"<p><p>The recognition of isolated words in noise improves as words are delayed from the noise onset. This phenomenon, known as adaptation to noise, has been mostly investigated using synthetic noises. The aim here was to investigate whether adaptation occurs for realistic noises and to what extent it depends on the spectrum and level fluctuations of the noise. Forty-nine different realistic and synthetic noises were analyzed and classified according to how much they fluctuated in level over time and how much their spectra differed from the speech spectrum. Six representative noises were chosen that covered the observed range of level fluctuations and spectral differences but could still mask speech. For the six noises, speech reception thresholds (SRTs) were measured for natural and tone-vocoded words delayed 50 (early condition) and 800 ms (late condition) from the noise onset. Adaptation was calculated as the SRT improvement in the late relative to the early condition. Twenty-two adults with normal hearing participated in the experiments. For natural words, adaptation was small overall (mean = 0.5 dB) and similar across the six noises. For vocoded words, significant adaptation occurred for all six noises (mean = 1.3 dB) and was not statistically different across noises. For the tested noises, the amount of adaptation was independent of the spectrum and level fluctuations of the noise. The results suggest that adaptation in speech recognition can occur in realistic noisy environments.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251343457"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12081978/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditory brainstem response (ABR) interpretation in clinical practice often relies on visual inspection by audiologists, which is prone to inter-practitioner variability. While deep learning (DL) algorithms have shown promise in objectifying ABR detection in controlled settings, their applicability to real-world clinical data is hindered by small datasets and insufficient heterogeneity. This study evaluates the generalizability of nine DL models for ABR detection using large, multicenter datasets. The primary dataset analyzed, Clinical Dataset I, comprises 128,123 labeled ABRs from 13,813 participants across a wide range of ages and hearing levels, and was divided into a training set (90%) and a held-out test set (10%). The models included convolutional neural networks (CNNs; AlexNet, VGG, ResNet), transformer-based architectures (Transformer, Patch Time Series Transformer [PatchTST], Differential Transformer, and Differential PatchTST), and hybrid CNN-transformer models (ResTransformer, ResPatchTST). Performance was assessed on the held-out test set and four external datasets (Clinical II, Southampton, PhysioNet, Mendeley) using accuracy and area under the receiver operating characteristic curve (AUC). ResPatchTST achieved the highest performance on the held-out test set (accuracy: 91.90%, AUC: 0.976). Transformer-based models, particularly PatchTST, showed superior generalization to external datasets, maintaining robust accuracy across diverse clinical settings. Additional experiments highlighted the critical role of dataset size and diversity in enhancing model robustness. We also observed that incorporating acquisition parameters and demographic features as auxiliary inputs yielded performance gains in cross-center generalization. These findings underscore the potential of DL models-especially transformer-based architectures-for accurate and generalizable ABR detection, and highlight the necessity of large, diverse datasets in developing clinically reliable systems.
{"title":"Comparison of Deep Learning Models for Objective Auditory Brainstem Response Detection: A Multicenter Validation Study.","authors":"Yin Liu, Lingjie Xiang, Qiang Li, Kangkang Li, Yihan Yang, Tiantian Wang, Yuting Qin, Xinxing Fu, Yu Zhao, Chenqiang Gao","doi":"10.1177/23312165251347773","DOIUrl":"10.1177/23312165251347773","url":null,"abstract":"<p><p>Auditory brainstem response (ABR) interpretation in clinical practice often relies on visual inspection by audiologists, which is prone to inter-practitioner variability. While deep learning (DL) algorithms have shown promise in objectifying ABR detection in controlled settings, their applicability to real-world clinical data is hindered by small datasets and insufficient heterogeneity. This study evaluates the generalizability of nine DL models for ABR detection using large, multicenter datasets. The primary dataset analyzed, Clinical Dataset I, comprises 128,123 labeled ABRs from 13,813 participants across a wide range of ages and hearing levels, and was divided into a training set (90%) and a held-out test set (10%). The models included convolutional neural networks (CNNs; AlexNet, VGG, ResNet), transformer-based architectures (Transformer, Patch Time Series Transformer [PatchTST], Differential Transformer, and Differential PatchTST), and hybrid CNN-transformer models (ResTransformer, ResPatchTST). Performance was assessed on the held-out test set and four external datasets (Clinical II, Southampton, PhysioNet, Mendeley) using accuracy and area under the receiver operating characteristic curve (AUC). ResPatchTST achieved the highest performance on the held-out test set (accuracy: 91.90%, AUC: 0.976). Transformer-based models, particularly PatchTST, showed superior generalization to external datasets, maintaining robust accuracy across diverse clinical settings. Additional experiments highlighted the critical role of dataset size and diversity in enhancing model robustness. We also observed that incorporating acquisition parameters and demographic features as auxiliary inputs yielded performance gains in cross-center generalization. These findings underscore the potential of DL models-especially transformer-based architectures-for accurate and generalizable ABR detection, and highlight the necessity of large, diverse datasets in developing clinically reliable systems.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251347773"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144209976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}