Pub Date: 2026-01-01. Epub Date: 2026-03-03. DOI: 10.1177/23312165251413329
Nicole A Huizinga, Laura Keur-Huizinga, Adriana A Zekveld, Sophia E Kramer, Eco J C de Geus
Previous research has highlighted challenges for individuals with hearing loss, including increased listening effort and fatigue. This study aimed to: (a) examine the relationship between auditory demand and listening effort, affect, and fatigue, focusing on the moderating role of hearing loss; and (b) assess whether listening effort and affect mediate the effect of auditory demand on fatigue. A total of 130 participants, with and without hearing loss, completed ecological momentary assessment (EMA) over 5.5 days, answering questions on listening effort, fatigue, and listening environment attributes. Auditory demand was defined by contextual and subjective components derived from EMA responses. Linear mixed-effects (LME) models analyzed the effect of auditory demand on listening effort, affect, and fatigue, and the moderating role of hearing loss. Additional models tested mediation by listening effort and affect. Results highlighted that both contextual and subjective auditory demand significantly increased listening effort, with stronger effects in those with more hearing loss. No effects of contextual auditory demand on affective state were observed, nor was there a moderation effect of hearing loss. An effect of subjective auditory demand on affect was observed, but no moderation by hearing loss was present. Contextual and subjective auditory demand predicted fatigue (β = 0.07-0.14, p < .01 to p < .001), with amplified effects in those with more hearing loss (interaction p < .01) for contextual demand. Mediation analyses highlighted that listening effort contributed to the demand-fatigue relationship, though patterns differed by demand type. The results indicate that increased listening effort, rather than negative affect, may underlie the association between auditory demand and fatigue.
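The moderation analysis the abstract describes (a demand × hearing-loss interaction fit in an LME framework) can be sketched with simulated data. All numbers below are invented for illustration, and plain least squares stands in for a full mixed model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: 100 participants x 40 EMA prompts each.
n_subj, n_obs = 100, 40
subj = np.repeat(np.arange(n_subj), n_obs)
hl = rng.normal(size=n_subj)[subj]        # hearing loss (per person, standardized)
demand = rng.normal(size=n_subj * n_obs)  # momentary auditory demand
ri = rng.normal(scale=0.5, size=n_subj)[subj]  # person-level random intercept

# Generative model with a demand x hearing-loss interaction (moderation).
beta_true = np.array([0.0, 0.10, 0.05, 0.08])  # const, demand, HL, demand*HL
X = np.column_stack([np.ones_like(demand), demand, hl, demand * hl])
fatigue = X @ beta_true + ri + rng.normal(scale=0.5, size=n_subj * n_obs)

# Plain least squares recovers the fixed effects here; a real LME would
# additionally model the person-level random intercepts.
beta_hat, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
print(beta_hat.round(3))
```

A nonzero estimate on the fourth column is what "hearing loss moderates the demand-fatigue effect" means in regression terms.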
Title: The Effects of Daily Life Auditory Demands on Listening Effort, Affect, and Fatigue as a Function of Hearing Loss. Trends in Hearing, vol. 30, 23312165251413329. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12966542/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-02-11. DOI: 10.1177/23312165261421708
Go Ashida
The neural processing of interaural time and level differences (ITDs/ILDs) underlies binaural sound localization. Neurons of the mammalian lateral superior olive (LSO) are sensitive to ILDs and envelope-ITDs of acoustic stimuli. Bushy cells in the anteroventral cochlear nucleus convey relevant information from auditory nerve (AN) fibers to the LSO. More specifically, spherical bushy cells (SBCs) send ipsilateral excitatory inputs, while globular bushy cells (GBCs) project to the contralateral medial nucleus of the trapezoid body that provides inhibitory inputs to the LSO. Previous studies in vivo reported an enhancement of phase-locking in bushy cells compared to AN. This enhancement has been hypothesized to benefit temporal coding in binaural neurons, but its actual contribution in LSO remains unclear. Here we investigate this question by computational modeling of binaural circuitry incorporating the AN, SBC/GBC, and LSO stages. Both bushy cell models were calibrated to replicate known physiological responses, including the representative peristimulus time histograms for high-frequency tones and enhanced phase-locking to low-frequency envelopes. We then simulated the binaural tuning of LSO with and without the bushy cell stage. The synaptic inputs to the LSO model were adjusted so that the simulated ILD-tuning remains unaltered between the input configurations. By adding the bushy cell stage, the simulated binaural response of LSO became more sharply tuned for envelope-ITDs. Furthermore, the envelope-ITD sensitivity was extended up to around 600 Hz, matching previously observed physiological limits. These results provide computational evidence for the functional benefit of having bushy cells in the binaural sound localization circuit.
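The phase-locking enhancement at the heart of this model is conventionally quantified by vector strength. The sketch below, with illustrative von Mises spike-phase distributions standing in for AN-like and bushy-cell-like responses (the concentration values are assumptions, not model parameters from the paper), shows how sharper locking raises the metric:

```python
import numpy as np

rng = np.random.default_rng(1)

def vector_strength(spike_times, freq):
    """Vector strength: 1 = perfect phase-locking to freq, 0 = none."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

f_mod = 100.0    # illustrative envelope frequency (Hz)
n_spikes = 5000
# Spike phases from von Mises distributions; the higher concentration
# mimics the sharper (enhanced) phase-locking reported for bushy cells.
an_phases = rng.vonmises(0.0, 2.0, size=n_spikes)   # AN-like
bc_phases = rng.vonmises(0.0, 8.0, size=n_spikes)   # bushy-cell-like
an_times = (an_phases % (2 * np.pi)) / (2 * np.pi * f_mod)
bc_times = (bc_phases % (2 * np.pi)) / (2 * np.pi * f_mod)

vs_an = vector_strength(an_times, f_mod)
vs_bc = vector_strength(bc_times, f_mod)
print(vs_an, vs_bc)   # sharper locking -> higher vector strength
```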
Title: Benefits of Enhanced Phase-Locking for Binaural Coding of Amplitude-Modulated Sounds. Trends in Hearing, vol. 30, 23312165261421708. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12901860/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-01-22. DOI: 10.1177/23312165251396517
Alinka E Greasley, Amy V Beeston, Robert J Fulford, Harriet Crook, Jackie M Salter, Robin Hake, Brian C J Moore
Hearing aids, which are primarily designed to improve the intelligibility of speech, can negatively affect the perception and enjoyment of music. This large-scale survey study, conducted between 2016 and 2018, explored hearing aid use and preference behavior in both recorded and live music listening settings, aiming to understand the challenges and strategies used by listeners to improve their experiences, and how these may be affected by level of hearing loss (HL). One thousand five hundred and seven hearing aid users (mean age = 60 years) completed an online survey about their music listening behavior and use of hearing aids. Results showed that whilst hearing aids support engagement in music listening, they also present many issues, and overall helpfulness is mixed. The most commonly reported issue was distortion and poor sound quality, particularly in loud or live contexts. The most frequently reported strategy for reducing distortion was to remove hearing aids altogether. Only a third of the sample reported using a music program, and effectiveness was mixed, suggesting that manufacturer music programs do not currently provide significant benefits for music listening, and further research into the use, uptake and efficacy of music programs is needed. We call for further research into signal processing strategies for music, especially at the high sound levels of live music and concert settings. The positive impact of mindsets supporting proactive behaviors, perseverance, adaptation, and experimentation with different technologies, genres, and listening environments was highlighted, strengthening the evidence base for audiologists to provide music listening guidance in the clinic.
Title: Using Hearing Aids for Music: A UK Survey of Challenges and Strategies. Trends in Hearing, vol. 30, 23312165251396517. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12833179/pdf/
Spectro-temporal modulation (STM) sensitivity has been proposed as a sensitive marker of speech intelligibility in challenging listening conditions, yet the underlying auditory mechanisms involved in STM detection remain incompletely understood. The present study measured STM detection thresholds in young normal-hearing and older hearing-impaired listeners and evaluated whether the revised Computational Auditory Signal Processing and Perception model (CASP) can account for individual performance. Thresholds were obtained for six modulation detection conditions, defined by combinations of spectral (0, 1, and 2 c/o) and temporal (4 and 12 Hz) rates. To individualize CASP, outer and inner hair cell loss estimates were obtained from audiometric and Adaptive Categorical Loudness Scaling (ACALOS) data. The results showed systematically elevated thresholds in older hearing-impaired listeners as compared to the young normal-hearing group, particularly at higher spectral rates. The model simulations reproduced overall threshold patterns, but substantially underestimated group differences and interindividual variability in the data. Moreover, the simulations showed limited sensitivity to estimates of outer and inner hair cell loss, supporting the idea that additional supra-threshold mechanisms contribute to STM deficits. While these findings demonstrate the potential of auditory models to predict STM performance, they also highlight the need for refined representations of peripheral and central processing to account for individual STM detection thresholds.
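A spectro-temporal ripple of the kind used for STM detection can be generated as a sum of log-spaced tones whose amplitude envelope drifts jointly across frequency (density, cycles/octave) and time (rate, Hz). The parameter values, tone count, and normalization below are illustrative assumptions, not the study's stimulus code:

```python
import numpy as np

fs = 16000                      # sample rate (Hz), illustrative
dur = 0.5                       # duration (s)
t = np.arange(int(fs * dur)) / fs

def stm_ripple(density=2.0, rate=4.0, depth=1.0, f0=250.0, octaves=4,
               n_tones=40, seed=2):
    """Spectro-temporal ripple: log-spaced tones whose amplitude envelope
    drifts across frequency (density, c/o) and time (rate, Hz)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, octaves, n_tones)       # tone positions (octaves re f0)
    freqs = f0 * 2.0 ** x
    phases = rng.uniform(0, 2 * np.pi, n_tones)  # randomized carrier phases
    sig = np.zeros_like(t)
    for xi, fi, ph in zip(x, freqs, phases):
        env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
        sig += env * np.sin(2 * np.pi * fi * t + ph)
    return sig / n_tones                         # crude normalization

ripple = stm_ripple(density=2.0, rate=4.0)       # a "2 c/o, 4 Hz" condition
print(ripple.shape)
```

Setting `density=0` reduces this to a purely temporal modulation, matching the 0 c/o conditions in the threshold grid.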
Title: Predicting Spectro-Temporal Modulation Detection Thresholds With a Functional Auditory Model. Authors: Lily Cassandra Paulick, Torsten Dau, Helia Relaño-Iborra. Pub Date: 2026-01-01. DOI: 10.1177/23312165261425853. Trends in Hearing, vol. 30, 23312165261425853. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12925006/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-03-23. DOI: 10.1177/23312165261435260
Borgný Súsonnudóttir, Lars D Mosgaard, Georg Stiefenhofer, Pamela E Souza, Tobias Neher
In open-fit hearing aids (HAs), the interaction between the direct and processed sound leads to comb-filtering and, thus, perceived coloration effects. The magnitude of these effects depends on the level difference between the direct and processed sound and on the processing delay. A critical issue for HA uptake and use is own-voice perception, which the current study focused on. Its aims were to investigate (1) whether short processing delay is preferred over longer delays, (2) how processing delay influences different perceptual dimensions related to own-voice perception, and (3) whether spectral discrimination abilities can predict delay preference. Twenty-four individuals with mild-to-moderate hearing impairment participated. Using prototype receiver-in-the-canal HAs, processing delays of 0.5, 5, and 10 ms were compared. Delay preference was assessed using a paired-comparison task. Perceptual dimensions relating to own-voice perception were investigated using a customized version of the "Own Voice Qualities" questionnaire. Spectral discrimination abilities were assessed using a spectral ripple discrimination (SRD) task. The analyses showed that the 0.5-ms delay was preferred over the longer delays. Furthermore, the 0.5-ms delay received better ratings related to tonality perception (e.g., attributes such as metallic and sharp) and own-voice quality compared to the 10-ms delay. SRD abilities did not predict delay preference. Overall, these results provide insights into how open-fit HAs can be optimized with respect to own-voice perception.
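The comb-filtering described here follows directly from summing the direct sound with a delayed, processed copy: the magnitude response is |1 + g·e^(−j2πfτ)|, with notches spaced 1/τ apart. A small sketch (unit gain and the delay values from the study; the frequency grid is an arbitrary choice):

```python
import numpy as np

def comb_magnitude(f, delay_s, gain=1.0):
    """Magnitude response of direct sound summed with a delayed,
    gain-scaled processed copy: |1 + g * exp(-j*2*pi*f*tau)|."""
    return np.abs(1.0 + gain * np.exp(-2j * np.pi * np.asarray(f, float) * delay_s))

f = np.linspace(0.0, 4000.0, 8001)        # 0.5 Hz grid up to 4 kHz
for delay_ms in (0.5, 5.0, 10.0):         # delays compared in the study
    h = comb_magnitude(f, delay_ms * 1e-3)
    print(f"{delay_ms} ms: notches every {1.0 / (delay_ms * 1e-3):.0f} Hz, "
          f"response range {h.min():.2f}-{h.max():.2f}")
```

At 0.5 ms the first notch falls near 1 kHz and notches are 2 kHz apart, whereas at 10 ms the spectrum is carved every 100 Hz, which is consistent with the stronger coloration reported for longer delays.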
Title: Own-Voice Perception with Different Processing Delays in Open-Fit Hearing Aids. Trends in Hearing, vol. 30, 23312165261435260. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13009764/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-01-19. DOI: 10.1177/23312165251413850
Qiaoyu Liu, Yufei Qiao, Min Zhu, Jiayan Yang, Wen Sun, Yaohan Chen, Saiyi Jiao, Hang Shen, Yingying Shang
Single-sided deafness (SSD) is a typical condition of partial auditory deprivation. Total auditory deprivation triggers cross-modal neural reorganization, but in patients with partial hearing deprivation, how residual auditory function is balanced with the compensatory plasticity of other sensory modalities remains unclear. Previous studies have reported conflicting findings, potentially due to differences in study populations or task designs. Here, we investigated hierarchical neural processing in a homogeneous cohort of 37 congenital SSD patients (31.6 ± 6.5 years, 18 males) and 32 normal-hearing (NH) controls (30.6 ± 7.3 years, 14 males) using both auditory and visual oddball tasks with electroencephalography (EEG). In the auditory task, SSD patients presented reduced amplitudes of early exogenous components (N1, P2) and mismatch negativity (MMN), but preserved late endogenous components (N2, P3), compared with NH controls. Conversely, in the visual task, SSD patients presented increased early visual N1 amplitudes with intact visual mismatch negativity (vMMN) and endogenous components (N2, P3). No latency differences in the above components were observed. These results reveal a difference in plasticity between lower- and higher-level processing. Our findings indicate that functional plasticity in SSD patients occurs predominantly at sensory stages and is characterized by diminished auditory and compensatory elevated visual neural activity, whereas higher-level discrimination processing in either modality is largely unaffected. These findings clarify prior discrepancies, establish a hierarchical framework for understanding neuroplasticity in partial sensory deprivation, and have implications for rehabilitation strategies for SSD patients.
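Component amplitudes such as the MMN are conventionally measured as the mean of the deviant-minus-standard difference wave inside a latency window. A toy numpy sketch with simulated epochs; the sampling rate, window, and effect shape are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

fs = 500                           # EEG sample rate (Hz), illustrative
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch from -100 to 500 ms

def mean_amplitude(erp, t, window):
    """Mean ERP amplitude inside a latency window (seconds)."""
    mask = (t >= window[0]) & (t <= window[1])
    return float(erp[mask].mean())

rng = np.random.default_rng(3)
# Toy averaged waveforms: a negative deflection around 200 ms in deviants only.
mmn_shape = -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
standard = rng.normal(scale=1.0, size=(200, t.size)).mean(axis=0)
deviant = mmn_shape + rng.normal(scale=1.0, size=(200, t.size)).mean(axis=0)

diff_wave = deviant - standard                 # deviant minus standard
mmn = mean_amplitude(diff_wave, t, (0.15, 0.25))
print(f"MMN mean amplitude: {mmn:.2f} (arbitrary units)")
```

A reduced MMN, as reported for the auditory task in SSD patients, would show up as this window mean shrinking toward zero.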
Title: Functional Plasticity in Auditory and Visual Discrimination Processing in Patients with Single-Sided Deafness: An EEG Study. Trends in Hearing, vol. 30, 23312165251413850. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816557/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-02-13. DOI: 10.1177/23312165261423041
Robert A Lutfi, Lindsey Kummerer, Jungmee Lee, Varsha Rallapalli
Difficulty recognizing speech in noise is a common complaint among those with sensorineural hearing loss. Yet the degree of difficulty differs widely among individuals, often unrelated to the clinical gold standard for evaluating hearing, the pure-tone audiogram. Research has isolated both auditory and nonauditory factors responsible for these differences, but these factors do not operate in isolation. In the present work, a generic computational model involving simultaneous cue sensitivity, cue reliance, and decision noise provided an integrative framework for identifying sources of between-listener variance not accounted for by the audiogram. The framework was applied to performance differences within and between normal-hearing (NH) and hearing-impaired (HI) groups in the processing of linguistic, acoustic, and statistical cues supporting speech recognition in noise. The primary source of performance differences between groups was differences in sensitivity for the subtle, but largely stationary acoustic cues required for speech recognition. The overwhelming source of performance differences within groups was differences in decision noise associated with more salient, but highly variable statistical cues for speech separation. For speech separation, HI listeners placed far greater reliance than NH listeners on the one cue for which they were most sensitive. HI listeners, but not NH listeners, benefited by shifting all acoustic information to this most relied-on cue. The results provide preliminary support for the feasibility of integrative modeling as a means of evaluating the collective influence of factors affecting speech recognition in noise.
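The generic observer model described (cue sensitivity, cue reliance, decision noise) can be sketched as a weighted-sum decision variable. The weights, sensitivities, and two-interval framing below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def percent_correct(weights, sensitivities, decision_noise, n_trials=20000):
    """Toy 2AFC observer: each cue is a unit-variance observation whose
    mean shifts by the observer's sensitivity in the target interval; the
    observer combines cues with reliance weights, then adds internal
    decision noise before choosing the interval with the larger sum."""
    w = np.asarray(weights, dtype=float)
    d = np.asarray(sensitivities, dtype=float)
    target = rng.normal(loc=d, scale=1.0, size=(n_trials, d.size))
    foil = rng.normal(loc=0.0, scale=1.0, size=(n_trials, d.size))
    dv = (target - foil) @ w + rng.normal(scale=decision_noise, size=n_trials)
    return float(np.mean(dv > 0))

w = [0.5, 0.3, 0.2]          # cue reliance (weights), illustrative
d = [1.0, 0.5, 0.2]          # cue sensitivity, illustrative
pc_low = percent_correct(w, d, decision_noise=0.0)
pc_high = percent_correct(w, d, decision_noise=2.0)
print(pc_low, pc_high)       # more decision noise -> lower accuracy
```

Fitting all three parameter sets simultaneously to one listener's trial data is what makes the framework "integrative": a drop in accuracy can then be attributed to sensitivity, reliance, or decision noise rather than conflated.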
Title: Integrative Modeling of Individual Differences Recognizing Speech in Noise by Hearing-Impaired Adults. Trends in Hearing, vol. 30, 23312165261423041. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12905081/pdf/
Pub Date: 2026-01-01. Epub Date: 2026-02-18. DOI: 10.1177/23312165261422955
Robel Z Alemu, Alan Blakeman, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon
This study aimed to characterize effects of bilateral bone conduction devices (BCDs), including the Cochlear™ Osia® (Osia) and the Cochlear™ percutaneous Baha® Connect System (Baha), on localization of stationary and moving sound in children and adolescents with bilateral atresia. Participants were 11 listeners with BCDs [mean age (SD) = 14.7 (3.5) years] and 11 age-matched controls [mean age (SD) = 14.9 (1.9) years]. Outcomes were word recognition in quiet and noise, spatial release from masking (SRM) [spondee-word recognition thresholds in noise at co-located (0°) or separated (90° left/right) positions], self-reported hearing using the Speech, Spatial and Qualities of Hearing Scale (SSQ), and localization of stationary and moving sound with tracking of real-time unrestricted head movements. BCD users had reduced speech perception accuracy in noise during unilateral listening (p < .001) and higher speech recognition thresholds than controls (p = .001). BCD users had higher errors than controls during stationary (p < .001) and moving (p < .001) sound localization, consistent with self-reported spatial hearing challenges. BCD users had significantly reduced errors during bilateral use compared to unilateral use for stationary (p < .01) but not always for moving (right unilateral: p < .01; left unilateral: p = .46) sound localization. BCD users spent less time moving their heads in the correct direction compared to controls for stationary and moving sound localization (p < .01). Results indicate that children and adolescents with BCDs demonstrate improved localization of stationary but not moving sound sources with bilateral device use compared to unilateral use. This finding provides evidence for some access to binaural cues and mitigation of head shadow despite transcranial attenuation, but ineffective use of head movements.
Title: Benefits of Bilateral Bone Conduction Device Use Including Osia Devices in Children and Adolescents With Bilateral Atresia. Trends in Hearing, vol. 30, 23312165261422955. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921180/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251410988
Hannah Guest, Paul Elliott, Martie van Tongeren, Joseph Laycock, Steven Thorley-Lawson, Michael A Stone, Michael T Loughran, Christopher J Plack
Research into the long-term effects of noise on hearing is often confounded by health and lifestyle differences between individuals. UK police radio ear-pieces are capable of emitting high sound levels and, crucially, are worn in one ear, allowing between-ear comparisons which control for individual-level confounding factors. Low volume-control settings are recommended to reduce risk to police hearing, yet actual usage patterns and auditory effects remain unexamined. This study used a large-scale survey (N = 4,498) to assess ear-piece noise exposure and the associated hearing health. Most participants reported using high volume-control settings and 45.2% reported experiencing signs of temporary threshold shift (TTS) in the exposed ear. Estimated weekly-averaged noise exposures frequently exceeded the UK's 85 dBA Upper Exposure Action Value. Ear-piece use was associated with 73% (95% confidence interval [CI] 46-106%) increased risk of persistent tinnitus, which on mediation analysis appeared to be driven by a subset of users who experienced signs of TTS. Importantly, tinnitus location was associated with the side of exposure, suggesting tinnitus related to device use rather than to other factors. In contrast, Digits-In-Noise thresholds showed no relation with noise exposure; potential explanations include compensatory auditory training effects, but limitations of Digits-In-Noise data must also be considered. Findings highlight a need for further investigation into hearing risks in police personnel, including in-person auditory testing. Risk mitigation strategies might involve improved device design, training on safe use, and expanded hearing health surveillance. Given the potential for cumulative auditory damage, TTS may serve as an early warning sign, warranting attention in broader noise-exposed populations.
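The "weekly-averaged noise exposure" compared against the UK's 85 dBA Upper Exposure Action Value follows the standard formula for weekly personal exposure level, L_EP,w: an energy average of the daily 8-hour-normalized levels over a nominal five-day week. A minimal sketch of that calculation (the daily levels below are hypothetical, not the survey's estimates):

```python
import math

# Sketch of the weekly-averaged personal noise exposure level L_EP,w (dBA),
# an energy average of daily exposure levels L_EP,d over a nominal 5-day week:
#   L_EP,w = 10 * log10( (1/5) * sum_i 10^(L_EP,d_i / 10) )

def weekly_average_exposure(daily_levels_dba: list[float]) -> float:
    """Return L_EP,w in dBA from up to 7 daily L_EP,d values."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in daily_levels_dba) / 5)

# Hypothetical week of ear-piece exposure estimates (dBA):
lep_w = weekly_average_exposure([84, 86, 88, 85, 87])
print(round(lep_w, 1))  # compare against the 85 dBA Upper Exposure Action Value
```

Because the average is on an energy (not dB) scale, a single loud day dominates: the example week lands above 86 dBA even though two of its days are at or below 85 dBA.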
Title: Leveraging Monaural Exposures to Reveal Early Effects of Noise: Evidence from Police Radio Ear-Piece Use. Trends in Hearing, vol. 30, 23312165251410988. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858745/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251408761
Scott Bannister, Jennifer Firth, Gerardo Roa-Dabike, Rebecca Vos, William Whitmer, Alinka E Greasley, Simone Graetzer, Bruno Fazenda, Trevor Cox, Jon Barker, Michael A Akeroyd
Music is central to many people's lives, and hearing loss (HL) is often a barrier to musical engagement. Hearing aids (HAs) help, but their efficacy in improving speech does not consistently translate to music. This research evaluated systems submitted to the 1st Cadenza Machine Learning Challenge, where entrants aimed to improve music audio quality for HA users through source separation and remixing. The HA users (N = 53, ranging from "mild" to "moderately severe" HL) assessed eight challenge systems (including one baseline using the HDemucs source separation algorithm, remixing to original mixes of music samples, and applying National Acoustic Laboratories Revised amplification) and rated 200 music samples processed for their HL. Participants rated samples on basic audio quality, clarity, harshness, distortion, frequency balance, and liking. Results suggest no entrant system surpassed the baseline for audio quality, although differences emerged in system efficacy across HL severities. Clarity and distortion ratings were most predictive of audio quality. Finally, some systems produced signals with higher objective loudness, spectral flux and clipping with increasing HL severity; these received lower audio quality ratings by listeners with moderately severe HL. Findings highlight how music enhancement requires varied solutions and tests across a range of HL severities. This challenge provided a first application of source separation to music listening with HL. However, state-of-the-art source separation algorithms limited the diversity of entrant solutions, resulting in no improvements over the baseline; to promote development of innovative processing strategies, future work should increase complexity of music listening scenarios to be addressed through source separation.
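The objective signal properties mentioned above — spectral flux and clipping — are standard measures of frame-to-frame spectral change and full-scale saturation. A minimal sketch of both, under common textbook definitions (this is not the Cadenza evaluation code; frame sizes and the clipping limit are illustrative choices):

```python
import numpy as np

# Sketch of two objective signal measures of the kind reported:
# spectral flux (frame-to-frame positive change in the magnitude spectrum)
# and clipping ratio (fraction of samples at or beyond full scale).

def spectral_flux(signal: np.ndarray, frame_len: int = 1024, hop: int = 512) -> float:
    """Mean L2 norm of positive differences between successive magnitude spectra."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    mags = [np.abs(np.fft.rfft(f * window)) for f in frames]
    diffs = [np.linalg.norm(np.maximum(b - a, 0.0)) for a, b in zip(mags, mags[1:])]
    return float(np.mean(diffs))

def clipping_ratio(signal: np.ndarray, limit: float = 0.999) -> float:
    """Fraction of samples with magnitude at or above the full-scale limit."""
    return float(np.mean(np.abs(signal) >= limit))

# Hypothetical test signal: a 440 Hz tone driven into hard clipping.
t = np.linspace(0, 1, 16000, endpoint=False)
x = np.clip(1.5 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)
print(spectral_flux(x), clipping_ratio(x))
```

In the study's terms, higher flux and clipping ratios with increasing hearing-loss severity coincided with lower quality ratings from listeners with moderately severe loss.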
Title: The First Cadenza Challenge: Perceptual Evaluation of Machine Learning Systems to Improve Audio Quality of Popular Music for Those with Hearing Loss. Trends in Hearing, vol. 30, 23312165251408761. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858752/pdf/