Pub Date: 2025-01-01 | Epub Date: 2025-03-18 | DOI: 10.1177/23312165251317027
Pedro Lladó, Piotr Majdak, Roberto Barumerli, Robert Baumgartner
Localization of sound sources in sagittal planes relies significantly on monaural spectral cues. These cues are primarily derived from the direction-specific filtering of the pinnae. The contribution of specific frequency regions to the cue evaluation has not been fully clarified. To this end, we analyzed how different spectral weighting schemes contribute to the explanatory power of a sagittal-plane localization model in response to wideband, flat-spectrum stimuli. Each weighting scheme emphasized the contribution of spectral cues within well-defined frequency bands, enabling us to assess their impact on the predictions of individual patterns of localization responses. By means of Bayesian model selection, we compared five model variants representing various spectral weights. Our results indicate a preference for the weighting schemes emphasizing the contribution of frequencies above 8 kHz, suggesting that, in the auditory system, spectral cue evaluation is upweighted in that frequency region. While various potential explanations are discussed, we conclude that special attention should be paid to this high-frequency region in spatial-audio applications aiming for the best localization performance.
Spectral Weighting of Monaural Cues for Auditory Localization in Sagittal Planes. Trends in Hearing, 29, 23312165251317027. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920987/pdf/
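As a rough illustration of the idea behind such weighting schemes, the sketch below compares two magnitude spectra with frequencies above 8 kHz upweighted. The function name, the simple two-level weighting, and the toy spectra are all hypothetical stand-ins, not the paper's model (which builds on a full sagittal-plane localization model evaluated with Bayesian model selection).

```python
import numpy as np

def weighted_spectral_distance(target_db, template_db, freqs_hz,
                               cutoff_hz=8000.0, high_weight=2.0):
    """Band-weighted RMS distance (dB) between two magnitude spectra.

    Frequencies above cutoff_hz are upweighted, loosely mirroring the
    finding that spectral cues above 8 kHz carry more perceptual weight.
    (Hypothetical helper; the two-level weighting is an assumption.)
    """
    w = np.where(freqs_hz > cutoff_hz, high_weight, 1.0)
    diff = np.asarray(target_db, float) - np.asarray(template_db, float)
    return float(np.sqrt(np.sum(w * diff ** 2) / np.sum(w)))

freqs = np.linspace(1000.0, 16000.0, 16)     # toy frequency grid in Hz
flat = np.zeros_like(freqs)                  # flat reference spectrum
notch = np.where(freqs > 8000.0, -6.0, 0.0)  # 6-dB high-frequency attenuation
d = weighted_spectral_distance(notch, flat, freqs)
```

With the upweighting, the same high-frequency deviation yields a larger distance than an unweighted comparison would, which is the qualitative effect the model variants formalize.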
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320789
Michael L Smith, Matthew B Winn
The process of repairing misperceptions has been identified as a contributor to effortful listening in people who use cochlear implants (CIs). The current study was designed to examine the relative cost of repairing misperceptions at earlier or later parts of a sentence that contained contextual information that could be used to infer words both predictively and retroactively. Misperceptions were enforced at specific times by replacing single words with noise. Changes in pupil dilation were analyzed to track differences in the timing and duration of effort, comparing listeners with typical hearing (TH) or with CIs. Increases in pupil dilation were time-locked to the moment of the missing word, with longer-lasting increases when the missing word was earlier in the sentence. Compared to listeners with TH, CI listeners showed elevated pupil dilation for longer periods of time after listening, suggesting a lingering effect of effort after sentence offset. When needing to mentally repair missing words, CI listeners also made more mistakes on words elsewhere in the sentence, even though these words were not masked. Changes in effort based on the position of the missing word were not evident in basic measures like peak pupil dilation and only emerged when the full time course was analyzed, suggesting that the timing analysis adds new information to our understanding of listening effort. These results demonstrate that some mistakes are more costly than others and incur different levels of mental effort to resolve, underscoring the information lost when characterizing speech perception with simple measures like percent-correct scores.
Repairing Misperceptions of Words Early in a Sentence is More Effortful Than Repairing Later Words, Especially for Listeners With Cochlear Implants. Trends in Hearing, 29, 23312165251320789. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851752/pdf/
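A minimal sketch of why a full time-course analysis can reveal effects that peak pupil dilation misses: two toy traces with identical peaks but different recovery times. The sampling rate, trace shapes, and analysis windows are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def baseline_correct(trace, fs, baseline_s=1.0):
    """Subtract the mean pre-stimulus baseline from a pupil trace."""
    n0 = int(baseline_s * fs)
    return trace - trace[:n0].mean()

fs = 50                               # Hz; a typical eye-tracker rate (assumption)
t = np.arange(0.0, 6.0, 1.0 / fs)
# Toy traces with identical peaks but different recovery times, mimicking
# the lingering post-sentence dilation reported for CI listeners.
early = np.exp(-((t - 2.0) ** 2) / 0.5)
lingering = np.exp(-((t - 2.0) ** 2) / 2.0)

peak_diff = lingering.max() - early.max()      # peak measure: no difference
late = t > 3.5                                 # post-offset analysis window
timecourse_diff = (lingering[late] - early[late]).mean()

# Baseline correction of a raw trace with a 3-mm tonic pupil size:
corrected = baseline_correct(3.0 + early, fs)
```

Here the peak difference is zero while the post-offset window shows a clear difference, mirroring how lingering effort can be invisible to peak-based summaries.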
Pub Date: 2025-01-01 | Epub Date: 2025-05-28 | DOI: 10.1177/23312165251344947
Raphael Cueille, Mathieu Lavandier
A binaural model is proposed to predict speech intelligibility in rooms for normal-hearing (NH) and hearing-impaired listener groups, combining the advantages of two existing models. The leclere2015 model takes binaural room impulse responses (BRIRs) as inputs and accounts for the temporal smearing of the speech by reverberation, but only works with stationary noises for NH listeners. The vicente2020 model takes the speech and noise signals at the ears as well as the listener audiogram as inputs and accounts for modulations in the noise and hearing loss, but cannot predict the temporal smearing of the speech by reverberation. The new model takes the audiogram, BRIRs and ear signals as inputs to account for the temporal smearing of the speech, the masker modulations and hearing loss. It gave accurate predictions for speech reception thresholds measured in seven experiments. The proposed model can make predictions that neither of the two original models can when the target speech is influenced by reverberation and the noise has modulations and/or the listeners have hearing loss. In terms of model parameters, four methods were compared to separate the early and late reverberation, and two methods were compared to account for hearing loss.
Binaural Speech Intelligibility in Noise and Reverberation: Prediction of Group Performance for Normal-hearing and Hearing-impaired Listeners. Trends in Hearing, 29, 23312165251344947. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12120292/pdf/
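One common way to separate early from late reverberation is a fixed time window after the direct sound; the sketch below shows that idea on a toy impulse response. The 50-ms boundary, the helper function, and the synthetic response are illustrative assumptions (the paper compares four separation methods, which are not detailed here).

```python
import numpy as np

def split_brir(brir, fs, boundary_ms=50.0):
    """Split an impulse response into early and late parts at a fixed
    boundary after the direct-sound peak (one of several possible methods)."""
    onset = int(np.argmax(np.abs(brir)))           # direct-sound peak
    cut = onset + int(boundary_ms * 1e-3 * fs)
    early = brir.copy()
    early[cut:] = 0.0
    late = brir.copy()
    late[:cut] = 0.0
    return early, late

fs = 16000
ir = np.zeros(fs)                                  # 1-s toy impulse response
ir[100] = 1.0                                      # direct sound
ir[101:] = 0.05 * np.exp(-np.arange(fs - 101) / 3000.0)  # decaying tail
early, late = split_brir(ir, fs)
```

The early part (direct sound plus early reflections) is treated as useful for intelligibility, while the late part acts as an additional masker; the two segments sum back to the original response by construction.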
Pub Date: 2025-01-01 | Epub Date: 2025-05-14 | DOI: 10.1177/23312165251340864
Emily Buss, Margaret E Richter, Amanda D Sloop, Margaret T Dillon
The ability to tell where sound sources are in space is ecologically important for spatial awareness and communication in multisource environments. While hearing aids and cochlear implants (CIs) can support spatial hearing for some users, this ability is not routinely assessed clinically. The present study compared sound source localization for a 200-ms speech-shaped noise presented using real sources at 18° intervals from -54° to +54° azimuth and virtual sources that were simulated using amplitude panning with sources at -54° and +54°. Participants were 34 adult CI or electric-acoustic stimulation users, including individuals with single-sided deafness or aided acoustic hearing. The pattern of localization errors by participant was broadly similar for real and virtual sources, with some modest differences. For example, the root mean square (RMS) error for these two conditions was correlated at r = .89 (p < .001), with the RMS error elevated by 3.9° on average for virtual sources. These results suggest that sound source localization with two-speaker amplitude panning may provide clinically useful information when testing with real sources is infeasible.
Estimating Cochlear Implant Users' Sound Localization Abilities With Two Loudspeakers. Trends in Hearing, 29, 23312165251340864. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078988/pdf/
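Two-speaker virtual sources of the kind described above can be approximated with tangent-law amplitude panning, and localization accuracy summarized with an RMS error. The sketch below uses hypothetical helper functions under the assumption of loudspeakers at ±54° azimuth; the study's exact panning law is not specified in the abstract.

```python
import numpy as np

def panning_gains(target_deg, speaker_deg=54.0):
    """Tangent-law gains (left, right) for a virtual source between
    loudspeakers at -speaker_deg and +speaker_deg azimuth (assumed setup)."""
    ratio = np.tan(np.radians(target_deg)) / np.tan(np.radians(speaker_deg))
    g_left, g_right = 1.0 - ratio, 1.0 + ratio
    norm = np.hypot(g_left, g_right)          # constant-power normalization
    return g_left / norm, g_right / norm

def rms_error(responses_deg, targets_deg):
    """Root mean square localization error in degrees."""
    diff = np.asarray(responses_deg, float) - np.asarray(targets_deg, float)
    return float(np.sqrt(np.mean(diff ** 2)))

gl, gr = panning_gains(0.0)        # centered source: equal gains
gl54, gr54 = panning_gains(54.0)   # source panned fully to the right speaker
err = rms_error([0.0, 18.0], [0.0, 0.0])
```

A source panned to a loudspeaker position collapses onto that single speaker, while a centered source splits power equally, which is the behavior that makes two speakers sufficient to simulate intermediate directions.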
Understanding the initial signature of noise-induced auditory damage remains a significant priority. Animal models suggest the cochlear base is particularly vulnerable to noise, raising the possibility that early-stage noise exposure could be linked to basal cochlear dysfunction, even when thresholds at 0.25-8 kHz are normal. To investigate this in humans, we conducted a meta-analysis following a systematic review, examining the association between noise exposure and hearing in frequencies from 9 to 20 kHz as a marker for basal cochlear dysfunction. Systematic review and meta-analysis followed PRISMA guidelines and the PICOS framework. Studies on noise exposure and hearing in the 9 to 20 kHz region in adults with clinically normal audiograms were included by searching five electronic databases (e.g., PubMed). Cohorts from 30 studies, comprising approximately 2,500 participants, were systematically reviewed. Meta-analysis was conducted on 23 studies using a random-effects model for occupational and recreational noise exposure. Analysis showed a significant positive association between occupational noise and hearing thresholds, with medium effect sizes at 9 and 11.2 kHz and large effect sizes at 10, 12, 14, and 16 kHz. However, the association with recreational noise was less consistent, with significant effects only at 12, 12.5, and 16 kHz. Egger's test indicated some publication bias, specifically at 10 kHz. Findings suggest thresholds above 8 kHz may indicate early noise exposure effects, even when lower-frequency (≤8 kHz) thresholds remain normal. Longitudinal studies incorporating noise dosimetry are crucial to establish causality and further support the clinical utility of extended high-frequency testing.
Is Noise Exposure Associated With Impaired Extended High Frequency Hearing Despite a Normal Audiogram? A Systematic Review and Meta-Analysis. Sajana Aryal, Monica Trevino, Hansapani Rodrigo, Srikanta Mishra. Trends in Hearing, 29. DOI: 10.1177/23312165251343757. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12084714/pdf/
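Random-effects pooling of the kind used above is commonly implemented with the DerSimonian-Laird estimator; the sketch below is a generic textbook version with made-up effect sizes and variances, not the meta-analysis's actual data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled effect under a random-effects model (DerSimonian-Laird)."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    theta_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - theta_fe) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Made-up standardized mean differences and their sampling variances:
pooled, se, tau2 = dersimonian_laird([0.2, 0.9, 0.6], [0.04, 0.05, 0.03])
```

When heterogeneity exceeds what sampling error explains (Q above its degrees of freedom), tau² is positive and the pooling downweights precise studies less aggressively than a fixed-effect model would.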
Pub Date: 2025-01-01 | Epub Date: 2025-08-14 | DOI: 10.1177/23312165251367630
Federica Bianchi, Sindri Jonsson, Torben Christiansen, Elaine Hoi Ning Ng
Although multitasking is a common everyday activity, it is often challenging. The aim of this study was to evaluate the effect of noise attenuation during an audio-visual dual task and investigate cognitive resource allocation over time via pupillometry. Twenty-six normal hearing participants performed a dual task consisting of a primary speech recognition task and a secondary visual reaction-time task, as well as a visual-only task. Four conditions were tested in the dual task: two speech levels (60- and 64-dB SPL) and two noise conditions (No Attenuation with noise at 70 dB SPL; Attenuation condition with noise attenuated by passive damping). Elevated pupillary responses for the No Attenuation condition relative to the Attenuation and visual-only conditions indicated that participants allocated additional resources to the primary task during the playback of the first part of the sentence, while reaction time to the secondary task increased significantly relative to the visual-only task. In the Attenuation condition, participants performed the secondary task with a similar reaction time relative to the visual-only task (no dual-task cost), while pupillary responses revealed allocation of resources to the primary task after completion of the secondary task. These findings reveal that the temporal dynamics of cognitive resource allocation between primary and secondary task were affected by the level of background noise in the primary task. This study demonstrates that noise attenuation, as offered for example by audio devices, frees up cognitive resources in noisy listening environments and may be beneficial to improve performance and decrease dual-task costs during multitasking.
Pupillary Responses During a Dual Task: Effect of Noise Attenuation on the Timing of Cognitive Resource Allocation. Trends in Hearing, 29, 23312165251367630. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12357024/pdf/
Pub Date: 2025-01-01 | Epub Date: 2025-06-27 | DOI: 10.1177/23312165251347138
Pam Dawson, Amanda Fullerton, Harish Krishnamoorthi, Kerrie Plant, Robert Cowan, Nadine Buczak, Christopher Long, Chris J James, Fergio Sismono, Andreas Büchner
This study investigated which of a range of factors could explain performance in two distinct groups of experienced, adult cochlear implant recipients differentiated by performance on words in quiet: 72 with poorer word scores versus 77 with better word scores. Tests measured the potential contribution of sound processor mapping, electrode placement, neural health, impedance, cognitive, and patient-related factors in predicting performance. A systematically measured sound processor MAP was compared to the subject's walk-in MAP. Electrode placement included modiolar distance, basal and apical insertion angle, and presence of scalar translocation. Neural health measurements included bipolar thresholds, polarity effect using asymmetrical pulses, and evoked compound action potential (ECAP) measures such as the interphase gap (IPG) effect, total refractory time, and panoramic ECAP. Impedance measurements included trans impedance matrix and four-point impedance. Cognitive tests comprised vocabulary ability, the Stroop test, and the Symbol Digits Modality Test. Performance was measured with words in quiet and sentence in noise tests and basic auditory sensitivity measures including phoneme discrimination in noise and quiet, amplitude modulation detection thresholds and quick spectral modulation detection. A range of predictor variables accounted for between 33% and 60% of the variability in performance outcomes. Multivariable regression analyses showed four key factors that were consistently predictive of poorer performance across several outcomes: substantially underfitted sound processor MAP thresholds, higher average bipolar thresholds, greater total refractory time, and greater IPG offset. Scalar translocation, cognitive variables, and other patient related factors were also significant predictors across more than one performance outcome.
Multivariable regression analyses showed four key factors that were consistently predictive of poorer performance across several outcomes: substantially underfitted sound processor MAP thresholds, higher average bipolar thresholds, greater total refractory time, and greater IPG offset. Scalar translocation, cognitive variables, and other patient-related factors were also significant predictors across more than one performance outcome.

A Prospective, Multicentre Case-Control Trial Examining Factors That Explain Variable Clinical Performance in Post Lingual Adult CI Recipients. Trends in Hearing, 29, 23312165251347138. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205208/pdf/
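A hedged sketch of how multivariable regression can quantify explained variance (R²) from several predictors. The predictor matrix, coefficient values, and noise level below are simulated stand-ins loosely named after the study's key factors, not its data; only the pooled sample size (72 + 77) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 149   # pooled size of the two groups in the study (72 + 77)
# Hypothetical standardized predictors: MAP-threshold underfit, mean
# bipolar threshold, total refractory time, IPG offset.
X = rng.standard_normal((n, 4))
beta_true = np.array([-0.5, -0.3, -0.25, -0.2])   # higher values -> poorer score
y = X @ beta_true + 0.8 * rng.standard_normal(n)  # simulated outcome + noise

Xd = np.column_stack([np.ones(n), X])             # add intercept column
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta_hat
r2 = 1.0 - resid.var() / y.var()                  # proportion of variance explained
```

With these simulated settings the model recovers negative coefficients and an R² in the rough range the study reports (33-60%), illustrating the kind of summary the regression analyses provide.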
Pub Date: 2025-01-01 | Epub Date: 2025-09-19 | DOI: 10.1177/23312165251375892
Ruijing Ning, Carine Signoret, Emil Holmer, Henrik Danielsson
This study investigates the impact of hearing aid (HA) use on visual lexical decision (LD) performance in individuals with hearing loss. We hypothesize that HA use benefits phonological processing and leads to faster and more accurate visual LD. We compared the visual LD performance among three groups: 92 short-term HA users (<5 years), 98 long-term HA users, and 55 nonusers, while controlling for hearing level, age, and years of education. Results showed that, compared with non-HA users, HA users had significantly faster reaction times in visual LD; specifically, long-term HA use was associated with a smaller reaction-time difference between pseudowords and nonwords. These results suggest that HA use is associated with faster visual word recognition, potentially reflecting enhanced cognitive functions beyond auditory processing. These findings point to possible cognitive advantages linked to HA use.
Hearing Aid Use is Associated with Faster Visual Lexical Decision. Trends in Hearing, 29, 23312165251375892. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12449647/pdf/
Pub Date: 2025-01-01. Epub Date: 2025-11-13. DOI: 10.1177/23312165251389585
Nick Sommerhalder, Zbyněk Bureš, Oliver Profant, Tobias Kleinjung, Patrick Neff, Martin Meyer
Adults with chronic subjective tinnitus often struggle with speech recognition in challenging listening environments. While most research demonstrates deficits in speech recognition among individuals with tinnitus, studies focusing on older adults remain scarce. Besides speech recognition deficits, tinnitus has been linked to diminished cognitive performance, particularly in executive functions, yet its associations with specific cognitive domains in ageing populations are not fully understood. Our previous study of younger adults found that individuals with tinnitus exhibit deficits in speech recognition and interference control. Building on this, we hypothesized that these deficits are also present in older adults. We conducted a cross-sectional study of older adults (aged 60-79): 32 with tinnitus and 31 controls matched for age, gender, and education, and approximately matched for hearing loss. Participants underwent audiometric, speech recognition, and cognitive tasks. The tinnitus participants performed more poorly in speech-in-noise and gated speech tasks, whereas no group differences were observed in the other suprathreshold auditory tasks. With regard to cognition, individuals with tinnitus showed poorer performance in interference control, emotional interference, cognitive flexibility, and verbal working memory tasks, and this performance correlated with tinnitus distress and loudness. We conclude that tinnitus-related deficits persist and even worsen with age. Our results suggest that altered central mechanisms contribute to speech recognition difficulties in older adults with tinnitus.
{"title":"Association of Tinnitus With Speech Recognition and Executive Functions in Older Adults.","authors":"Nick Sommerhalder, Zbyněk Bureš, Oliver Profant, Tobias Kleinjung, Patrick Neff, Martin Meyer","doi":"10.1177/23312165251389585","DOIUrl":"10.1177/23312165251389585","url":null,"abstract":"<p><p>Adults with chronic subjective tinnitus often struggle with speech recognition in challenging listening environments. While most research demonstrates deficits in speech recognition among individuals with tinnitus, studies focusing on older adults remain scarce. Besides speech recognition deficits, tinnitus has been linked to diminished cognitive performance, particularly in executive functions, yet its associations with specific cognitive domains in ageing populations are not fully understood. Our previous study of younger adults found that individuals with tinnitus exhibit deficits in speech recognition and interference control. Building on this, we hypothesized that these deficits are also present for older adults. We conducted a cross-sectional study of older adults (aged 60-79), 32 with tinnitus and 31 controls matched for age, gender, education, and approximately matched for hearing loss. Participants underwent audiometric, speech recognition, and cognitive tasks. The tinnitus participants performed more poorly in speech-in-noise and gated speech tasks, whereas no group differences were observed in the other suprathreshold auditory tasks. With regard to cognition, individuals with tinnitus showed reduced interference control, emotional interference, cognitive flexibility, and verbal working memory, correlating with tinnitus distress and loudness. It is concluded that tinnitus-related deficits persist and even worsen with age. 
Our results suggest that altered central mechanisms contribute to speech recognition difficulties in older adults with tinnitus.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251389585"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615926/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145514780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-11-24. DOI: 10.1177/23312165251397373
E Sebastian Lelo de Larrea-Mancera, Tess K Koerner, William J Bologna, Sara Momtaz, Katherine N Menon, Audrey Carrillo, Eric C Hoover, G Christopher Stecker, Frederick J Gallun, Aaron R Seitz
Previous research has demonstrated that remote testing of suprathreshold auditory function using distributed technologies can produce results that closely match those obtained in laboratory settings with specialized, calibrated equipment. This work has facilitated the validation of various behavioral measures in remote settings that provide valuable insights into auditory function. In the current study, we sought to address whether a broad battery of auditory assessments could explain variance in self-reported hearing handicap. To do so, we used a portable psychophysics assessment tool along with an online recruitment tool (Prolific) to collect auditory task data from participants with (n = 84) and without (n = 108) self-reported hearing difficulty. Results indicate that several measures of auditory processing differentiate participants with and without self-reported hearing difficulty. In addition, we report the factor structure of the test battery to clarify the underlying constructs and the extent to which they individually or jointly inform hearing function. Relationships between measures of auditory processing were largely consistent with a hypothesized construct model that guided task selection. Overall, this study advances our understanding of the relationship between auditory and cognitive processing in those with and without subjective hearing difficulty. More broadly, these results indicate that these measures show promise for larger-scale research studies in remote settings and could contribute to telehealth approaches that better address people's hearing needs.
{"title":"At-Home Auditory Assessment Using Portable Automated Rapid Testing (PART) to Understand Self-Reported Hearing Difficulties.","authors":"E Sebastian Lelo de Larrea-Mancera, Tess K Koerner, William J Bologna, Sara Momtaz, Katherine N Menon, Audrey Carrillo, Eric C Hoover, G Christopher Stecker, Frederick J Gallun, Aaron R Seitz","doi":"10.1177/23312165251397373","DOIUrl":"10.1177/23312165251397373","url":null,"abstract":"<p><p>Previous research has demonstrated that remote testing of suprathreshold auditory function using distributed technologies can produce results that closely match those obtained in laboratory settings with specialized, calibrated equipment. This work has facilitated the validation of various behavioral measures in remote settings that provide valuable insights into auditory function. In the current study, we sought to address whether a broad battery of auditory assessments could explain variance in self-report of hearing handicap. To address this, we used a portable psychophysics assessment tool along with an online recruitment tool (Prolific) to collect auditory task data from participants with (<i>n</i> <i>=</i> 84) and without (<i>n</i> <i>=</i> 108) self-reported hearing difficulty. Results indicate several measures of auditory processing differentiate participants with and without self-reported hearing difficulty. In addition, we report the factor structure of the test battery to clarify the underlying constructs and the extent to which they individually or jointly inform hearing function. Relationships between measures of auditory processing were found to be largely consistent with a hypothesized construct model that guided task selection. Overall, this study advances our understanding of the relationship between auditory and cognitive processing in those with and without subjective hearing difficulty. 
More broadly, these results indicate promise that these measures can be used in larger scale research studies in remote settings and have potential to contribute to telehealth approaches to better address people's hearing needs.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251397373"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644446/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145597487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}