How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.
Pub Date: 2025-01-01 | Epub Date: 2025-03-16 | DOI: 10.1177/23312165251317010
Timothy Beechey, Graham Naylor
This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.
{"title":"How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.","authors":"Timothy Beechey, Graham Naylor","doi":"10.1177/23312165251317010","DOIUrl":"10.1177/23312165251317010","url":null,"abstract":"<p><p>This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317010"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11912170/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditory Learning and Generalization in Older Adults: Evidence from Voice Discrimination Training.
Pub Date: 2025-01-01 | Epub Date: 2025-05-27 | DOI: 10.1177/23312165251342436
Nuphar Singer, Yael Zaltz
Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger adults) and control groups (12 older, 12 younger adults). Trained participants completed five sessions: two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond control group improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.
{"title":"Auditory Learning and Generalization in Older Adults: Evidence from Voice Discrimination Training.","authors":"Nuphar Singer, Yael Zaltz","doi":"10.1177/23312165251342436","DOIUrl":"10.1177/23312165251342436","url":null,"abstract":"<p><p>Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger adults) and control groups (12 older, 12 younger adults). Trained participants completed five sessions: Two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond control group improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251342436"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117233/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digits-In-Noise Hearing Test Using Text-to-Speech and Automatic Speech Recognition: Proof-of-Concept Study.
Pub Date: 2025-01-01 | Epub Date: 2025-10-01 | DOI: 10.1177/23312165251367625
Mohsen Fatehifar, Kevin J Munro, Michael A Stone, David Wong, Tim Cootes, Josef Schlittenlacher
This proof-of-concept study evaluated the implementation of a digits-in-noise test, which we call the 'AI-powered test', that used text-to-speech (TTS) and automatic speech recognition (ASR). Two other digits-in-noise tests formed the baselines for comparison: the 'keyboard-based test', which used the same configuration as the AI-powered test, and the 'independent test', a third-party-sourced test not modified by us. The validity of the AI-powered test was evaluated by measuring its difference from the independent test and comparing it with the baseline difference between the keyboard-based test and the independent test. The reliability of the AI-powered test was assessed by comparing the similarity of two runs of this test and of the independent test. The study involved 31 participants: 10 with hearing loss and 21 with normal hearing. The mean bias and limits of agreement showed that the agreement between the AI-powered test and the independent test (-1.3 ± 4.9 dB) was similar to the agreement between the keyboard-based test and the independent test (-0.2 ± 4.4 dB), indicating that the addition of TTS and ASR did not have a negative impact. The AI-powered test had a test-retest reliability of -1.0 ± 5.7 dB, poorer than the baseline reliability (-0.4 ± 3.8 dB), but this improved to -0.9 ± 3.8 dB when outliers were removed, showing that low-error ASR (as achieved with the Whisper model) makes the test as reliable as the independent test. These findings suggest that a digits-in-noise test using synthetic stimuli and automatic speech recognition is a viable alternative to traditional tests and could have real-world applications.
{"title":"Digits-In-Noise Hearing Test Using Text-to-Speech and Automatic Speech Recognition: Proof-of-Concept Study.","authors":"Mohsen Fatehifar, Kevin J Munro, Michael A Stone, David Wong, Tim Cootes, Josef Schlittenlacher","doi":"10.1177/23312165251367625","DOIUrl":"10.1177/23312165251367625","url":null,"abstract":"<p><p>This proof-of-concept study evaluated the implementation of a digits-in-noise test we call the 'AI-powered test' that used text-to-speech (TTS) and automatic speech recognition (ASR). Two other digits-in-noise tests formed the baselines for comparison: the 'keyboard-based test' which used the same configurations as the AI-powered test, and the 'independent test', a third-party-sourced test not modified by us. The validity of the AI-powered test was evaluated by measuring its difference from the independent test and comparing it with the baseline, which was the difference between the Keyboard-based test and the Independent test. The reliability of the AI-powered test was measured by comparing the similarity of two runs of this test and the Independent test. The study involved 31 participants: 10 with hearing loss and 21 with normal-hearing. Achieved mean bias and limits-of-agreement showed that the agreement between the AI-powered test and the independent test (-1.3 ± 4.9 dB) was similar to the agreement between the keyboard-based test and the Independent test (-0.2 ± 4.4 dB), indicating that the addition of TTS and ASR did not have a negative impact. The AI-powered test had a reliability of -1.0 ± 5.7 dB, which was poorer than the baseline reliability (-0.4 ± 3.8 dB), but this was improved to -0.9 ± 3.8 dB when outliers were removed, showing that low-error ASR (as shown with the Whisper model) makes the test as reliable as independent tests. These findings suggest that a digits-in-noise test using synthetic stimuli and automatic speech recognition is a viable alternative to traditional tests and could have real-world applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251367625"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12489207/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145208105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bimodal Cochlear Implants: Measurement of the Localization Performance as a Function of Device Latency Difference.
Pub Date: 2025-01-01 | Epub Date: 2025-11-24 | DOI: 10.1177/23312165251396658
Rebecca C Felsheim, Sabine Hochmuth, Alina Kleinow, Andreas Radeloff, Mathias Dietz
Bimodal cochlear implant users show poor localization performance. One reason for this is a difference in processing latency between the hearing aid and the cochlear implant side. It has been shown that reducing this latency difference acutely improves the localization performance of bimodal cochlear implant users. However, because both the device latencies and the acoustic hearing ear are frequency dependent, current frequency-independent latency adjustments cannot fully compensate for the differences, leaving open which latency adjustment is best. We therefore measured the localization performance of 11 bimodal cochlear implant users at multiple cochlear implant latencies. Consistent with previous studies, adjusting the interaural latency improved localization in most of our subjects. However, the latency that led to the best localization performance was not necessarily the latency estimated to compensate for the interaural difference at intermediate frequencies (1 kHz): nine of 11 subjects localized best with a cochlear implant latency slightly shorter than the estimated latency compensation.
{"title":"Bimodal Cochlear Implants: Measurement of the Localization Performance as a Function of Device Latency Difference.","authors":"Rebecca C Felsheim, Sabine Hochmuth, Alina Kleinow, Andreas Radeloff, Mathias Dietz","doi":"10.1177/23312165251396658","DOIUrl":"10.1177/23312165251396658","url":null,"abstract":"<p><p>Bimodal cochlear implant users show poor localization performance. One reason for this is a difference in the processing latency between the hearing aid and the cochlear implant side. It has been shown that reducing this latency difference acutely improves the localization performance of bimodal cochlear implant users. However, due to the frequency dependency of both the device latencies and the acoustic hearing ear, current frequency-independent latency adjustments cannot fully compensate for the differences, leaving open which latency adjustment is best. We therefore measured the localization performance of 11 bimodal cochlear implant users for multiple cochlear implant latencies. We confirmed previous studies that adjusting the interaural latency improves localization in most of our subjects. However, the latency that leads to the best localization performance for most subjects was not necessarily at the latency estimated to compensate for the interaural difference at intermediate frequencies (1 kHz). Nine of 11 subjects localized best with a cochlear implant latency that was slightly shorter than the estimated latency compensation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251396658"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644428/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145597477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effect of Temporal Misalignment Between Acoustic and Simulated Electric Signals on the Time Compression Thresholds of Normal-Hearing Listeners.
Pub Date: 2025-01-01 | Epub Date: 2025-11-24 | DOI: 10.1177/23312165251397699
Qi Gao, Lena L N Wong, Fei Chen
This study investigated the effect of temporal misalignment between acoustic and simulated electric signals on the ability to process fast speech in normal-hearing listeners. The within-ear integration of acoustic and electric hearing was simulated, mimicking the electric-acoustic stimulation (EAS) condition, in which cochlear implant users receive acoustic input at low frequencies and electric stimulation at high frequencies in the same ear. Time-compression thresholds (TCTs), defined as the time-compression ratio yielding 50% correct recognition of time-compressed sentences, were adaptively measured in quiet and in speech-spectrum noise (SSN) as well as amplitude-modulated noise (AMN) at 4 dB and 10 dB signal-to-noise ratio (SNR). Temporal misalignment was introduced by delaying either the acoustic or the simulated electric signals, which were generated using a low-pass filter (cutoff frequency: 600 Hz) and a five-channel noise vocoder, respectively. Listeners showed significant benefits in TCTs from the addition of low-frequency acoustic signals, regardless of temporal misalignment. Within the range from 0 ms to ±30 ms, temporal misalignment decreased listeners' TCTs, and its effect interacted with SNR such that the adverse impact of misalignment was more pronounced at higher SNRs. When misalignment was limited to within ±7 ms, which is closer to the clinically relevant range, its effect disappeared. In conclusion, while temporal misalignment negatively affects the ability of listeners with simulated EAS hearing to process fast Mandarin sentences, its effect is negligible within a clinically relevant range. Future research should validate these findings in real EAS users.
{"title":"The Effect of Temporal Misalignment Between Acoustic and Simulated Electric Signals on the Time Compression Thresholds of Normal-Hearing Listeners.","authors":"Qi Gao, Lena L N Wong, Fei Chen","doi":"10.1177/23312165251397699","DOIUrl":"10.1177/23312165251397699","url":null,"abstract":"<p><p>This study investigated the effect of temporal misalignment between acoustic and simulated electric signals on the ability to process fast speech in normal-hearing listeners. The within-ear integration of acoustic and electric hearing was simulated, mimicking the electric-acoustic stimulation (EAS) condition, where cochlear implant users receive acoustic input at low frequencies and electric stimulation at high frequencies in the same ear. Time-compression thresholds (TCTs), defined as the 50% correct performance for time-compressed sentences, were adaptively measured in quiet and in speech-spectrum noise (SSN) as well as amplitude-modulated noise (AMN) at 4 dB and 10 dB signal-to-noise ratio (SNR). Temporal misalignment was introduced by delaying the acoustic or the simulated electric signals, which were generated using a low-pass filter (cutoff frequency: 600 Hz) and a five-channel noise vocoder, respectively. Listeners showed significant benefits from the addition of low-frequency acoustic signals in terms of TCTs, regardless of temporal misalignment. Within the range from 0 ms to ±30 ms, temporal misalignment decreased listeners' TCTs, and its effect interacted with SNR such that the adverse impact of misalignment was more pronounced at higher SNR levels. When misalignment was limited to within ±7 ms, which is closer to the clinically relevant range, its effect disappeared. In conclusion, while temporal misalignment negatively affects the ability of listeners with simulated EAS hearing to process fast sentences in Mandarin, its effect is negligible when it is close to a clinically relevant range. Future research should validate these findings in real EAS users.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251397699"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644445/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145597739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception of Recorded Music With Hearing Aids: Compression Differentially Affects Musical Scene Analysis and Musical Sound Quality.
Pub Date: 2025-01-01 | Epub Date: 2025-08-25 | DOI: 10.1177/23312165251368669
Robin Hake, Michel Bürgel, Christophe Lesimple, Matthias Vormann, Kirsten C Wagener, Volker Kuehnel, Kai Siedenburg
Hearing aids have traditionally been designed to facilitate speech perception. With regard to music perception, previous work indicates that hearing aid users frequently complain about music sound quality. Yet, the effects of hearing aid amplification on music perception abilities are largely unknown. This study investigated the effects of hearing aid amplification and dynamic range compression (DRC) settings on musical scene analysis (MSA) abilities and sound quality ratings (SQRs) for polyphonic music recordings. Additionally, speech reception thresholds in noise (SRTs) were measured. Thirty-three hearing aid users with moderate to severe hearing loss were tested in three conditions: unaided, and aided with either slow or fast DRC settings. Overall, MSA abilities, SQRs, and SRTs improved significantly with the use of hearing aids compared with the unaided condition. However, differences were observed depending on the choice of compression settings: fast DRC led to better MSA performance, reflecting enhanced selective listening in musical mixtures, while slow DRC elicited more favorable SQRs. Despite these improvements, variability in amplification benefit across DRC settings and tasks remained considerable, with some individuals showing limited or no improvement. These findings highlight a trade-off between scene transparency (indexed by MSA) and perceived sound quality, with individual differences emerging as a key factor in shaping amplification outcomes. Our results underscore the potential benefits of hearing aids for music perception and indicate the need for personalized fitting strategies tailored to task-specific demands.
{"title":"Perception of Recorded Music With Hearing Aids: Compression Differentially Affects Musical Scene Analysis and Musical Sound Quality.","authors":"Robin Hake, Michel Bürgel, Christophe Lesimple, Matthias Vormann, Kirsten C Wagener, Volker Kuehnel, Kai Siedenburg","doi":"10.1177/23312165251368669","DOIUrl":"https://doi.org/10.1177/23312165251368669","url":null,"abstract":"<p><p>Hearing aids have traditionally been designed to facilitate speech perception. With regards to music perception, previous work indicates that hearing aid users frequently complain about music sound quality. Yet, the effects of hearing aid amplification on musical perception abilities are largely unknown. This study aimed to investigate the effects of hearing aid amplification and dynamic range compression (DRC) settings on musical scene analysis (MSA) abilities and sound quality ratings (SQR) using polyphonic music recordings. Additionally, speech reception thresholds in noise (SRT) were measured. Thirty-three hearing aid users with moderate to severe hearing loss participated in three conditions: unaided, and aided with either slow or fast DRC settings. Overall, MSA abilities, SQR and SRT significantly improved with the use of hearing aids compared to the unaided condition. Yet, differences were observed regarding the choice of compression settings. Fast DRC led to better MSA performance, reflecting enhanced selective listening in musical mixtures, while slow DRC elicited more favorable SQR. Despite these improvements, variability in amplification benefit across DRC settings and tasks remained considerable, with some individuals showing limited or no improvement. These findings highlight a trade-off between scene transparency (indexed by MSA) and perceived sound quality, with individual differences emerging as a key factor in shaping amplification outcomes. Our results underscore the potential benefits of hearing aids for music perception and indicate the need for personalized fitting strategies tailored to task-specific demands.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251368669"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378302/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144975114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity, consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and to anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
{"title":"Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations.","authors":"Aaron C Moberly, Liping Du, Terrin N Tamati","doi":"10.1177/23312165241312449","DOIUrl":"10.1177/23312165241312449","url":null,"abstract":"<p><p>When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed for speech recognition tasks on neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165241312449"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11742172/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143014599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral Weighting of Monaural Cues for Auditory Localization in Sagittal Planes.
Pub Date: 2025-01-01 | Epub Date: 2025-03-18 | DOI: 10.1177/23312165251317027
Pedro Lladó, Piotr Majdak, Roberto Barumerli, Robert Baumgartner
Localization of sound sources in sagittal planes significantly relies on monaural spectral cues. These cues are primarily derived from the direction-specific filtering of the pinnae. The contribution of specific frequency regions to the cue evaluation has not been fully clarified. To this end, we analyzed how different spectral weighting schemes contribute to the explanatory power of a sagittal-plane localization model in response to wideband, flat-spectrum stimuli. Each weighting scheme emphasized the contribution of spectral cues within well-defined frequency bands, enabling us to assess their impact on the predictions of individual patterns of localization responses. By means of Bayesian model selection, we compared five model variants representing various spectral weights. Our results indicate a preference for the weighting schemes emphasizing the contribution of frequencies above 8 kHz, suggesting that, in the auditory system, spectral cue evaluation is upweighted in that frequency region. While various potential explanations are discussed, we conclude that special attention should be put on this high-frequency region in spatial-audio applications aiming at the best localization performance.
{"title":"Spectral Weighting of Monaural Cues for Auditory Localization in Sagittal Planes.","authors":"Pedro Lladó, Piotr Majdak, Roberto Barumerli, Robert Baumgartner","doi":"10.1177/23312165251317027","DOIUrl":"10.1177/23312165251317027","url":null,"abstract":"<p><p>Localization of sound sources in sagittal planes significantly relies on monaural spectral cues. These cues are primarily derived from the direction-specific filtering of the pinnae. The contribution of specific frequency regions to the cue evaluation has not been fully clarified. To this end, we analyzed how different spectral weighting schemes contribute to the explanatory power of a sagittal-plane localization model in response to wideband, flat-spectrum stimuli. Each weighting scheme emphasized the contribution of spectral cues within well-defined frequency bands, enabling us to assess their impact on the predictions of individual patterns of localization responses. By means of Bayesian model selection, we compared five model variants representing various spectral weights. Our results indicate a preference for the weighting schemes emphasizing the contribution of frequencies above 8 kHz, suggesting that, in the auditory system, spectral cue evaluation is upweighted in that frequency region. While various potential explanations are discussed, we conclude that special attention should be put on this high-frequency region in spatial-audio applications aiming at the best localization performance.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317027"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920987/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143659047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cochlear Tuning in Early Aging Estimated with Three Methods.
Pub Date: 2025-01-01 | Epub Date: 2025-07-29 | DOI: 10.1177/23312165251364675
Courtney Coburn Glavin, Sumitrajit Dhar
Age-related hearing loss (ARHL) currently affects over 20 million adults in the U.S., and its prevalence is expected to increase as the population ages. However, little is known about the earliest manifestations of ARHL, including its influence on auditory function beyond the threshold of sensation. This work explores the effects of early aging on frequency selectivity (i.e., "tuning"), a critical feature of normal hearing function. Tuning is estimated using both behavioral and physiological measures: fast psychophysical tuning curves (fPTCs), distortion-product otoacoustic emission level ratio functions (DPOAE LRFs), and stimulus-frequency OAE (SFOAE) phase-gradient delay. All three measures were selected because they have high potential for clinical translation but have not been compared directly in the same sample of ears. Results indicate that there may be subtle changes in tuning during early aging, even in ears with clinically normal audiometric thresholds. Additionally, there are notable differences in tuning estimates derived from the three measures. Psychophysical tuning estimates are highly variable and statistically significantly different from OAE-derived estimates, suggesting that behavioral tuning is uniquely influenced by factors not affecting OAE-based tuning. Across all measures, there is considerable individual variability that warrants future investigation. Collectively, this work suggests that age-related auditory decline begins in relatively young ears (<60 years) and in the absence of traditionally defined "hearing loss." These findings suggest the potential benefit of characterizing ARHL beyond threshold and establishing a gold standard for measuring frequency selectivity in humans.
{"title":"Cochlear Tuning in Early Aging Estimated with Three Methods.","authors":"Courtney Coburn Glavin, Sumitrajit Dhar","doi":"10.1177/23312165251364675","DOIUrl":"10.1177/23312165251364675","url":null,"abstract":"<p><p>Age-related hearing loss (ARHL) currently affects over 20 million adults in the U.S. and its prevalence is expected to increase as the population ages. However, little is known about the earliest manifestations of ARHL, including its influence on auditory function beyond the threshold of sensation. This work explores the effects of early aging on frequency selectivity (i.e., \"tuning\"), a critical feature of normal hearing function. Tuning is estimated using both behavioral and physiological measures-fast psychophysical tuning curves (fPTC), distortion product otoacoustic emission level ratio functions (DPOAE LRFs), and stimulus-frequency OAE (SFOAE) phase gradient delay. All three measures were selected because they have high potential for clinical translation but have not been compared directly in the same sample of ears. Results indicate that there may be subtle changes in tuning during early aging, even in ears with clinically normal audiometric thresholds. Additionally, there are notable differences in tuning estimates derived from the three measures. Psychophysical tuning estimates are highly variable and statistically significantly different from OAE-derived tuning estimates, suggesting that behavioral tuning is uniquely influenced by factors not affecting OAE-based tuning. Across all measures, there is considerable individual variability that warrants future investigation. Collectively, this work suggests that age-related auditory decline begins in relatively young ears (<60 years) and in the absence of traditionally defined \"hearing loss.\" These findings suggest the potential benefit of characterizing ARHL beyond threshold and establishing a gold standard for measuring frequency selectivity in humans.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251364675"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12317184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144745544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Listening Effort for Soft Speech in Quiet.
Pub Date: 2025-01-01 | Epub Date: 2025-08-18 | DOI: 10.1177/23312165251370006
Hendrik Husstedt, Jennifer Schmidt, Luca Wiederschein, Robert Wiedenbeck, Markus Kemper, Florian Denk
In addition to speech intelligibility, listening effort has emerged as a critical indicator of hearing performance. It can be defined as the effort experienced or invested in solving an auditory task. Subjective, behavioral, and physiological methods have been employed to assess listening effort. While previous studies have predominantly evaluated listening effort at clearly audible levels, such as in speech-in-noise conditions, we present findings from a study investigating listening effort for soft speech in quiet. Twenty young adults with normal hearing participated in speech intelligibility testing (OLSA), adaptive listening effort scaling (ACALES), and pupillometry. Experienced effort decreased with increasing speech level, and "no effort" was reached at 40 dB sound pressure level (SPL). The difference between the levels rated "extreme effort" and "no effort" was, on average, 20.6 dB. Thus, speech must be presented well above the speech-recognition threshold in quiet to achieve effortless listening. These results prompted a follow-up experiment involving 18 additional participants, who completed OLSA and ACALES tests with hearing-threshold-simulating noise at conversational levels. Comparing the results of the main and follow-up experiments suggests that the observations in quiet cannot be fully attributed to the masking effects of internal noise but likely also reflect cognitive processes that are not yet fully understood. These findings have important implications, particularly regarding the benefits of amplification for soft sounds. We propose that the concept of a threshold for effortless listening has been overlooked and should be prioritized in future research, especially in the context of soft speech in quiet environments.
{"title":"Listening Effort for Soft Speech in Quiet.","authors":"Hendrik Husstedt, Jennifer Schmidt, Luca Wiederschein, Robert Wiedenbeck, Markus Kemper, Florian Denk","doi":"10.1177/23312165251370006","DOIUrl":"10.1177/23312165251370006","url":null,"abstract":"<p><p>In addition to speech intelligibility, listening effort has emerged as a critical indicator of hearing performance. It can be defined as the effort experienced or invested in solving an auditory task. Subjective, behavioral, and physiological methods have been employed to assess listening effort. While previous studies have focused predominantly evaluated listening effort at clearly audible levels, such as in speech-in-noise conditions, we present findings from a study investigating listening effort for soft speech in quiet. Twenty young adults with normal hearing participated in speech intelligibility testing (OLSA), adaptive listening effort scaling (ACALES), and pupillometry. Experienced effort decreased with increasing speech level and \"no effort\" was reached at 40 dB sound pressure level (SPL). The difference between levels rated with \"extreme effort\" and \"no effort\" was, on average, 20.6 dB SPL. Thus, speech must be presented well above the speech-recognition threshold in quiet to achieve effortless listening. These results prompted a follow-up experiment involving 18 additional participants, who completed OLSA and ACALES tests with hearing threshold-simulating noise at conversational levels. Comparing the results of the main and follow-up experiments suggests that the observations in quiet cannot be fully attributed to the masking effects of internal noise but likely also reflect cognitive processes that are not yet fully understood. These findings have important implications, particularly regarding the benefits of amplification for soft sounds. We propose that the concept of a threshold for effortless listening has been overlooked and should be prioritized in future research, especially in the context of soft speech in quiet environments.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251370006"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365469/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144876067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}