This nationwide retrospective cohort study examines the association between hearing loss (HL) in adults and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group than in the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
{"title":"Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.","authors":"Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung","doi":"10.1177/23312165241309589","DOIUrl":"10.1177/23312165241309589","url":null,"abstract":"<p><p>This nationwide retrospective cohort study examines the association between adults with hearing loss (HL) and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group compared to the control group (24.04% vs. 16.86%, <i>p </i>< .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, <i>p </i>< .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165241309589"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736742/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143014598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-05-14. DOI: 10.1177/23312165251342441
Benjamin Masters, Susan Aliakbaryhosseinabadi, Dorothea Wendt, Ewen N MacDonald
Pupillometry has been used to assess effort in a variety of listening experiments. However, measuring listening effort during conversational interaction remains difficult, as it involves a complex overlap of attention and effort directed to both listening and speech planning. This work introduces a method for measuring how consistently the pupil responds to turn-taking over the course of an entire conversation. Pupillary temporal response functions to these conversational state changes are derived and analyzed for consistent differences across people and acoustic environmental conditions. Additional considerations are made to account for changes in the pupil response that could be attributed to eye-gaze behavior. Our findings, based on data collected from 12 normal-hearing pairs of talkers, reveal that the pupil does respond in a time-synchronous manner to turn-taking. Preliminary interpretation suggests that these variations correspond to our expectations about how effort is directed in conversation.
{"title":"Pupil Responses During Interactive Conversation.","authors":"Benjamin Masters, Susan Aliakbaryhosseinabadi, Dorothea Wendt, Ewen N MacDonald","doi":"10.1177/23312165251342441","DOIUrl":"https://doi.org/10.1177/23312165251342441","url":null,"abstract":"<p><p>Pupillometry has been used to assess effort in a variety of listening experiments. However, measuring listening effort during conversational interaction remains difficult as it requires a complex overlap of attention and effort directed to both listening and speech planning. This work introduces a method for measuring how the pupil responds consistently to turn-taking over the course of an entire conversation. Pupillary temporal response functions to the so-called conversational state changes are derived and analyzed for consistent differences that exist across people and acoustic environmental conditions. Additional considerations are made to account for changes in the pupil response that could be attributed to eye-gaze behavior. Our findings, based on data collected from 12 normal-hearing pairs of talkers, reveal that the pupil does respond in a time-synchronous manner to turn-taking. Preliminary interpretation suggests that these variations correspond to our expectations around effort direction in conversation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251342441"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078965/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-09-17. DOI: 10.1177/23312165251378355
Jie Wang, Huanyong Zheng, Stefan Stenfelt, Qiongyao Qu, Jinqiu Sang, Chengshi Zheng
Current research on sound source externalization primarily focuses on air conduction (AC). As bone conduction (BC) technology advances and BC headphones become more common, the perception of externalization for BC-generated virtual sound sources has emerged as an area of significant interest. However, there remains a shortage of relevant research in this domain. The current study investigates the impact of reverberant sound components on the perception of externalization for BC virtual sound sources, both with the ear open (BC-open) and with the ear canals occluded (BC-blocked). To modify the reverberant components of the binaural room impulse responses (BRIRs), the BRIRs were either truncated or had their reverberation energy scaled. The experimental findings suggest that the perception of externalization does not significantly differ across the three stimulation modalities: AC, BC-open, and BC-blocked. Across both AC and BC transmission modes, the perception of externalization for virtual sound sources was primarily influenced by the reverberation present in the contralateral ear. The results were consistent between the BC-open and BC-blocked conditions, indicating that air-radiated sound from the BC transducer did not impact the results. Regression analyses indicated that under AC stimulation, sound source externalization ratings exhibited strong linear relationships with the direct-to-reverberant energy ratio (DRR), frequency-to-frequency variability (FFV), and interaural coherence (IC). The results suggest that BC transducers provide a similar degree of sound source externalization as AC headphones.
{"title":"Externalization of Virtual Sound Sources With Bone and Air Conduction Stimulation.","authors":"Jie Wang, Huanyong Zheng, Stefan Stenfelt, Qiongyao Qu, Jinqiu Sang, Chengshi Zheng","doi":"10.1177/23312165251378355","DOIUrl":"10.1177/23312165251378355","url":null,"abstract":"<p><p>Current research on sound source externalization primarily focuses on air conduction (AC). As bone conduction (BC) technology advances and BC headphones become more common, the perception of externalization for BC-generated virtual sound sources has emerged as an area of significant interest. However, there remains a shortage of relevant research in this domain. The current study investigates the impact of reverberant sound components on the perception of externalization for BC virtual sound sources, both with the ear open (BC-open) and with the ear canals occluded (BC-blocked). To modify the reverberant components of the Binaural Room Impulse Responses (BRIRs), the BRIRs were either truncated or had their reverberation energy scaled. The experimental findings suggest that the perception of externalization does not significantly differ across the three stimulation modalities: AC, BC-open, and BC-blocked. Across both AC and BC transmission modes, the perception of externalization for virtual sound sources was primarily influenced by the reverberation present in the contralateral ear. The results were consistent between the BC-open and BC-blocked conditions, indicating that air radiated sounds from the BC transducer did not impact the results. Regression analyses indicated that under AC stimulation, sound source externalization ratings exhibited strong linear relationships with the Direct-to-Reverberant Energy Ratio (DRR), Frequency-to-Frequency Variability (FFV), and Interaural Coherence (IC). The results suggests that BC transducers provide a similar degree of sound source externalization as AC headphones.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251378355"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12444071/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145081988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-08-07. DOI: 10.1177/23312165251362018
Giulia Angonese, Mareike Buhl, Jonathan A Gößwein, Birger Kollmeier, Andrea Hildebrandt
Individuals have different preferences for setting hearing aid (HA) algorithms that reduce ambient noise but introduce signal distortions. "Noise haters" prefer greater noise reduction, even at the expense of signal quality. "Distortion haters" accept higher noise levels to avoid signal distortion. These preferences have so far been assumed to be stable over time, and individuals were classified on the basis of these stable trait scores. However, it remains an open question how stable individual listening preferences are and whether day-to-day, state-related variability needs to be considered as a further criterion for classification. We designed a mobile task to measure noise-distortion preferences over two weeks in an ecological momentary assessment study with N = 185 individuals (106 female, mean age = 63.1 years, SD = 6.5). Latent State-Trait Autoregressive (LST-AR) modeling was used to assess the stability and dynamics of individual listening preferences for signals simulating the effects of noise reduction algorithms, presented in a web browser app. The analysis revealed a significant amount of state-related variance. The model was extended to a mixture LST-AR model for data-driven classification, taking into account both state and trait components of listening preferences. In addition to successfully identifying noise haters, distortion haters, and a third, intermediate class on the basis of longitudinal, outside-of-the-lab data, we further differentiated individuals by their degree of variability in listening preferences. Individualization of HA fitting could be improved by assessing individual preferences along the noise-distortion trade-off, and the day-to-day variability of these preferences needs to be taken into account for some individuals more than others.
{"title":"Toward an Extended Classification of Noise-Distortion Preferences by Modeling Longitudinal Dynamics of Listening Choices.","authors":"Giulia Angonese, Mareike Buhl, Jonathan A Gößwein, Birger Kollmeier, Andrea Hildebrandt","doi":"10.1177/23312165251362018","DOIUrl":"10.1177/23312165251362018","url":null,"abstract":"<p><p>Individuals have different preferences for setting hearing aid (HA) algorithms that reduce ambient noise but introduce signal distortions. \"Noise haters\" prefer greater noise reduction, even at the expense of signal quality. \"Distortion haters\" accept higher noise levels to avoid signal distortion. These preferences have so far been assumed to be stable over time, and individuals were classified on the basis of these stable, trait scores. However, the question remains as to how stable individual listening preferences are and whether day-to-day state-related variability needs to be considered as further criterion for classification. We designed a mobile task to measure noise-distortion preferences over 2 weeks in an ecological momentary assessment study with <i>N</i> = 185 (106 f, <i>M</i><sub>age</sub> = 63.1, SD<sub>age</sub> = 6.5) individuals. Latent State-Trait Autoregressive (LST-AR) modeling was used to assess stability and dynamics of individual listening preferences for signals simulating the effects of noise reduction algorithms, presented in a web browser app. The analysis revealed a significant amount of state-related variance. The model has been extended to mixture LST-AR model for data-driven classification, taking into account state and trait components of listening preferences. In addition to successful identification of noise haters, distortion haters and a third intermediate class based on longitudinal, outside-of-the-lab data, we further differentiated individuals with different degrees of variability in listening preferences. Individualization of HA fitting could be improved by assessing individual preferences along the noise-distortion trade-off, and the day-to-day variability of these preferences needs to be taken into account for some individuals more than others.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251362018"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12332338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144795906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-09-05. DOI: 10.1177/23312165251371118
Penelope Coupal, Yue Zhang, Mickael Deroche
While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including in auditory tasks; however, its use within audiology or psychoacoustics remains scarce and its interpretation largely speculative. It was hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because blink suppression reflects a window of alertness. In experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, +7, and +14 dB, and in quiet, in a sound-proof room with gaze and luminance controlled (75 lux). In experiment 2, 28 participants were presented with 120 sentences at only 0 and +14 dB SNR, but in three luminance conditions (dark at 0 lux, medium at 75 lux, bright at 220 lux). Each pupil trace was manually screened for the number of blinks, along with their respective onsets and offsets. Results showed that blink occurrence decreased during sentence presentation, with the reduction becoming more pronounced at more adverse SNRs. Experiment 2 replicated this finding regardless of luminance level. It is concluded that blinks could serve as an additional physiological correlate of listening effort in simple speech recognition tasks, and that they may be a useful indicator of cognitive load regardless of the modality of the processed information.
{"title":"Reduced Eye Blinking During Sentence Listening Reflects Increased Cognitive Load in Challenging Auditory Conditions.","authors":"Penelope Coupal, Yue Zhang, Mickael Deroche","doi":"10.1177/23312165251371118","DOIUrl":"10.1177/23312165251371118","url":null,"abstract":"<p><p>While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including with auditory tasks, but its use within the fields of Audiology or Psychoacoustics remains scarce and its interpretation largely speculative. It is hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because it reflects a window of alertness. In experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, + 7, + 14 dB and in quiet, in a sound-proof room with gaze and luminance controlled (75 lux). In experiment 2, 28 participants were presented with 120 sentences at only 0 and +14 dB SNR, but in three luminance conditions (dark at 0 lux, medium at 75 lux, bright at 220 lux). Each pupil trace was manually screened for the number of blinks, along with their respective onset and offset. Results showed that blink occurrence decreased during sentence presentation, with the reduction becoming more pronounced at more adverse SNRs. Experiment 2 replicated this finding, regardless of luminance level. It is concluded that blinks could serve as an additional physiological correlate to listening effort in simple speech recognition tasks, and that it may be a useful indicator of cognitive load regardless of the modality of the processed information.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251371118"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413523/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-10-16. DOI: 10.1177/23312165251388430
Maike Klingel, Bernhard Laback
Several segregation cues help listeners understand speech in the presence of distractor talkers, most notably differences in talker sex (i.e., differences in fundamental frequency and vocal tract length) and spatial location. It is unclear, however, how these cues work together, namely whether they show additive or even synergistic effects. Furthermore, previous research suggests better performance for target words that occur later in a sentence or sequence. We additionally investigated for which segregation cues or cue combinations this build-up occurs and whether it depends on memory effects. Twenty normal-hearing participants completed a speech-on-speech masking experiment using the OLSA (a German matrix test) speech material. We adaptively measured speech-reception thresholds for different segregation cues (differences in spatial location, fundamental frequency, and talker sex) and response conditions (which word(s) had to be reported). The results show better thresholds for single-word reports, reflecting memory constraints in multiple-word reports. We also found additivity of segregation cues for multiple-word reports but sub-additivity for single-word reports. Finally, we observed a build-up of release from speech-on-speech masking that depended on response and cue conditions: no build-up for multiple-word reports and, for single-word reports, continuous build-up except in the easiest condition (different-sex, spatially separated maskers). These results shed further light on how listeners follow a target talker in the presence of competing talkers, i.e., the classical cocktail-party problem, and indicate the potential for performance improvements from enhancing segregation cues in hearing-impaired listeners.
{"title":"Release from Speech-on-Speech Masking: Additivity of Segregation Cues and Build-Up of Segregation.","authors":"Maike Klingel, Bernhard Laback","doi":"10.1177/23312165251388430","DOIUrl":"10.1177/23312165251388430","url":null,"abstract":"<p><p>Several segregation cues help listeners understand speech in the presence of distractor talkers, most notably differences in talker sex (i.e., differences in fundamental frequency and vocal tract length) and spatial location. It is unclear, however, how these cues work together, namely whether they show additive or even synergistic effects. Furthermore, previous research suggests better performance for target words that occur later in a sentence or sequence. We additionally investigate for which segregation cues or cue combinations this build-up occurs and whether it depends on memory effects. Twenty normal-hearing participants completed a speech-on-speech masking experiment using the OLSA (a German matrix test) speech material. We adaptively measured speech-reception thresholds for different segregation cues (differences in spatial location, fundamental frequency, and talker sex) and response conditions (which word(s) need(s) to be reported). The results show better thresholds for single-word reports, reflecting memory constraints for multiple-word reports. We also found additivity of segregation cues for multiple- but sub-additivity for single-word reports. Finally, we observed a build-up of release from speech-on-speech masking that depended on response and cue conditions, namely no build-up for multiple-word reports and continuous build-up except for the easiest condition, i.e., different sex/spatially separated maskers for single-word reports. These results shed further light on how listeners follow a target talker in the presence of competing talkers, i.e., the classical cocktail-party problem, and indicate the potential for performance improvement from enhancing segregation cues in the hearing-impaired.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251388430"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12536088/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-03-16. DOI: 10.1177/23312165251317010
Timothy Beechey, Graham Naylor
This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.
{"title":"How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.","authors":"Timothy Beechey, Graham Naylor","doi":"10.1177/23312165251317010","DOIUrl":"10.1177/23312165251317010","url":null,"abstract":"<p><p>This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317010"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11912170/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-05-27. DOI: 10.1177/23312165251342436
Nuphar Singer, Yael Zaltz
Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger adults) and control groups (12 older, 12 younger adults). Trained participants completed five sessions: two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond the control groups' improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.
{"title":"Auditory Learning and Generalization in Older Adults: Evidence from Voice Discrimination Training.","authors":"Nuphar Singer, Yael Zaltz","doi":"10.1177/23312165251342436","DOIUrl":"10.1177/23312165251342436","url":null,"abstract":"<p><p>Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger adults) and control groups (12 older, 12 younger adults). Trained participants completed five sessions: Two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond control group improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251342436"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117233/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-10-01. DOI: 10.1177/23312165251367625
Mohsen Fatehifar, Kevin J Munro, Michael A Stone, David Wong, Tim Cootes, Josef Schlittenlacher
This proof-of-concept study evaluated the implementation of a digits-in-noise test we call the 'AI-powered test', which used text-to-speech (TTS) and automatic speech recognition (ASR). Two other digits-in-noise tests formed the baselines for comparison: the 'keyboard-based test', which used the same configuration as the AI-powered test, and the 'independent test', a third-party-sourced test not modified by us. The validity of the AI-powered test was evaluated by measuring its difference from the independent test and comparing it with the baseline, i.e., the difference between the keyboard-based test and the independent test. The reliability of the AI-powered test was measured by comparing the similarity of two runs of this test and of the independent test. The study involved 31 participants: 10 with hearing loss and 21 with normal hearing. The mean bias and limits of agreement showed that the agreement between the AI-powered test and the independent test (-1.3 ± 4.9 dB) was similar to the agreement between the keyboard-based test and the independent test (-0.2 ± 4.4 dB), indicating that the addition of TTS and ASR did not have a negative impact. The AI-powered test had a reliability of -1.0 ± 5.7 dB, which was poorer than the baseline reliability (-0.4 ± 3.8 dB), but this improved to -0.9 ± 3.8 dB when outliers were removed, showing that low-error ASR (as achieved with the Whisper model) makes the test as reliable as the independent test. These findings suggest that a digits-in-noise test using synthetic stimuli and automatic speech recognition is a viable alternative to traditional tests and could have real-world applications.
{"title":"Digits-In-Noise Hearing Test Using Text-to-Speech and Automatic Speech Recognition: Proof-of-Concept Study.","authors":"Mohsen Fatehifar, Kevin J Munro, Michael A Stone, David Wong, Tim Cootes, Josef Schlittenlacher","doi":"10.1177/23312165251367625","DOIUrl":"10.1177/23312165251367625","url":null,"abstract":"<p><p>This proof-of-concept study evaluated the implementation of a digits-in-noise test we call the 'AI-powered test' that used text-to-speech (TTS) and automatic speech recognition (ASR). Two other digits-in-noise tests formed the baselines for comparison: the 'keyboard-based test' which used the same configurations as the AI-powered test, and the 'independent test', a third-party-sourced test not modified by us. The validity of the AI-powered test was evaluated by measuring its difference from the independent test and comparing it with the baseline, which was the difference between the Keyboard-based test and the Independent test. The reliability of the AI-powered test was measured by comparing the similarity of two runs of this test and the Independent test. The study involved 31 participants: 10 with hearing loss and 21 with normal-hearing. Achieved mean bias and limits-of-agreement showed that the agreement between the AI-powered test and the independent test (-1.3 ± 4.9 dB) was similar to the agreement between the keyboard-based test and the Independent test (-0.2 ± 4.4 dB), indicating that the addition of TTS and ASR did not have a negative impact. The AI-powered test had a reliability of -1.0 ± 5.7 dB, which was poorer than the baseline reliability (-0.4 ± 3.8 dB), but this was improved to -0.9 ± 3.8 dB when outliers were removed, showing that low-error ASR (as shown with the Whisper model) makes the test as reliable as independent tests. These findings suggest that a digits-in-noise test using synthetic stimuli and automatic speech recognition is a viable alternative to traditional tests and could have real-world applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251367625"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12489207/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145208105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01. Epub Date: 2025-11-24. DOI: 10.1177/23312165251396658
Rebecca C Felsheim, Sabine Hochmuth, Alina Kleinow, Andreas Radeloff, Mathias Dietz
Bimodal cochlear implant users show poor localization performance. One reason for this is a difference in processing latency between the hearing aid and the cochlear implant. It has been shown that reducing this latency difference acutely improves the localization performance of bimodal cochlear implant users. However, because both the device latencies and the acoustic hearing ear are frequency dependent, current frequency-independent latency adjustments cannot fully compensate for the differences, leaving open which latency adjustment is best. We therefore measured the localization performance of 11 bimodal cochlear implant users at multiple cochlear implant latencies. Consistent with previous studies, adjusting the interaural latency improved localization in most of our subjects. However, the latency that led to the best localization performance was not necessarily the one estimated to compensate for the interaural difference at intermediate frequencies (1 kHz). Nine of the 11 subjects localized best with a cochlear implant latency that was slightly shorter than the estimated latency compensation.
{"title":"Bimodal Cochlear Implants: Measurement of the Localization Performance as a Function of Device Latency Difference.","authors":"Rebecca C Felsheim, Sabine Hochmuth, Alina Kleinow, Andreas Radeloff, Mathias Dietz","doi":"10.1177/23312165251396658","DOIUrl":"10.1177/23312165251396658","url":null,"abstract":"<p><p>Bimodal cochlear implant users show poor localization performance. One reason for this is a difference in the processing latency between the hearing aid and the cochlear implant side. It has been shown that reducing this latency difference acutely improves the localization performance of bimodal cochlear implant users. However, due to the frequency dependency of both the device latencies and the acoustic hearing ear, current frequency-independent latency adjustments cannot fully compensate for the differences, leaving open which latency adjustment is best. We therefore measured the localization performance of 11 bimodal cochlear implant users for multiple cochlear implant latencies. We confirmed previous studies that adjusting the interaural latency improves localization in most of our subjects. However, the latency that leads to the best localization performance for most subjects was not necessarily at the latency estimated to compensate for the interaural difference at intermediate frequencies (1 kHz). Nine of 11 subjects localized best with a cochlear implant latency that was slightly shorter than the estimated latency compensation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251396658"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644428/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145597477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}