Unaided and Aided Speech Intelligibility in a Real and Virtual Acoustic Environment.
Pub Date: 2025-01-01 | Epub Date: 2025-12-15 | DOI: 10.1177/23312165251389112
Julia Schütze, Stephan D Ewert, Christoph Kirsch, Birger Kollmeier
The discrepancy between the hearing aid benefit estimated in standard audiological tests, like speech audiometry, and the perceived benefit in daily life has led to interest in methods that better reflect real-world performance. In contrast to audiological tests, everyday communication commonly takes place in enclosed spaces with acoustic reflections and multiple sound sources, including sounds from adjoining rooms through open doors. This study investigates speech recognition thresholds (SRTs) with a sentence test in a laboratory environment resembling an average German living room with an adjacent kitchen. Additionally, acoustic simulations of the environment were presented over a large-scale (86 loudspeakers) and a small-scale (4 loudspeakers) array, the latter feasible in a clinical context. Measurements with normal-hearing and hearing-impaired listeners were conducted using different spatial target positions and a fixed masker position. One of the target positions was within the adjacent kitchen, without line-of-sight to the sound source, representing a challenging acoustic configuration. Hearing-impaired listeners performed the measurements with and without their hearing aids. SRTs were compared between the different presentation settings and to those measured in standard free-field audiological spatial configurations (S0N0, S0N90). An auditory model was employed for further analysis. Results show that SRTs in the simulated living room environment with 86 and 4 loudspeakers matched those in the real environment, even for aided listeners, indicating that virtual acoustic representations can reflect real-world listening performance. When signal-to-noise ratios were normalized, the measured hearing aid benefit did not differ significantly between the standard audiological configuration S0N90 and any spatial configuration in the living room environment.
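For readers unfamiliar with SRT measurement: matrix sentence tests such as the OLSA typically adapt the SNR from sentence to sentence until recognition converges near 50%. The sketch below is a generic 1-up/1-down track with a simulated listener; it is illustrative only, not the study's exact adaptive rule, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(snr_db, srt_db=-7.0, slope=0.5):
    """Proportion of 5 matrix-sentence words correct at a given SNR (simulated)."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_db)))  # logistic psychometric function
    return rng.binomial(5, p) / 5.0

def measure_srt(listener, n_sentences=30, snr_db=0.0, step_db=2.0):
    """Generic 1-up/1-down SNR track converging near 50% intelligibility."""
    track = []
    for _ in range(n_sentences):
        track.append(snr_db)
        # Make the task harder after a mostly-correct sentence, easier otherwise.
        snr_db += -step_db if listener(snr_db) >= 0.5 else step_db
    # Estimate the SRT as the mean SNR over the converged half of the track.
    return float(np.mean(track[n_sentences // 2:]))

print(f"Estimated SRT: {measure_srt(simulated_listener):.1f} dB SNR")
```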
{"title":"Unaided and Aided Speech Intelligibility in a Real and Virtual Acoustic Environment.","authors":"Julia Schütze, Stephan D Ewert, Christoph Kirsch, Birger Kollmeier","doi":"10.1177/23312165251389112","DOIUrl":"10.1177/23312165251389112","url":null,"abstract":"<p><p>The discrepancy between the hearing aid benefit estimated in standard audiological tests, like speech audiometry, and the perceived benefit in daily life has led to interest in methods better reflecting real-world performance. In contrast to audiological tests, everyday communication commonly takes place in enclosed spaces with acoustic reflections and multiple sound sources, including sounds from adjoining rooms through open doors. This study investigates speech recognition thresholds (SRTs) with a sentence test in a laboratory environment resembling an average German living room with an adjacent kitchen. Additionally, acoustic simulations of the environment were presented in a large-scale (86) and small-scale (4) loudspeaker array, with the latter feasible for a clinical context. Measurements with normal-hearing and hearing-impaired listeners were conducted using different spatial target positions and a fixed masker position. One of the target positions was within the adjacent kitchen without line-of-sight to the sound source, representing a challenging acoustic configuration. Hearing-impaired listeners performed the measurements with and without their hearing aids. SRTs were compared between different presentation settings and to those measured in standard free-field audiological spatial configurations (S0N0, S0N90). An auditory model was employed for further analysis. Results show that SRTs in the simulated living room environment with 86 and 4 loudspeakers matched the real environment, even for aided listeners, indicating that virtual acoustics representations can reflect real-world listening performance. When signal-to-noise ratios were normalized, the measured hearing aid benefit did not differ significantly between the standard audiological spatial configuration S0N90 and any spatial configuration in the living room environment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251389112"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12705970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145764241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arabic Digits-in-Noise Tests: Relations to Hearing Loss and Comparison of Diotic and Antiphasic Versions.
Pub Date: 2025-01-01 | Epub Date: 2025-03-21 | DOI: 10.1177/23312165251320439
Adnan M Shehabi, Christopher J Plack, Margaret Zuriekat, Ola Aboudi, Stephen A Roberts, Joseph Laycock, Hannah Guest
The study set out to acquire validation data for Arabic versions of the Digits-in-Noise (DIN) test, measured using browser-based software suitable for home hearing screening. DIN and pure-tone audiometric (PTA) thresholds were obtained from a sample of 155 Arabic-speaking participants, varying widely in age and in degree and type of hearing loss. DIN thresholds were measured using both diotic and antiphasic stimuli, with the goal of determining whether antiphasic testing provides superior prediction of poorer-ear hearing loss. A comprehensive study protocol was publicly pre-registered via the Open Science Framework. Both types of DIN threshold correlate with poorer-ear PTA thresholds after controlling for age, but the correlation is significantly stronger for antiphasic than for diotic stimuli. Antiphasic DIN thresholds increase more steeply than diotic DIN thresholds as poorer-ear PTA thresholds increase, and they are superior binary classifiers of hearing loss. Together with previous results based on DIN data measured in participants' homes, the present findings suggest that the browser-based Arabic DIN test, with antiphasic digit presentation, may be effective for remote hearing screening.
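For context on the diotic/antiphasic distinction: in the antiphasic condition the digits are presented with inverted polarity in one ear while the noise remains identical in both ears, which makes the threshold more sensitive to asymmetric hearing loss. A minimal sketch of the stimulus construction, under assumed level conventions (a real test's calibration is more careful):

```python
import numpy as np

def make_din_stimulus(digits, noise, snr_db, antiphasic=False):
    """Mix a digit-triplet signal with noise into a 2-channel (L, R) stimulus.

    Diotic: identical speech and noise in both ears (S0N0).
    Antiphasic: speech polarity inverted in one ear, noise diotic (SpiN0).
    Illustrative sketch only.
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Scale the speech to the requested SNR relative to the noise RMS.
    speech = digits * (rms(noise) / rms(digits)) * 10 ** (snr_db / 20)
    left = speech + noise
    right = (-speech if antiphasic else speech) + noise
    return np.stack([left, right], axis=0)
```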
{"title":"Arabic Digits-in-Noise Tests: Relations to Hearing Loss and Comparison of Diotic and Antiphasic Versions.","authors":"Adnan M Shehabi, Christopher J Plack, Margaret Zuriekat, Ola Aboudi, Stephen A Roberts, Joseph Laycock, Hannah Guest","doi":"10.1177/23312165251320439","DOIUrl":"10.1177/23312165251320439","url":null,"abstract":"<p><p>The study set out to acquire validation data for Arabic versions of the Digits-in-Noise (DIN) test, measured using browser-based software suitable for home hearing screening. DIN and pure-tone audiometric (PTA) thresholds were obtained from a sample of 155 Arabic-speaking participants, varying widely in age and in degree and type of hearing loss. DIN thresholds were measured using both diotic and antiphasic stimuli, with the goal of determining whether antiphasic testing provides superior prediction of poorer-ear hearing loss. A comprehensive study protocol was publicly pre-registered via the Open Science Framework. Both types of DIN threshold correlate with poorer-ear PTA thresholds after controlling for age, but the correlation is significantly stronger for antiphasic than diotic stimuli. Antiphasic DIN thresholds increase more steeply than diotic DIN thresholds as poorer-ear PTA thresholds increase, and are superior binary classifiers of hearing loss. Combined with previous results based on DIN data measured in participants' homes, the present findings suggest that the browser-based Arabic DIN test may be effective in remote hearing screening, when combined with antiphasic digit presentation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251320439"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11930467/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of Hearing Aids on Phonation and Perceived Voice Qualities.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251322064
Johanna Hengen, Inger Lundeborg Hammarström, Stefan Stenfelt
Problems with own-voice sounds are common in hearing aid users. As auditory feedback is used to regulate the voice, it is possible that hearing aid use affects phonation. The aim of this paper is to compare hearing aid users' perception of their own voice with and without hearing aids and to assess any effect on phonation. Eighty-five first-time and 85 experienced hearing aid users, together with a control group of 70 participants, completed evaluations of their own recorded and live voice in addition to two external voices. The participants' voice recordings were used for acoustic analysis. The results showed moderate to severe own-voice problems in 17.6% of first-time users and 18.8% of experienced users. Hearing condition was a significant predictor of the perception of pitch in external voices and of greater monotony, lower naturalness, and lower pleasantness in participants' own live voice. The groups with hearing impairment had a higher mean fundamental frequency (f0) than the control group, and hearing aids decreased the speaking sound pressure level by 2 dB on average. Overall, the acoustic analysis indicates a complex relationship between hearing impairment, hearing aids, and phonation, including an immediate decrease in speech level when hearing aids are used. Our findings support previous literature on auditory feedback and voice regulation. The results should motivate clinicians in hearing and voice care to routinely take hearing function into account when assessing voice problems.
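The mean f0 reported above comes from acoustic analysis of the voice recordings. As a hedged illustration of that kind of analysis, the snippet below estimates f0 for a single voiced frame by autocorrelation; published voice studies typically rely on dedicated tools such as Praat rather than this simplified approach.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    """Crude autocorrelation-based f0 estimate (Hz) for one voiced frame.

    frame: 1-D audio samples (long enough to contain fs/fmin samples);
    fs: sampling rate in Hz. Illustrative only.
    """
    frame = frame - np.mean(frame)
    # Autocorrelation at non-negative lags.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # plausible pitch-period lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```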
{"title":"Effect of Hearing Aids on Phonation and Perceived Voice Qualities.","authors":"Johanna Hengen, Inger Lundeborg Hammarström, Stefan Stenfelt","doi":"10.1177/23312165251322064","DOIUrl":"10.1177/23312165251322064","url":null,"abstract":"<p><p>Problems with own-voice sounds are common in hearing aid users. As auditory feedback is used to regulate the voice, it is possible that hearing aid use affects phonation. The aim of this paper is to compare hearing aid users' perception of their own voice with and without hearing aids and any effect on phonation. Eighty-five first-time and 85 experienced hearing aid users together with a control group of 70 completed evaluations of their own recorded and live voice in addition to two external voices. The participants' voice recordings were used for acoustic analysis. The results showed moderate to severe own-voice problems (OVP) in 17.6% of first-time users and 18.8% of experienced users. Hearing condition was a significant predictor of the perception of pitch in external voices and of monotony, lower naturalness, and lower pleasantness in their own live voice. The groups with hearing impairment had a higher mean fundamental frequency (f0) than the control group. Hearing aids decreased the speaking sound pressure level by 2 dB on average. Moreover, acoustic analysis shows a complex relationship between hearing impairment, hearing aids, and phonation and an immediate decrease in speech level when using hearing aids. Our findings support previous literature regarding auditory feedback and voice regulation. The results should motivate clinicians in hearing and voice care to routinely take hearing functions into account when assessing voice problems.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251322064"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11873921/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143537883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165241309589
Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung
This nationwide retrospective cohort study examines the association between adults with hearing loss (HL) and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group compared to the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
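The adjusted hazard ratio above is the exponentiated coefficient of the hearing-loss indicator in a Cox proportional hazards model with covariates. A minimal sketch using the lifelines library, with hypothetical file and column names (the study's actual covariate set is broader):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: follow-up time in years, injury event indicator,
# HL-group flag, and example covariates of the kind adjusted for above.
df = pd.read_csv("cohort.csv")  # placeholder file name

cph = CoxPHFitter()
cph.fit(df[["years_followup", "injury", "hearing_loss", "age", "sex"]],
        duration_col="years_followup", event_col="injury")
# exp(coef) for 'hearing_loss' is the adjusted hazard ratio (aHR) with its
# 95% CI, analogous to the aHR of 2.35 (2.22-2.49) reported in the study.
cph.print_summary()
```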
{"title":"Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.","authors":"Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung","doi":"10.1177/23312165241309589","DOIUrl":"10.1177/23312165241309589","url":null,"abstract":"<p><p>This nationwide retrospective cohort study examines the association between adults with hearing loss (HL) and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group compared to the control group (24.04% vs. 16.86%, <i>p </i>< .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, <i>p </i>< .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165241309589"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736742/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143014598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pupil Responses During Interactive Conversation.
Pub Date: 2025-01-01 | Epub Date: 2025-05-14 | DOI: 10.1177/23312165251342441
Benjamin Masters, Susan Aliakbaryhosseinabadi, Dorothea Wendt, Ewen N MacDonald
Pupillometry has been used to assess effort in a variety of listening experiments. However, measuring listening effort during conversational interaction remains difficult, as it requires a complex overlap of attention and effort directed to both listening and speech planning. This work introduces a method for measuring how the pupil responds consistently to turn-taking over the course of an entire conversation. Pupillary temporal response functions to so-called conversational state changes are derived and analyzed for consistent differences across people and acoustic environmental conditions. Additional considerations are made to account for changes in the pupil response that could be attributed to eye-gaze behavior. Our findings, based on data collected from 12 normal-hearing pairs of talkers, reveal that the pupil does respond in a time-synchronous manner to turn-taking. Preliminary interpretation suggests that these variations correspond to our expectations about how effort is directed in conversation.
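A crude analogue of the pupillary temporal response functions described above is event-locked averaging: epoching the pupil trace around each conversational state change (e.g., turn-taking onsets), baseline-correcting, and averaging. The sketch below shows that simpler epoch-based approach, not the full response-function derivation a complete analysis may use:

```python
import numpy as np

def event_locked_average(pupil, fs, event_times, pre_s=1.0, post_s=4.0):
    """Mean pupil response around event times (e.g., turn-taking onsets).

    pupil: 1-D pupil-size trace; fs: samples/s; event_times: seconds.
    Illustrative only: simple epoching with pre-event baseline correction.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in event_times:
        i = int(t * fs)
        if i - pre < 0 or i + post > len(pupil):
            continue  # skip events too close to the trace edges
        seg = pupil[i - pre:i + post].astype(float)
        epochs.append(seg - seg[:pre].mean())  # subtract pre-event baseline
    return np.mean(epochs, axis=0)  # time-locked mean response
```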
{"title":"Pupil Responses During Interactive Conversation.","authors":"Benjamin Masters, Susan Aliakbaryhosseinabadi, Dorothea Wendt, Ewen N MacDonald","doi":"10.1177/23312165251342441","DOIUrl":"https://doi.org/10.1177/23312165251342441","url":null,"abstract":"<p><p>Pupillometry has been used to assess effort in a variety of listening experiments. However, measuring listening effort during conversational interaction remains difficult as it requires a complex overlap of attention and effort directed to both listening and speech planning. This work introduces a method for measuring how the pupil responds consistently to turn-taking over the course of an entire conversation. Pupillary temporal response functions to the so-called conversational state changes are derived and analyzed for consistent differences that exist across people and acoustic environmental conditions. Additional considerations are made to account for changes in the pupil response that could be attributed to eye-gaze behavior. Our findings, based on data collected from 12 normal-hearing pairs of talkers, reveal that the pupil does respond in a time-synchronous manner to turn-taking. Preliminary interpretation suggests that these variations correspond to our expectations around effort direction in conversation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251342441"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078965/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Externalization of Virtual Sound Sources With Bone and Air Conduction Stimulation.
Pub Date: 2025-01-01 | Epub Date: 2025-09-17 | DOI: 10.1177/23312165251378355
Jie Wang, Huanyong Zheng, Stefan Stenfelt, Qiongyao Qu, Jinqiu Sang, Chengshi Zheng
Current research on sound source externalization primarily focuses on air conduction (AC). As bone conduction (BC) technology advances and BC headphones become more common, the perception of externalization for BC-generated virtual sound sources has emerged as an area of significant interest, yet relevant research in this domain remains scarce. The current study investigates the impact of reverberant sound components on the perception of externalization for BC virtual sound sources, both with the ear open (BC-open) and with the ear canals occluded (BC-blocked). To modify the reverberant components of the binaural room impulse responses (BRIRs), the BRIRs were either truncated or had their reverberation energy scaled. The experimental findings suggest that the perception of externalization does not differ significantly across the three stimulation modalities: AC, BC-open, and BC-blocked. Across both AC and BC transmission modes, the perception of externalization for virtual sound sources was primarily influenced by the reverberation present at the contralateral ear. The results were consistent between the BC-open and BC-blocked conditions, indicating that air-radiated sound from the BC transducer did not affect the results. Regression analyses indicated that, under AC stimulation, sound source externalization ratings exhibited strong linear relationships with the direct-to-reverberant energy ratio (DRR), frequency-to-frequency variability (FFV), and interaural coherence (IC). The results suggest that BC transducers provide a similar degree of sound source externalization as AC headphones.
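The DRR used in the regression analyses can be computed from an impulse response by splitting its energy at the direct-sound peak. A minimal sketch under a common windowing convention (the study's exact window may differ):

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs, direct_ms=2.5):
    """DRR in dB from a room impulse response (one channel of a BRIR).

    Energy within +/- direct_ms of the strongest peak counts as direct
    sound; everything after the window counts as reverberant. This is a
    common convention, assumed here for illustration.
    """
    onset = int(np.argmax(np.abs(ir)))
    w = int(direct_ms * 1e-3 * fs)
    direct = np.sum(ir[max(0, onset - w):onset + w] ** 2)
    reverb = np.sum(ir[onset + w:] ** 2)
    return 10 * np.log10(direct / reverb)
```

Scaling the reverberation energy, as done in the study, amounts to multiplying the samples after the direct-sound window by a gain before renormalizing, which shifts the DRR accordingly.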
{"title":"Externalization of Virtual Sound Sources With Bone and Air Conduction Stimulation.","authors":"Jie Wang, Huanyong Zheng, Stefan Stenfelt, Qiongyao Qu, Jinqiu Sang, Chengshi Zheng","doi":"10.1177/23312165251378355","DOIUrl":"10.1177/23312165251378355","url":null,"abstract":"<p><p>Current research on sound source externalization primarily focuses on air conduction (AC). As bone conduction (BC) technology advances and BC headphones become more common, the perception of externalization for BC-generated virtual sound sources has emerged as an area of significant interest. However, there remains a shortage of relevant research in this domain. The current study investigates the impact of reverberant sound components on the perception of externalization for BC virtual sound sources, both with the ear open (BC-open) and with the ear canals occluded (BC-blocked). To modify the reverberant components of the Binaural Room Impulse Responses (BRIRs), the BRIRs were either truncated or had their reverberation energy scaled. The experimental findings suggest that the perception of externalization does not significantly differ across the three stimulation modalities: AC, BC-open, and BC-blocked. Across both AC and BC transmission modes, the perception of externalization for virtual sound sources was primarily influenced by the reverberation present in the contralateral ear. The results were consistent between the BC-open and BC-blocked conditions, indicating that air radiated sounds from the BC transducer did not impact the results. Regression analyses indicated that under AC stimulation, sound source externalization ratings exhibited strong linear relationships with the Direct-to-Reverberant Energy Ratio (DRR), Frequency-to-Frequency Variability (FFV), and Interaural Coherence (IC). The results suggests that BC transducers provide a similar degree of sound source externalization as AC headphones.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251378355"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12444071/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145081988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward an Extended Classification of Noise-Distortion Preferences by Modeling Longitudinal Dynamics of Listening Choices.
Pub Date: 2025-01-01 | Epub Date: 2025-08-07 | DOI: 10.1177/23312165251362018
Giulia Angonese, Mareike Buhl, Jonathan A Gößwein, Birger Kollmeier, Andrea Hildebrandt
Individuals have different preferences for setting hearing aid (HA) algorithms that reduce ambient noise but introduce signal distortions. "Noise haters" prefer greater noise reduction, even at the expense of signal quality. "Distortion haters" accept higher noise levels to avoid signal distortion. These preferences have so far been assumed to be stable over time, and individuals were classified on the basis of these stable trait scores. However, the question remains how stable individual listening preferences are and whether day-to-day state-related variability needs to be considered as a further criterion for classification. We designed a mobile task to measure noise-distortion preferences over 2 weeks in an ecological momentary assessment study with N = 185 individuals (106 female; mean age = 63.1 years, SD = 6.5). Latent State-Trait Autoregressive (LST-AR) modeling was used to assess stability and dynamics of individual listening preferences for signals simulating the effects of noise reduction algorithms, presented in a web browser app. The analysis revealed a significant amount of state-related variance. The model was then extended to a mixture LST-AR model for data-driven classification, taking into account state and trait components of listening preferences. In addition to successfully identifying noise haters, distortion haters, and a third, intermediate class based on longitudinal, outside-of-the-lab data, we further differentiated individuals with different degrees of variability in listening preferences. Individualization of HA fitting could be improved by assessing individual preferences along the noise-distortion trade-off, and the day-to-day variability of these preferences needs to be taken into account for some individuals more than others.
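For orientation, a standard LST-AR decomposition separates each observed preference score into a stable trait, an occasion-specific state residual that carries over between days via an AR(1) process, and measurement error. The notation below is generic, not copied from the paper:

```latex
% Observed preference Y_{it}: person i, measurement day t.
% \xi_i: latent trait; \zeta_{it}: autoregressive state residual;
% \varepsilon_{it}: measurement error; |\beta| < 1 governs carry-over.
Y_{it} = \mu + \xi_i + \zeta_{it} + \varepsilon_{it}, \qquad
\zeta_{it} = \beta \, \zeta_{i,t-1} + \omega_{it}
```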
{"title":"Toward an Extended Classification of Noise-Distortion Preferences by Modeling Longitudinal Dynamics of Listening Choices.","authors":"Giulia Angonese, Mareike Buhl, Jonathan A Gößwein, Birger Kollmeier, Andrea Hildebrandt","doi":"10.1177/23312165251362018","DOIUrl":"10.1177/23312165251362018","url":null,"abstract":"<p><p>Individuals have different preferences for setting hearing aid (HA) algorithms that reduce ambient noise but introduce signal distortions. \"Noise haters\" prefer greater noise reduction, even at the expense of signal quality. \"Distortion haters\" accept higher noise levels to avoid signal distortion. These preferences have so far been assumed to be stable over time, and individuals were classified on the basis of these stable, trait scores. However, the question remains as to how stable individual listening preferences are and whether day-to-day state-related variability needs to be considered as further criterion for classification. We designed a mobile task to measure noise-distortion preferences over 2 weeks in an ecological momentary assessment study with <i>N</i> = 185 (106 f, <i>M</i><sub>age</sub> = 63.1, SD<sub>age</sub> = 6.5) individuals. Latent State-Trait Autoregressive (LST-AR) modeling was used to assess stability and dynamics of individual listening preferences for signals simulating the effects of noise reduction algorithms, presented in a web browser app. The analysis revealed a significant amount of state-related variance. The model has been extended to mixture LST-AR model for data-driven classification, taking into account state and trait components of listening preferences. In addition to successful identification of noise haters, distortion haters and a third intermediate class based on longitudinal, outside-of-the-lab data, we further differentiated individuals with different degrees of variability in listening preferences. Individualization of HA fitting could be improved by assessing individual preferences along the noise-distortion trade-off, and the day-to-day variability of these preferences needs to be taken into account for some individuals more than others.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251362018"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12332338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144795906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduced Eye Blinking During Sentence Listening Reflects Increased Cognitive Load in Challenging Auditory Conditions.
Pub Date: 2025-01-01 | Epub Date: 2025-09-05 | DOI: 10.1177/23312165251371118
Penelope Coupal, Yue Zhang, Mickael Deroche
While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including in auditory tasks; however, its use within Audiology or Psychoacoustics remains scarce and its interpretation largely speculative. It was hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because that interval reflects a window of alertness. In Experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, +7, and +14 dB, as well as in quiet, in a soundproof room with gaze and luminance (75 lux) controlled. In Experiment 2, 28 participants were presented with 120 sentences at only 0 and +14 dB SNR, but under three luminance conditions (dark at 0 lux, medium at 75 lux, bright at 220 lux). Each pupil trace was manually screened for blinks, noting their number and their respective onsets and offsets. Results showed that blink occurrence decreased during sentence presentation, with the reduction becoming more pronounced at more adverse SNRs. Experiment 2 replicated this finding regardless of luminance level. It is concluded that blinks could serve as an additional physiological correlate of listening effort in simple speech recognition tasks and may be a useful indicator of cognitive load regardless of the modality of the processed information.
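Although the traces here were screened manually, blink candidates are often first located automatically as short dropouts in the pupil signal. A hedged sketch of such a first pass, assuming the tracker records blinks as runs of NaN samples:

```python
import numpy as np

def detect_blinks(pupil, fs, min_ms=50, max_ms=500):
    """Find candidate blinks as runs of missing samples in a pupil trace.

    Returns (onset, offset) sample-index pairs for dropout runs whose
    duration falls between min_ms and max_ms. Illustrative only; the
    study screened traces manually.
    """
    missing = np.isnan(pupil).astype(int)
    # Pad with zeros so runs touching the edges still produce both edges.
    edges = np.diff(np.concatenate(([0], missing, [0])))
    onsets, offsets = np.where(edges == 1)[0], np.where(edges == -1)[0]
    lo, hi = min_ms * fs / 1000, max_ms * fs / 1000
    return [(a, b) for a, b in zip(onsets, offsets) if lo <= b - a <= hi]
```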
{"title":"Reduced Eye Blinking During Sentence Listening Reflects Increased Cognitive Load in Challenging Auditory Conditions.","authors":"Penelope Coupal, Yue Zhang, Mickael Deroche","doi":"10.1177/23312165251371118","DOIUrl":"10.1177/23312165251371118","url":null,"abstract":"<p><p>While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including with auditory tasks, but its use within the fields of Audiology or Psychoacoustics remains scarce and its interpretation largely speculative. It is hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because it reflects a window of alertness. In experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, + 7, + 14 dB and in quiet, in a sound-proof room with gaze and luminance controlled (75 lux). In experiment 2, 28 participants were presented with 120 sentences at only 0 and +14 dB SNR, but in three luminance conditions (dark at 0 lux, medium at 75 lux, bright at 220 lux). Each pupil trace was manually screened for the number of blinks, along with their respective onset and offset. Results showed that blink occurrence decreased during sentence presentation, with the reduction becoming more pronounced at more adverse SNRs. Experiment 2 replicated this finding, regardless of luminance level. It is concluded that blinks could serve as an additional physiological correlate to listening effort in simple speech recognition tasks, and that it may be a useful indicator of cognitive load regardless of the modality of the processed information.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251371118"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413523/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Release from Speech-on-Speech Masking: Additivity of Segregation Cues and Build-Up of Segregation.
Pub Date: 2025-01-01 | Epub Date: 2025-10-16 | DOI: 10.1177/23312165251388430
Maike Klingel, Bernhard Laback
Several segregation cues help listeners understand speech in the presence of distractor talkers, most notably differences in talker sex (i.e., differences in fundamental frequency and vocal tract length) and in spatial location. It is unclear, however, how these cues work together, namely whether they show additive or even synergistic effects. Furthermore, previous research suggests better performance for target words that occur later in a sentence or sequence; we additionally investigated for which segregation cues or cue combinations this build-up occurs and whether it depends on memory effects. Twenty normal-hearing participants completed a speech-on-speech masking experiment using the OLSA (a German matrix test) speech material. We adaptively measured speech-reception thresholds for different segregation cues (differences in spatial location, fundamental frequency, and talker sex) and response conditions (which word(s) had to be reported). The results show better thresholds for single-word reports, reflecting memory constraints in multiple-word reports. We also found additivity of segregation cues for multiple-word reports but sub-additivity for single-word reports (see the benchmark sketched below). Finally, we observed a build-up of release from speech-on-speech masking that depended on response and cue conditions: no build-up for multiple-word reports and, for single-word reports, continuous build-up in all conditions except the easiest one (different-sex, spatially separated maskers). These results shed further light on how listeners follow a target talker in the presence of competing talkers, i.e., the classical cocktail-party problem, and indicate the potential for performance improvement from enhancing segregation cues in the hearing-impaired.
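The additivity question has a simple benchmark: if cues combine additively, the masking release (SRT improvement relative to the no-cue baseline) for the combined cue equals the sum of the single-cue releases, while sub-additivity means the observed combined release falls short of that sum. In generic notation (the symbols are ours, not the paper's):

```latex
% Additive benchmark for combining an F0 cue with a spatial cue:
\Delta \mathrm{SRT}_{F0+\mathrm{space}}^{\mathrm{add}}
  = \Delta \mathrm{SRT}_{F0} + \Delta \mathrm{SRT}_{\mathrm{space}},
\qquad
\text{sub-additivity: }
\Delta \mathrm{SRT}_{F0+\mathrm{space}}^{\mathrm{obs}}
  < \Delta \mathrm{SRT}_{F0+\mathrm{space}}^{\mathrm{add}}
```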
{"title":"Release from Speech-on-Speech Masking: Additivity of Segregation Cues and Build-Up of Segregation.","authors":"Maike Klingel, Bernhard Laback","doi":"10.1177/23312165251388430","DOIUrl":"10.1177/23312165251388430","url":null,"abstract":"<p><p>Several segregation cues help listeners understand speech in the presence of distractor talkers, most notably differences in talker sex (i.e., differences in fundamental frequency and vocal tract length) and spatial location. It is unclear, however, how these cues work together, namely whether they show additive or even synergistic effects. Furthermore, previous research suggests better performance for target words that occur later in a sentence or sequence. We additionally investigate for which segregation cues or cue combinations this build-up occurs and whether it depends on memory effects. Twenty normal-hearing participants completed a speech-on-speech masking experiment using the OLSA (a German matrix test) speech material. We adaptively measured speech-reception thresholds for different segregation cues (differences in spatial location, fundamental frequency, and talker sex) and response conditions (which word(s) need(s) to be reported). The results show better thresholds for single-word reports, reflecting memory constraints for multiple-word reports. We also found additivity of segregation cues for multiple- but sub-additivity for single-word reports. Finally, we observed a build-up of release from speech-on-speech masking that depended on response and cue conditions, namely no build-up for multiple-word reports and continuous build-up except for the easiest condition, i.e., different sex/spatially separated maskers for single-word reports. These results shed further light on how listeners follow a target talker in the presence of competing talkers, i.e., the classical cocktail-party problem, and indicate the potential for performance improvement from enhancing segregation cues in the hearing-impaired.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251388430"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12536088/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.
Pub Date: 2025-01-01 | Epub Date: 2025-03-16 | DOI: 10.1177/23312165251317010
Timothy Beechey, Graham Naylor
This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.
{"title":"How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.","authors":"Timothy Beechey, Graham Naylor","doi":"10.1177/23312165251317010","DOIUrl":"10.1177/23312165251317010","url":null,"abstract":"<p><p>This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317010"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11912170/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}