Pub Date: 2025-01-01 | DOI: 10.1177/23312165241306091
Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study proposed a new VWP to examine the time course of speech segregation when competing sentences are presented, and collected pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near-perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words: with higher TMRs, the slopes of the fixation curves became steeper and the proportion of target fixations increased, suggesting more efficient segregation under more favorable conditions. Temporal analysis of the pupil data using Bayesian paired-sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP, with its captured eye movements and pupil dilation, is suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
{"title":"Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.","authors":"Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister","doi":"10.1177/23312165241306091","DOIUrl":"10.1177/23312165241306091","url":null,"abstract":"<p><p>Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure \"name-verb-numeral-adjective-object\") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample <i>t</i>-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165241306091"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142972857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320794
Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise.
Alexina Whitley, Timothy Beechey, Lauren V Hadley
Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.
{"title":"Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise.","authors":"Alexina Whitley, Timothy Beechey, Lauren V Hadley","doi":"10.1177/23312165251320794","DOIUrl":"10.1177/23312165251320794","url":null,"abstract":"<p><p>Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251320794"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851761/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-07-30 | DOI: 10.1177/23312165251359415
Improving Outcomes of Single-Sided Deaf Cochlear Implant Users by Reducing Interaural Frequency and Loudness Mismatches through Device Programming.
Laura K Holden, Rosalie M Uchanski, Noël Y Dwyer, Ruth M Reeder, Timothy A Holden, Jill B Firszt
The study aimed to improve outcomes in Nucleus cochlear implant (CI) recipients with single-sided deafness (SSD) by reducing interaural frequency and loudness mismatches through device programming. In Experiment 1a, a modified frequency allocation table (FAT) was created to better match the tonotopicity of the contralateral ear and reduce interaural frequency mismatch. Twenty experienced SSD-CI users completed localization and speech recognition tests with their everyday FAT. Tests were repeated after 6 weeks' use of the modified FAT. Participants then compared both FATs for 2 weeks before being tested again with each. For 10 newly implanted SSD-CI recipients (Experiment 1b), Group A was programmed with the manufacturer's default FAT and Group B with the modified FAT at activation. Speech recognition and localization testing were completed after 6 weeks' use of each FAT. Participants then compared both FATs before testing with each. In Experiment 2, 15 experienced SSD-CI users were evaluated with their everyday program and a modified loudness program, created to obtain audibility of ∼20 dB HL from 0.25 to 6 kHz and balanced loudness between ears. Three test sessions were conducted, following the structure of Experiment 1a. Experienced participants in Experiments 1a and 2 showed significant improvement in one speech-in-noise task with a modified program compared to the everyday program. Newly implanted recipients showed no significant difference in results between FATs. Results indicate that modified programs, created to reduce interaural mismatches, may improve outcomes. The first month after activation might be too early to compare FATs, as SSD-CI recipients are still adjusting to electric hearing.
{"title":"Improving Outcomes of Single-Sided Deaf Cochlear Implant Users by Reducing Interaural Frequency and Loudness Mismatches through Device Programming.","authors":"Laura K Holden, Rosalie M Uchanski, Noël Y Dwyer, Ruth M Reeder, Timothy A Holden, Jill B Firszt","doi":"10.1177/23312165251359415","DOIUrl":"10.1177/23312165251359415","url":null,"abstract":"<p><p>The study aimed to improve outcomes in Nucleus cochlear implant (CI) recipients with single-sided deafness (SSD) by reducing interaural frequency and loudness mismatches through device programming. In Experiment 1a, a modified frequency allocation table (FAT) was created to better match the tonotopicity of the contralateral ear and reduce interaural frequency mismatch. Twenty experienced SSD-CI users completed localization and speech recognition tests with their everyday FAT. Tests were repeated after 6 weeks' use of the modified FAT. Participants compared both FATs for 2 weeks before being tested again with each. For 10 newly implanted SSD-CI recipients (Experiment 1b), Group A was programmed with the manufacturer's default FAT and Group B with the modified FAT at activation. Speech recognition and localization were completed, after 6 weeks' use of each FAT. Participants then compared both FATs before testing with each. In Experiment 2, 15 experienced SSD-CI users were evaluated with their everyday program and a modified loudness program, which was created to obtain audibility of ∼20 dB HL from 0.25 to 6 kHz and balanced loudness between ears. Three test sessions occurred, resembling Experiment 1a. Experienced participants in Experiments 1a and 2 showed significant improvement in one speech-in-noise task with a modified program compared to the everyday program. Newly implanted recipients showed no significant difference in results between FATs. Results indicate that modified programs, created to reduce interaural mismatches, may improve outcomes. The first month after activation might be too early to compare FATs as SSD-CI recipients are adjusting to electric hearing.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251359415"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12317272/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144754854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-07-04 | DOI: 10.1177/23312165251356333
Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound.
Robel Z Alemu, Alan Blakeman, Angela L Fung, Melissa Hazen, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon
Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by: (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (n = 42, mean age = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; n = 37, mean age = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured with a lateralization task using ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users showed increased errors localizing stationary sound and detecting moving sound direction compared to controls. Decreased ITD sensitivity was associated with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than those of controls for both stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing, albeit in impaired form. Spatial hearing was thus considered to be "emerging." Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.
{"title":"Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound.","authors":"Robel Z Alemu, Alan Blakeman, Angela L Fung, Melissa Hazen, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon","doi":"10.1177/23312165251356333","DOIUrl":"10.1177/23312165251356333","url":null,"abstract":"<p><p>Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by: (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (<i>n</i> = 42, <i>M</i><sub>Age</sub> = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; <i>n</i> = 37, <i>M</i><sub>Age</sub> = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured by a lateralization task to ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users showed increased errors localizing stationary sound and detecting moving sound direction compared to controls. Decreased ITD sensitivity occurred with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than controls for stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing which is albeit impaired. Spatial hearing was thus considered to be \"emerging.\" Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251356333"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227942/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144561560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-09-08 | DOI: 10.1177/23312165251376382
Tonal Tinnitus Does Not Interfere with Tone Detection at the Tinnitus Pitch-Matched Frequency.
J Gerard G Borst, André Goedegebure
Individuals with tinnitus hear sounds that are not present in the external environment. Whereas hearing difficulties at frequencies near those matching the tinnitus pitch are a common complaint for individuals with tinnitus, it is unclear to what extent the internal tinnitus sounds interfere with the detection of external sounds. We therefore studied whether pure-tone detection at the estimated frequency corresponding to the tinnitus pitch (f_tp) was affected by confusion with the tinnitus percept. Signs of confusion would be a high false alarm rate or a shallower slope of the psychometric function for tone detection at f_tp. We selected participants with symmetric, tonal tinnitus, who were able to estimate its pitch consistently (n = 18). Another 18 participants matched for high-frequency hearing loss, age, and sex, but without tinnitus, served as the control group. For both groups, we measured the psychometric function for detecting long-duration tones, maximizing the likelihood for confusion with an external sound. We observed that false alarm rates for tinnitus participants were not higher for test tones at f_tp, nor were they higher than for the control group without tinnitus. Similar results were obtained for the slopes of the psychometric functions. Apparently, individuals with tinnitus are well able to discriminate between their own tinnitus and comparable external sounds. Our results indicate that (tonal) tinnitus does not interfere with the detection of soft sounds at the tinnitus pitch-matched frequency.
{"title":"Tonal Tinnitus Does Not Interfere with Tone Detection at the Tinnitus Pitch-Matched Frequency.","authors":"J Gerard G Borst, André Goedegebure","doi":"10.1177/23312165251376382","DOIUrl":"10.1177/23312165251376382","url":null,"abstract":"<p><p>Individuals with tinnitus hear sounds that are not present in the external environment. Whereas hearing difficulties at frequencies near those matching the tinnitus pitch are a common complaint for individuals with tinnitus, it is unclear to what extent the internal tinnitus sounds interfere with the detection of external sounds. We therefore studied whether pure-tone detection at the estimated frequency corresponding to the tinnitus pitch (f<sub>tp</sub>) was affected by confusion with the tinnitus percept. Signs of confusion would be a high false alarm rate or a shallower slope of the psychometric function for tone detection at f<sub>tp</sub>. We selected participants with symmetric, tonal tinnitus, who were able to estimate its pitch consistently (n = 18). Another 18 participants matched for high-frequency hearing loss, age, and sex, but without tinnitus, served as the control group. For both groups, we measured the psychometric function for detecting long-duration tones, maximizing the likelihood for confusion with an external sound. We observed that false alarm rates for tinnitus participants were not higher for test tones at f<sub>tp</sub>, nor were they higher than for the control group without tinnitus. Similar results were obtained for the slopes of the psychometric functions. Apparently, individuals with tinnitus are well able to discriminate between their own tinnitus and comparable external sounds. Our results indicate that (tonal) tinnitus does not interfere with the detection of soft sounds at the tinnitus pitch-matched frequency.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251376382"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618831/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145015303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-10-16 | DOI: 10.1177/23312165251385017
Minimum Audible Angle and the Acoustic Change Complex Elicited by Azimuthal Shifts in Low-Frequency Sounds: Effects of Age.
John H Grose, Monica Folkerts, Emily Buss
This study compared the behavioral minimum audible angle (MAA) and the electrophysiological acoustic change complex (ACC) elicited by an azimuthal shift in sound location. To examine age effects, 63 participants with normal or near-normal hearing were divided into three age groups (Young, Mid-Aged, and Older). The stimuli were narrow bands of noise centered at 500 Hz to facilitate reliance on primarily binaural temporal cues. Putative spatial location was manipulated by means of head-related transfer functions under headphones. MAA results showed that performance depended on the reference location, becoming poorer as the reference location shifted away from midline. The Young group had smaller MAAs than the Older group, and performance of the Mid-Aged group was intermediate. Measurement of the ACC was restricted to shifts away from midline; no ACC was observed for shifts of 4.5° and 9°, whereas ACCs were present for shifts of 13.5°, 18°, and 36°. The robustness of the ACC, as measured with the intertrial phase coherence metric, grew with increasing azimuthal shift. For shifts of 13.5° and 18°, Young participants had more robust ACCs than Older participants. Although age-related deficits were found both in the MAA and in the robustness of the ACC, no associations were observed at the individual level between MAA and ACC measures. Further work evaluating the ACC elicited by shifts from off-midline reference locations is necessary before firmly concluding that the ACC is not a viable objective proxy for the MAA.
{"title":"Minimum Audible Angle and the Acoustic Change Complex Elicited by Azimuthal Shifts in Low-Frequency Sounds: Effects of Age.","authors":"John H Grose, Monica Folkerts, Emily Buss","doi":"10.1177/23312165251385017","DOIUrl":"10.1177/23312165251385017","url":null,"abstract":"<p><p>This study compared the behavioral minimum audible angle (MAA) and the electrophysiological acoustic change complex (ACC) elicited by an azimuthal shift in sound location. To examine age effects, 63 participants with normal or near-normal hearing were divided into three age groups (Young, Mid-Aged, and Older). The stimuli were narrow bands of noise centered at 500 Hz to facilitate reliance on primarily binaural temporal cues. Putative spatial location was manipulated by means of head-related transfer functions under headphones. MAA results showed that performance was dependent on the reference location, with performance becoming poorer as the reference location shifted away from midline. The Young group had smaller MAAs than the Older group, and performance of the Mid-Age group was intermediate. Measurement of the ACC was restricted to shifts away from midline, and results showed no ACC for shifts of 4.5° and 9° but present ACCs for shifts of 13.5°, 18°, and 36°. The robustness of the ACC, as measured with the intertrial phase coherence metric, grew with increasing azimuthal shift. For shifts of 13.5° and 18°, Young participants had more robust ACCs than Older participants. Although age-related deficits were found in both the MAA and in the robustness of the ACC, no associations were observed at the individual level between MAA and ACC measures. Further work is necessary to evaluate the ACC elicited by shifts from off-midline reference locations before a firm conclusion can be reached that the ACC is not a viable objective proxy for the MAA.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251385017"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12536095/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-10-23 | DOI: 10.1177/23312165251374938
Audio Quality Perception of Hearing-Impaired Listeners in Complex Acoustic Environments.
Thomas Biberger, Stephan D Ewert
The effect of complex acoustic environments (CAEs), typically comprising target and interfering sound sources as well as room reflections, on the speech reception of hearing-impaired (HI) listeners has been examined in several studies. However, little is known about the audio quality perception of HI listeners in such CAEs. This study therefore assessed detection thresholds and suprathreshold audio quality ratings of listeners with very mild and moderate hearing loss (HL) for several distortions applied to speech and pink noise: nonlinear saturation, spectral ripples, level differences, and spatial position offsets. The stimuli were presented in acoustic scenes whose complexity was varied by manipulating room size in conjunction with reverberation time, as well as the number and spatial position of interfering sound sources. The strongest differences between listeners with very mild and moderate HL were observed in the presence of interfering sounds. In such situations, listeners with moderate HL had consistently higher distortion detection thresholds than listeners with very mild HL. Moreover, they rated audio quality lower for the masked than for the unmasked distorted targets, indicating difficulties in separating the target from the maskers. Significant correlations were found between the listeners' pure tone average (PTA) and distortion detection thresholds in situations with maskers. Thus, the PTA seems to be a suitable predictor of HI listeners' distortion thresholds in CAEs. The effect of reverberation depended strongly on the target (speech or pink noise) and the type of distortion.
{"title":"Audio Quality Perception of Hearing-Impaired Listeners in Complex Acoustic Environments.","authors":"Thomas Biberger, Stephan D Ewert","doi":"10.1177/23312165251374938","DOIUrl":"10.1177/23312165251374938","url":null,"abstract":"<p><p>The effect of complex acoustic environments (CAEs), typically comprising target and interfering sound sources as well as room reflections, on the speech reception of hearing-impaired (HI) listeners has been examined in several studies. However, only little is known about audio quality perception of HI listeners in such CAEs. Thus, this study assessed detection thresholds and suprathreshold audio quality ratings of listeners with very mild and moderate hearing loss (HL) for several distortions applied to speech and pink noise: nonlinear saturation, spectral ripples, level differences, and spatial position offsets. The stimuli were presented in acoustical scenes that differ in their complexity by manipulating room size in conjunction with reverberation time, and the number and spatial position of interfering sound sources. The strongest differences between listeners with very mild and moderate HL were observed in the presence of interfering sounds. In such situations, listeners with moderate HL had consistently higher distortion detection thresholds than listeners with very mild HL. Moreover, they rated audio quality lower for the masked than for the unmasked distorted targets, indicating difficulties in separating the target from the maskers. Significant correlations were found between the listeners' pure tone average (PTA) and distortion detection thresholds in situations with maskers. Thus, PTAs seem to be a suitable predictor for distortion thresholds of HI listeners in CAEs. The effect of reverberation strongly depended on the target (speech or pink noise) and the type of distortions.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251374938"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12559647/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | Epub Date: 2025-12-15 | DOI: 10.1177/23312165251389112
Unaided and Aided Speech Intelligibility in a Real and Virtual Acoustic Environment.
Julia Schütze, Stephan D Ewert, Christoph Kirsch, Birger Kollmeier
The discrepancy between the hearing aid benefit estimated in standard audiological tests, like speech audiometry, and the perceived benefit in daily life has led to interest in methods that better reflect real-world performance. In contrast to audiological tests, everyday communication commonly takes place in enclosed spaces with acoustic reflections and multiple sound sources, including sounds from adjoining rooms through open doors. This study investigates speech recognition thresholds (SRTs) with a sentence test in a laboratory environment resembling an average German living room with an adjacent kitchen. Additionally, acoustic simulations of the environment were presented via a large-scale (86-loudspeaker) and a small-scale (4-loudspeaker) array, the latter feasible in a clinical context. Measurements with normal-hearing and hearing-impaired listeners were conducted using different spatial target positions and a fixed masker position. One of the target positions was within the adjacent kitchen, without line-of-sight to the sound source, representing a challenging acoustic configuration. Hearing-impaired listeners performed the measurements with and without their hearing aids. SRTs were compared between the different presentation settings and with those measured in standard free-field audiological spatial configurations (S0N0, S0N90). An auditory model was employed for further analysis. Results show that SRTs in the simulated living room environment with 86 and 4 loudspeakers matched the real environment, even for aided listeners, indicating that virtual acoustic representations can reflect real-world listening performance. When signal-to-noise ratios were normalized, the measured hearing aid benefit did not differ significantly between the standard audiological spatial configuration S0N90 and any spatial configuration in the living room environment.
{"title":"Unaided and Aided Speech Intelligibility in a Real and Virtual Acoustic Environment.","authors":"Julia Schütze, Stephan D Ewert, Christoph Kirsch, Birger Kollmeier","doi":"10.1177/23312165251389112","DOIUrl":"10.1177/23312165251389112","url":null,"abstract":"<p><p>The discrepancy between the hearing aid benefit estimated in standard audiological tests, like speech audiometry, and the perceived benefit in daily life has led to interest in methods better reflecting real-world performance. In contrast to audiological tests, everyday communication commonly takes place in enclosed spaces with acoustic reflections and multiple sound sources, including sounds from adjoining rooms through open doors. This study investigates speech recognition thresholds (SRTs) with a sentence test in a laboratory environment resembling an average German living room with an adjacent kitchen. Additionally, acoustic simulations of the environment were presented in a large-scale (86) and small-scale (4) loudspeaker array, with the latter feasible for a clinical context. Measurements with normal-hearing and hearing-impaired listeners were conducted using different spatial target positions and a fixed masker position. One of the target positions was within the adjacent kitchen without line-of-sight to the sound source, representing a challenging acoustic configuration. Hearing-impaired listeners performed the measurements with and without their hearing aids. SRTs were compared between different presentation settings and to those measured in standard free-field audiological spatial configurations (S0N0, S0N90). An auditory model was employed for further analysis. Results show that SRTs in the simulated living room environment with 86 and 4 loudspeakers matched the real environment, even for aided listeners, indicating that virtual acoustics representations can reflect real-world listening performance. When signal-to-noise ratios were normalized, the measured hearing aid benefit did not differ significantly between the standard audiological spatial configuration S0N90 and any spatial configuration in the living room environment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251389112"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12705970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145764241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251324266
Understanding the Lombard Effect for Mandarin: Relation Between Speech Recognition Thresholds and Acoustic Parameters.
Fei Chen, Changjie Pan, Hongmei Hu, Sabine Hochmuth, Birger Kollmeier, Anna Warzybok
The present work quantifies the Lombard effect across native speakers of Mandarin Chinese using the Matrix sentence test, which is optimized for precisely assessing speech recognition thresholds (SRTs) in noise. Specifically, we studied the effects of speaker gender, fundamental frequency (F0), formant frequencies (F1 and F2), the duration and rate of voiced segments, and frequency-specific energy redistribution characterized by alpha ratio and speech-weighted signal-to-noise ratio (swSNR) on the recognition of Mandarin in plain and Lombard speech. The Mandarin Chinese matrix test was recorded with plain and Lombard speech from 11 native-Mandarin speakers. SRTs in stationary noise were measured with native-Mandarin, normal-hearing listeners. Results showed that on average, Mandarin Lombard speech was more intelligible than Mandarin plain speech for both female and male speakers, and the Mandarin Lombard gain of female speakers was larger than that of males. In addition, various acoustic analyses involving all speakers showed that (a) only swSNR was significantly correlated with the SRT of the Mandarin plain speech; (b) most acoustic measures were significantly correlated with the SRT of the Mandarin Lombard speech; and (c) alpha ratio and swSNR were significantly correlated with the SRT Lombard gain. In addition, a gender effect was found in the correlational analysis between acoustic parameters and SRT as well as Lombard gain in SRT. The findings highlight the impact of increased high-frequency energy on the observed Lombard gain in Mandarin speech, whereas the changes in individual acoustic parameters (e.g., F0 and F1) appear to play only a minor role.
{"title":"Understanding the Lombard Effect for Mandarin: Relation Between Speech Recognition Thresholds and Acoustic Parameters.","authors":"Fei Chen, Changjie Pan, Hongmei Hu, Sabine Hochmuth, Birger Kollmeier, Anna Warzybok","doi":"10.1177/23312165251324266","DOIUrl":"10.1177/23312165251324266","url":null,"abstract":"<p><p>The present work quantifies the Lombard effect across native speakers of Mandarin Chinese using the Matrix sentence test, which is optimized for precisely assessing speech recognition thresholds (SRTs) in noise. Specifically, we studied the effects of speaker gender, fundamental frequency (F0), formant frequencies (F1 and F2), the duration and rate of voiced segments, and frequency-specific energy redistribution characterized by alpha ratio and speech-weighted signal-to-noise ratio (swSNR) on the recognition of Mandarin in plain and Lombard speech. The Mandarin Chinese matrix test was recorded with plain and Lombard speech from 11 native-Mandarin speakers. SRTs in stationary noise were measured with native-Mandarin, normal-hearing listeners. Results showed that on average, Mandarin Lombard speech was more intelligible than Mandarin plain speech for both female and male speakers, and the Mandarin Lombard gain of female speakers was larger than that of males. In addition, various acoustic analyses involving all speakers showed that (a) only swSNR was significantly correlated with the SRT of the Mandarin plain speech; (b) most acoustic measures were significantly correlated with the SRT of the Mandarin Lombard speech; and (c) alpha ratio and swSNR were significantly correlated with the SRT Lombard gain. In addition, a gender effect was found in the correlational analysis between acoustic parameters and SRT as well as Lombard gain in SRT. The findings highlight the impact of increased high-frequency energy on the observed Lombard gain in Mandarin speech, whereas the changes in individual acoustic parameters (e.g., F0 and F1) appear to play only a minor role.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251324266"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11938858/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143701432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251317925
Social Anxiety, Negative Affect, and Hearing Difficulties in Adults.
Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle
Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, have various ways of handling stressful situations, and have diverse communication needs. Therefore, understanding the relationship between auditory and mental health factors is crucial for the holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may have bidirectional effects. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function. Participants were 628 adults (408 women, 220 men), ages 19 to 87 years (mean = 43), living in the United States. Results indicated that individuals reporting higher social anxiety and higher negative affect also reported poorer communication function. Multiple linear regression analysis revealed that negative affect and social anxiety were each significant, unique predictors of subjective communication function: both contribute significantly, and uniquely, to how strongly individuals feel that hearing loss impacts their daily communication. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication in this rapidly growing clinical population.
{"title":"Social Anxiety, Negative Affect, and Hearing Difficulties in Adults.","authors":"Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle","doi":"10.1177/23312165251317925","DOIUrl":"10.1177/23312165251317925","url":null,"abstract":"<p><p>Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, have various ways of handling stressful situations, and have diverse communication needs. Therefore, understanding the relationship between auditory and mental health factors is crucial for the holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may have bidirectional effects. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function measures. Participants were 628 adults (408 women, 220 men), ages 19 to 87 years (mean = 43) living in the United States. Results indicated that individuals reporting higher social anxiety and higher negative affect also reported poorer communication function. Multiple linear regression analysis revealed that both negative affect and social anxiety were significant and unique predictors of subjective communication function. Social anxiety and negative affect both significantly, and uniquely, contribute to how much someone feels a hearing loss impacts their daily communication function. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication, in this rapidly growing clinical population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317925"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11803679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}