Pub Date: 2024-01-01 | DOI: 10.1177/23312165231222098
Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT).
Christopher Slugocki, Francis Kuk, Petri Korhonen
This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT), an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high-context sentences, alpha power was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low-context sentences, alpha power was relatively high irrespective of the memory component. Within subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
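For illustration, here is a minimal Python sketch of the kind of band-limited power estimate that an alpha measure like this rests on, assuming Welch's method from SciPy; the study's actual channel montage, epoching, and artifact handling are not reproduced.

```python
# Minimal sketch: alpha-band (8-12 Hz) power of one EEG epoch via Welch's
# method. Channel selection, referencing, and artifact rejection from the
# actual study are not reproduced here.
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Mean power spectral density within the alpha band for one epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Example: a synthetic 10-s epoch at 250 Hz with a 10-Hz component plus noise
fs = 250
t = np.arange(0, 10, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(f"alpha power: {alpha_power(epoch, fs):.3f}")
```

In a design like this one, such per-epoch estimates would then be averaged within each SNR, context, and recall condition before statistical comparison.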
{"title":"Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT).","authors":"Christopher Slugocki, Francis Kuk, Petri Korhonen","doi":"10.1177/23312165231222098","DOIUrl":"10.1177/23312165231222098","url":null,"abstract":"<p><p>This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT)-an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within-subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231222098"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165231225545
Evaluation of a Fast Method to Measure High-Frequency Audiometry Based on Bayesian Learning.
Chiara Casolani, Ali Borhan-Azad, Rikke Skovhøj Sørensen, Josef Schlittenlacher, Bastian Epp
This study aimed to assess the validity of a high-frequency audiometry tool based on Bayesian learning to provide a reliable, repeatable, automatic, and fast test to clinics. The study involved 85 people (138 ears) who had their high-frequency thresholds measured with three tests: standard audiometry (SA), alternative forced choice (AFC)-based algorithm, and Bayesian active (BA) learning-based algorithm. The results showed median differences within ±5 dB up to 10 kHz when comparing the BA with the other two tests, and median differences within ±10 dB at higher frequencies. The variability increased from lower to higher frequencies. The BA showed lower thresholds compared to the SA at the majority of the frequencies. The results of the different tests were consistent across groups (age, hearing loss, and tinnitus). The data for the BA showed high test-retest reliability (>90%). The time required for the BA was shorter than for the AFC (4 min vs. 13 min). The data suggest that the BA test for high-frequency audiometry could be a good candidate for clinical screening. It would add reliable and significant information without adding too much time to the visit.
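The abstract does not detail the BA algorithm's internals. As a hedged sketch of how Bayesian active learning is commonly applied to threshold audiometry, the example below keeps a grid posterior over a single-frequency threshold and picks each presentation level to minimize the expected posterior entropy; the psychometric slope, level grids, and 15-trial budget are illustrative assumptions, not the published method.

```python
# Hedged sketch of Bayesian active threshold estimation at one frequency:
# a grid posterior over the threshold is updated after each yes/no response,
# and the next level is chosen to maximize expected information gain.
import numpy as np
from scipy.stats import norm

levels = np.arange(-10, 100, 5.0)   # candidate presentation levels (dB HL)
thetas = np.arange(-10, 100, 1.0)   # candidate thresholds (dB HL)
slope = 5.0                         # assumed psychometric slope (dB)

def p_yes(level, theta):
    """Probability of a 'heard' response given a true threshold theta."""
    return norm.cdf((level - theta) / slope)

def entropy(p):
    p = np.clip(p, 1e-12, None)
    return -(p * np.log(p)).sum()

def next_level(posterior):
    """Pick the level whose expected posterior entropy is lowest."""
    best, best_h = None, np.inf
    for lv in levels:
        like_yes = p_yes(lv, thetas)
        m_yes = (posterior * like_yes).sum()               # marginal P(yes)
        post_yes = posterior * like_yes / max(m_yes, 1e-12)
        post_no = posterior * (1 - like_yes) / max(1 - m_yes, 1e-12)
        h = m_yes * entropy(post_yes) + (1 - m_yes) * entropy(post_no)
        if h < best_h:
            best, best_h = lv, h
    return best

def update(posterior, level, heard):
    like = p_yes(level, thetas) if heard else 1 - p_yes(level, thetas)
    posterior = posterior * like
    return posterior / posterior.sum()

# Simulated run against a listener with a true threshold of 45 dB HL
rng = np.random.default_rng(0)
posterior = np.full(thetas.size, 1.0 / thetas.size)  # flat prior
true_theta = 45.0
for _ in range(15):
    lv = next_level(posterior)
    heard = rng.random() < p_yes(lv, true_theta)
    posterior = update(posterior, lv, heard)
print(f"estimated threshold: {(posterior * thetas).sum():.1f} dB HL")
```

Active level selection of this kind is what lets a Bayesian procedure converge in far fewer trials than a fixed staircase, consistent with the 4-minute versus 13-minute test times reported above.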
{"title":"Evaluation of a Fast Method to Measure High-Frequency Audiometry Based on Bayesian Learning.","authors":"Chiara Casolani, Ali Borhan-Azad, Rikke Skovhøj Sørensen, Josef Schlittenlacher, Bastian Epp","doi":"10.1177/23312165231225545","DOIUrl":"10.1177/23312165231225545","url":null,"abstract":"<p><p>This study aimed to assess the validity of a high-frequency audiometry tool based on Bayesian learning to provide a reliable, repeatable, automatic, and fast test to clinics. The study involved 85 people (138 ears) who had their high-frequency thresholds measured with three tests: standard audiometry (SA), alternative forced choice (AFC)-based algorithm, and Bayesian active (BA) learning-based algorithm. The results showed median differences within ±5 dB up to 10 kHz when comparing the BA with the other two tests, and median differences within ±10 dB at higher frequencies. The variability increased from lower to higher frequencies. The BA showed lower thresholds compared to the SA at the majority of the frequencies. The results of the different tests were consistent across groups (age, hearing loss, and tinnitus). The data for the BA showed high test-retest reliability (>90%). The time required for the BA was shorter than for the AFC (4 min vs. 13 min). The data suggest that the BA test for high-frequency audiometry could be a good candidate for clinical screening. It would add reliable and significant information without adding too much time to the visit.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231225545"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10777778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139404869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241253653
An Exploration of the Memory Performance in Older Adult Hearing Aid Users on the Integrated Digit-in-Noise Test.
Shangqiguo Wang, Lena L N Wong
This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate-to-severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (r values ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
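To make the memory metrics concrete: SRT5-2 and SRT5-3 are plain differences between the SRT measured with five-digit sequences and the SRTs for two- and three-digit sequences. A minimal sketch with hypothetical values follows; the adaptive SRT tracking procedure itself is not reproduced, and the numbers are placeholders, not data from the study.

```python
# Sketch of the iDIN memory metrics: SRT differences between five-digit and
# shorter sequences. All values below are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant SRTs (dB SNR) for 2-, 3-, and 5-digit sequences
srt2 = np.array([-9.0, -8.2, -7.5, -6.8, -8.8])
srt3 = np.array([-8.1, -7.0, -6.2, -5.9, -7.6])
srt5 = np.array([-5.0, -2.8, -1.9, -0.7, -4.2])
moca = np.array([27, 24, 22, 20, 26])  # hypothetical cognitive screening scores

srt5_2 = srt5 - srt2  # added cost of recalling five vs. two digits
srt5_3 = srt5 - srt3  # added cost of recalling five vs. three digits

# Larger SRT differences (worse memory performance) should accompany lower
# cognitive scores, i.e., the negative correlations reported in the study.
r, p = pearsonr(srt5_2, moca)
print(f"SRT5-2 vs. cognition: r = {r:.3f}, p = {p:.3f}")
```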
{"title":"An Exploration of the Memory Performance in Older Adult Hearing Aid Users on the Integrated Digit-in-Noise Test.","authors":"Shangqiguo Wang, Lena L N Wong","doi":"10.1177/23312165241253653","DOIUrl":"10.1177/23312165241253653","url":null,"abstract":"<p><p>This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT<sub>5-2</sub>), and between five-digit and three-digit sequences (SRT<sub>5-3</sub>), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT<sub>5-2</sub> and SRT<sub>5-3</sub> demonstrated significant correlations with the three cognitive function tests (<i>r</i>s ranging from -.705 to -.528). Furthermore, SRT<sub>5-2</sub> and SRT<sub>5-3</sub> were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241253653"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11080745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241298613
Individual Differences Underlying Preference for Processing Delay in Open-Fit Hearing Aids.
Borgný Súsonnudóttir, Borys Kowalewski, Georg Stiefenhofer, Tobias Neher
In open-fit digital hearing aids (HAs), the processing delay influences comb-filter effects that arise from the interaction of the processed HA sound with the unprocessed direct sound. The current study investigated potential relations between preferred processing delay, spectral and temporal processing abilities, and self-reported listening habits. Ten listeners with normal hearing and 20 listeners with mild-to-moderate sensorineural hearing impairments participated. Using a HA simulator, delay preference was assessed with a paired-comparison task, three types of stimuli, and five processing delays (0, 0.5, 2, 5, and 10 ms). Spectral processing was assessed with a spectral ripple discrimination (SRD) task. Temporal processing was assessed with a gap detection task. Self-reported listening habits were assessed using a shortened version of the 'sound preference and hearing habits' questionnaire. A linear mixed-effects model showed a strong effect of processing delay on preference scores (p < .001, η² = 0.30). Post-hoc comparisons revealed no differences between either the two shortest delays or the three longer delays (all p > .05) but a clear difference between the two sets of delays (p < .001). A multiple linear regression analysis showed SRD to be a significant predictor of delay preference (p < .01, η² = 0.29), with good spectral processing abilities being associated with a preference for short processing delays. Overall, these results indicate that assessing spectral processing abilities can guide the prescription of open-fit HAs.
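The comb-filter effect named above follows from a simple two-path model: unprocessed direct sound summed with a delayed, processed copy. Below is a minimal sketch under that assumption, with an illustrative gain and the study's range of delays; it is not a model of any particular hearing aid.

```python
# Sketch of the comb-filter effect in an open fitting: direct sound mixes
# with a delayed copy, producing regularly spaced spectral notches.
import numpy as np

def comb_response(delay_ms, gain=1.0, freqs=None):
    """Magnitude response (dB) of direct + delayed path: |1 + g*exp(-j*2*pi*f*d)|."""
    if freqs is None:
        freqs = np.linspace(20, 10_000, 1000)
    d = delay_ms / 1000.0
    h = 1.0 + gain * np.exp(-2j * np.pi * freqs * d)
    return freqs, 20 * np.log10(np.abs(h) + 1e-12)

for delay in (0.5, 2.0, 10.0):
    f, mag = comb_response(delay)
    first_notch = 1.0 / (2 * delay / 1000.0)  # first cancellation frequency (Hz)
    print(f"{delay} ms delay -> first notch near {first_notch:.0f} Hz, "
          f"deepest simulated dip {mag.min():.1f} dB")
```

Longer delays push the first notch down into the speech-relevant range (1000 Hz at 0.5 ms versus 50 Hz at 10 ms) and space the notches more densely, which is why delay preference is plausible to test over this 0 to 10 ms range.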
{"title":"Individual Differences Underlying Preference for Processing Delay in Open-Fit Hearing Aids.","authors":"Borgný Súsonnudóttir, Borys Kowalewski, Georg Stiefenhofer, Tobias Neher","doi":"10.1177/23312165241298613","DOIUrl":"10.1177/23312165241298613","url":null,"abstract":"<p><p>In open-fit digital hearing aids (HAs), the processing delay influences comb-filter effects that arise from the interaction of the processed HA sound with the unprocessed direct sound. The current study investigated potential relations between preferred processing delay, spectral and temporal processing abilities, and self-reported listening habits. Ten listeners with normal hearing and 20 listeners with mild-to-moderate sensorineural hearing impairments participated. Using a HA simulator, delay preference was assessed with a paired-comparison task, three types of stimuli, and five processing delays (0, 0.5, 2, 5, and 10 ms). Spectral processing was assessed with a spectral ripple discrimination (SRD) task. Temporal processing was assessed with a gap detection task. Self-reported listening habits were assessed using a shortened version of the 'sound preference and hearing habits' questionnaire. A linear mixed-effects model showed a strong effect of processing delay on preference scores (<i>p</i> < .001, <i>η</i><sup>2 </sup>= 0.30). Post-hoc comparisons revealed no differences between either the two shortest delays or the three longer delays (all <i>p</i> > .05) but a clear difference between the two sets of delays (<i>p</i> < .001). A multiple linear regression analysis showed SRD to be a significant predictor of delay preference (<i>p</i> < .01, <i>η</i><sup>2 </sup>= 0.29), with good spectral processing abilities being associated with a preference for short processing delay. Overall, these results indicate that assessing spectral processing abilities can guide the prescription of open-fit HAs.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241298613"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638989/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241296909
Corrigendum to "Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks".
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241256721
Impact of Hearing Aids on Language Outcomes in Preschool Children With Mild Bilateral Hearing Loss.
Yu-Chen Hung, Pei-Hsuan Ho, Pei-Hua Chen, Yi-Shin Tsai, Yi-Jui Li, Hung-Ching Lin
This study aimed to investigate the role of hearing aid (HA) usage in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). The data were retrieved from a total of 52 children with MBHL and 30 children with normal hearing (NH). The association between demographic and audiological factors and language outcomes was examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers. Furthermore, regression analyses were performed to identify significant predictors of language outcomes. Aided better-ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HAs outperformed those who did not across all linguistic domains. The language skills of children with MBHL were comparable to those of their peers with NH. The degree of improvement in audibility in terms of aided BEPTA was a significant predictor of language comprehension. It is noteworthy that 50% of the parents expressed reluctance regarding HA use for their children with MBHL. The findings highlight the positive impact of HA usage on language development in this population. Professionals may therefore consider HAs as a viable treatment option for children with MBHL, especially when there is a potential risk of language delay due to hearing loss. It was observed that 25% of the children with MBHL had late-onset hearing loss. Consequently, the implementation of preschool screening or a listening performance checklist is recommended to facilitate early detection.
{"title":"Impact of Hearing Aids on Language Outcomes in Preschool Children With Mild Bilateral Hearing Loss.","authors":"Yu-Chen Hung, Pei-Hsuan Ho, Pei-Hua Chen, Yi-Shin Tsai, Yi-Jui Li, Hung-Ching Lin","doi":"10.1177/23312165241256721","DOIUrl":"10.1177/23312165241256721","url":null,"abstract":"<p><p>This study aimed to investigate the role of hearing aid (HA) usage in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). The data were retrieved from a total of 52 children with MBHL and 30 children with normal hearing (NH). The association between demographical, audiological factors and language outcomes was examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers. Furthermore, regression analyses were performed to identify significant predictors of language outcomes. Aided better ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HA outperformed the ones who did not use HA across all linguistic domains. The language skills of children with MBHL were comparable to those of their peers with NH. The degree of improvement in audibility in terms of aided BEPTA was a significant predictor of language comprehension. It is noteworthy that 50% of the parents expressed reluctance regarding HA use for their children with MBHL. The findings highlight the positive impact of HA usage on language development in this population. Professionals may therefore consider HAs as a viable treatment option for children with MBHL, especially when there is a potential risk of language delay due to hearing loss. It was observed that 25% of the children with MBHL had late-onset hearing loss. Consequently, the implementation of preschool screening or a listening performance checklist is recommended to facilitate early detection.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241256721"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11113073/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141076740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241260041
Easy as 1-2-3: Development and Evaluation of a Simple yet Valid Audiogram-Classification System.
Larry E Humes, David A Zapala
Almost since the inception of the modern-day electroacoustic audiometer a century ago, the results of pure-tone audiometry have been characterized by an audiogram. For almost as many years, clinicians and researchers have sought ways to distill the volume and complexity of information on the audiogram. Commonly used approaches have made use of pure-tone averages (PTAs) for various frequency ranges, with the PTA for 500, 1000, 2000, and 4000 Hz (PTA4) being the most widely used for the categorization of hearing loss severity. Here, a three-digit triad is proposed as a single-number summary of not only the severity, but also the configuration and bilateral symmetry of the hearing loss. Each digit in the triad ranges from 0 to 9, increasing as the pure-tone hearing threshold level (HTL) increases from a range of optimal hearing (< 10 dB Hearing Level; HL) to complete hearing loss (≥ 90 dB HL). Each digit also represents a different frequency region of the audiogram, proceeding from left to right as: (Low, L) PTA for 500, 1000, and 2000 Hz; (Center, C) PTA for 3000, 4000, and 6000 Hz; and (High, H) HTL at 8000 Hz. This LCH Triad audiogram-classification system is evaluated using a large United States (U.S.) national dataset (N = 8,795) from adults 20 to 80+ years of age and two large clinical datasets totaling 8,254 adults covering a similar age range. Its ability to capture variations in hearing function was found to be superior to that of the widely used PTA4.
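The triad is straightforward to compute from one ear's audiogram. A minimal sketch follows, assuming the digit mapping implied by the abstract (one step per 10 dB, 0 for thresholds below 10 dB HL, capped at 9 at or above 90 dB HL); any rounding conventions in the published system beyond this are not reproduced.

```python
# Sketch of the proposed LCH Triad: one digit each for the low-frequency PTA
# (500/1000/2000 Hz), center PTA (3000/4000/6000 Hz), and the 8000-Hz HTL.
import numpy as np

def digit(htl_db):
    """Map a threshold (dB HL) to a 0-9 severity digit in 10-dB steps."""
    return int(np.clip(htl_db // 10, 0, 9))

def lch_triad(audiogram):
    """audiogram: dict of frequency (Hz) -> threshold (dB HL) for one ear."""
    low = np.mean([audiogram[f] for f in (500, 1000, 2000)])
    center = np.mean([audiogram[f] for f in (3000, 4000, 6000)])
    high = audiogram[8000]
    return f"{digit(low)}{digit(center)}{digit(high)}"

# Example: a sloping high-frequency loss
ear = {500: 15, 1000: 20, 2000: 35, 3000: 50, 4000: 60, 6000: 70, 8000: 75}
print(lch_triad(ear))  # -> "267"
```

Comparing the triads of the two ears then exposes bilateral asymmetry, and the spread across the three digits exposes the configuration, neither of which a single PTA4 value can convey.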
{"title":"Easy as 1-2-3: Development and Evaluation of a Simple yet Valid Audiogram-Classification System.","authors":"Larry E Humes, David A Zapala","doi":"10.1177/23312165241260041","DOIUrl":"10.1177/23312165241260041","url":null,"abstract":"<p><p>Almost since the inception of the modern-day electroacoustic audiometer a century ago the results of pure-tone audiometry have been characterized by an audiogram. For almost as many years, clinicians and researchers have sought ways to distill the volume and complexity of information on the audiogram. Commonly used approaches have made use of pure-tone averages (PTAs) for various frequency ranges with the PTA for 500, 1000, 2000 and 4000 Hz (PTA4) being the most widely used for the categorization of hearing loss severity. Here, a three-digit triad is proposed as a single-number summary of not only the severity, but also the configuration and bilateral symmetry of the hearing loss. Each digit in the triad ranges from 0 to 9, increasing as the level of the pure-tone hearing threshold level (HTL) increases from a range of optimal hearing (< 10 dB Hearing Level; HL) to complete hearing loss (≥ 90 dB HL). Each digit also represents a different frequency region of the audiogram proceeding from left to right as: (Low, L) PTA for 500, 1000, and 2000 Hz; (Center, C) PTA for 3000, 4000 and 6000 Hz; and (High, H) HTL at 8000 Hz. This LCH Triad audiogram-classification system is evaluated using a large United States (U.S.) national dataset (N = 8,795) from adults 20 to 80 + years of age and two large clinical datasets totaling 8,254 adults covering a similar age range. Its ability to capture variations in hearing function was found to be superior to that of the widely used PTA4.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241260041"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11179497/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141318660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165231215916
The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations.
Moritz Wächtler, Pascale Sandmann, Hartmut Meister
When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.
{"title":"The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations.","authors":"Moritz Wächtler, Pascale Sandmann, Hartmut Meister","doi":"10.1177/23312165231215916","DOIUrl":"10.1177/23312165231215916","url":null,"abstract":"<p><p>When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced, as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA, when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231215916"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10826403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139570355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165231217910
Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization.
Robel Z Alemu, Blake C Papsin, Robert V Harrison, Al Blakeman, Karen A Gordon
The present study aimed to define the use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (mean age = 12.9 years) and seventeen adults (mean age = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening, and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear, and relatively stable eye positions likely reflect normal vestibulo-ocular reflexes.
{"title":"Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization.","authors":"Robel Z Alemu, Blake C Papsin, Robert V Harrison, Al Blakeman, Karen A Gordon","doi":"10.1177/23312165231217910","DOIUrl":"10.1177/23312165231217910","url":null,"abstract":"<p><p>The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (<i>M</i><sub>Age </sub>= 12.9 years) and seventeen adults (<i>M</i><sub>Age </sub>= 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231217910"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10832417/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139651917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241232551
Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.
Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan
In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
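As a hedged sketch of the classification setup described above, the example below trains k-nearest neighbor classifiers on seven-feature trials and contrasts pooled k-fold cross validation, participant-wise splits, and individually trained models; the data are random placeholders and the hyperparameters are assumptions, not the study's values.

```python
# Sketch: seven pupil/cardiovascular features per trial feed a k-nearest
# neighbor classifier, evaluated three ways. Feature extraction and the
# study's exact settings are not reproduced; data are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, GroupKFold

rng = np.random.default_rng(0)
n_participants, trials_per_p, n_features = 29, 40, 7
X = rng.normal(size=(n_participants * trials_per_p, n_features))
y = rng.integers(0, 2, size=X.shape[0])  # e.g., low vs. high task demand
groups = np.repeat(np.arange(n_participants), trials_per_p)

knn = KNeighborsClassifier(n_neighbors=5)

# Group-level: pooled k-fold CV (a participant's trials may land in both
# train and test folds, as in the paper's group-level analysis)
pooled = cross_val_score(knn, X, y, cv=5).mean()

# Generalization to novel participants: folds split by participant
across = cross_val_score(knn, X, y, groups=groups,
                         cv=GroupKFold(n_splits=5)).mean()

# Individually calibrated: one classifier per participant
per_p = [cross_val_score(knn, X[groups == p], y[groups == p], cv=5).mean()
         for p in range(n_participants)]
print(pooled, across, np.mean(per_p))
```

The gap between the pooled and participant-wise scores is the generalization drop the authors report; on the random placeholder data here, all three accuracies hover near chance by construction.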
{"title":"Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.","authors":"Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan","doi":"10.1177/23312165241232551","DOIUrl":"10.1177/23312165241232551","url":null,"abstract":"<p><p>In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241232551"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}