Impact of Hearing Impairment on Independent Travel in Individuals With Normal Vision, Low Vision, and Blindness.
Pub Date: 2025-01-01 | Epub Date: 2025-05-29 | DOI: 10.1177/23312165251347130
Philip Reed, Joseph Paul Nemargut, Judith E Goldstein, Coral E Dirks, Yingzi Xiong
Individuals with dual sensory impairment (DSI) often have reduced independence in their daily activities. Vision impairment is consistently reported to play a more dominant role than hearing impairment in home-based daily living, while little is known about the relative impact of vision and hearing impairments on tasks, such as independent travel, that require interacting with more complex environments. To address this knowledge gap, we administered a semistructured survey to a convenience sample of 161 individuals with normal vision, low vision, or blindness, with or without hearing impairment. A combination of qualitative and quantitative approaches was used to analyze the data. Compared to participants with normal vision, those with low vision or blindness were significantly less likely to be frequent travelers. Low vision participants reported that vision impairment had a greater impact than hearing impairment on their travel independence, while blind participants reported that hearing impairment had a greater impact than blindness. The unique challenges faced by blind individuals were highlighted by their concerns about localizing dynamic sounds, such as traffic, during travel. Seventy percent of the hearing-impaired participants wore hearing aids and reported high utility for speech perception, but the utility of hearing aids for sound localization was significantly lower, especially for the blind participants. Our results reveal the interaction between vision and hearing impairments in independent travel and emphasize the need for an integrated rehabilitation approach for this population.
{"title":"Impact of Hearing Impairment on Independent Travel in Individuals With Normal Vision, Low Vision, and Blindness.","authors":"Philip Reed, Joseph Paul Nemargut, Judith E Goldstein, Coral E Dirks, Yingzi Xiong","doi":"10.1177/23312165251347130","DOIUrl":"10.1177/23312165251347130","url":null,"abstract":"<p><p>Individuals with dual sensory impairment (DSI) often have reduced independence in their daily activities. Vision impairment is consistently reported to play a more dominant role than hearing impairment on home-based daily living, while little is known regarding the relative impact of vision and hearing impairments on tasks such as independent travel that require interacting with more complex environments. To address this knowledge gap, we administered a semistructured survey in a convenience sample of 161 individuals with normal vision, low vision, or blindness, with or without hearing impairment. A combination of qualitative and quantitative approaches was used to analyze the data. Compared to normal vision, low vision and blind participants were significantly less likely to be frequent travelers. Low vision participants reported that vision impairment had a greater impact than hearing impairment on their travel independence, while blind participants reported hearing impairment to have a greater impact than blindness on their travel independence. The unique challenges in blind individuals were highlighted by their concerns on localizing dynamic sounds such as traffic during travel. Seventy percent of the hearing-impaired participants wore hearing aids and reported high utility for speech perception, but there was a significant reduction in the utility of hearing aids for sound localization especially for the blind participants. Our results reveal the interaction between vision and hearing impairments on independent travel and emphasize the need for an integrated rehabilitation approach for this population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251347130"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12123108/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Judging the Number and Gender of Talkers Present in an Auditory Scene Aided by Acoustic Beamforming.
Pub Date: 2025-01-01 | Epub Date: 2025-05-29 | DOI: 10.1177/23312165251329791
Andrew J Byrne, Gerald Kidd
The perceived numerosity of simultaneous, spatially separated speech sources was used to evaluate the effectiveness of triple beamformer processing, compared to that of both a single-channel beamformer and natural listening. Participants made judgments of the total number of talkers present in a simulated sound field and the gender composition of the talker group. The perceived numerosity was always underestimated for groups of more than three talkers. Performance with the triple beamformer was roughly equivalent to that of natural listening, including a beneficial effect of spatial separation of the sources in azimuth. The gender mix of the talker group also affected the numerosity judgments, although the perceived gender ratio was generally accurate even when the total group count was underestimated. Time-reversing the speech resulted in lower numerosity judgments (increased error) under both natural and triple beamformer listening, suggesting an influence of linguistic processing on source numerosity judgments. Overall, factors that enhanced source segregation and speech stream coherence decreased errors in numerosity judgments. A stimulus-derived metric, the composite of glimpsed energy retained for all talkers in the sound field, was found to be a reasonably accurate predictor of the subjective numerosity judgments.
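The abstract does not give the exact formulation of the glimpsed-energy metric, but a minimal sketch in the spirit of glimpsing models (e.g., a local-SNR criterion as in Cooke's glimpsing framework) illustrates the idea; the 3 dB criterion, STFT settings, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative glimpse-based metric (assumed form, not the authors'
# exact formulation): a talker's "glimpsed energy" is the fraction of
# its spectro-temporal energy in cells where it exceeds the summed
# competitors by a local SNR criterion (3 dB here, an assumed value).
import numpy as np
from scipy.signal import stft

def glimpsed_energy(target, competitors, fs, criterion_db=3.0):
    """Fraction of target energy in time-frequency glimpses."""
    _, _, T = stft(target, fs=fs, nperseg=512)
    _, _, M = stft(np.sum(competitors, axis=0), fs=fs, nperseg=512)
    t_pow = np.abs(T) ** 2 + 1e-12
    m_pow = np.abs(M) ** 2 + 1e-12          # guard against log(0)
    glimpses = 10 * np.log10(t_pow / m_pow) > criterion_db
    return float((t_pow * glimpses).sum() / t_pow.sum())

def composite_glimpsed_energy(talkers, fs):
    """Average glimpsed-energy fraction over all talkers in the scene."""
    talkers = np.asarray(talkers)
    return float(np.mean([
        glimpsed_energy(talkers[i], np.delete(talkers, i, axis=0), fs)
        for i in range(len(talkers))
    ]))
```

With equal-level talkers, each added competitor erodes the others' glimpses, so the composite falls as scenes grow denser, which is the qualitative behavior a predictor of rising numerosity errors would need.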
{"title":"Judging the Number and Gender of Talkers Present in an Auditory Scene Aided by Acoustic Beamforming.","authors":"Andrew J Byrne, Gerald Kidd","doi":"10.1177/23312165251329791","DOIUrl":"10.1177/23312165251329791","url":null,"abstract":"<p><p>The perceived numerosity of simultaneous, spatially separated speech sources was used to evaluate the effectiveness of triple beamformer processing, compared to that of both a single-channel beamformer and natural listening. Participants made judgments of the total number of talkers present in a simulated sound field and the gender composition of the talker group. The perceived numerosity was always underestimated for groups of more than three talkers. Performance with the triple beamformer was roughly equivalent to that of natural listening, including a beneficial effect of spatial separation of the sources in azimuth. The gender mix of the talker group also affected the numerosity judgments although the perceived gender ratio was generally accurate even when the total group count was underestimated. Time-reversing the speech resulted in lower numerosity judgements (increased error) under both natural and triple beamformer listening, suggesting an influence of linguistic processing on source numerosity judgments. Overall, factors that enhanced source segregation and speech stream coherence decreased errors in numerosity judgments. A stimulus-derived metric-the composite of glimpsed energy retained for all talkers in the sound field-was found to be a reasonably accurate predictor of the subjective numerosity judgments.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251329791"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12123112/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genetic and Environmental Contributions to Age-Related Hearing Loss: Results from a Longitudinal Twin Study.
Pub Date: 2025-01-01 | Epub Date: 2025-04-23 | DOI: 10.1177/23312165251320156
Ryan M O'Leary, Arthur Wingfield, Michael J Lyons, Carol E Franz, William S Kremen
Over 430 million people worldwide experience disabling hearing loss, a condition that becomes more prevalent with age. Although the genetic component of hearing loss is well established, fewer data are available on how the genetic contributions to hearing loss change over time. We report pure-tone hearing thresholds at 500, 1,000, 2,000, 4,000, and 8,000 Hz from over 1,000 male twins, comprising monozygotic (MZ) and dizygotic (DZ) pairs, sampled from the United States-based Vietnam Era Twin Study of Aging (VETSA). Twins were tested in three waves, at average ages of 56, 62, and 68 years. Genetically informed structural equation models were used to calculate the genetic contributions. Genetic factors accounted for between 49.4% and 67.7% of the variance in hearing acuity at all frequencies and all three time points. There was no substantial change in the ratio of genetic to environmental contributions across the three time points or across individual acoustic frequencies. The stability of hearing acuity over time was moderately to highly attributable to genetic factors. Change in hearing acuity was better explained by unique, person-specific environmental factors. These results, from the largest-scale twin study of hearing acuity to date, replicate previous findings that hearing acuity in late life is significantly determined by genetic factors. The unique contribution of the present analysis is to show that the proportion of hearing acuity variance attributed to genetics remains relatively consistent across 12 years.
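The paper fits genetically informed structural equation models; as a hedged illustration of the underlying logic only, Falconer's formulas recover ACE variance components directly from MZ and DZ twin-pair correlations. The correlation values below are hypothetical, chosen so the genetic share lands in the reported 49.4-67.7% range.

```python
# Back-of-envelope ACE decomposition from twin-pair correlations via
# Falconer's formulas; the paper itself fits genetically informed
# structural equation models, so this only illustrates the logic.
def ace_from_correlations(r_mz: float, r_dz: float) -> dict:
    """Additive-genetic (A), shared-environment (C), and unique-
    environment (E) variance proportions from MZ/DZ correlations."""
    a2 = 2 * (r_mz - r_dz)    # additive genetic variance
    c2 = 2 * r_dz - r_mz      # shared environmental variance
    e2 = 1 - r_mz             # unique environment (plus measurement error)
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations for one frequency, chosen for illustration:
print(ace_from_correlations(r_mz=0.65, r_dz=0.35))
# ≈ {'A': 0.60, 'C': 0.05, 'E': 0.35}
```

The E component here absorbs the person-specific environmental factors that, per the abstract, best explain change in hearing acuity over time.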
{"title":"Genetic and Environmental Contributions to Age-Related Hearing Loss: Results from a Longitudinal Twin Study.","authors":"Ryan M O'Leary, Arthur Wingfield, Michael J Lyons, Carol E Franz, William S Kremen","doi":"10.1177/23312165251320156","DOIUrl":"https://doi.org/10.1177/23312165251320156","url":null,"abstract":"<p><p>Over 430 million people worldwide experience disabling hearing loss, a condition that becomes more prevalent with age. Although the genetic component to hearing loss has been well established, there has been less data available regarding changes in the genetic contributions to hearing loss over time. We report the pure tone hearing thresholds across 500, 1,000, 2,000, 4,000, and 8,000 Hz from over 1,000 male twins comprising monozygotic (MZ) and dizygotic (DZ) pairs sampled from the United States-based Vietnam Era Twin Study of Aging (VETSA). Twins were tested during three waves, at an average age of 56 at wave 1, an average age of 62 at wave 2, and an average age of 68 at wave 3. Genetically informed structural equation models were used to calculate the genetic contributions. Genetic factors accounted for between 49.4% and 67.7% of the variance in hearing acuity for all frequencies at all three time points. There was no substantial change in the ratio of genetic versus environmental contributions across the three time points, or across individual acoustic frequencies. The stability of hearing acuity over time was moderate to highly attributable to genetic factors. Change in hearing acuity was better explained by unique person-specific environmental factors. These results, from the largest-scale twin study of hearing acuity to date, replicate previous findings that hearing acuity in late life is significantly determined by genetic factors. The unique contribution of the present analysis is that the proportion of hearing acuity attributed to genetics remains relatively consistent across 12 years.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251320156"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035256/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ALICE: Improved Speech in Noise Understanding with Self-guided Hearing Care.
Pub Date: 2025-01-01 | Epub Date: 2025-11-10 | DOI: 10.1177/23312165251393034
Astrid van Wieringen, Mira Van Wilderode, Les De Ridder, Tom Francart, Jan Wouters
Persons with hearing aids or cochlear implants often have difficulty understanding speech, especially in noisy environments. Auditory perceptual training can help improve an individual's ability to discriminate and identify sound. The current study aimed to determine the efficacy of the ALICE (Assistant for Listening and Communication Enhancement) program, a self-guided, home-based hearing care program that includes monitoring, training, and counseling. A multicenter study was carried out, including hearing aid centers and a cochlear implant center in Flanders (Belgium). Adult participants were randomly assigned to an intervention (n = 65) or a control (n = 65) group. Participants in the intervention group received a tailored flow of exercises that could be streamed to the device or presented in a sound field. All participants were tested before and after 8 weeks using sentences in noise and several self-report questionnaires. Participants in the intervention group were compliant during the 8-week training period. Significant on-task improvements were observed, along with improved speech-in-noise understanding for the intervention group only. The self-report data did not reveal changes following the intervention. Our clinical trial demonstrates that the self-guided ALICE training program is effective at improving the auditory system's ability to parse untrained speech in noise. This enhancement in speech-in-noise performance is specific to the training group, as the control group did not show any improvement. The results imply that ALICE can be used as a scalable, accessible, and safe hearing care intervention.
{"title":"ALICE: Improved Speech in Noise Understanding with Self-guided Hearing Care.","authors":"Astrid van Wieringen, Mira Van Wilderode, Les De Ridder, Tom Francart, Jan Wouters","doi":"10.1177/23312165251393034","DOIUrl":"10.1177/23312165251393034","url":null,"abstract":"<p><p>Persons with hearing aids or cochlear implants often have difficulty understanding speech well, especially in noisy environments. Auditory perceptual training can help improve an individual's ability to discriminate and identify sound. The current study aimed to determine the efficacy of the ALICE (Assistant for Listening and Communication Enhancement) program, a self-guided home-based hearing care program including monitoring, training and counseling. A multicentric study was carried out, including hearing aid centers and a cochlear implant center in Flanders (Belgium). Adult participants were randomly assigned to an intervention (<i>n</i> = 65) or a control (<i>n</i> = 65) group. Participants in the intervention group received a tailored flow of exercises that could be streamed to the device or presented in a sound field. All participants were tested before and after 8 weeks using sentences in noise and different self-report questionnaires. Participants in the intervention group were compliant during the 8-week training period. Significant on-task improvements were observed, along with improved speech-in-noise understanding for the intervention group only. The self-report data did not reveal changes following the intervention. Our clinical trial demonstrates that the self-guided ALICE training program is effective at improving the auditory system's ability to parse untrained speech in noise. This enhancement in speech-in-noise performance is specific to the training group, as the control group did not show any improvement. The results of the clinical trial imply that ALICE can be used as a scalable, accessible, and safe hearing care intervention.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251393034"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12602974/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145490692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165241306091
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near-perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased, with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired-sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
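As a sketch of the per-time-bin Bayesian comparison, the snippet below runs a paired t-test with a Bayes factor at each pupil time bin via pingouin's `ttest`; the simulated data, bin count, condition means, and the BF10 > 3 evidence threshold are assumptions for illustration, not the study's values.

```python
# Per-bin Bayesian paired t-tests on pupil size (simulated data standing
# in for the 12 participants; means, SDs, and 50 time bins are assumed).
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
n_subjects, n_bins = 12, 50
pupil_low_tmr = rng.normal(0.25, 0.05, (n_subjects, n_bins))   # harder: more dilation
pupil_high_tmr = rng.normal(0.20, 0.05, (n_subjects, n_bins))  # easier: less dilation

bf10 = [
    float(pg.ttest(pupil_low_tmr[:, t], pupil_high_tmr[:, t], paired=True)["BF10"].iloc[0])
    for t in range(n_bins)
]
# Bins with BF10 > 3 give moderate evidence for a dilation difference.
print(sum(b > 3 for b in bf10), "of", n_bins, "bins exceed BF10 = 3")
```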
{"title":"Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.","authors":"Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister","doi":"10.1177/23312165241306091","DOIUrl":"10.1177/23312165241306091","url":null,"abstract":"<p><p>Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure \"name-verb-numeral-adjective-object\") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample <i>t</i>-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165241306091"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142972857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320794
Alexina Whitley, Timothy Beechey, Lauren V Hadley
Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.
{"title":"Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise.","authors":"Alexina Whitley, Timothy Beechey, Lauren V Hadley","doi":"10.1177/23312165251320794","DOIUrl":"10.1177/23312165251320794","url":null,"abstract":"<p><p>Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251320794"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851761/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Outcomes of Single-Sided Deaf Cochlear Implant Users by Reducing Interaural Frequency and Loudness Mismatches through Device Programming.
Pub Date: 2025-01-01 | Epub Date: 2025-07-30 | DOI: 10.1177/23312165251359415
Laura K Holden, Rosalie M Uchanski, Noël Y Dwyer, Ruth M Reeder, Timothy A Holden, Jill B Firszt
The study aimed to improve outcomes in Nucleus cochlear implant (CI) recipients with single-sided deafness (SSD) by reducing interaural frequency and loudness mismatches through device programming. In Experiment 1a, a modified frequency allocation table (FAT) was created to better match the tonotopicity of the contralateral ear and reduce interaural frequency mismatch. Twenty experienced SSD-CI users completed localization and speech recognition tests with their everyday FAT. Tests were repeated after 6 weeks' use of the modified FAT. Participants then compared both FATs for 2 weeks before being tested again with each. For 10 newly implanted SSD-CI recipients (Experiment 1b), Group A was programmed at activation with the manufacturer's default FAT and Group B with the modified FAT. Speech recognition and localization tests were completed after 6 weeks' use of each FAT. Participants then compared both FATs before testing with each. In Experiment 2, 15 experienced SSD-CI users were evaluated with their everyday program and a modified loudness program, created to obtain audibility of ∼20 dB HL from 0.25 to 6 kHz and balanced loudness between ears. Three test sessions were conducted, following the structure of Experiment 1a. Experienced participants in Experiments 1a and 2 showed significant improvement in one speech-in-noise task with a modified program compared to the everyday program. Newly implanted recipients showed no significant difference in results between FATs. Results indicate that modified programs, created to reduce interaural mismatches, may improve outcomes. The first month after activation might be too early to compare FATs, as SSD-CI recipients are still adjusting to electric hearing.
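The abstract does not specify how the modified FAT was derived; one plausible ingredient of such place matching (an assumption on our part) is the Greenwood (1990) map from cochlear place to characteristic frequency, sketched below with hypothetical electrode positions. The study itself matched the contralateral ear's tonotopicity, presumably from per-recipient anatomy.

```python
# Greenwood (1990) human place-to-frequency map, used here only to
# illustrate how place-matched center frequencies for a modified FAT
# could be derived. Electrode positions are hypothetical; real ones
# would come from imaging of each recipient.
import numpy as np

def greenwood_hz(x):
    """Characteristic frequency at proportional distance x from the
    cochlear apex (0 = apex, 1 = base), human parameters."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * np.asarray(x)) - k)

# Hypothetical 22-electrode array spanning 40-90% of cochlear length.
positions = np.linspace(0.40, 0.90, 22)        # apical to basal contacts
for ch, f in enumerate(greenwood_hz(positions), start=1):
    print(f"electrode {ch:2d}: place-matched CF ≈ {f:6.0f} Hz")
```

Band edges built from such place-matched CFs would replace the manufacturer's default allocation, shrinking the interaural frequency mismatch the study targets.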
{"title":"Improving Outcomes of Single-Sided Deaf Cochlear Implant Users by Reducing Interaural Frequency and Loudness Mismatches through Device Programming.","authors":"Laura K Holden, Rosalie M Uchanski, Noël Y Dwyer, Ruth M Reeder, Timothy A Holden, Jill B Firszt","doi":"10.1177/23312165251359415","DOIUrl":"10.1177/23312165251359415","url":null,"abstract":"<p><p>The study aimed to improve outcomes in Nucleus cochlear implant (CI) recipients with single-sided deafness (SSD) by reducing interaural frequency and loudness mismatches through device programming. In Experiment 1a, a modified frequency allocation table (FAT) was created to better match the tonotopicity of the contralateral ear and reduce interaural frequency mismatch. Twenty experienced SSD-CI users completed localization and speech recognition tests with their everyday FAT. Tests were repeated after 6 weeks' use of the modified FAT. Participants compared both FATs for 2 weeks before being tested again with each. For 10 newly implanted SSD-CI recipients (Experiment 1b), Group A was programmed with the manufacturer's default FAT and Group B with the modified FAT at activation. Speech recognition and localization were completed, after 6 weeks' use of each FAT. Participants then compared both FATs before testing with each. In Experiment 2, 15 experienced SSD-CI users were evaluated with their everyday program and a modified loudness program, which was created to obtain audibility of ∼20 dB HL from 0.25 to 6 kHz and balanced loudness between ears. Three test sessions occurred, resembling Experiment 1a. Experienced participants in Experiments 1a and 2 showed significant improvement in one speech-in-noise task with a modified program compared to the everyday program. Newly implanted recipients showed no significant difference in results between FATs. Results indicate that modified programs, created to reduce interaural mismatches, may improve outcomes. The first month after activation might be too early to compare FATs as SSD-CI recipients are adjusting to electric hearing.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251359415"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12317272/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144754854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound.
Pub Date: 2025-01-01 | Epub Date: 2025-07-04 | DOI: 10.1177/23312165251356333
Robel Z Alemu, Alan Blakeman, Angela L Fung, Melissa Hazen, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon
Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (n = 42, mean age = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; n = 37, mean age = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured with a lateralization task using ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users made more errors than controls when localizing stationary sound and detecting moving sound direction. Decreased ITD sensitivity was associated with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than those of controls for both stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing, albeit impaired. Spatial hearing was thus considered to be "emerging." Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.
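For readers unfamiliar with the two binaural cues probed in the lateralization task, a minimal sketch of how ITD and ILD can be extracted from a stereo pair follows; the noise burst, delay, attenuation, and sign conventions are illustrative assumptions, not the study's stimuli.

```python
# Illustrative extraction of the two binaural cues from a stereo pair:
# ITD via the peak of the cross-correlation, ILD via an RMS ratio.
import numpy as np

def itd_seconds(left, right, fs):
    """Delay of the right-ear signal relative to the left ear
    (positive = right lags, i.e., the source is toward the left)."""
    corr = np.correlate(left, right, mode="full")
    lag = (len(right) - 1) - np.argmax(corr)
    return lag / fs

def ild_db(left, right):
    """Interaural level difference (positive = louder at the left ear)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(left) / rms(right))

# Example: 100 ms noise burst with a 0.5 ms ITD and a 6 dB ILD.
rng = np.random.default_rng(1)
fs = 44_100
left = rng.standard_normal(int(0.1 * fs))
right = 0.5 * np.roll(left, int(0.0005 * fs))   # right ear lags, quieter
print(f"ITD = {itd_seconds(left, right, fs) * 1e6:.0f} us, "
      f"ILD = {ild_db(left, right):.1f} dB")
```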
{"title":"Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound.","authors":"Robel Z Alemu, Alan Blakeman, Angela L Fung, Melissa Hazen, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon","doi":"10.1177/23312165251356333","DOIUrl":"10.1177/23312165251356333","url":null,"abstract":"<p><p>Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by: (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (<i>n</i> = 42, <i>M</i><sub>Age</sub> = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; <i>n</i> = 37, <i>M</i><sub>Age</sub> = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured by a lateralization task to ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users showed increased errors localizing stationary sound and detecting moving sound direction compared to controls. Decreased ITD sensitivity occurred with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than controls for stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing which is albeit impaired. Spatial hearing was thus considered to be \"emerging.\" Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251356333"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227942/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144561560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two Tests for Quantifying Aided Hearing at Low- and High-Input Levels.
Pub Date: 2025-01-01 | Epub Date: 2025-03-18 | DOI: 10.1177/23312165251322299
Carl Pedersen, Jesper Hvass Schmidt, Ellen Raben Pedersen, Chris Bang Sørensen, Søren Laugesen
Under- and overamplification of sound are common problems in hearing aid fitting. This paper describes the implementation of two new variants of the hearing in noise test for quantifying aided hearing at the lower and upper ends of the range of everyday-life sound levels. We present results from experiments carried out with 30 adult hearing aid users to determine the respective test-retest reliabilities. Participants completed a test battery consisting of the standard Danish hearing in noise test, a variant targeting the lower threshold of audibility, and a variant targeting the limit of loudness discomfort. The participants completed the test battery twice for reliability analysis. The results revealed a significant difference between test and retest for both the standard hearing in noise test and the two variants; however, the effect sizes for these differences were all very small. Pearson correlation coefficients showed significant, strong correlations between test and retest for the standard test and both new variants. The within-subject standard deviations were 0.8 dB for the hearing in noise test, 0.9 dB for the lower-end test, and 2.2 dB for the upper-end test. The findings demonstrate that both the lower-end test and the upper-end test have high test-retest reliability and thus can provide consistent and reliable results.
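A brief sketch of the two reliability statistics named in the abstract, Pearson's r and the within-subject standard deviation for two sessions (S_w = sqrt(Σd_i²/2n), following Bland and Altman); the SRT values are simulated stand-ins for the 30 participants, not the study's data.

```python
# Test-retest statistics: Pearson r and the within-subject SD for two
# sessions, S_w = sqrt(sum(d_i^2) / (2 n)). Simulated SRTs only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
test = rng.normal(-3.0, 2.0, 30)            # session-1 SRTs in dB SNR
retest = test + rng.normal(0.0, 1.1, 30)    # session 2; d has SD ~1.1 dB

r, p = pearsonr(test, retest)
d = test - retest
s_w = np.sqrt(np.sum(d ** 2) / (2 * len(d)))  # ~1.1 / sqrt(2) ~ 0.8 dB

print(f"Pearson r = {r:.2f} (p = {p:.3g}); within-subject SD = {s_w:.2f} dB")
```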
{"title":"Two Tests for Quantifying Aided Hearing at Low- and High-Input Levels.","authors":"Carl Pedersen, Jesper Hvass Schmidt, Ellen Raben Pedersen, Chris Bang Sørensen, Søren Laugesen","doi":"10.1177/23312165251322299","DOIUrl":"10.1177/23312165251322299","url":null,"abstract":"<p><p>Under- and overamplification of sound is a common problem in hearing aid fitting. This paper describes the implementation of two new variants of the hearing in noise test for quantifying aided hearing at the lower and upper ends of the range of everyday-life sound levels. We present results from experiments carried out with 30 adult hearing aid users to determine the respective test-retest reliabilities. Participants completed a test battery consisting of the standard Danish hearing in noise test, a variant targeting the lower threshold of audibility and a variant targeting the limit of loudness discomfort. The participants completed the test battery twice for reliability analysis. The results revealed a significant difference between test and retest for both the hearing in noise test and the two hearing in noise test variants. However, the effect sizes for the differences were all very small. A calculation of Pearson correlation coefficients showed that both the hearing in noise test and the two new hearing in noise test variants had significant and strong correlations between test and retest. The within-subject standard deviations were determined to be 0.8 dB for hearing in noise test, 0.9 dB for lower-end test, and 2.2 dB for upper-end test. The findings demonstrate that both the lower-end test and upper-end test have high test-retest reliabilities, and thus can provide consistent and reliable results.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251322299"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920982/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143659052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Anxiety, Negative Affect, and Hearing Difficulties in Adults.
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251317925
Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle
Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, have various ways of handling stressful situations, and have diverse communication needs. Therefore, understanding the relationship between auditory and mental health factors is crucial for the holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may have bidirectional effects. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function. Participants were 628 adults (408 women, 220 men), aged 19 to 87 years (mean = 43), living in the United States. Results indicated that individuals reporting higher social anxiety and higher negative affect also reported poorer communication function. Multiple linear regression analysis revealed that negative affect and social anxiety were each significant, unique predictors of subjective communication function. Social anxiety and negative affect thus both contribute, significantly and uniquely, to how much someone feels a hearing loss impacts their daily communication function. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication in this rapidly growing clinical population.
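The key analysis is a multiple linear regression with two predictors of hearing handicap; below is a hedged sketch on simulated scores (the variable names, scales, and coefficients are placeholders, not the study's questionnaires or results).

```python
# Sketch of the reported analysis: multiple linear regression of hearing
# handicap on social anxiety and negative affect. All scores simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 628                                    # matches the sample size
df = pd.DataFrame({
    "social_anxiety": rng.normal(0, 1, n),
    "negative_affect": rng.normal(0, 1, n),
})
# Give each predictor unique variance, as the abstract reports.
df["hearing_handicap"] = (0.3 * df["social_anxiety"]
                          + 0.4 * df["negative_affect"]
                          + rng.normal(0, 1, n))

model = smf.ols("hearing_handicap ~ social_anxiety + negative_affect",
                data=df).fit()
print(model.summary().tables[1])           # coefficient, SE, t, p per predictor
```

Because both simulated predictors carry independent signal, each retains a significant coefficient when the other is in the model, mirroring the "significant and unique predictors" finding.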
{"title":"Social Anxiety, Negative Affect, and Hearing Difficulties in Adults.","authors":"Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle","doi":"10.1177/23312165251317925","DOIUrl":"10.1177/23312165251317925","url":null,"abstract":"<p><p>Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, have various ways of handling stressful situations, and have diverse communication needs. Therefore, understanding the relationship between auditory and mental health factors is crucial for the holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may have bidirectional effects. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function measures. Participants were 628 adults (408 women, 220 men), ages 19 to 87 years (mean = 43) living in the United States. Results indicated that individuals reporting higher social anxiety and higher negative affect also reported poorer communication function. Multiple linear regression analysis revealed that both negative affect and social anxiety were significant and unique predictors of subjective communication function. Social anxiety and negative affect both significantly, and uniquely, contribute to how much someone feels a hearing loss impacts their daily communication function. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication, in this rapidly growing clinical population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251317925"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11803679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}