Pub Date: 2024-11-07 | DOI: 10.1097/AUD.0000000000001601
Dina Lelic, Erin Picou, Valeriy Shafiro, Christian Lorenzi
The ability to monitor surrounding natural sounds and scenes is important for performing many activities in daily life and for overall well-being. Yet, unlike speech, perception of natural sounds and scenes is relatively understudied in relation to hearing loss, despite their documented restorative health effects. We present data from first-time hearing aid users describing "rediscovered" natural sounds they could now perceive with clarity. These data suggest that hearing loss not only diminishes recognition of natural sounds, but also limits people's awareness of the richness of their environment, thus limiting their connection to it. Little is presently known about the extent to which hearing aids can restore the perception of the abundance, clarity, or intensity of natural sounds. Our call to action outlines specific steps to improve the experience of natural sounds and scenes for people with hearing loss, an overlooked aspect of their quality of life.
Title: "Sounds of Nature and Hearing Loss: A Call to Action." (Ear and Hearing)
Pub Date: 2024-11-06 | DOI: 10.1097/AUD.0000000000001602
Dana Bsharat-Maalouf, Jens Schmidtke, Tamar Degani, Hanin Karawani
Objectives: The present study aimed to examine listening effort among multilinguals in their first (L1) and second (L2) languages in quiet and noisy listening conditions, and to investigate how the presence of a constraining sentence context influences that effort.
Design: A group of 46 young adult Arabic (L1)-Hebrew (L2) multilinguals participated in a listening task. This task aimed to assess participants' perceptual performance and the effort they exert (as measured through pupillometry) while listening to single words and sentences presented in their L1 and L2, in quiet and noisy environments (signal to noise ratio = 0 dB).
Results: Listening in quiet was easier than in noise, supported by both perceptual and pupillometry results. Perceptually, multilinguals performed similarly and reached ceiling levels in both languages in quiet. However, under noisy conditions, perceptual accuracy was significantly lower in L2, especially when processing sentences. Critically, pupil dilation was larger and more prolonged when listening to L2 than L1 stimuli. This difference was observed even in the quiet condition. Contextual support resulted in better perceptual performance of high-predictability sentences compared with low-predictability sentences, but only in L1 under noisy conditions. In L2, pupillometry showed increased effort when listening to high-predictability sentences compared with low-predictability sentences, but this increased effort did not lead to better understanding. In fact, in noise, speech perception was lower in high-predictability L2 sentences compared with low-predictability ones.
Conclusions: The findings underscore the importance of examining listening effort in multilingual speech processing and suggest that increased effort may be present in multilinguals' L2 within clinical and educational settings.
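The design above fixes the signal to noise ratio at 0 dB, i.e., speech and masker presented at equal RMS level. As a minimal sketch of how such a mixture is typically constructed (the random signals and function names here are illustrative stand-ins, not the study's stimuli or code):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a waveform."""
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so 20*log10(rms(speech)/rms(scaled noise)) equals snr_db, then mix."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise, gain

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # stand-in for a speech waveform
noise = rng.standard_normal(16000)   # stand-in for background noise

mixed, gain = mix_at_snr(speech, noise, snr_db=0.0)
# At 0 dB SNR the scaled noise has the same RMS as the speech.
print(round(rms(speech) / rms(gain * noise), 3))  # prints 1.0
```

The same function would generate any other SNR condition by changing `snr_db`; at positive SNRs the noise is attenuated relative to the speech, at negative SNRs amplified.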
Title: "Through the Pupils' Lens: Multilingual Effort in First and Second Language Listening." (Ear and Hearing)
Pub Date: 2024-11-06 | DOI: 10.1097/AUD.0000000000001605
Varsha Rallapalli, Richard Freyman, Pamela Souza
Objectives: Previous research has shown that speech recognition with different wide dynamic range compression (WDRC) time-constants (fast-acting, "Fast," or slow-acting, "Slow") is associated with individual working memory ability, especially in adverse listening conditions. Until recently, much of this research was limited to omnidirectional hearing aid settings and colocated speech and noise, whereas most hearing aids are fit with directional processing that may improve the listening environment in spatially separated conditions and interact with WDRC processing. The primary objective of this study was to determine whether there is an association between individual working memory ability and speech recognition in noise with different WDRC time-constants, with and without microphone directionality (binaural beamformer, "Beam," versus omnidirectional, "Omni"), in a spatial condition ideal for the beamformer (speech at 0°, noise at 180°). The hypothesis was that the relationship between speech recognition ability and WDRC time-constants would depend on working memory in the Omni mode, whereas the relationship would diminish in the Beam mode. The study also examined whether this relationship differs from the effects of working memory on speech recognition with WDRC time-constants previously studied in colocated conditions.
Design: Twenty-one listeners with bilateral mild to moderately severe sensorineural hearing loss repeated low-context sentences mixed with four-talker babble, presented across 0 to 10 dB signal to noise ratio (SNR) in colocated (0°) and spatially separated (180°) conditions. A wearable hearing aid customized to the listener's hearing level presented four signal processing combinations of microphone mode (Beam or Omni) and WDRC time-constants (Fast or Slow). Individual working memory ability was measured using the reading span test. A signal distortion metric quantified cumulative temporal envelope distortion from background noise and hearing aid processing for each listener. In a secondary analysis, the role of working memory in the relationship between cumulative signal distortion and speech recognition was examined in the spatially separated condition.
Results: Signal distortion was greater with Fast WDRC than with Slow WDRC, regardless of microphone mode or spatial condition. As expected, Beam reduced signal distortion and improved speech recognition over Omni, especially at poorer SNRs. Contrary to the hypothesis, speech recognition with different WDRC time-constants did not depend on working memory in either Beam or Omni (in the spatially separated condition). However, there was a significant interaction between working memory and cumulative signal distortion, such that speech recognition increased at a faster rate with lower distortion for individuals with better working memory. In Omni, the effect of w
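Fast and Slow WDRC differ in the attack and release time constants of the level estimator that drives the compressor's gain, which is why Fast processing distorts the temporal envelope more. A minimal sketch of that distinction, assuming illustrative time constants (5/50 ms for Fast, 50/1500 ms for Slow), threshold, and compression ratio; these are not the study hearing aid's parameters:

```python
import numpy as np

def envelope(x, fs, attack_ms, release_ms):
    """One-pole level tracker: rises with the attack constant, falls with release."""
    a = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    r = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        c = a if s > level else r
        level = c * level + (1.0 - c) * s
        env[i] = level
    return env

def wdrc_gain_db(env, threshold_db=-40.0, ratio=3.0):
    """Static compression rule: unity gain below threshold, ratio:1 above it."""
    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# 1 kHz tone whose amplitude alternates between loud and soft at 4 Hz:
# a crude stand-in for the temporal envelope of running speech
sig = np.sin(2 * np.pi * 1000 * t) * np.where(np.sin(2 * np.pi * 4 * t) > 0, 1.0, 0.05)

g_fast = wdrc_gain_db(envelope(sig, fs, attack_ms=5, release_ms=50))
g_slow = wdrc_gain_db(envelope(sig, fs, attack_ms=50, release_ms=1500))
# Fast WDRC follows the 4 Hz modulation, so its gain fluctuates more and
# reshapes the envelope; Slow smooths across the dips, leaving it closer to intact.
print(np.std(g_fast) > np.std(g_slow))
```

The larger gain variance under Fast processing is the mechanism the signal distortion metric above is designed to capture.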
Title: "Relationship Between Working Memory, Compression, and Beamformers in Ideal Conditions." (Ear and Hearing)
Pub Date: 2024-11-04 | DOI: 10.1097/AUD.0000000000001599
Jing Shen, Elizabeth Heller Murray
Objectives: Although breathy vocal quality and hearing loss are both prevalent age-related changes, their combined impact on speech communication is poorly understood. This study investigated whether breathy vocal quality affected speech perception and listening effort in older listeners, and how any effect was modulated by the adverse listening environment of background noise and by the listener's level of hearing loss.
Design: Nineteen older adults participated in the study. Their hearing ranged from near-normal to mild-to-moderate sensorineural hearing loss. Participants heard low-context sentences, with stimuli resynthesized to simulate original, mild-to-moderately breathy, and severely breathy voice conditions. Speech intelligibility was measured using a speech-recognition-in-noise paradigm, with pupillometry data collected simultaneously to measure listening effort.
Results: Simulated severely breathy vocal quality reduced intelligibility and increased listening effort. Breathiness and background noise level independently modulated listening effort. No effect of hearing loss was observed in this dataset, which may be due to the use of individualized signal to noise ratios and the small sample size.
Conclusion: Results from this study demonstrate the challenges of listening to speech with a breathy vocal quality. Theoretically, the findings highlight the importance of periodicity cues for speech perception in noise by older listeners: a breathy voice can be difficult to segregate from noise when the noise also lacks periodicity. Clinically, the results suggest the need to address both listener- and talker-related factors in speech communication by older adults.
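Listening effort in pupillometry paradigms like this one is commonly indexed as task-evoked pupil dilation relative to a pre-stimulus baseline. A minimal sketch of that computation on a synthetic trace (the sampling rate, baseline window, and trace shape are illustrative assumptions, not this study's processing pipeline):

```python
import numpy as np

def baseline_corrected(trace, fs, baseline_s=1.0):
    """Subtract the mean pupil size over the pre-stimulus baseline window."""
    baseline = trace[: int(baseline_s * fs)].mean()
    return trace - baseline

fs = 60                      # a typical eye-tracker sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)  # 5 s trial; stimulus onset assumed at t = 1 s
# Synthetic pupil diameter (mm): flat 4.0 mm baseline, then a task-evoked
# dilation peaking 0.3 mm above baseline mid-trial.
trace = 4.0 + 0.3 * np.exp(-((t - 2.5) ** 2) / 0.5) * (t > 1.0)

corrected = baseline_corrected(trace, fs, baseline_s=1.0)
print(round(corrected.max(), 3))  # peak dilation re: baseline; 0.3 here
```

Larger or more prolonged baseline-corrected dilation in a condition (e.g., a breathier voice or a poorer SNR) is then interpreted as greater listening effort.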
Title: "Breathy Vocal Quality, Background Noise, and Hearing Loss: How Do These Adverse Conditions Affect Speech Perception by Older Adults?" (Ear and Hearing)
Pub Date: 2024-11-01 | Epub Date: 2024-06-17 | DOI: 10.1097/AUD.0000000000001536
Francis Kuk, Christopher Slugocki, Petri Korhonen
Objectives: Recently, the Noise-Tolerance Domains Test (NTDT) was applied to study the noise-tolerance domains used by young normal-hearing (NH) listeners during noise acceptance decisions. In this study, we examined how subjective speech intelligibility may drive noise acceptance decisions by applying the NTDT to NH and hearing-impaired (HI) listeners at signal to noise ratios (SNRs) around the Tracking of Noise-Tolerance (TNT) thresholds.
Design: A single-blind, within-subjects design with 22 NH and 17 HI older adults was followed. Listeners completed the TNT to determine the average noise acceptance threshold (TNTAve). Then, listeners completed the NTDT at SNRs of 0 and ±3 dB (re: TNTAve) to estimate the weighted noise-tolerance domain ratings (WNTDRs) for each domain criterion. Listeners also completed the Objective and Subjective Intelligibility Difference (OSID) Test to establish individual intelligibility performance-intensity (P-I) functions for the TNT materials. All test measures were conducted at 75 and 82 dB SPL speech input levels. NH and HI listeners were tested unaided; the HI listeners were also tested using a study hearing aid. The WNTDRs were plotted against subjective speech intelligibility extrapolated from each individual's P-I function at the SNRs corresponding to the NTDT test conditions. Listeners were grouped according to their most heavily weighted domain, and a regression analysis against listener demographics and TNT and OSID performance determined which variables affected listener grouping.
Results: Three linear mixed-effects (LME) models examined whether WNTDRs changed with subjective speech intelligibility. All three LMEs found significant fixed effects of domain criteria, subjective intelligibility, and speech input level on WNTDRs. In general, heavier weights were assigned to the speech interference and loudness domains at poorer intelligibility levels (<50%), with reversals to distraction and annoyance at higher intelligibility levels (>80%). NH listeners assigned greater weights to loudness than HI-unaided listeners, whereas NH and HI-aided weights were similar. Compared with unaided testing, HI listeners in the aided mode assigned lower weights to speech interference and greater weights to loudness. In all comparisons, loudness was weighted more heavily at the 82 dB SPL input level than at 75 dB SPL, with greater weights to annoyance in the NH versus HI-unaided comparison and lower weights to distraction in the HI-aided versus HI-unaided comparison. A generalized linear model determined that listener grouping was best accounted for by subjective speech intelligibility estimated at TNTAve.
Conclusions: Regardless of hearing status (NH versus HI), the domain criteria listeners used were influenced by their subjective speech intelligibility. In general, speech interference and loudness were weighted most heavily when subjective speech intelligibility was poor; as subjective intelligibility improved, the weights on annoyance and distraction increased. In addition, a criterion of greater than 90% subjective speech intelligibility at TNTAve may help characterize listeners.
Title: "Subjective Speech Intelligibility Drives Noise-Tolerance Domain Use During the Tracking of Noise-Tolerance Test." (Ear and Hearing, pp. 1484-1495)
Pub Date: 2024-11-01 | Epub Date: 2024-05-31 | DOI: 10.1097/AUD.0000000000001532
Eric M Johnson, Eric W Healy
Objectives: This study aimed to determine the speech-to-background ratios (SBRs) at which normal-hearing (NH) and hearing-impaired (HI) listeners can recognize both speech and environmental sounds when the two types of signals are mixed. Also examined were the effect of individual sounds on speech recognition and environmental sound recognition (ESR), and the impact of divided versus selective attention on these tasks.
Design: In Experiment 1 (divided attention), 11 NH and 10 HI listeners heard sentences mixed with environmental sounds at various SBRs and performed speech recognition and ESR tasks concurrently in each trial. In Experiment 2 (selective attention), 20 NH listeners performed these tasks in separate trials. Psychometric functions were generated for each task, listener group, and environmental sound. The range over which speech recognition and ESR were both high was determined, as was the optimal SBR for balancing speech recognition with ESR, defined as the point of intersection between each pair of normalized psychometric functions.
Results: The NH listeners achieved greater than 95% accuracy on concurrent speech recognition and ESR over an SBR range of approximately 20 dB or greater. The optimal SBR for maximizing both speech recognition and ESR for NH listeners was approximately +12 dB. For the HI listeners, the range over which 95% performance was observed on both tasks was far smaller (a span of 1 dB), with an optimal value of +5 dB. Acoustic analyses indicated that the speech and environmental sound stimuli were similarly audible regardless of the hearing status of the listener, but that the speech fluctuated more than the environmental sounds. Divided versus selective attention conditions produced differences in performance that were statistically significant yet modest in magnitude. In all conditions and for both listener groups, recognition was higher for environmental sounds than for speech when the two were presented at equal intensities (i.e., 0 dB SBR), indicating that the environmental sounds were more effective maskers of speech than the converse. Each of the 25 environmental sounds used in this study (with one exception) had a span of SBRs over which speech recognition and ESR were both higher than 95%; these ranges tended to overlap substantially.
Conclusions: A range of SBRs exists over which speech and environmental sounds can be simultaneously recognized with high accuracy by NH and HI listeners, but this range is larger for NH listeners. The single optimal SBR for jointly maximizing speech recognition and ESR also differs between NH and HI listeners. The greater masking effectiveness of the environmental sounds relative to the speech may be related to the lower degree of fluctuation in the environmental sounds, as well as possibly to task differences between speech recognition and ESR (open versus closed set). The observed differences bet
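The "optimal SBR" defined above, the intersection of a pair of normalized psychometric functions, can be sketched numerically. The logistic form and the parameter values below are hypothetical, chosen only to mimic a speech function that rises with SBR and an ESR function that falls; they are not the fitted values from the study:

```python
import numpy as np

def logistic(x, mid, slope):
    """Two-parameter logistic psychometric function (proportion correct)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - mid)))

sbr = np.linspace(-20.0, 30.0, 1001)        # candidate SBRs in dB
speech = logistic(sbr, mid=0.0, slope=0.4)  # speech recognition rises with SBR
esr = logistic(sbr, mid=24.0, slope=-0.4)   # ESR falls as speech masks the scene

# Normalize each function to its own maximum, then locate the crossing point:
# the SBR that best balances the two tasks.
speech_n = speech / speech.max()
esr_n = esr / esr.max()
optimal_sbr = sbr[np.argmin(np.abs(speech_n - esr_n))]
print(optimal_sbr)  # lands midway between the two midpoints (about 12 dB here)
```

With real data, each psychometric function would be fit to a listener group's measured proportion-correct scores before the intersection is computed; the grid search over `sbr` then reads off the crossing directly.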
{"title":"The Optimal Speech-to-Background Ratio for Balancing Speech Recognition With Environmental Sound Recognition.","authors":"Eric M Johnson, Eric W Healy","doi":"10.1097/AUD.0000000000001532","DOIUrl":"10.1097/AUD.0000000000001532","url":null,"abstract":"<p><strong>Objectives: </strong>This study aimed to determine the speech-to-background ratios (SBRs) at which normal-hearing (NH) and hearing-impaired (HI) listeners can recognize both speech and environmental sounds when the two types of signals are mixed. Also examined were the effect of individual sounds on speech recognition and environmental sound recognition (ESR), and the impact of divided versus selective attention on these tasks.</p><p><strong>Design: </strong>In Experiment 1 (divided attention), 11 NH and 10 HI listeners heard sentences mixed with environmental sounds at various SBRs and performed speech recognition and ESR tasks concurrently in each trial. In Experiment 2 (selective attention), 20 NH listeners performed these tasks in separate trials. Psychometric functions were generated for each task, listener group, and environmental sound. The range over which speech recognition and ESR were both high was determined, as was the optimal SBR for balancing recognition with ESR, defined as the point of intersection between each pair of normalized psychometric functions.</p><p><strong>Results: </strong>The NH listeners achieved greater than 95% accuracy on concurrent speech recognition and ESR over an SBR range of approximately 20 dB or greater. The optimal SBR for maximizing both speech recognition and ESR for NH listeners was approximately +12 dB. For the HI listeners, the range over which 95% performance was observed on both tasks was far smaller (span of 1 dB), with an optimal value of +5 dB. 
Acoustic analyses indicated that the speech and environmental sound stimuli were similarly audible, regardless of the hearing status of the listener, but that the speech fluctuated more than the environmental sounds. Divided versus selective attention conditions produced differences in performance that were statistically significant yet only modest in magnitude. In all conditions and for both listener groups, recognition was higher for environmental sounds than for speech when presented at equal intensities (i.e., 0 dB SBR), indicating that the environmental sounds were more effective maskers of speech than the converse. Each of the 25 environmental sounds used in this study (with one exception) had a span of SBRs over which speech recognition and ESR were both higher than 95%. These ranges tended to overlap substantially.</p><p><strong>Conclusions: </strong>A range of SBRs exists over which speech and environmental sounds can be simultaneously recognized with high accuracy by NH and HI listeners, but this range is larger for NH listeners. The single optimal SBR for jointly maximizing speech recognition and ESR also differs between NH and HI listeners. The greater masking effectiveness of the environmental sounds relative to the speech may be related to the lower degree of fluctuation present in the environmental sounds as well as possibly task differences between speech recognition and ESR (open versus closed set). 
The observed differences bet","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1444-1460"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141180405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
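The optimal SBR above is defined as the point of intersection between the two normalized psychometric functions (speech recognition rising with SBR, ESR falling). A minimal sketch of that computation in Python, using hypothetical logistic parameters rather than the study's fitted functions (both functions here already span 0..1, so the normalization step is implicit):

```python
import math

def logistic(x, midpoint, slope):
    """Generic logistic psychometric function, ranging 0..1."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def speech_pf(sbr):
    # Speech recognition improves as SBR increases (hypothetical parameters).
    return logistic(sbr, midpoint=0.0, slope=0.5)

def esr_pf(sbr):
    # Environmental sound recognition degrades as SBR increases.
    return 1.0 - logistic(sbr, midpoint=24.0, slope=0.5)

def optimal_sbr(lo=-40.0, hi=60.0, tol=1e-6):
    """Bisect on the difference of the two functions to find their crossing."""
    f = lambda x: speech_pf(x) - esr_pf(x)
    assert f(lo) < 0 < f(hi)  # crossing must be bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The made-up parameters were chosen so the crossing lands at +12 dB, near the NH optimum reported above; actual use would substitute each listener group's fitted psychometric functions.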
Pub Date : 2024-11-01Epub Date: 2024-06-25DOI: 10.1097/AUD.0000000000001526
Farid Alzhrani, Isra Aljazeeri, Yassin Abdelsamad, Abdulrahman Alsanosi, Ana H Kim, Angel Ramos-Macias, Angel Ramos-de-Miguel, Anja Kurz, Artur Lorens, Bruce Gantz, Craig A Buchman, Dayse Távora-Vieira, Georg Sprinzl, Griet Mertens, James E Saunders, Julie Kosaner, Laila M Telmesani, Luis Lassaletta, Manohar Bance, Medhat Yousef, Meredith A Holcomb, Oliver Adunka, Per Cayé-Thomasen, Piotr H Skarzynski, Ranjith Rajeswaran, Robert J Briggs, Seung-Ha Oh, Stefan Plontke, Stephen J O'Leary, Sumit Agrawal, Tatsuya Yamasoba, Thomas Lenarz, Thomas Wesarg, Walter Kutz, Patrick Connolly, Ilona Anderson, Abdulrahman Hagr
Objectives: A wide variety of intraoperative tests are available in cochlear implantation. However, no consensus exists on which tests constitute the minimum necessary battery. We assembled an international panel of clinical experts to develop, refine, and vote upon a set of core consensus statements.
Design: A literature review was used to identify intraoperative tests currently used in the field and draft a set of provisional statements. For statement evaluation and refinement, we used a modified Delphi consensus panel structure. Multiple interactive rounds of voting, evaluation, and feedback were conducted to achieve convergence.
Results: Twenty-nine provisional statements were included in the original draft. In the first voting round, consensus was reached on 15 statements. Of the 14 statements that did not reach consensus, 12 were revised based on feedback from the expert practitioners, and 2 were eliminated. In the second voting round, 10 of the 12 revised statements reached consensus. The two statements that did not achieve consensus were revised again and subjected to a third voting round, but both failed to reach consensus. In addition, during the final revision, one more statement was deleted because it overlapped with another revised statement.
Conclusions: A final core set of 24 consensus statements was generated, covering a wide range of intraoperative tests performed during CI surgery. These statements may serve as evidence-based guidelines to improve quality and achieve uniformity of surgical practice.
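The statement bookkeeping in the Results and Conclusions is internally consistent (29 provisional statements down to 24 final ones); a quick check of the arithmetic, with all numbers taken from the text above:

```python
provisional = 29
round1_consensus = 15
not_consensus = provisional - round1_consensus   # 14 statements remained
revised, eliminated = 12, 2
assert not_consensus == revised + eliminated     # 14 = 12 revised + 2 dropped
round2_consensus = 10
# The 2 remaining statements failed in the third round; 1 consensus
# statement was later deleted for overlapping another revised statement.
final = round1_consensus + round2_consensus - 1
assert final == 24                               # matches the Conclusions
```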
{"title":"International Consensus Statements on Intraoperative Testing for Cochlear Implantation Surgery.","authors":"Farid Alzhrani, Isra Aljazeeri, Yassin Abdelsamad, Abdulrahman Alsanosi, Ana H Kim, Angel Ramos-Macias, Angel Ramos-de-Miguel, Anja Kurz, Artur Lorens, Bruce Gantz, Craig A Buchman, Dayse Távora-Vieira, Georg Sprinzl, Griet Mertens, James E Saunders, Julie Kosaner, Laila M Telmesani, Luis Lassaletta, Manohar Bance, Medhat Yousef, Meredith A Holcomb, Oliver Adunka, Per Cayé-Thomasen, Piotr H Skarzynski, Ranjith Rajeswaran, Robert J Briggs, Seung-Ha Oh, Stefan Plontke, Stephen J O'Leary, Sumit Agrawal, Tatsuya Yamasoba, Thomas Lenarz, Thomas Wesarg, Walter Kutz, Patrick Connolly, Ilona Anderson, Abdulrahman Hagr","doi":"10.1097/AUD.0000000000001526","DOIUrl":"10.1097/AUD.0000000000001526","url":null,"abstract":"<p><strong>Objectives: </strong>A wide variety of intraoperative tests are available in cochlear implantation. However, no consensus exists on which tests constitute the minimum necessary battery. We assembled an international panel of clinical experts to develop, refine, and vote upon a set of core consensus statements.</p><p><strong>Design: </strong>A literature review was used to identify intraoperative tests currently used in the field and draft a set of provisional statements. For statement evaluation and refinement, we used a modified Delphi consensus panel structure. Multiple interactive rounds of voting, evaluation, and feedback were conducted to achieve convergence.</p><p><strong>Results: </strong>Twenty-nine provisional statements were included in the original draft. In the first voting round, consensus was reached on 15 statements. Of the 14 statements that did not reach consensus, 12 were revised based on feedback provided by the expert practitioners, and 2 were eliminated. In the second voting round, 10 of the 12 revised statements reached a consensus. 
The two statements that did not achieve consensus were revised again and subjected to a third voting round, but both failed to reach consensus. In addition, during the final revision, one more statement was deleted because it overlapped with another revised statement.</p><p><strong>Conclusions: </strong>A final core set of 24 consensus statements was generated, covering a wide range of intraoperative tests performed during CI surgery. These statements may serve as evidence-based guidelines to improve quality and achieve uniformity of surgical practice.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1418-1426"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11487033/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141447623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-01Epub Date: 2024-07-11DOI: 10.1097/AUD.0000000000001544
Mirthe L A Fehrmann, Cris P Lanting, Lonneke Haer-Wigman, Helger G Yntema, Emmanuel A M Mylanus, Wendy J Huinck, Ronald J E Pennings
<p><strong>Objectives: </strong>Usher syndrome (USH), characterized by bilateral sensorineural hearing loss (SNHL) and retinitis pigmentosa (RP), prompts increased reliance on hearing due to progressive visual deterioration. It can be categorized into three subtypes: USH type 1 (USH1), characterized by severe to profound congenital SNHL, childhood-onset RP, and vestibular areflexia; USH type 2 (USH2), presenting with moderate to severe progressive SNHL and RP onset in the second decade, with or without vestibular dysfunction; and USH type 3 (USH3), featuring variable progressive SNHL beginning in childhood, variable RP onset, and diverse vestibular function. Previous studies evaluating cochlear implant (CI) outcomes in individuals with USH used varying or short follow-up durations, while others did not evaluate outcomes for each subtype separately. This study evaluates CI performance in subjects with USH at both short- and long-term follow-up, considering each subtype separately.</p><p><strong>Design: </strong>This retrospective, observational cohort study identified 36 CI recipients (53 ears) who were categorized into four different groups: early-implanted USH1 (first CI at ≤7 years of age), late-implanted USH1 (first CI at ≥8 years of age), USH2 and USH3. Phoneme scores at 65 dB SPL with CI were evaluated at 1 year, ≥2 years (mid-term), and ≥5 years postimplantation (long-term). Each subtype was analyzed separately due to the significant variability in phenotype observed among the three subtypes.</p><p><strong>Results: </strong>Early-implanted USH1 subjects (N = 23 ears) achieved excellent long-term phoneme scores (100% [interquartile range {IQR} = 95 to 100]), with younger age at implantation significantly correlating with better CI outcomes. Simultaneously implanted subjects had significantly better outcomes than sequentially implanted subjects (p = 0.028). 
Late-implanted USH1 subjects (N = 3 ears) used CI solely for sound detection and showed a mean phoneme discrimination score of 12% (IQR = 0 to 12), while still expressing satisfaction with ambient sound detection. In the USH2 group (N = 23 ears), a long-term mean phoneme score of 85% (IQR = 81 to 95) was found. Better outcomes were associated with younger age at implantation and higher preimplantation speech perception scores. USH3 subjects (N = 7 ears) achieved a mean postimplantation phoneme score of 71% (IQR = 45 to 91).</p><p><strong>Conclusions: </strong>This study is currently one of the largest and most comprehensive studies evaluating CI outcomes in individuals with USH, demonstrating that overall, individuals with USH benefit from CI at both short- and long-term follow-up. Due to the considerable variability in phenotype observed among the three subtypes, each subtype was analyzed separately, resulting in smaller sample sizes. For USH1 subjects, optimal CI outcomes are expected with early simultaneous bilateral implantation. Late implantation in USH1 provides signaling function, but the speech recognition achieved is insufficient for spoken communication. For USH2 and USH3, good CI outcomes can be expected, particularly if patients demonstrate adequate speech recognition with hearing aids and receive sufficient auditory stimulation before implantation. Given the progressive nature of the hearing loss in USH2 and its co-occurrence with severe visual impairment, early cochlear implantation is recommended. Compared with USH2, predicting outcomes in USH3 remains challenging because of its variability. Counseling for USH2 and USH3 should emphasize the benefits of early implantation and encourage hearing aid use.</p>
{"title":"Long-Term Outcomes of Cochlear Implantation in Usher Syndrome.","authors":"Mirthe L A Fehrmann, Cris P Lanting, Lonneke Haer-Wigman, Helger G Yntema, Emmanuel A M Mylanus, Wendy J Huinck, Ronald J E Pennings","doi":"10.1097/AUD.0000000000001544","DOIUrl":"10.1097/AUD.0000000000001544","url":null,"abstract":"<p><strong>Objectives: </strong>Usher syndrome (USH), characterized by bilateral sensorineural hearing loss (SNHL) and retinitis pigmentosa (RP), prompts increased reliance on hearing due to progressive visual deterioration. It can be categorized into three subtypes: USH type 1 (USH1), characterized by severe to profound congenital SNHL, childhood-onset RP, and vestibular areflexia; USH type 2 (USH2), presenting with moderate to severe progressive SNHL and RP onset in the second decade, with or without vestibular dysfunction; and USH type 3 (USH3), featuring variable progressive SNHL beginning in childhood, variable RP onset, and diverse vestibular function. Previous studies evaluating cochlear implant (CI) outcomes in individuals with USH used varying or short follow-up durations, while others did not evaluate outcomes for each subtype separately. This study evaluates long-term CI performance in subjects with USH, at both short-term and long-term, considering each subtype separately.</p><p><strong>Design: </strong>This retrospective, observational cohort study identified 36 CI recipients (53 ears) who were categorized into four different groups: early-implanted USH1 (first CI at ≤7 years of age), late-implanted USH1 (first CI at ≥8 years of age), USH2 and USH3. Phoneme scores at 65 dB SPL with CI were evaluated at 1 year, ≥2 years (mid-term), and ≥5 years postimplantation (long-term). 
Each subtype was analyzed separately due to the significant variability in phenotype observed among the three subtypes.</p><p><strong>Results: </strong>Early-implanted USH1-subjects (N = 23 ears) achieved excellent long-term phoneme scores (100% [interquartile ranges {IQR} = 95 to 100]), with younger age at implantation significantly correlating with better CI outcomes. Simultaneously implanted subjects had significantly better outcomes than sequentially implanted subjects ( p = 0.028). Late-implanted USH1 subjects (N = 3 ears) used CI solely for sound detection and showed a mean phoneme discrimination score of 12% (IQR = 0 to 12), while still expressing satisfaction with ambient sound detection. In the USH2 group (N = 23 ears), a long-term mean phoneme score of 85% (IQR = 81 to 95) was found. Better outcomes were associated with younger age at implantation and higher preimplantation speech perception scores. USH3-subjects (N = 7 ears) achieved a mean postimplantation phoneme score of 71% (IQR = 45 to 91).</p><p><strong>Conclusions: </strong>This study is currently one of the largest and most comprehensive studies evaluating CI outcomes in individuals with USH, demonstrating that overall, individuals with USH benefit from CI at both short- and long-term follow-up. Due to the considerable variability in phenotype observed among the three subtypes, each subtype was analyzed separately, resulting in smaller sample sizes. For USH1 subjects, optimal CI outcomes are expected with early simultaneous bilateral implantation. 
Late implantation in USH1 provides signaling func","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1542-1553"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11487040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-01Epub Date: 2024-07-12DOI: 10.1097/AUD.0000000000001545
Lina A J Reiss, Melissa B Lawrence, Irina A Omelchenko, Wenxuan He, Jonathon R Kirk
<p><strong>Objectives: </strong>Electro-acoustic stimulation (EAS) combines electric stimulation via a cochlear implant (CI) with residual low-frequency acoustic hearing, with benefits for music appreciation and speech perception in noise. However, many EAS CI users lose residual acoustic hearing, reducing this benefit. The main objectives of this study were to determine whether chronic EAS leads to more hearing loss compared with CI surgery alone in an aged guinea pig model, and to assess the relationship of any hearing loss to histology measures. Conversely, it is also important to understand factors impacting efficacy of electric stimulation. If one contributor to CI-induced hearing loss is damage to the auditory nerve, both acoustic and electric thresholds will be affected. Excitotoxicity from EAS may also affect electric thresholds, while electric stimulation is osteogenic and may increase electrode impedances. Hence, secondary objectives were to assess how electric thresholds are related to the amount of residual hearing loss after CI surgery, and how EAS affects electric thresholds and impedances over time.</p><p><strong>Design: </strong>Two groups of guinea pigs, aged 9 to 21 months, were implanted with a CI in the left ear. Preoperatively, the animals had a range of hearing losses, as expected for an aged cohort. At 4 weeks after surgery, the EAS group (n = 5) received chronic EAS for 8 hours a day, 5 days a week, for 20 weeks via a tether system that allowed for free movement during stimulation. The nonstimulated group (NS; n = 6) received no EAS over the same timeframe. Auditory brainstem responses (ABRs) and electrically evoked ABRs (EABRs) were recorded at 3 to 4 week intervals to assess changes in acoustic and electric thresholds over time. 
At 24 weeks after surgery, cochlear tissue was harvested for histological evaluation, analyzing only animals without electrode extrusions (n = 4 per ear).</p><p><strong>Results: </strong>Cochlear implantation led to an immediate worsening of ABR thresholds, peaking between 3 and 5 weeks after surgery and then recovering and stabilizing by 5 to 8 weeks. Significantly greater ABR threshold shifts were seen in the implanted ears compared with contralateral, non-implanted control ears after surgery. After EAS and termination, no significant additional ABR threshold shifts were seen in the EAS group compared with the NS group. A surprising finding was that NS animals had significantly greater recovery in EABR thresholds over time, with decreases (improvements) of -51.8 ± 33.0 and -39.0 ± 37.3 c.u. at 12 and 24 weeks, respectively, compared with EAS animals, whose EABR thresholds increased (worsened) by +1.0 ± 25.6 and 12.8 ± 44.3 c.u. at 12 and 24 weeks. Impedance changes over time did not differ significantly between groups. After exclusion of cases with electrode extrusion or significant trauma, no significant correlations were seen between ABR and EABR thresholds, or between ABR thresholds and histological measures of inner/outer hair cell counts, synaptic ribbon counts, stria vascularis capillary diameters, or spiral ganglion cell density.</p><p><strong>Conclusions: </strong>Although the sample size was small, the findings do not suggest that EAS substantially disrupts residual hearing. With surgical trauma minimized, there was no evidence that hair cell, synaptic ribbon, spiral ganglion cell, or stria vascularis measures were associated with hearing loss after cochlear implantation. In cases with significant trauma, both acoustic and electric thresholds were elevated, which may explain why CI-only outcomes are often better when trauma and hearing loss are minimized. Surprisingly, chronic EAS (or electric stimulation alone) may negatively affect electric thresholds, possibly by impeding recovery of the auditory nerve after CI surgery. More research is needed to confirm the potential negative effect of chronic EAS on electric threshold recovery.</p>
{"title":"Chronic Electro-Acoustic Stimulation May Interfere With Electric Threshold Recovery After Cochlear Implantation in the Aged Guinea Pig.","authors":"Lina A J Reiss, Melissa B Lawrence, Irina A Omelchenko, Wenxuan He, Jonathon R Kirk","doi":"10.1097/AUD.0000000000001545","DOIUrl":"10.1097/AUD.0000000000001545","url":null,"abstract":"<p><strong>Objectives: </strong>Electro-acoustic stimulation (EAS) combines electric stimulation via a cochlear implant (CI) with residual low-frequency acoustic hearing, with benefits for music appreciation and speech perception in noise. However, many EAS CI users lose residual acoustic hearing, reducing this benefit. The main objectives of this study were to determine whether chronic EAS leads to more hearing loss compared with CI surgery alone in an aged guinea pig model, and to assess the relationship of any hearing loss to histology measures. Conversely, it is also important to understand factors impacting efficacy of electric stimulation. If one contributor to CI-induced hearing loss is damage to the auditory nerve, both acoustic and electric thresholds will be affected. Excitotoxicity from EAS may also affect electric thresholds, while electric stimulation is osteogenic and may increase electrode impedances. Hence, secondary objectives were to assess how electric thresholds are related to the amount of residual hearing loss after CI surgery, and how EAS affects electric thresholds and impedances over time.</p><p><strong>Design: </strong>Two groups of guinea pigs, aged 9 to 21 months, were implanted with a CI in the left ear. Preoperatively, the animals had a range of hearing losses, as expected for an aged cohort. At 4 weeks after surgery, the EAS group (n = 5) received chronic EAS for 8 hours a day, 5 days a week, for 20 weeks via a tether system that allowed for free movement during stimulation. The nonstimulated group (NS; n = 6) received no EAS over the same timeframe. 
Auditory brainstem responses (ABRs) and electrically evoked ABRs (EABRs) were recorded at 3 to 4 week intervals to assess changes in acoustic and electric thresholds over time. At 24 weeks after surgery, cochlear tissue was harvested for histological evaluation, only analyzing animals without electrode extrusions (n = 4 per ear).</p><p><strong>Results: </strong>Cochlear implantation led to an immediate worsening of ABR thresholds peaking between 3 and 5 weeks after surgery and then recovering and stabilizing by 5 and 8 weeks. Significantly greater ABR threshold shifts were seen in the implanted ears compared with contralateral, non-implanted control ears after surgery. After EAS and termination, no significant additional ABR threshold shifts were seen in the EAS group compared with the NS group. A surprising finding was that NS animals had significantly greater recovery in EABR thresholds over time, with decreases (improvements) of -51.8 ± 33.0 and -39.0 ± 37.3 c.u. at 12 and 24 weeks, respectively, compared with EAS animals with EABR threshold increases (worsening) of +1.0 ± 25.6 and 12.8 ± 44.3 c.u. at 12 and 24 weeks. Impedance changes over time did not differ significantly between groups. 
After exclusion of cases with electrode extrusion or significant trauma, no significant correlations were seen between ABR and EABR thresholds, or between ABR thresholds with histo","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1554-1567"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-01Epub Date: 2024-07-01DOI: 10.1097/AUD.0000000000001527
Andrew E Amini, James G Naples, Luis Cortina, Tiffany Hwa, Mary Morcos, Irina Castellanos, Aaron C Moberly
<p><strong>Objectives: </strong>Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While there are multiple factors that appear to affect these associations, the impact of speech recognition background testing conditions (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were to (1) identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and to (2) investigate the impact of speech testing in quiet versus noise on these associations. Ultimately, we want to understand the conditions that impact this complex relationship between CI outcomes and cognition.</p><p><strong>Design: </strong>A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The current review evaluates 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs. Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments among 12 studies to evaluate relations with speech recognition outcomes. Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise impacted its association with cognitive performance.</p><p><strong>Results: </strong>Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills resulted in the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively). 
Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed a moderate positive correlation between tests of Global Cognition ( r = +0.37, p < 0.01) as well as Verbal Fluency ( r = +0.44, p < 0.01) and postoperative speech recognition skills. Tests of Memory and Learning are most frequently utilized in the setting of CI (in 26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in a background of quiet ( r = +0.30, p = 0.18), and noise ( r = -0.06, p = 0.78).</p><p><strong>Conclusions: </strong>Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition. The magnitude of this effect of testing conditions on this relationship appears to vary depending on the cognitive construct being assessed. Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining sp
{"title":"A Scoping Review and Meta-Analysis of the Relations Between Cognition and Cochlear Implant Outcomes and the Effect of Quiet Versus Noise Testing Conditions.","authors":"Andrew E Amini, James G Naples, Luis Cortina, Tiffany Hwa, Mary Morcos, Irina Castellanos, Aaron C Moberly","doi":"10.1097/AUD.0000000000001527","DOIUrl":"10.1097/AUD.0000000000001527","url":null,"abstract":"<p><strong>Objectives: </strong>Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While there are multiple factors that appear to affect these associations, the impact of speech recognition background testing conditions (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were to (1) identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and to (2) investigate the impact of speech testing in quiet versus noise on these associations. Ultimately, we want to understand the conditions that impact this complex relationship between CI outcomes and cognition.</p><p><strong>Design: </strong>A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The current review evaluates 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs. Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments among 12 studies to evaluate relations with speech recognition outcomes. 
Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise impacted its association with cognitive performance.</p><p><strong>Results: </strong>Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills resulted in the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively). Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed a moderate positive correlation between tests of Global Cognition ( r = +0.37, p < 0.01) as well as Verbal Fluency ( r = +0.44, p < 0.01) and postoperative speech recognition skills. Tests of Memory and Learning are most frequently utilized in the setting of CI (in 26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in a background of quiet ( r = +0.30, p = 0.18), and noise ( r = -0.06, p = 0.78).</p><p><strong>Conclusions: </strong>Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition. The magnitude of this effect of testing conditions on this relationship appears to vary depending on the cognitive construct being assessed. 
Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining sp","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1339-1352"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
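Meta-analytic pooling of correlation coefficients, as reported above (e.g., r = +0.37 for Global Cognition), is conventionally performed on Fisher z-transformed values. The abstract does not state the review's exact weighting model; below is a generic fixed-effect sketch, with invented per-study (r, n) values rather than the review's data:

```python
import math

def pool_correlations(results):
    """Fixed-effect pooling of Pearson r values via Fisher's z transform.

    results: list of (r, n) pairs, one per study.
    Each z is weighted by n - 3, the inverse variance of z.
    """
    num = den = 0.0
    for r, n in results:
        z = math.atanh(r)   # Fisher z transform of the correlation
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    z_bar = num / den       # weighted mean in z space
    return math.tanh(z_bar) # back-transform to a pooled r

# Hypothetical per-study correlations and sample sizes (illustrative only):
pooled = pool_correlations([(0.30, 40), (0.45, 25), (0.35, 60)])
```

The back-transformed pooled value always falls within the range of the input correlations; random-effects variants add a between-study variance term to each weight.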