Pub Date: 2024-08-28. DOI: 10.1016/j.heares.2024.109107
Ana B. Lao-Rodríguez , David Pérez-González , Manuel S. Malmierca
Summary
The detection of novel, low-probability events in the environment is critical for survival. To perform this vital task, our brain is continuously building and updating a model of the outside world; an extensively studied phenomenon commonly referred to as predictive coding. Predictive coding posits that the brain is continuously extracting regularities from the environment to generate predictions. These predictions are then used to suppress neuronal responses to redundant information, filtering out those inputs and thereby automatically enhancing the remaining, unexpected inputs.
We have recently described the ability of auditory neurons to generate predictions about expected sensory inputs by detecting their absence in an oddball paradigm using omitted tones as deviants. Here, we studied the responses of individual neurons in the inferior colliculus and auditory cortex of anesthetized rats to omitted tones, presenting sequences of repetitive pure tones with both random and periodic omissions at both fast and slow rates. Our goal was to determine whether these predictions depend on specific stimulus features. Results showed that omitted tones could be detected at both fast (8 Hz) and slow (2 Hz) repetition rates, with detection being more robust in the non-lemniscal auditory pathway.
{"title":"Physiological properties of auditory neurons responding to omission deviants in the anesthetized rat","authors":"Ana B. Lao-Rodríguez , David Pérez-González , Manuel S. Malmierca","doi":"10.1016/j.heares.2024.109107","DOIUrl":"10.1016/j.heares.2024.109107","url":null,"abstract":"<div><h3>Summary</h3><p>The detection of novel, low probability events in the environment is critical for survival. To perform this vital task, our brain is continuously building and updating a model of the outside world; an extensively studied phenomenon commonly referred to as predictive coding. Predictive coding posits that the brain is continuously extracting regularities from the environment to generate predictions. These predictions are then used to supress neuronal responses to redundant information, filtering those inputs, which then automatically enhances the remaining, unexpected inputs.</p><p>We have recently described the ability of auditory neurons to generate predictions about expected sensory inputs by detecting their absence in an oddball paradigm using omitted tones as deviants. Here, we studied the responses of individual neurons to omitted tones by presenting individual sequences of repetitive pure tones, using both random and periodic omissions, presented at both fast and slow rates in the inferior colliculus and auditory cortex neurons of anesthetized rats. Our goal was to determine whether feature-specific dependence of these predictions exists. Results showed that omitted tones could be detected at both high (8 Hz) and slow repetition rates (2 Hz), with detection being more robust at the non-lemniscal auditory pathway.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"452 ","pages":"Article 109107"},"PeriodicalIF":2.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0378595524001606/pdfft?md5=ae1f9fc25be6dc7cf9c64cbfa789fda4&pid=1-s2.0-S0378595524001606-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142145562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-21. DOI: 10.1016/j.heares.2024.109106
Avril Genene Holt , Ronald D. Griffith Jr. , Soo D. Lee , Mikiya Asako , Eric Buras , Selin Yalcinoglu , Richard A. Altschuler
Several studies suggest that hearing loss results in changes in the balance between inhibition and excitation in the inferior colliculus (IC). The IC is an integral nucleus within the auditory brainstem. The majority of ascending pathways from the lateral lemniscus (LL), superior olivary complex (SOC), and cochlear nucleus (CN) synapse in the IC before projecting to the thalamus and cortex. Many of these ascending projections provide inhibitory innervation to neurons within the IC. However, the nature and the distribution of this inhibitory input have only been partially elucidated in the rat. The inhibitory neurotransmitter gamma-aminobutyric acid (GABA) from the ventral nucleus of the lateral lemniscus (VNLL) provides the primary inhibitory input to the IC of the rat, with GABA from other lemniscal and SOC nuclei providing lesser, but still prominent, innervation.
There is evidence that hearing-related conditions can result in dysfunction of IC neurons. These changes may be mediated in part by changes in GABA inputs to IC neurons. We have previously used gene microarrays in a study of deafness-related changes in gene expression in the IC and found significant changes in GAD as well as in GABA transporters and GABA receptors (Holt, 2005). This is consistent with reports of age- and trauma-related changes in GABA (Bledsoe et al., 1995; Mossop et al., 2000; Salvi et al., 2000). Ototoxic lesions of the cochlea produced a permanent threshold shift. The number, intensity, and density of GABA-positive axon terminals in the IC were compared in normal-hearing and deafened rats. While the number of GABA-immunolabeled puncta was only minimally different between groups, the intensity of labeling was significantly reduced. The ultrastructural localization and distribution of labeling were also examined. In deafened animals, the number of immunogold particles was reduced by 78% in axodendritic and 82% in axosomatic GABAergic puncta. The affected puncta were primarily associated with small IC neurons. These results suggest that reduced inhibition to IC neurons contributes to the increased neuronal excitability observed in the IC following noise- or drug-induced hearing loss. Whether these deafness-diminished inhibitory inputs originate from sources intrinsic or extrinsic to the central nucleus of the IC (CNIC) awaits further study.
{"title":"Ototoxicity-related changes in GABA immunolabeling within the rat inferior colliculus","authors":"Avril Genene Holt , Ronald D. Griffith Jr. , Soo D. Lee , Mikiya Asako , Eric Buras , Selin Yalcinoglu , Richard A. Altschuler","doi":"10.1016/j.heares.2024.109106","DOIUrl":"10.1016/j.heares.2024.109106","url":null,"abstract":"<div><p>Several studies suggest that hearing loss results in changes in the balance between inhibition and excitation in the inferior colliculus (IC). The IC is an integral nucleus within the auditory brainstem. The majority of ascending pathways from the lateral lemniscus (LL), superior olivary complex (SOC), and cochlear nucleus (CN) synapse in the IC before projecting to the thalamus and cortex. Many of these ascending projections provide inhibitory innervation to neurons within the IC. However, the nature and the distribution of this inhibitory input have only been partially elucidated in the rat. The inhibitory neurotransmitter, gamma aminobutyric acid (GABA), from the ventral nucleus of the lateral lemniscus (VNLL), provides the primary inhibitory input to the IC of the rat with GABA from other lemniscal and SOC nuclei providing lesser, but prominent innervation.</p><p>There is evidence that hearing related conditions can result in dysfunction of IC neurons. These changes may be mediated in part by changes in GABA inputs to IC neurons. We have previously used gene micro-arrays in a study of deafness-related changes in gene expression in the IC and found significant changes in GAD as well as the GABA transporters and GABA receptors (Holt 2005). This is consistent with reports of age and trauma related changes in GABA (Bledsoe et al., 1995; Mossop et al., 2000; Salvi et al., 2000). Ototoxic lesions of the cochlea produced a permanent threshold shift. The number, intensity, and density of GABA positive axon terminals in the IC were compared in normal hearing and deafened rats. While the number of GABA immunolabeled puncta was only minimally different between groups, the intensity of labeling was significantly reduced. The ultrastructural localization and distribution of labeling was also examined. In deafened animals, the number of immuno gold particles was reduced by 78 % in axodendritic and 82 % in axosomatic GABAergic puncta. The affected puncta were primarily associated with small IC neurons. These results suggest that reduced inhibition to IC neurons contribute to the increased neuronal excitability observed in the IC following noise or drug induced hearing loss. Whether these deafness diminished inhibitory inputs originate from intrinsic or extrinsic CNIC sources awaits further study.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"452 ","pages":"Article 109106"},"PeriodicalIF":2.5,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last decade, multiple studies have shown that hearing-impaired listeners’ speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple, clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise “waves” (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the “normalized Contrast Level” (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same setup, adding only a few minutes to the process.
{"title":"The Audible Contrast Threshold (ACT) test: A clinical spectro-temporal modulation detection test","authors":"Johannes Zaar , Lisbeth Birkelund Simonsen , Raul Sanchez-Lopez , Søren Laugesen","doi":"10.1016/j.heares.2024.109103","DOIUrl":"10.1016/j.heares.2024.109103","url":null,"abstract":"<div><p>Over the last decade, multiple studies have shown that hearing-impaired listeners’ speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise “waves” (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the “normalized Contrast Level” (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same set up, adding only a few minutes to the process.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"453 ","pages":"Article 109103"},"PeriodicalIF":2.5,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0378595524001564/pdfft?md5=ca5cd0522acec385daee6132f78d3bc9&pid=1-s2.0-S0378595524001564-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142145574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-14. DOI: 10.1016/j.heares.2024.109104
Yixiang Niu, Ning Chen, Hongqing Zhu, Guangqiang Li, Yibo Chen
Auditory spatial attention detection (ASAD) seeks to determine which speaker in a surround sound field a listener is focusing on, based on the listener’s brain biosignals. Although existing studies have achieved ASAD from a single-trial electroencephalogram (EEG), the huge inter-subject variability makes them generally perform poorly in cross-subject scenarios. Moreover, most ASAD methods do not take full advantage of topological relationships between EEG channels, which are crucial for high-quality ASAD. Recently, some advanced studies have introduced graph-based brain topology modeling into ASAD, but how to calculate edge weights in a graph to better capture actual brain connectivity warrants further investigation. To address these issues, we propose a new ASAD method in this paper. First, we model a multi-channel EEG segment as a graph, where differential entropy serves as the node feature, and a static adjacency matrix is generated based on inter-channel mutual information to quantify brain functional connectivity. Then, different subjects’ EEG graphs are encoded into a shared embedding space through a total variation graph neural network. Meanwhile, feature distribution alignment based on multi-kernel maximum mean discrepancy is adopted to learn subject-invariant patterns. Note that we align EEG embeddings of different subjects to reference distributions rather than to each other, for the purpose of privacy preservation. A series of experiments on open datasets demonstrate that the proposed model outperforms state-of-the-art ASAD models in cross-subject scenarios with relatively low computational complexity, and feature distribution alignment improves the generalizability of the proposed model to a new subject.
{"title":"Subject-independent auditory spatial attention detection based on brain topology modeling and feature distribution alignment","authors":"Yixiang Niu, Ning Chen, Hongqing Zhu, Guangqiang Li, Yibo Chen","doi":"10.1016/j.heares.2024.109104","DOIUrl":"10.1016/j.heares.2024.109104","url":null,"abstract":"<div><p>Auditory spatial attention detection (ASAD) seeks to determine which speaker in a surround sound field a listener is focusing on based on the one’s brain biosignals. Although existing studies have achieved ASAD from a single-trial electroencephalogram (EEG), the huge inter-subject variability makes them generally perform poorly in cross-subject scenarios. Besides, most ASAD methods do not take full advantage of topological relationships between EEG channels, which are crucial for high-quality ASAD. Recently, some advanced studies have introduced graph-based brain topology modeling into ASAD, but how to calculate edge weights in a graph to better capture actual brain connectivity is worthy of further investigation. To address these issues, we propose a new ASAD method in this paper. First, we model a multi-channel EEG segment as a graph, where differential entropy serves as the node feature, and a static adjacency matrix is generated based on inter-channel mutual information to quantify brain functional connectivity. Then, different subjects’ EEG graphs are encoded into a shared embedding space through a total variation graph neural network. Meanwhile, feature distribution alignment based on multi-kernel maximum mean discrepancy is adopted to learn subject-invariant patterns. Note that we align EEG embeddings of different subjects to reference distributions rather than align them to each other for the purpose of privacy preservation. A series of experiments on open datasets demonstrate that the proposed model outperforms state-of-the-art ASAD models in cross-subject scenarios with relatively low computational complexity, and feature distribution alignment improves the generalizability of the proposed model to a new subject.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"453 ","pages":"Article 109104"},"PeriodicalIF":2.5,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-14. DOI: 10.1016/j.heares.2024.109105
Alexandre Celma-Miralles, Alberte B. Seeberg, Niels T. Haumann, Peter Vuust, Bjørn Petersen
Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are well-transmitted through the CI. Still, the gradual improvement of rhythm perception after the CI switch-on has not yet been determined using neurophysiological measures. To fill this gap, we here reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median experience of 7 years; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as Alberti bass) for 35 min. Applying frequency tagging, we aimed to estimate the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses approaching the responses of NH controls. We found an increase in the frequency-tagged amplitudes after only 3 months of CI use. This increase in neural synchronization may reflect an early adaptation to the CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not just reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time), but also showed non-linear transformations that seemed to enhance relevant periodicities of the Alberti bass. Our findings provide neurophysiological evidence indicating a gradual adaptation to the CI, which is noticeable already after three months, resulting in close to NH brain processing of spectrotemporal features of musical rhythms after extended CI use.
{"title":"Experience with the cochlear implant enhances the neural tracking of spectrotemporal patterns in the Alberti bass","authors":"Alexandre Celma-Miralles, Alberte B. Seeberg, Niels T. Haumann, Peter Vuust, Bjørn Petersen","doi":"10.1016/j.heares.2024.109105","DOIUrl":"10.1016/j.heares.2024.109105","url":null,"abstract":"<div><p>Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are well-transmitted through the CI. Still, the gradual improvement of rhythm perception after the CI switch-on has not yet been determined using neurophysiological measures. To fill this gap, we here reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median experience of 7 years; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as Alberti bass) for 35 min. Applying frequency tagging, we aimed to estimate the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses approaching the responses of NH controls. We found an increase in the frequency-tagged amplitudes after only 3 months of CI use. This increase in neural synchronization may reflect an early adaptation to the CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not just reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time), but also showed non-linear transformations that seemed to enhance relevant periodicities of the Alberti bass. Our findings provide neurophysiological evidence indicating a gradual adaptation to the CI, which is noticeable already after three months, resulting in close to NH brain processing of spectrotemporal features of musical rhythms after extended CI use.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"452 ","pages":"Article 109105"},"PeriodicalIF":2.5,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0378595524001588/pdfft?md5=5e2d3854decc88c45354f171052f0009&pid=1-s2.0-S0378595524001588-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-02. DOI: 10.1016/j.heares.2024.109095
Elizabeth Dinces , Elyse S. Sussman
The current study investigated the effect of lower frequency input on stream segregation acuity in older, normal-hearing adults. Using event-related brain potentials (ERPs) and perceptual performance measures, we previously showed that stream segregation abilities were less proficient in older compared to younger adults. However, in that study we used frequency ranges greater than 1500 Hz. In the current study, we lowered the target frequency range below 1500 Hz and found similar stream segregation abilities in younger and older adults. These results indicate that the perception of complex auditory scenes is influenced by the spectral content of the auditory input and suggest that lower frequency ranges of input in older adults may facilitate listening ability in complex auditory environments. These results also have implications for the advancement of prosthetic devices.
{"title":"Lower frequency range of auditory input facilitates stream segregation in older adults","authors":"Elizabeth Dinces , Elyse S. Sussman","doi":"10.1016/j.heares.2024.109095","DOIUrl":"10.1016/j.heares.2024.109095","url":null,"abstract":"<div><p>The current study investigated the effect of lower frequency input on stream segregation acuity in older, normal hearing adults. Using event-related brain potentials (ERPs) and perceptual performance measures, we previously showed that stream segregation abilities were less proficient in older compared to younger adults. However, in that study we used frequency ranges greater than 1500 Hz. In the current study, we lowered the target frequency range below 1500 Hz and found similar stream segregation abilities in younger and older adults. These results indicate that the perception of complex auditory scenes is influenced by the spectral content of the auditory input and suggest that lower frequency ranges of input in older adults may facilitate listening ability in complex auditory environments. These results also have implications for the advancement of prosthetic devices.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"451 ","pages":"Article 109095"},"PeriodicalIF":2.5,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141906354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Congenital or early-onset unilateral hearing loss (UHL) can disrupt the normal development of the auditory system. In extreme cases of UHL (i.e., single-sided deafness), consistent cochlear implant use during sensitive periods resulted in cortical reorganization that partially reversed the detrimental effects of unilateral sensory deprivation. There is a gap in knowledge, however, regarding cortical plasticity, i.e., the brain's capacity to adapt, reorganize, and develop binaural pathways, in milder degrees of UHL rehabilitated by a hearing aid (HA). The current study set out to investigate early-stage cortical processing and electrophysiological manifestations of binaural processing, by means of cortical auditory evoked potentials (CAEPs) to speech sounds, in children with moderate to severe-to-profound UHL using a HA. Fourteen children with UHL (CHwUHL), 6-14 years old, consistently using a HA for 3.5 (±2.3) years, participated in the study. CAEPs were elicited to the speech sounds /m/, /g/, and /t/ in three listening conditions: monaural [normal hearing (NH), HA] and bilateral [BI (NH + HA)]. Results indicated age-appropriate CAEP morphology in the NH and BI listening conditions in all children. In the HA listening condition: (1) CAEPs showed similar morphology to that found in the NH listening condition; however, the mature morphology observed in older children in the NH listening condition was not evident; (2) P1 was elicited in all but two children with severe-to-profound hearing loss, to at least one speech stimulus, indicating effective audibility; (3) a significant mismatch in timing and synchrony between the NH and HA ear was found; (4) P1 was sensitive to the acoustic features of the eliciting stimulus and to the amplification characteristics of the HA. Finally, a cortical binaural interaction component (BIC) was derived in most children. In conclusion, the current study provides the first evidence for cortical plasticity and partial reversal of the detrimental effects of moderate to severe-to-profound UHL rehabilitated by a HA. The derivation of a cortical biomarker of binaural processing implies that functional binaural pathways can develop when sufficient auditory input is provided to the affected ear. CAEPs may thus serve as a clinical tool for assessing, monitoring, and managing CHwUHL using a HA.
{"title":"Biomarkers of auditory cortical plasticity and development of binaural pathways in children with unilateral hearing loss using a hearing aid","authors":"Ricky Kaplan-Neeman , Tzvia Greenbom , Suhaill Habiballah , Yael Henkin","doi":"10.1016/j.heares.2024.109096","DOIUrl":"10.1016/j.heares.2024.109096","url":null,"abstract":"<div><p>Congenital or early-onset unilateral hearing loss (UHL) can disrupt the normal development of the auditory system. In extreme cases of UHL (i.e., single sided deafness), consistent cochlear implant use during sensitive periods resulted in cortical reorganization that partially reversed the detrimental effects of unilateral sensory deprivation. There is a gap in knowledge, however, regarding cortical plasticity i.e. the brain's capacity to adapt, reorganize, and develop binaural pathways in milder degrees of UHL rehabilitated by a hearing aid (HA). The current study was set to investigate early-stage cortical processing and electrophysiological manifestations of binaural processing by means of cortical auditory evoked potentials (CAEPs) to speech sounds, in children with moderate to severe-to-profound UHL using a HA. Fourteen children with UHL (CHwUHL), 6-14 years old consistently using a HA for 3.5 (±2.3) years participated in the study. CAEPs were elicited to the speech sounds /m/, /g/, and /t/ in three listening conditions: monaural [Normal hearing (NH), HA], and bilateral [BI (NH + HA)]. Results indicated age-appropriate CAEP morphology in the NH and BI listening conditions in all children. In the HA listening condition: (1) CAEPs showed similar morphology to that found in the NH listening condition, however, the mature morphology observed in older children in the NH listening condition was not evident; (2) P1 was elicited in all but two children with severe-to-profound hearing loss, to at least one speech stimuli, indicating effective audibility; (3) A significant mismatch in timing and synchrony between the NH and HA ear was found; (4) P1 was sensitive to the acoustic features of the eliciting stimulus and to the amplification characteristics of the HA. Finally, a cortical binaural interaction component (BIC) was derived in most children. In conclusion, the current study provides first-time evidence for cortical plasticity and partial reversal of the detrimental effects of moderate to severe-to-profound UHL rehabilitated by a HA. The derivation of a cortical biomarker of binaural processing implies that functional binaural pathways can develop when sufficient auditory input is provided to the affected ear. CAEPs may thus serve as a clinical tool for assessing, monitoring, and managing CHwUHL using a HA.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"451 ","pages":"Article 109096"},"PeriodicalIF":2.5,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141906353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-31. DOI: 10.1016/j.heares.2024.109094
Keito Hishikawa, Keiko Ogawa
Sound localization in the front-back dimension is reported to be challenging, with individual differences. We investigated whether auditory discrimination processing in the brain differs based on front-back sound localization ability. This study conducted an auditory oddball task using speakers in front of and behind the participants. We used event-related brain potentials to examine the deviance detection process in groups that could and could not discriminate front-back sound location. The results indicated that mismatch negativity (MMN) occurred during the deviance detection process, and P2 amplitude differed between standard and deviant locations in both groups. However, the latency of the MMN was shorter in the group that could discriminate front-back sounds than in the group that could not. Additionally, N1 amplitude increased for deviant locations compared to standard ones only in the discriminating group. In conclusion, the sensory-memory matching process based on traces of previously presented stimuli (MMN, P2) occurred regardless of discrimination ability. However, the response to changes in the physical properties of sounds (MMN latency, N1 amplitude) differed depending on the ability to discriminate front-back sounds. Our findings suggest that the brain may have different processing strategies for the two directions even without subjective recognition of the front-back direction of incoming sounds.
{"title":"Mismatch negativity between discriminating and undiscriminating participants on the front-back sound localization","authors":"Keito Hishikawa, Keiko Ogawa","doi":"10.1016/j.heares.2024.109094","DOIUrl":"10.1016/j.heares.2024.109094","url":null,"abstract":"<div><p>Sound localization in the front-back dimension is reported to be challenging, with individual differences. We investigated whether auditory discrimination processing in the brain differs based on front-back sound localization ability. This study conducted an auditory oddball task using speakers in front of and behind the participants. We used event-related brain potentials to examine the deviance detection process between groups that could and could not discriminate front-back sound localization. The results indicated that mismatch negativity (MMN) occurred during the deviance detection process, and P2 amplitude differed between standard and deviant locations in both groups. However, the latency of MMN was shorter in the group that could discriminate front-back sounds than in the group that could not. Additionally, N1 amplitude increased for deviant locations compared to standard ones only in the discriminating group. In conclusion, the sensory memories matching process based on traces of previously presented stimuli (MMN, P2) occurred regardless of discrimination ability. However, the response to changes in the physical properties of sounds (MMN latency, N1 amplitude) differed depending on the ability to discriminate front-back sounds. Our findings suggest that the brain may have different processing strategies for the two directions even without subjective recognition of the front-back direction of incoming sounds.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"452 ","pages":"Article 109094"},"PeriodicalIF":2.5,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30. DOI: 10.1016/j.heares.2024.109077
Samuel Couth , Garreth Prendergast , Hannah Guest , Kevin J. Munro , David R. Moore , Christopher J. Plack , Jane Ginsborg , Piers Dawes
Musicians are at risk of hearing loss and tinnitus due to regular exposure to high levels of noise. This level of risk may have been underestimated previously since damage to the auditory system, such as cochlear synaptopathy, may not be easily detectable using standard clinical measures. Most previous research investigating hearing loss in musicians has involved cross-sectional study designs that may capture only a snapshot of hearing health in relation to noise exposure. The aim of this study was to investigate the effects of cumulative noise exposure on behavioural, electrophysiological, and self-report indices of hearing damage in early-career musicians and non-musicians with normal hearing over a 2-year period. Participants completed an annual test battery consisting of pure tone audiometry, extended high-frequency hearing thresholds, distortion product otoacoustic emissions (DPOAEs), speech perception in noise, auditory brainstem responses, and self-report measures of tinnitus, hyperacusis, and hearing in background noise. Participants also completed the Noise Exposure Structured Interview to estimate cumulative noise exposure across the study period. Linear mixed models assessed changes over time. The longitudinal analysis comprised 64 early-career musicians (female n = 34; age range at T0 = 18–26 years) and 30 non-musicians (female n = 20; age range at T0 = 18–27 years). There were few longitudinal changes as a result of musicianship. Small improvements over time in some measures may be attributable to a practice/test-retest effect. Some measures (e.g., DPOAE indices of outer hair cell function) were associated with noise exposure at each time point, but did not show a significant change over time. A small proportion of participants reported a worsening of their tinnitus symptoms, which participants attributed to noise exposure, or not using hearing protection. Future longitudinal studies should attempt to capture the effects of noise exposure over a longer period, taken at several time points, for a precise measure of how hearing changes over time. Hearing conservation programmes for “at risk” individuals should closely monitor DPOAEs to detect early signs of noise-induced hearing loss when audiometric thresholds are clinically normal.
{"title":"A longitudinal study investigating the effects of noise exposure on behavioural, electrophysiological and self-report measures of hearing in musicians with normal audiometric thresholds","authors":"Samuel Couth , Garreth Prendergast , Hannah Guest , Kevin J. Munro , David R. Moore , Christopher J. Plack , Jane Ginsborg , Piers Dawes","doi":"10.1016/j.heares.2024.109077","DOIUrl":"10.1016/j.heares.2024.109077","url":null,"abstract":"<div><p>Musicians are at risk of hearing loss and tinnitus due to regular exposure to high levels of noise. This level of risk may have been underestimated previously since damage to the auditory system, such as cochlear synaptopathy, may not be easily detectable using standard clinical measures. Most previous research investigating hearing loss in musicians has involved cross-sectional study designs that may capture only a snapshot of hearing health in relation to noise exposure. The aim of this study was to investigate the effects of cumulative noise exposure on behavioural, electrophysiological, and self-report indices of hearing damage in early-career musicians and non-musicians with normal hearing over a 2-year period. Participants completed an annual test battery consisting of pure tone audiometry, extended high-frequency hearing thresholds, distortion product otoacoustic emissions (DPOAEs), speech perception in noise, auditory brainstem responses, and self-report measures of tinnitus, hyperacusis, and hearing in background noise. Participants also completed the Noise Exposure Structured Interview to estimate cumulative noise exposure across the study period. Linear mixed models assessed changes over time. The longitudinal analysis comprised 64 early-career musicians (female <em>n</em> = 34; age range at T0 = 18–26 years) and 30 non-musicians (female <em>n</em> = 20; age range at T0 = 18–27 years). There were few longitudinal changes as a result of musicianship. Small improvements over time in some measures may be attributable to a practice/test-retest effect. Some measures (e.g., DPOAE indices of outer hair cell function) were associated with noise exposure at each time point, but did not show a significant change over time. A small proportion of participants reported a worsening of their tinnitus symptoms, which participants attributed to noise exposure, or not using hearing protection. Future longitudinal studies should attempt to capture the effects of noise exposure over a longer period, taken at several time points, for a precise measure of how hearing changes over time. Hearing conservation programmes for “at risk” individuals should closely monitor DPOAEs to detect early signs of noise-induced hearing loss when audiometric thresholds are clinically normal.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"451 ","pages":"Article 109077"},"PeriodicalIF":2.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0378595524001308/pdfft?md5=434ac8217c3e0fe9a04305e60ffb26ab&pid=1-s2.0-S0378595524001308-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141859488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-28. DOI: 10.1016/j.heares.2024.109093
Catherine Pérez-Valenzuela , Sergio Vicencio-Jiménez , Mia Caballero , Paul H. Delano , Diego Elgueda
The discovery and development of electrocochleography (ECochG) in animal models has been fundamental for its implementation in clinical audiology and neurotology. In our laboratory, the use of round-window ECochG recordings in chinchillas has allowed a better understanding of auditory efferent functioning. In previous work, we provided evidence of the corticofugal modulation of auditory-nerve and cochlear responses during visual attention and working memory. However, whether these cognitive top-down mechanisms acting on the most peripheral structures of the auditory pathway are also active during audiovisual crossmodal stimulation is unknown. Here, we introduce a new technique, wireless ECochG, to record compound action potentials of the auditory nerve (CAP), cochlear microphonics (CM), and round-window noise (RWN) in awake chinchillas during a paradigm of crossmodal (visual and auditory) stimulation. We compared ECochG data obtained from four awake chinchillas recorded with a wireless ECochG system with wired ECochG recordings from six anesthetized animals. Although ECochG experiments with the wireless system had a lower signal-to-noise ratio than wired recordings, their quality was sufficient to compare ECochG potentials in awake crossmodal conditions. We found non-significant differences in CAP and CM amplitudes in response to audiovisual stimulation compared to auditory stimulation alone (clicks and tones). On the other hand, spontaneous auditory-nerve activity (RWN) was modulated by visual crossmodal stimulation, suggesting that visual crossmodal stimulation can modulate spontaneous but not evoked auditory-nerve activity. However, given the limited sample of 10 animals (4 wireless and 6 wired), these results should be interpreted cautiously. Future experiments are required to substantiate these conclusions. In addition, we introduce the use of wireless ECochG in animal models as a useful tool for translational research.
{"title":"Wireless electrocochleography in awake chinchillas: A model to study crossmodal modulations at the peripheral level","authors":"Catherine Pérez-Valenzuela , Sergio Vicencio-Jiménez , Mia Caballero , Paul H. Delano , Diego Elgueda","doi":"10.1016/j.heares.2024.109093","DOIUrl":"10.1016/j.heares.2024.109093","url":null,"abstract":"<div><p>The discovery and development of electrocochleography (ECochG) in animal models has been fundamental for its implementation in clinical audiology and neurotology. In our laboratory, the use of round-window ECochG recordings in chinchillas has allowed a better understanding of auditory efferent functioning. In previous works, we gave evidence of the corticofugal modulation of auditory-nerve and cochlear responses during visual attention and working memory. However, whether these cognitive top-down mechanisms to the most peripheral structures of the auditory pathway are also active during audiovisual crossmodal stimulation is unknown. Here, we introduce a new technique, wireless ECochG to record compound-action potentials of the auditory nerve (CAP), cochlear microphonics (CM), and round-window noise (RWN) in awake chinchillas during a paradigm of crossmodal (visual and auditory) stimulation. We compared ECochG data obtained from four awake chinchillas recorded with a wireless ECochG system with wired ECochG recordings from six anesthetized animals. Although ECochG experiments with the wireless system had a lower signal-to-noise ratio than wired recordings, their quality was sufficient to compare ECochG potentials in awake crossmodal conditions. We found non-significant differences in CAP and CM amplitudes in response to audiovisual stimulation compared to auditory stimulation alone (clicks and tones). On the other hand, spontaneous auditory-nerve activity (RWN) was modulated by visual crossmodal stimulation, suggesting that visual crossmodal simulation can modulate spontaneous but not evoked auditory-nerve activity. However, given the limited sample of 10 animals (4 wireless and 6 wired), these results should be interpreted cautiously. Future experiments are required to substantiate these conclusions. In addition, we introduce the use of wireless ECochG in animal models as a useful tool for translational research.</p></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"451 ","pages":"Article 109093"},"PeriodicalIF":2.5,"publicationDate":"2024-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141848660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}