
Latest publications in Trends in Hearing

Toward a Unified Theory of the Reference Frame of the Ventriloquism Aftereffect.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231201020
Peter Lokša, Norbert Kopčo

The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and the visual signals are converted to head-centered frame prior to inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) additional presence of visual signals in eye-centered frame and (2) eye-gaze direction-dependent attenuation in VAE when eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.
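
The article's model code is not reproduced on this page, but the logic described above can be illustrated with a toy simulation. In the Python sketch below, the additive Gaussian adaptation profile, the gaze-dependent attenuation term, and all parameter names and values are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def predicted_aftereffect(probe_az, train_az, av_disparity, gaze_az,
                          train_gaze_az, adapt_gain=0.3, sigma=10.0,
                          gaze_atten=0.05):
    """Toy prediction of the ventriloquism aftereffect (VAE) shift.

    Assumptions (not from the paper's code): the aftereffect is a
    Gaussian-shaped shift centered on the head-centered training
    location, scaled by the audio-visual disparity, and attenuated
    linearly as the eyes move away from the training fixation.
    All angles are azimuths in degrees, head-centered.
    """
    # Basic model: adaptation stored in a head-centered auditory map.
    shift = adapt_gain * av_disparity * np.exp(
        -0.5 * ((probe_az - train_az) / sigma) ** 2)
    # Extension: eye-gaze-direction-dependent attenuation of the aftereffect.
    atten = max(0.0, 1.0 - gaze_atten * abs(gaze_az - train_gaze_az))
    return shift * atten

# Example: 8 deg audio-visual disparity trained at 0 deg azimuth, probed at
# several locations with eyes at the training fixation vs. shifted 20 deg away.
probes = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
print(predicted_aftereffect(probes, 0.0, 8.0, gaze_az=0.0, train_gaze_az=0.0))
print(predicted_aftereffect(probes, 0.0, 8.0, gaze_az=20.0, train_gaze_az=0.0))
```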

Citations: 0
Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165221148035
Alberte B Seeberg, Niels T Haumann, Andreas Højlund, Anne S F Andersen, Kathleen F Faulkner, Elvira Brattico, Peter Vuust, Bjørn Petersen
Cochlear implants (CIs) are optimized for speech perception but poor in conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm, presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high already shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.
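
For readers unfamiliar with the measure, the sketch below shows one conventional way an MMN amplitude can be quantified: average the deviant and standard epochs, subtract, and take the mean of the difference wave in a latency window. The sampling rate, the 100-250 ms window, and the synthetic data are placeholders, not the study's actual EEG pipeline.

```python
import numpy as np

def mmn_amplitude(standard_epochs, deviant_epochs, fs=500.0,
                  window=(0.10, 0.25)):
    """Mean MMN amplitude (deviant minus standard) in a latency window.

    Epoch arrays have shape (n_trials, n_samples), time-locked to sound
    onset at t = 0. The 100-250 ms window and 500 Hz rate are assumed
    for illustration; the study's parameters may differ.
    """
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return diff_wave[i0:i1].mean()

# Example with synthetic data: 200 standards, 50 pitch deviants, 0.5 s epochs.
rng = np.random.default_rng(0)
std = rng.normal(0.0, 1.0, size=(200, 250))
dev = rng.normal(-0.5, 1.0, size=(50, 250))   # more negative -> MMN-like response
print(f"MMN amplitude: {mmn_amplitude(std, dev):.2f} (a.u.)")
```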
Citations: 2
Factors Influencing Hearing Help-Seeking and Hearing Aid Uptake in Adults: A Systematic Review of the Past Decade.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231157255
Megan Knoetze, Vinaya Manchaiah, Bopane Mothemela, De Wet Swanepoel

This systematic review examined the audiological and nonaudiological factors that influence hearing help-seeking and hearing aid uptake in adults with hearing loss based on the literature published during the last decade. Peer-reviewed articles published between January 2011 and February 2022 were identified through systematic searches in electronic databases CINAHL, PsycINFO, and MEDLINE. The review was conducted and reported according to the PRISMA protocol. Forty-two articles met the inclusion criteria. Seventy (42 audiological and 28 nonaudiological) hearing help-seeking factors and 159 (93 audiological and 66 nonaudiological) hearing aid uptake factors were investigated with many factors reported only once (10/70 and 62/159, respectively). Hearing aid uptake had some strong predictors (e.g., hearing sensitivity) with others showing conflicting results (e.g., self-reported health). Hearing help-seeking had clear nonpredictive factors (e.g., education) and conflicting factors (e.g., self-reported health). New factors included cognitive anxiety associated with increased help-seeking and hearing aid uptake and urban residency and access to financial support with hearing aid uptake. Most studies were rated as having a low level of evidence (67%) and fair quality (86%). Effective promotion of hearing help-seeking requires more research evidence. Investigating factors with conflicting results and limited evidence is important to clarify what factors support help-seeking and hearing aid uptake in adults with hearing loss. These findings can inform future research and hearing health promotion and rehabilitation practices.

Citations: 9
Grouping by Time and Pitch Facilitates Free but Not Cued Recall for Word Lists in Normally-Hearing Listeners.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231181757
Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche

Auditory memory is an important everyday skill evaluated more and more frequently in clinical settings as there is recently a greater recognition of the cost of hearing loss to cognitive systems. Testing often involves reading a list of unrelated items aloud; but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally-hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) on a novel protocol characterizing the effects of suprasegmental properties in speech, namely investigating pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to work eventually with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (over a matter of one minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.

Citations: 0
Susceptibility to Steady Noise Largely Explains Susceptibility to Dynamic Maskers in Cochlear Implant Users, but not in Normal-Hearing Listeners
Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231205713
Biao Chen, Ying Shi, Ying Kong, Jingyuan Chen, Lifang Zhang, Yongxin Li, John J. Galvin, Qian-Jie Fu
Different from normal-hearing (NH) listeners, speech recognition thresholds (SRTs) in cochlear implant (CI) users are typically poorer with dynamic maskers than with speech-spectrum noise (SSN). The effectiveness of different masker types may depend on their acoustic and linguistic characteristics. The goal of the present study was to evaluate the effectiveness of different masker types with varying acoustic and linguistic properties in CI and NH listeners. SRTs were measured with nine maskers, including SSN, dynamic nonspeech maskers, and speech maskers with or without lexical content. Results showed that CI users performed significantly poorer than NH listeners with all maskers. NH listeners were much more sensitive to masker type than were CI users. Relative to SSN, NH listeners experienced significant masking release for most maskers, which could be well explained by the glimpse proportion, especially for maskers containing similar cues related to fundamental frequency or lexical content. In contrast, CI users generally experienced negative masking release. There was significant intercorrelation among the maskers for CI users’ SRTs but much less so for NH listeners’ SRTs. Principal component analysis showed that one factor explained 72% of the variance in CI users’ SRTs but only 55% in NH listeners’ SRTs across all maskers. Taken together, the results suggest that SRTs in SSN largely accounted for the variability in CI users’ SRTs with dynamic maskers. Different from NH listeners, CI users appear to be more susceptible to energetic masking and do not experience a release from masking with dynamic envelopes or speech maskers.
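
The principal component analysis step (one factor accounting for 72% of CI users' SRT variance across the nine maskers) can be outlined as follows. This is a generic sketch applied to a hypothetical listeners-by-maskers SRT matrix, with column z-scoring assumed; it is not the study's analysis script.

```python
import numpy as np

def variance_explained(srt_matrix):
    """Proportion of variance explained by each principal component.

    srt_matrix: array of shape (n_listeners, n_maskers), one SRT per
    listener per masker type. Columns are z-scored before the PCA, which
    is an assumption; the paper may have used raw or centered SRTs.
    """
    z = (srt_matrix - srt_matrix.mean(axis=0)) / srt_matrix.std(axis=0, ddof=1)
    cov = np.cov(z, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]       # eigenvalues, descending
    return eigvals / eigvals.sum()

# Example: 20 hypothetical listeners x 9 maskers with one strong shared factor,
# mimicking the pattern where a single component dominates CI users' SRTs.
rng = np.random.default_rng(1)
shared = rng.normal(size=(20, 1))
srts = 2.0 * shared + rng.normal(scale=0.8, size=(20, 9))
print(variance_explained(srts)[:3])   # first component should dominate
```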
Citations: 0
U.S. Population Data on Hearing Loss, Trouble Hearing, and Hearing-Device Use in Adults: National Health and Nutrition Examination Survey, 2011-12, 2015-16, and 2017-20.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231160978
Larry E Humes

The National Health and Nutrition Examination Survey (NHANES) data on audiometric hearing loss, self-reported trouble hearing, and the use of hearing aids and assistive listening devices (ALDs) for the three most recent surveys (2011-12, 2015-16, and 2017-20) were analyzed for adults ranging in age from 20 to 80-plus years. Complete audiograms were available for a total of 8,795 adults. The prevalence of hearing loss, measured audiometrically and self-reported, is provided for males and females by age decade. Logistic-regression analyses identified variables affecting the odds of having an audiometrically defined hearing loss or self-reported trouble hearing. As in previous reports, males were more likely than females to have audiometric hearing loss and the prevalence of hearing loss increased steadily with advancing age. The same trends were observed for self-reported hearing difficulty, although the effects of age and sex were smaller for self-reported trouble hearing compared to audiometric hearing loss. The agreement between the audiometric classification of hearing loss severity and the amount of trouble reported on the self-report measure was moderate (r = 0.61). The prevalence of hearing-aid and ALD use differed for males and females of the same age, females generally using these devices less frequently than males, but both showing increased prevalence of device use with advancing age. Unmet hearing-healthcare need, defined as the percentage of those with identified hearing loss or trouble hearing who were not current hearing-aid users or had never tried hearing aids or ALDs, was about 85%.
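
The logistic-regression step (odds of audiometrically defined hearing loss or self-reported trouble hearing as a function of demographic variables) can be sketched as follows. The predictor set and synthetic data are illustrative, and the sketch omits the NHANES survey weights, strata, and clusters that a proper analysis of these data would include.

```python
import numpy as np
import statsmodels.api as sm

def hearing_loss_odds_ratios(age, male, hearing_loss):
    """Fit a simple logistic regression and return odds ratios.

    age: years; male: 0/1 indicator; hearing_loss: 0/1 outcome (e.g.,
    better-ear pure-tone average above a chosen cutoff). This ignores
    NHANES sampling weights and design variables, which a real analysis
    of these data would need to account for.
    """
    X = sm.add_constant(np.column_stack([age, male]))
    fit = sm.Logit(hearing_loss, X).fit(disp=False)
    return np.exp(fit.params)   # [intercept, age, male] as odds ratios

# Example with synthetic data mimicking the reported trends
# (odds of hearing loss increase with age and are higher for males).
rng = np.random.default_rng(2)
age = rng.uniform(20, 85, size=2000)
male = rng.integers(0, 2, size=2000)
logit = -7.0 + 0.09 * age + 0.6 * male
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
print(hearing_loss_odds_ratios(age, male, y))
```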

Citations: 7
Perceptual Learning of Noise-Vocoded Speech Under Divided Attention.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231192297
Han Wang, Rongru Chen, Yu Yan, Carolyn McGettigan, Stuart Rosen, Patti Adank

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological), or domain-general (visual) processes. All secondary task conditions produced patterns and amount of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes and speech perceptual learning persists under divided attention.

Citations: 0
An Effect of Gaze Direction in Cocktail Party Listening.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231152356
Virginia Best, Alex D Boyd, Kamal Sen

It is well established that gaze direction can influence auditory spatial perception, but the implications of this interaction for performance in complex listening tasks are unclear. In the current study, we investigated whether there is a measurable effect of gaze direction on speech intelligibility in a "cocktail party" listening situation. We presented sequences of digits from five loudspeakers positioned at 0°, ±15°, and ±30° azimuth, and asked participants to repeat back the digits presented from a designated target loudspeaker. In different blocks of trials, the participant visually fixated on a cue presented at the target location or at a nontarget location. Eye position was tracked continuously to monitor compliance. Performance was best when fixation was on-target (vs. off-target) and the size of this effect depended on the specific configuration. This result demonstrates an influence of gaze direction in multitalker mixtures, even in the absence of visual speech information.

Citations: 0
Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165221141142
Eleanor E Harding, Etienne Gaudrain, Imke J Hrycyk, Robert L Harris, Barbara Tillmann, Bert Maat, Rolien H Free, Deniz Başkent

While previous research investigating music emotion perception of cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music - often not well perceived by CI users- reportedly conveys emotional valence (positive, negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise; primarily modulating temporal information), and two filter orders (low or high; primarily modulating spectral information). Results indicated that emotion categorization was above-chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that lower temporal content successfully conveyed emotional arousal. Valence feature transmission steeply declined in vocoded conditions, revealing that valence perception was difficult for both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI user signal may immediately benefit their music emotion perception.
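
To make the manipulation concrete, the sketch below implements a minimal channel vocoder with the two factors varied in the study: carrier type (sinewave vs. noise) and analysis/synthesis filter order. The channel count, band edges, envelope cutoff, and filter design are generic assumptions rather than the study's exact processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(x, fs, n_channels=8, carrier="sine", filter_order=4,
           f_lo=100.0, f_hi=7000.0, env_cutoff=50.0):
    """Minimal channel vocoder: band-split, extract envelopes, re-synthesize.

    carrier: "sine" (tone at each band's geometric center frequency) or
    "noise" (white noise refiltered into the band). filter_order sets the
    steepness of the analysis/synthesis bands, i.e., how much spectral
    detail survives. All defaults here are illustrative, not the study's.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(filter_order, [lo, hi], btype="bandpass",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)  # temporal envelope
        if carrier == "sine":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)   # tone at band center
        else:
            c = sosfiltfilt(band_sos, rng.normal(size=len(x)))  # band-limited noise
        out += env * c
    return out / np.max(np.abs(out))

# Example: vocode one second of a synthetic harmonic tone at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
low_spectral = vocode(sig, fs, carrier="noise", filter_order=2)   # coarser spectral content
high_spectral = vocode(sig, fs, carrier="sine", filter_order=6)   # finer spectral content
```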

Citations: 1
Plasticity After Hearing Rehabilitation in the Aging Brain.
IF 2.7, Tier 2 (Medicine), Q1 Health Professions, Pub Date: 2023-01-01, DOI: 10.1177/23312165231156412
Diane S Lazard, Keith B Doelling, Luc H Arnal

Age-related hearing loss, presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and dementia. It is generally considered a natural consequence of the inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant users (CI) and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a pejorative effect at 24 months post implantation. Furthermore, older subjects (>67 years old) were significantly more likely to degrade their performances after 2 years of CI use than the younger patients for each year increase in age. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation to account for these disparities: Awakening, reversal of deafness-specific changes; Counteracting, stabilization of additional cognitive impairments; or Decline, independent pejorative processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.

Citations: 2