Latest Articles in Trends in Hearing

Clinical Feasibility and Familiarization Effects of Device Delay Mismatch Compensation in Bimodal CI/HA Users.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231171987
Julian Angermeier, Werner Hemmert, Stefan Zirn

Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to the different processing latencies of the two devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for this auditory nerve stimulation mismatch by compensating for the device delay mismatch can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated whether this fitting parameter can be readily used in clinical settings and determined the effects of familiarization with a compensated device delay mismatch over a period of 3-4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error improved by 18%, although this improvement did not reach statistical significance. The effects were acute and did not improve further after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
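The compensation principle described above can be sketched as delaying the faster device's pathway by the latency difference so that both devices stimulate the auditory nerve in sync. The function below is a hypothetical waveform-level toy (in practice the fitting software applies the extra delay inside the CI processor; the names and sample values are assumptions, not taken from the study):

```python
import numpy as np

def align_device_outputs(ci_signal, ha_signal, ci_latency_ms, ha_latency_ms, fs=16000):
    """Delay the faster pathway so both devices stimulate in sync.

    Illustrative sketch only: real fitting software applies the extra
    delay inside the CI processor, not to the audio waveform.
    """
    mismatch_ms = ha_latency_ms - ci_latency_ms  # HA is typically the slower device
    delay_samples = int(round(abs(mismatch_ms) * fs / 1000))
    pad = np.zeros(delay_samples)
    if mismatch_ms > 0:
        # CI path is faster: delay the CI signal to match the HA
        ci_signal = np.concatenate([pad, ci_signal])[:len(ci_signal)]
    elif mismatch_ms < 0:
        # HA path is faster: delay the HA signal instead
        ha_signal = np.concatenate([pad, ha_signal])[:len(ha_signal)]
    return ci_signal, ha_signal
```

For example, with a 1 ms CI latency and a 7 ms HA latency, the CI pathway is shifted by 6 ms so that an impulse reaches both ears at the same time.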

Citations: 1
Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231184982
Brian C J Moore, Josef Schlittenlacher

The diagnosis of noise-induced hearing loss (NIHL) is based on three requirements: a history of exposure to noise with the potential to cause hearing loss; the absence of known causes of hearing loss other than noise exposure; and the presence of certain features in the audiogram. All current methods for diagnosing NIHL have involved examination of the typical features of the audiograms of noise-exposed individuals and the formulation of quantitative rules for the identification of those features. This article describes an alternative approach based on the use of multilayer perceptrons (MLPs). The approach was applied to databases containing the ages and audiograms of individuals claiming compensation for NIHL sustained during military service (M-NIHL), who were assumed mostly to have M-NIHL, and control databases with no known exposure to intense sounds. The MLPs were trained so as to classify individuals as belonging to the exposed or control group based on their audiograms and ages, thereby automatically identifying the features of the audiogram that provide optimal classification. Two databases (noise exposed and nonexposed) were used for training and validation of the MLPs and two independent databases were used for evaluation and further analyses. The best-performing MLP was one trained to identify whether or not an individual had M-NIHL based on age and the audiogram for both ears. This achieved a sensitivity of 0.986 and a specificity of 0.902, giving an overall accuracy markedly higher than for previous methods.
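The classifier described above can be sketched as a one-hidden-layer perceptron mapping age plus audiometric thresholds to a probability of noise exposure. The feature layout below (17 inputs: age plus 8 thresholds per ear) and the random weights are assumptions for illustration only; the actual MLPs were trained on the claimant and control databases:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: features -> ReLU hidden layer -> sigmoid probability."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden-layer activations
    z = W2 @ h + b2                     # scalar logit
    return 1.0 / (1.0 + np.exp(-z))     # P(noise-exposed)

# Hypothetical feature vector: age plus audiometric thresholds (dB HL)
# at 8 frequencies for each ear -> 17 inputs (layout assumed, not from the paper).
n_in, n_hidden = 17, 10
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(1, n_hidden))
b2 = np.zeros(1)

x = np.concatenate([[55.0], rng.uniform(0, 70, size=16)])  # one synthetic case
p = mlp_forward(x, W1, b1, W2, b2)
```

In the study itself, the weights were of course learned from the exposed and control databases rather than drawn at random; the sketch only shows the forward computation that turns an age-plus-audiogram vector into a classification probability.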

Citations: 0
Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165221148022
Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira

Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.
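Two of the manipulations described above, selecting fewer bands for stimulation and compressing the electric dynamic range, can be sketched as follows. This is a minimal toy, not the manufacturer's implementation; the threshold and ratio values are made up:

```python
import numpy as np

def select_bands(envelopes, n_select):
    """N-of-M channel selection: keep only the n largest band envelopes in a frame."""
    env = np.asarray(envelopes, dtype=float)
    keep = np.argsort(env)[-n_select:]   # indices of the n largest bands
    out = np.zeros_like(env)
    out[keep] = env[keep]
    return out

def backend_compress(env, threshold=0.3, ratio=4.0):
    """Simple static back-end compressor mapping the envelope into a reduced dynamic range."""
    env = np.asarray(env, dtype=float)
    out = env.copy()
    over = env > threshold
    out[over] = threshold + (env[over] - threshold) / ratio   # compress above threshold
    return out
```

Selecting fewer bands reduces spectral complexity (non-selected channels are silenced), while the compressor attenuates peaks above the knee point, which is one way to map a wide acoustic range into the narrow electric dynamic range of a CI.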

Citations: 0
Feasibility of Diagnosing Dead Regions Using Auditory Steady-State Responses to an Exponentially Amplitude Modulated Tone in Threshold Equalizing Notched Noise, Assessed Using Normal-Hearing Participants.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231173234
Emanuele Perugia, Frederic Marmel, Karolina Kluk

The aim of this study was to assess feasibility of using electrophysiological auditory steady-state response (ASSR) masking for detecting dead regions (DRs). Fifteen normally hearing adults were tested using behavioral and electrophysiological tasks. In the electrophysiological task, ASSRs were recorded to a 2 kHz exponentially amplitude-modulated tone (AM2) presented within a notched threshold equalizing noise (TEN) whose center frequency (CFNOTCH) varied. We hypothesized that, in the absence of DRs, ASSR amplitudes would be largest for CFNOTCH at/or near the signal frequency. In the presence of a DR at the signal frequency, the largest ASSR amplitude would occur at a frequency (fmax) far away from the signal frequency. The AM2 and the TEN were presented at 60 and 75 dB SPL, respectively. In the behavioral task, for the same maskers as above, the masker level at which an AM and a pure tone could just be distinguished, denoted AM2ML, was determined, for low (10 dB above absolute AM2 threshold) and high (60 dB SPL) signal levels. We also hypothesized that the value of fmax would be similar for both techniques. The ASSR fmax values obtained from grand average ASSR amplitudes, but not from individual amplitudes, were consistent with our hypotheses. The agreement between the behavioral fmax and ASSR fmax was poor. The within-session ASSR-amplitude repeatability was good for AM2 alone, but poor for AM2 in notched TEN. The ASSR-amplitude variability between and within participants seems to be a major roadblock to developing our approach into an effective DR detection method.
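ASSR amplitude at the modulation frequency is commonly read out from the spectrum of the averaged EEG epoch. A minimal sketch of that readout (assuming the modulation frequency falls on an FFT bin; this is not the study's actual analysis pipeline):

```python
import numpy as np

def assr_amplitude(eeg, fs, mod_freq):
    """Amplitude of the steady-state response at the modulation frequency,
    read from the single-sided spectrum of the averaged EEG epoch."""
    eeg = np.asarray(eeg, dtype=float)
    spectrum = np.fft.rfft(eeg) / len(eeg)          # normalized FFT
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - mod_freq))   # nearest bin to the AM rate
    return 2.0 * np.abs(spectrum[bin_idx])          # single-sided amplitude
```

For a synthetic 1-second epoch containing a unit-amplitude 40 Hz component sampled at 1 kHz, the readout recovers an amplitude of 1.0.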

Citations: 0
Speech Intelligibility in Reverberation is Reduced During Self-Rotation.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231188619
Ľuboš Hládek, Bernhard U Seeber

Speech intelligibility in cocktail party situations has traditionally been studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated whether people would rotate to improve speech intelligibility, and asked whether knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations (0°, ±90°, or 180° relative to the participant's initial orientation on each trial), while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location, and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.
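The notion of an "acoustically optimal" orientation can be illustrated with a toy better-ear search: for each candidate head orientation, compute each ear's target-to-masker ratio under a simple sinusoidal head-shadow model and keep the orientation that maximizes the better ear. The directivity model here is an assumption for illustration, not the spatial-unmasking model used in the study:

```python
import numpy as np

def best_orientation(target_az, masker_az, orientations):
    """Toy better-ear search over head orientations (degrees).

    Each ear's gain follows a simple sinusoidal head-shadow pattern
    (illustrative assumption): sources toward that ear's side are louder.
    """
    def ear_gain(src_az, head_az, ear_sign):
        # ear_sign: +1 = right ear, -1 = left ear
        rel = np.deg2rad(src_az - head_az)
        return 1.0 + 0.5 * ear_sign * np.sin(rel)   # always in [0.5, 1.5]

    best, best_snr = None, -np.inf
    for head_az in orientations:
        snrs = [ear_gain(target_az, head_az, s) / ear_gain(masker_az, head_az, s)
                for s in (+1, -1)]
        snr = max(snrs)                              # better-ear target-to-masker ratio
        if snr > best_snr:
            best, best_snr = head_az, snr
    return best
```

Under this toy model, with the target at 90° and the masker at 0°, the optimum is an intermediate orientation that turns one ear partway toward the target while shadowing the masker, rather than facing the target head-on, which echoes the idea that listeners' near-optimal orientations need not point straight at the source.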

Citations: 1
Capturing Visual Attention With Perturbed Auditory Spatial Cues.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231182289
Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani

Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues-resulting from cochlear implants (CI) or unilateral hearing loss (uHL)-allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual-attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.
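Attentional capture in such cueing paradigms is typically quantified as the reaction-time cost of invalidly cued trials relative to validly cued ones. A minimal sketch with hypothetical RT data (the actual study's analysis is more elaborate):

```python
import statistics

def validity_effect(rt_valid, rt_invalid):
    """Attentional-capture index: mean reaction-time cost (ms) when the
    sound cue mis-predicts the visual target location.

    Hypothetical RT lists; a positive value indicates capture by the cue.
    """
    return statistics.mean(rt_invalid) - statistics.mean(rt_valid)
```

A near-zero effect for a given group (as reported on average for the unilateral CI users) would indicate that the perturbed spatial cue no longer orients visual attention.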

Citations: 0
Visually biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions.
IF 2.7 · Medicine, CAS Tier 2 · Q1 Health Professions · Pub Date: 2023-01-01 · DOI: 10.1177/23312165221076681
Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace

The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to-date measuring the McGurk effect in this population and the first that tests the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls-a result that was concordant with results from the SIFI where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
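The raw fusion measure underlying the analysis above can be sketched as the proportion of McGurk trials answered with a fused percept ("da" or "tha" for auditory "ba" dubbed onto visual "ga"); the paper's unisensory-based error correction is more involved and is not reproduced here:

```python
def fusion_rate(responses, fused=frozenset({"da", "tha"})):
    """Proportion of McGurk trials reported as a fused percept.

    `responses` is a list of per-trial phoneme reports (hypothetical data);
    the study additionally corrected this for unisensory response errors.
    """
    hits = sum(1 for r in responses if r in fused)
    return hits / len(responses)
```

A participant reporting "da"/"tha" on two of four conflicting trials would score 0.5 on this raw measure.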

Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise.
IF 2.7 CAS Tier 2 (Medicine) Q1 Health Professions Pub Date: 2023-01-01 DOI: 10.1177/23312165231153280
Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt

Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, repeat the sentence-final word after presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No) on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, the lower SNR (-4 dB versus +1 dB) increased the middle and late component values. Increasing memory demands (Recall) progressively increased trial baseline and steepened the decrease of the late component's values. Trial baseline increased most steeply in the +1 dB SNR condition with recall. The findings suggest that adding a recall task to the auditory task alters effort allocation for listening. Listeners dynamically re-allocate effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.
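The abstract does not define how the early, middle, and late components were extracted (component-based pupillometry analyses vary, e.g., PCA or deconvolution approaches). As a rough illustration only, a baseline-corrected pupil trace can be summarized by its mean in fixed post-onset windows; the window edges, names, and function below are assumptions for the sketch, not the study's actual definitions:

```python
import numpy as np

def pupil_components(trace, times, baseline=(-1.0, 0.0), windows=None):
    """Baseline-correct a single-trial pupil trace and average it in three
    illustrative post-onset windows.

    trace : 1-D array of pupil sizes; times : matching array of seconds
    relative to stimulus onset. Window edges are assumed for illustration.
    """
    if windows is None:
        windows = {"early": (0.0, 1.0), "middle": (1.0, 2.0), "late": (2.0, 3.0)}
    trace = np.asarray(trace, dtype=float)
    times = np.asarray(times, dtype=float)
    # Mean pupil size in the pre-stimulus interval serves as the baseline
    base = trace[(times >= baseline[0]) & (times < baseline[1])].mean()
    corrected = trace - base  # pupil dilation relative to baseline
    return {name: corrected[(times >= lo) & (times < hi)].mean()
            for name, (lo, hi) in windows.items()}
```

Tracking the `base` value across the seven-sentence list would correspond to the "trial baseline dynamics" the study analyses alongside the components.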

Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments.
IF 2.7 CAS Tier 2 (Medicine) Q1 Health Professions Pub Date: 2023-01-01 DOI: 10.1177/23312165231212032
Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur

Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution. 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, and whether it operates throughout the life span or emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and urban green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.

Modified T2 Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants.
IF 2.7 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2023-01-01 DOI: 10.1177/23312165231154035
Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson

The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided (i.e., the stimuli are presented through subjects' hearing aid(s)) CAEP detection in infants with hearing loss. Methods include the conventional Hotelling's T2 test, various modified q-sample statistics, and two novel variants of T2 statistics, which were designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, and simulated signals. The highest test sensitivities were observed for the modified T2 statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T2 test, which showed low detection rates for ensemble sizes <80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T2 and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.
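For readers unfamiliar with the conventional Hotelling's T2 test used here as the baseline detector: each epoch is reduced to a feature vector (for example, mean voltages in q consecutive time bins), and the test asks whether the mean vector across epochs differs from zero. A minimal one-sample sketch of that idea, not the authors' implementation (function name and feature choice are assumptions):

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_detect(epochs, alpha=0.05):
    """One-sample Hotelling's T^2 test for evoked-response detection.

    epochs : (n_epochs, n_features) array, one feature vector per epoch
             (e.g., mean voltage in q consecutive post-stimulus time bins).
    Tests H0: the mean feature vector is zero (no evoked response).
    Returns (t2, p_value, detected).
    """
    x = np.asarray(epochs, dtype=float)
    n, q = x.shape
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)  # (q, q) sample covariance across epochs
    # T^2 = n * mean' * inv(cov) * mean, via a linear solve for stability
    t2 = float(n * mean @ np.linalg.solve(cov, mean))
    # Under H0, T^2 scales to an F statistic with (q, n - q) degrees of freedom
    f_stat = (n - q) / (q * (n - 1)) * t2
    p = float(f_dist.sf(f_stat, q, n - q))
    return t2, p, p < alpha
```

The abstract's observation that the conventional test struggles below about 80 epochs reflects the difficulty of estimating the q-by-q covariance from few epochs; the modified variants exploit the correlation structure of the data to mitigate this.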
