
Trends in Hearing: Latest Publications

A Perspective on Auditory Wellness: What It Is, Why It Is Important, and How It Can Be Managed.
IF 2.6 | CAS Zone 2 (Medicine) | Q1 (Audiology & Speech-Language Pathology) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273342
Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez

During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., the merging of consumer-grade hearing aids with consumer-grade earphones, creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to the various direct-to-consumer (DTC) hearing devices available on the market, there are several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all of these can be made easily available to a wide range of people. This perspective provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. A broadly available and accessible set of tools that can be offered on a digital platform to aid adults in the assessment and, as needed, the improvement of auditory wellness is discussed.

Citations: 0
Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech.
IF 2.6 | CAS Zone 2 (Medicine) | Q1 (Audiology & Speech-Language Pathology) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241261490
Saskia Ibelings, Thomas Brand, Esther Ruigendijk, Inga Holube

Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was to develop a novel speech-recognition test that combines concepts from different speech-recognition tests to reduce training effects and to allow for a large set of speech material. Each trial of the new test presents four different words in a meaningful construct with a fixed structure, a so-called phrase. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations and eliminating duplications of (sub-)phrases, a total of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. Synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, speech-recognition scores measured for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNRs) revealed speech-recognition thresholds (SRTs) that varied across phrases by up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The slope of the psychometric function, 15 percentage points per dB, is also comparable and enables efficient use in audiology. In summary, the principle of creating speech material in a modular system has many potential applications.
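The reported SRT and slope can be visualized with a standard logistic psychometric function. The sketch below is illustrative only: the logistic form, and the mapping from the reported midpoint slope (15 percentage points per dB) to a logistic width, are generic assumptions, not the study's actual fitting procedure.

```python
import math

def psychometric(snr_db, srt_db=-9.1, slope_per_db=0.15):
    """Logistic psychometric function for speech recognition.

    srt_db: SNR at 50% recognition (the median SRT reported above).
    slope_per_db: slope at the midpoint in proportion correct per dB;
    for a logistic this implies a width of w = 1 / (4 * slope).
    """
    w = 1.0 / (4.0 * slope_per_db)
    return 1.0 / (1.0 + math.exp(-(snr_db - srt_db) / w))

p_mid = psychometric(-9.1)   # 0.5 at the SRT
p_up = psychometric(-7.1)    # about 0.77 two dB above the SRT
```

With these parameters, recognition rises from 50% at the SRT to roughly 77% just 2 dB above it, which is what makes such a steep function efficient for adaptive testing.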

Citations: 0
In-situ Audiometry Compared to Conventional Audiometry for Hearing Aid Fitting.
IF 2.7 | CAS Zone 2 (Medicine) | Q1 (Health Professions) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241259704
Maaike Van Eeckhoutte, Bettina Skjold Jasper, Erik Finn Kjærbøl, David Harbo Jordell, Torsten Dau

The use of in-situ audiometry for hearing aid fitting is appealing due to its reduced resource and equipment requirements compared to standard approaches employing conventional audiometry alongside real-ear measures. However, its validity has been a subject of debate, as previous studies noted differences between hearing thresholds measured using conventional and in-situ audiometry. The differences were particularly notable for open-fit hearing aids, attributed to low-frequency leakage caused by the vent. Here, in-situ audiometry was investigated for six receiver-in-canal hearing aids from different manufacturers through three experiments. In Experiment I, the hearing aid gain was measured to investigate whether corrections were applied to the prescribed target gain. In Experiment II, the in-situ stimuli were recorded to investigate whether corrections were incorporated directly into the delivered in-situ stimulus. Finally, in Experiment III, hearing thresholds were measured using in-situ and conventional audiometry with real patients wearing open-fit hearing aids. Results indicated that (1) the hearing aid gain remained unaffected whether measured with in-situ or conventional audiometry for all open-fit measurements, (2) the in-situ stimuli were adjusted by up to 30 dB at frequencies below 1000 Hz for all open-fit hearing aids except one, which also recommends the use of closed domes for all in-situ measurements, and (3) the mean interparticipant threshold difference fell within 5 dB for frequencies between 250 and 6000 Hz. The results clearly indicated that modern in-situ thresholds align (within 5 dB) with conventionally measured thresholds, indicating the potential of in-situ audiometry for remote hearing care.
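Experiment III's comparison boils down to paired threshold differences per audiometric frequency. A minimal sketch of that bookkeeping follows; the threshold values are invented for illustration and are not the study's data.

```python
# Hypothetical thresholds in dB HL for one listener; the numbers are
# illustrative only, not the study's data.
freqs_hz = [250, 500, 1000, 2000, 4000, 6000]
conventional = {250: 20, 500: 25, 1000: 30, 2000: 40, 4000: 55, 6000: 60}
in_situ = {250: 24, 500: 27, 1000: 31, 2000: 41, 4000: 54, 6000: 58}

def mean_abs_difference_db(a, b, freqs):
    """Mean absolute threshold difference (dB) across frequencies."""
    return sum(abs(a[f] - b[f]) for f in freqs) / len(freqs)

diff_db = mean_abs_difference_db(conventional, in_situ, freqs_hz)
within_5_db = diff_db <= 5.0     # the criterion discussed above
```

For these made-up values the mean difference is well under the 5 dB criterion, mirroring the kind of agreement the study reports.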

Citations: 0
The Impact of Trained Conditions on the Generalization of Learning Gains Following Voice Discrimination Training.
IF 2.6 | CAS Zone 2 (Medicine) | Q1 (Audiology & Speech-Language Pathology) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241275895
Yael Zaltz

Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into "Trained-in-Quiet" (n = 15), "Trained-in-Noise" (n = 20), and "Control" (n = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing combined fundamental frequency (F0) and formant frequency voice cues. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli, utilizing either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment conducted by all three groups, identical to the first session. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond control group improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the most significant SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, findings suggest training conditions impact generalization by influencing processing levels associated with the trained task. Training in noisy conditions may prompt higher auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.

Citations: 0
Effects of Monaural Temporal Electrode Asynchrony and Channel Interactions in Bilateral and Unilateral Cochlear-Implant Stimulation.
IF 2.6 | CAS Zone 2 (Medicine) | Q1 (Audiology & Speech-Language Pathology) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241271340
Martin J Lindenbeck, Piotr Majdak, Bernhard Laback

Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.
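The ITD cue central to this study can be illustrated by generating two 100-pps pulse trains with a fixed interaural delay and recovering that delay from the peak of their cross-correlation. The sampling rate and the 500 µs ITD below are illustrative choices, not parameters from the study.

```python
import numpy as np

fs = 40_000                      # sampling rate (Hz); illustrative
rate = 100                       # pulses per second, as in the study
itd_s = 500e-6                   # illustrative 500 microsecond ITD
n = fs // 10                     # 100 ms of signal

period = fs // rate              # 400 samples between pulses
shift = round(itd_s * fs)        # ITD expressed in samples

left = np.zeros(n)
left[::period] = 1.0             # pulse train at the left ear
right = np.zeros(n)
right[shift::period] = 1.0       # same train, delayed at the right ear

# The lag of the cross-correlation peak recovers the interaural delay.
xcorr = np.correlate(right, left, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
estimated_itd_s = lag / fs       # recovers 500e-6 here
```

At a low pulse rate such as 100 pps the correlation peak is unambiguous; at high rates the secondary peaks spaced one pulse period apart crowd together, which is one way to picture the rate limitation the abstract invokes.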

Citations: 0
Adaptation to Reverberation for Speech Perception: A Systematic Review.
IF 2.6 | CAS Zone 2 (Medicine) | Q1 (Audiology & Speech-Language Pathology) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273399
Avgeris Tsironis, Eleni Vlahou, Panagiota Kontou, Pantelis Bagos, Norbert Kopčo

In everyday acoustic environments, reverberation alters the speech signal received at the ears. Normal-hearing listeners are robust to these distortions, quickly recalibrating to achieve accurate speech perception. Over the past two decades, multiple studies have investigated the various adaptation mechanisms that listeners use to mitigate the negative impacts of reverberation and improve speech intelligibility. Following the PRISMA guidelines, we performed a systematic review of these studies, with the aim of summarizing existing research, identifying open questions, and proposing future directions. Two researchers independently assessed a total of 661 studies, ultimately including 23 in the review. Our results showed that adaptation to reverberant speech is robust across diverse environments, experimental setups, speech units, and tasks, in noise-masked or unmasked conditions. The time course of adaptation is rapid, sometimes occurring in less than 1 s, but this can vary depending on the reverberation and noise levels of the acoustic environment. Adaptation is stronger in moderately reverberant rooms and minimal in rooms with very intense reverberation. While the mechanisms underlying the recalibration are largely unknown, adaptation to the direct-to-reverberant ratio-related changes in amplitude modulation appears to be the predominant candidate. However, additional factors need to be explored to provide a unified theory for the effect and its applications.
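The direct-to-reverberant ratio (DRR) mentioned above is conventionally computed by splitting a room impulse response into direct-path and reverberant energy. A minimal sketch, where the 2.5 ms direct-path window and the synthetic impulse response are illustrative assumptions:

```python
import math

def drr_db(rir, fs, direct_window_ms=2.5):
    """Direct-to-reverberant ratio (dB) of a room impulse response.

    Energy within direct_window_ms of the start counts as the direct
    path; the remainder counts as the reverberant tail.
    """
    k = int(fs * direct_window_ms / 1000)
    direct = sum(x * x for x in rir[:k])
    reverberant = sum(x * x for x in rir[k:])
    return 10.0 * math.log10(direct / reverberant)

# Toy impulse response: a unit direct spike followed by an
# exponentially decaying tail (values are illustrative).
fs = 16_000
tail = [0.1 * math.exp(-n / (0.05 * fs)) for n in range(fs // 2)]
rir = [1.0] + [0.0] * 39 + tail
drr = drr_db(rir, fs)            # negative here: the tail dominates
```

A more negative DRR corresponds to a more reverberant environment, which is the regime in which the reviewed studies find adaptation weakening.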

Citations: 0
Quantifying the Impact of Auditory Deafferentation on Speech Perception.
IF 2.7 | CAS Zone 2 (Medicine) | Q1 (Health Professions) | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241227818
Jiayue Liu, Joshua Stohl, Enrique A Lopez-Poveda, Tobias Overath

The past decade has seen a wealth of research dedicated to determining which morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet, and how they do so. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
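As loose intuition for why performance in such a model may only degrade at extreme deafferentation, consider a toy signal-averaging argument (this is an illustrative simplification, not the authors' encoding-decoding model): if a decoder effectively averages M surviving fibers that each carry the same signal plus independent noise, the decoded SNR grows with sqrt(M), so even a 90% fiber loss costs only about 5 dB.

```python
import math

def decoded_snr_db(n_fibers, deaff_fraction, fiber_snr_db=0.0):
    """Toy decoded SNR after averaging surviving auditory nerve fibers.

    Assumes every fiber carries the same signal plus independent,
    equal-variance noise, so averaging m fibers improves the amplitude
    SNR by sqrt(m). Illustrative only; not the paper's model.
    """
    m = max(1, round(n_fibers * (1.0 - deaff_fraction)))
    return fiber_snr_db + 10.0 * math.log10(math.sqrt(m))

full = decoded_snr_db(10_000, 0.0)     # all fibers intact
loss90 = decoded_snr_db(10_000, 0.9)   # 90% deafferentation: -5 dB
loss99 = decoded_snr_db(10_000, 0.99)  # 99% deafferentation: -10 dB
```

Under this crude redundancy argument, the decoded SNR falls off very slowly until nearly all fibers are lost, consistent with the abstract's finding that thresholds only worsened for modeled deafferentation above 90%.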

引用次数: 0
Toward Sound Localization Testing in Virtual Reality to Aid in the Screening of Auditory Processing Disorders.
IF 2.7, Medicine (CAS Zone 2), Q1 Health Professions. Pub Date: 2024-01-01. DOI: 10.1177/23312165241235463
Melissa Ramírez, Johannes M Arend, Petra von Gablenz, Heinrich R Liesefeld, Christoph Pörschmann

Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.

Citations: 0
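Localization blur of the kind compared above is commonly summarized as the root-mean-square angular error between response and target azimuths, with errors wrapped to avoid spurious 360° jumps. A minimal sketch with hypothetical pointing data (the study's actual metric and paradigm may differ):

```python
import numpy as np

def angular_error_deg(responses, targets):
    """Signed azimuth error in degrees, wrapped to [-180, 180)."""
    diff = np.asarray(responses, float) - np.asarray(targets, float)
    return (diff + 180.0) % 360.0 - 180.0

def localization_blur(responses, targets):
    """Root-mean-square angular error, one common summary of localization blur."""
    err = angular_error_deg(responses, targets)
    return float(np.sqrt(np.mean(err ** 2)))

# Hypothetical responses (deg azimuth) to three targets, once in the real
# loudspeaker setup and once in the virtual scene (all values illustrative).
targets = [-30.0, 0.0, 30.0]
blur_real = localization_blur([-28.0, 3.0, 33.0], targets)
blur_vr = localization_blur([-22.0, 7.0, 40.0], targets)
print(f"blur real: {blur_real:.1f} deg, VR: {blur_vr:.1f} deg, "
      f"increase: {blur_vr - blur_real:.1f} deg")
```

Comparing the two summaries per participant estimates the virtualization-induced increase in blur, the quantity the study uses to relate VR testing back to the conventional setup.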
Intracochlear Recording of Electrocochleography During and After Cochlear Implant Insertion Dependent on the Location in the Cochlea.
IF 2.7, Medicine (CAS Zone 2), Q1 Health Professions. Pub Date: 2024-01-01. DOI: 10.1177/23312165241248973
Sabine Haumann, Max E Timm, Andreas Büchner, Thomas Lenarz, Rolf B Salcher

To preserve residual hearing during cochlear implant (CI) surgery it is desirable to use intraoperative monitoring of inner ear function (cochlear monitoring). A promising method is electrocochleography (ECochG). Within this project the relations between intracochlear ECochG recordings, position of the recording contact in the cochlea with respect to anatomy and frequency and preservation of residual hearing were investigated. The aim was to better understand the changes in ECochG signals and whether these are due to the electrode position in the cochlea or to trauma generated during insertion. During and after insertion of hearing preservation electrodes, intraoperative ECochG recordings were performed using the CI electrode (MED-EL). During insertion, the recordings were performed at discrete insertion steps on electrode contact 1. After insertion as well as postoperatively the recordings were performed at different electrode contacts. The electrode location in the cochlea during insertion was estimated by mathematical models using preoperative clinical imaging, the postoperative location was measured using postoperative clinical imaging. The recordings were analyzed from six adult CI recipients. In the four patients with good residual hearing in the low frequencies the signal amplitude rose with largest amplitudes being recorded closest to the generators of the stimulation frequency, while in both cases with severe pantonal hearing losses the amplitude initially rose and then dropped. This might be due to various reasons as discussed in the following. Our results indicate that this approach can provide valuable information for the interpretation of intracochlearly recorded ECochG signals.

Citations: 0
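One common way to quantify signal amplitude in such ECochG recordings is to average responses to condensation and rarefaction stimuli separately, take their difference potential (dominated by the cochlear microphonic), and read off the spectral amplitude at the stimulation frequency. A minimal sketch on synthetic data; this is a generic analysis, not the study's recording pipeline, and all signal parameters are illustrative:

```python
import numpy as np

def difference_potential_amplitude(cond, rare, fs, f_stim):
    """Single-sided amplitude at f_stim of the condensation/rarefaction
    difference potential (cochlear-microphonic-dominated)."""
    diff = (np.asarray(cond, float) - np.asarray(rare, float)) / 2.0
    spec = np.fft.rfft(diff)
    freqs = np.fft.rfftfreq(diff.size, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))
    return 2.0 * np.abs(spec[k]) / diff.size

# Synthetic check: a 1 uV, 500 Hz microphonic that inverts with stimulus
# polarity, buried in recording noise.
rng = np.random.default_rng(0)
fs, f_stim, n = 16000, 500.0, 1600  # 100 ms epoch; 500 Hz is bin-centered
t = np.arange(n) / fs
cm = 1.0 * np.sin(2 * np.pi * f_stim * t)
cond = cm + 0.2 * rng.standard_normal(n)
rare = -cm + 0.2 * rng.standard_normal(n)
amp = difference_potential_amplitude(cond, rare, fs, f_stim)
print(f"recovered amplitude: {amp:.2f} uV")
```

Tracking this amplitude per insertion step, as a function of the electrode contact's estimated cochlear location, is what allows rises and drops like those reported above to be interpreted.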
Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field.
IF 2.7, Medicine (CAS Zone 2), Q1 Health Professions. Pub Date: 2024-01-01. DOI: 10.1177/23312165241246596
Florine L Bachmann, Joshua P Kulasingham, Kasper Eskelund, Martin Enqvist, Emina Alickovic, Hamish Innes-Brown

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.

Citations: 0
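The linear TRF estimation described above is, at its core, a regularized deconvolution: the EEG is regressed onto time-lagged copies of a stimulus feature (e.g., the rectified speech waveform or an auditory nerve model output). A minimal ridge-regression sketch on synthetic data; the lag range, regularization, and kernel are illustrative, not the paper's pipeline:

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, lam=1e-3):
    """Ridge-regularized temporal response function for lags 0..n_lags-1 samples."""
    X = np.stack([np.roll(stimulus, k) for k in range(n_lags)], axis=1)
    for k in range(1, n_lags):
        X[:k, k] = 0.0  # zero out samples that wrapped around from the end
    XtX = X.T @ X + lam * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic check: convolve a known kernel with a random regressor, add
# noise, and recover the kernel.
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
true_trf = np.array([0.0, 0.3, 1.0, 0.6, 0.2, 0.0, -0.1, 0.0])
eeg = np.convolve(stim, true_trf)[: stim.size] + 0.1 * rng.standard_normal(stim.size)
est = estimate_trf(stim, eeg, true_trf.size)
r = np.corrcoef(est, true_trf)[0, 1]
print(f"correlation with true TRF: {r:.3f}")
```

In practice the regressor choice matters, which is the point the abstract makes: auditory-nerve-model regressors outperformed simple rectification for recovering subcortical TRFs.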