
Trends in Hearing: Latest Publications

Performance and Reliability Evaluation of an Automated Bone-Conduction Audiometry Using Machine Learning.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241286456
Nicolas Wallaert, Antoine Perry, Hadrien Jean, Gwenaelle Creff, Benoit Godey, Nihaad Paraouty

To date, pure-tone audiometry remains the gold standard for clinical auditory testing. However, pure-tone audiometry is time-consuming and only provides a discrete estimate of hearing acuity. Here, we aim to address these two main drawbacks by developing a machine learning (ML)-based approach for fully automated bone-conduction (BC) audiometry tests with forehead vibrator placement. Study 1 examines the occlusion effects when the headphones are positioned on both ears during BC forehead testing. Study 2 describes the ML-based approach for BC audiometry, with automated contralateral masking rules, compensation for occlusion effects and forehead-mastoid corrections. Next, the performance of ML-audiometry is examined in comparison to manual and conventional BC audiometry with mastoid placement. Finally, Study 3 examines the test-retest reliability of ML-audiometry. Our results show no significant performance difference between automated ML-audiometry and manual conventional audiometry. High test-retest reliability is achieved with the automated ML-audiometry. Together, our findings demonstrate the performance and reliability of the automated ML-based BC audiometry for both normal-hearing and hearing-impaired adult listeners with mild to severe hearing losses.
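
The abstract does not spell out the correction values or the ML model itself, but the post-processing it mentions (forehead-mastoid corrections and occlusion-effect compensation) can be illustrated with a minimal sketch. All dB values and function names below are illustrative assumptions, not the figures used in the study.

```python
# Hypothetical post-processing of bone-conduction (BC) thresholds measured with a
# forehead-placed vibrator. All correction values are illustrative placeholders.

# Assumed forehead-to-mastoid corrections in dB, per audiometric frequency (Hz).
FOREHEAD_MASTOID_CORRECTION_DB = {500: 12.0, 1000: 8.5, 2000: 11.5, 4000: 8.0}

# Assumed occlusion-effect compensation in dB when headphones cover both ears.
OCCLUSION_COMPENSATION_DB = {500: 4.0, 1000: 1.0, 2000: 0.0, 4000: 0.0}


def corrected_bc_threshold(freq_hz: int, raw_forehead_threshold_db: float,
                           ears_occluded: bool) -> float:
    """Convert a raw forehead BC threshold to a mastoid-equivalent threshold."""
    threshold = raw_forehead_threshold_db - FOREHEAD_MASTOID_CORRECTION_DB[freq_hz]
    if ears_occluded:
        # Occluding the ears improves BC thresholds at low frequencies,
        # so an assumed compensation is added back to the measured value.
        threshold += OCCLUSION_COMPENSATION_DB[freq_hz]
    return threshold


if __name__ == "__main__":
    print(corrected_bc_threshold(500, 35.0, ears_occluded=True))  # 27.0 dB (placeholder values)
```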

Citations: 0
Is Recognition of Speech in Noise Related to Memory Disruption Caused by Irrelevant Sound?
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241262517
Daniel Oberfeld, Katharina Staab, Florian Kattner, Wolfgang Ellermeier

Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association to SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.

Citations: 0
Factors Influencing Stream Segregation Based on Interaural Phase Difference Cues.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241293787
Nicholas R Haywood, David McAlpine, Deborah Vickers, Brian Roberts

Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim with headphone-presented pure tones differing in a related form of interaural configuration, interaural phase differences (ΔIPD), and/or in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA- repetitions (A and B = 80-ms tones, "-" = 160-ms silence), and listeners reported whether integration or segregation was heard. Envelope shape was varied but remained constant across all tones within a trial. Envelopes were either quasi-trapezoidal or had a fast attack and slow release (FA-SR) or vice versa (SA-FR). The FA-SR envelope caused more segregation than SA-FR in a task where only ΔIPD cues were present, but not in a corresponding ΔF-only task. In experiment 2, interstimulus interval (ISI) was varied (0-60 ms) between FA-SR tones. ΔF-based segregation decreased with increasing ISI, whereas ΔIPD-based segregation increased. This suggests that binaural temporal integration may limit segregation at short ISIs. In another task, ΔF and ΔIPD cues were presented alone or in combination. Here, ΔIPD-based segregation was greatly reduced, suggesting ΔIPD-based segregation is highly sensitive to experimental context. Experiments 1-2 demonstrate that ΔIPD can promote segregation in optimized stimuli/tasks. Experiment 3 employed a task requiring integration for good performance. Listeners detected a delay on the final four B tones of an 8 × ABA- sequence. Although performance worsened with increasing ΔF, increasing ΔIPD had only a marginal impact. This suggests that, even in stimuli optimized for ΔIPD-based segregation, listeners remained mostly able to disregard ΔIPD when segregation was detrimental to performance.
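
As a rough illustration of the stimulus structure described above, the sketch below builds one "ABA-" triplet (80-ms A and B tones, a 160-ms silent gap) with a fast-attack/slow-release envelope and an interaural phase difference on the B tones. Tone frequencies, ramp durations, the IPD value, and the absence of gaps between tones within a triplet are simplifying assumptions, not the study's exact parameters.

```python
# Minimal numpy sketch of an ABA- stream-segregation stimulus with an FA-SR
# envelope and an IPD on the B tones. Parameter values are illustrative only.
import numpy as np

FS = 44100  # sampling rate in Hz


def tone(freq_hz, dur_s, phase_rad=0.0):
    t = np.arange(int(dur_s * FS)) / FS
    return np.sin(2 * np.pi * freq_hz * t + phase_rad)


def fa_sr_envelope(n_samples, attack_s=0.005, release_s=0.040):
    """Fast-attack / slow-release envelope; swap the arguments for SA-FR."""
    env = np.ones(n_samples)
    a, r = int(attack_s * FS), int(release_s * FS)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return env


def aba_triplet(freq_a=500.0, delta_f_semitones=4.0, ipd_rad=np.pi / 2):
    freq_b = freq_a * 2 ** (delta_f_semitones / 12)
    env = fa_sr_envelope(int(0.080 * FS))
    a_tone = tone(freq_a, 0.080) * env               # A tones: diotic
    b_left = tone(freq_b, 0.080) * env               # B tone, left ear
    b_right = tone(freq_b, 0.080, ipd_rad) * env     # B tone, right ear: IPD applied
    gap = np.zeros(int(0.160 * FS))                  # the "-" silence
    left = np.concatenate([a_tone, b_left, a_tone, gap])
    right = np.concatenate([a_tone, b_right, a_tone, gap])
    return np.stack([left, right], axis=1)           # stereo (samples, 2)


sequence = np.concatenate([aba_triplet() for _ in range(5)])  # 5 x ABA- repetitions
```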

{"title":"Factors Influencing Stream Segregation Based on Interaural Phase Difference Cues.","authors":"Nicholas R Haywood, David McAlpine, Deborah Vickers, Brian Roberts","doi":"10.1177/23312165241293787","DOIUrl":"10.1177/23312165241293787","url":null,"abstract":"<p><p>Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim with headphone-presented pure tones differing in a related form of interaural configuration-interaural phase differences (ΔIPD)-or/and in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA- repetitions (A and B = 80-ms tones, \"-\" = 160-ms silence), and listeners reported whether integration or segregation was heard. Envelope shape was varied but remained constant across all tones within a trial. Envelopes were either quasi-trapezoidal or had a fast attack and slow release (FA-SR) or vice versa (SA-FR). The FA-SR envelope caused more segregation than SA-FR in a task where only ΔIPD cues were present, but not in a corresponding ΔF-only task. In experiment 2, interstimulus interval (ISI) was varied (0-60 ms) between FA-SR tones. ΔF-based segregation decreased with increasing ISI, whereas ΔIPD-based segregation increased. This suggests that binaural temporal integration may limit segregation at short ISIs. In another task, ΔF and ΔIPD cues were presented alone or in combination. Here, ΔIPD-based segregation was greatly reduced, suggesting ΔIPD-based segregation is highly sensitive to experimental context. Experiments 1-2 demonstrate that ΔIPD can promote segregation in optimized stimuli/tasks. Experiment 3 employed a task requiring integration for good performance. Listeners detected a delay on the final four B tones of an 8 × ABA- sequence. Although performance worsened with increasing ΔF, increasing ΔIPD had only a marginal impact. This suggests that, even in stimuli optimized for ΔIPD-based segregation, listeners remained mostly able to disregard ΔIPD when segregation was detrimental to performance.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241293787"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629429/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Phoneme-Scale Assessment of Multichannel Speech Enhancement Algorithms.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241292205
Nasser-Eddine Monir, Paul Magron, Romain Serizel

In the intricate acoustic landscapes where speech intelligibility is challenged by noise and reverberation, multichannel speech enhancement emerges as a promising solution for individuals with hearing loss. Such algorithms are commonly evaluated at the utterance scale. However, this approach overlooks the granular acoustic nuances revealed by phoneme-specific analysis, potentially obscuring key insights into their performance. This paper presents an in-depth phoneme-scale evaluation of three state-of-the-art multichannel speech enhancement algorithms. These algorithms (filter-and-sum network, minimum variance distortionless response, and Tango) are here extensively evaluated across different noise conditions and spatial setups, employing realistic acoustic simulations with measured room impulse responses, and leveraging diversity offered by multiple microphones in a binaural hearing setup. The study emphasizes the fine-grained phoneme-scale analysis, revealing that while some phonemes like plosives are heavily impacted by environmental acoustics and challenging to deal with by the algorithms, others like nasals and sibilants see substantial improvements after enhancement. These investigations demonstrate important improvements in phoneme clarity in noisy conditions, with insights that could drive the development of more personalized and phoneme-aware hearing aid technologies. Additionally, while this study provides extensive data on the physical metrics of processed speech, these physical metrics do not necessarily imitate human perceptions of speech, and the impact of the findings presented would have to be investigated through listening tests.
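
A phoneme-scale evaluation of this kind can be sketched as slicing time-aligned reference and enhanced signals at phoneme boundaries and aggregating an objective metric per phoneme class. The alignment format and the choice of SI-SDR as the metric below are assumptions made for illustration, not the exact criteria used in the paper.

```python
# Hedged sketch: score an enhanced signal per phoneme class given time alignments.
from collections import defaultdict
import numpy as np

FS = 16000  # assumed sampling rate in Hz


def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB (a common objective metric)."""
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10((np.sum(target**2) + eps) / (np.sum(noise**2) + eps))


def phoneme_scores(reference, enhanced, alignment):
    """alignment: list of (phoneme_label, start_s, end_s) tuples, e.g. from forced alignment."""
    per_class = defaultdict(list)
    for label, start_s, end_s in alignment:
        a, b = int(start_s * FS), int(end_s * FS)
        per_class[label].append(si_sdr(reference[a:b], enhanced[a:b]))
    # Average the metric within each phoneme class (e.g., plosives vs. nasals).
    return {label: float(np.mean(vals)) for label, vals in per_class.items()}
```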

{"title":"A Phoneme-Scale Assessment of Multichannel Speech Enhancement Algorithms.","authors":"Nasser-Eddine Monir, Paul Magron, Romain Serizel","doi":"10.1177/23312165241292205","DOIUrl":"10.1177/23312165241292205","url":null,"abstract":"<p><p>In the intricate acoustic landscapes where speech intelligibility is challenged by noise and reverberation, multichannel speech enhancement emerges as a promising solution for individuals with hearing loss. Such algorithms are commonly evaluated at the utterance scale. However, this approach overlooks the granular acoustic nuances revealed by phoneme-specific analysis, potentially obscuring key insights into their performance. This paper presents an in-depth phoneme-scale evaluation of three state-of-the-art multichannel speech enhancement algorithms. These algorithms-filter-and-sum network, minimum variance distortionless response, and Tango-are here extensively evaluated across different noise conditions and spatial setups, employing realistic acoustic simulations with measured room impulse responses, and leveraging diversity offered by multiple microphones in a binaural hearing setup. The study emphasizes the fine-grained phoneme-scale analysis, revealing that while some phonemes like plosives are heavily impacted by environmental acoustics and challenging to deal with by the algorithms, others like nasals and sibilants see substantial improvements after enhancement. These investigations demonstrate important improvements in phoneme clarity in noisy conditions, with insights that could drive the development of more personalized and phoneme-aware hearing aid technologies. Additionally, while this study provides extensive data on the physical metrics of processed speech, these physical metrics do not necessarily imitate human perceptions of speech, and the impact of the findings presented would have to be investigated through listening tests.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241292205"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638999/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ADT Network: A Novel Nonlinear Method for Decoding Speech Envelopes From EEG Signals.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241282872
Ruixiang Liu, Chang Liu, Dan Cui, Huan Zhang, Xinmeng Xu, Yuxin Duan, Yihu Chao, Xianzheng Sha, Limin Sun, Xiulan Ma, Shuo Li, Shijie Chang

Decoding speech envelopes from electroencephalogram (EEG) signals holds potential as a research tool for objectively assessing auditory processing, which could contribute to future developments in hearing loss diagnosis. However, current methods struggle to meet both high accuracy and interpretability. We propose a deep learning model called the auditory decoding transformer (ADT) network for speech envelope reconstruction from EEG signals to address these issues. The ADT network uses spatio-temporal convolution for feature extraction, followed by a transformer decoder to decode the speech envelopes. Through anticausal masking, the ADT considers only the current and future EEG features to match the natural relationship of speech and EEG. Performance evaluation shows that the ADT network achieves average reconstruction scores of 0.168 and 0.167 on the SparrKULee and DTU datasets, respectively, rivaling those of other nonlinear models. Furthermore, by visualizing the weights of the spatio-temporal convolution layer as time-domain filters and brain topographies, combined with an ablation study of the temporal convolution kernels, we analyze the behavioral patterns of the ADT network in decoding speech envelopes. The results indicate that low- (0.5-8 Hz) and high-frequency (14-32 Hz) EEG signals are more critical for envelope reconstruction and that the active brain regions are primarily distributed bilaterally in the auditory cortex, consistent with previous research. Visualization of attention scores further validated previous research. In summary, the ADT network balances high performance and interpretability, making it a promising tool for studying neural speech envelope tracking.
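
The anticausal masking idea (attending only to the current and future EEG frames, the mirror image of a causal mask) can be shown with a toy scaled dot-product attention in plain numpy. Array sizes and the single-head formulation are illustrative assumptions; this is a sketch of the masking concept, not the ADT network itself.

```python
# Toy anticausal attention: output step i may attend to frames j >= i only.
import numpy as np


def anticausal_mask(n_steps):
    """Boolean mask; True where attention from step i to step j is allowed (j >= i)."""
    return np.triu(np.ones((n_steps, n_steps), dtype=bool))


def masked_attention(queries, keys, values, mask):
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)          # block past frames
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over allowed frames
    return weights @ values


T, D = 6, 8                                           # toy sequence length and feature size
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((T, D)) for _ in range(3))
out = masked_attention(q, k, v, anticausal_mask(T))   # shape (T, D)
```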

Citations: 0
Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241261490
Saskia Ibelings, Thomas Brand, Esther Ruigendijk, Inga Holube

Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was the development of a novel speech-recognition test that combines concepts of different speech-recognition tests to reduce training effects and allows for a large set of speech material. The new test consists of four different words per trial in a meaningful construct with a fixed structure, the so-called phrases. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations, and eliminating duplications of (sub-)phrases, a total number of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, measured speech-recognition scores for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) for each phrase varying up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. Summarizing, the principle of creating speech material in a modular system has many potential applications.
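
The reported median SRT of -9.1 dB SNR and the slope of 15 percentage points per dB imply a psychometric function that can be sketched as a logistic curve with the SRT at its 50% point; the zero guess and lapse rates below are simplifying assumptions.

```python
# Logistic psychometric function consistent with the reported SRT and slope.
import numpy as np


def psychometric(snr_db, srt_db=-9.1, slope_at_srt=0.15):
    """Word-recognition probability vs. SNR; slope_at_srt is the derivative at the SRT (1/dB)."""
    # For a logistic 1 / (1 + exp(-k * (x - x0))), the slope at x0 is k / 4, so k = 4 * slope.
    k = 4.0 * slope_at_srt
    return 1.0 / (1.0 + np.exp(-k * (snr_db - srt_db)))


snrs = np.array([-15.0, -12.0, -9.1, -6.0, -3.0])
print(np.round(psychometric(snrs), 3))   # about 0.5 recognition at the median SRT
```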

Citations: 0
Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241260621
Mira Van Wilderode, Nathan Van Humbeeck, Ralf Krampe, Astrid van Wieringen

While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise, or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task) and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual-task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that even when cognitive control demands for listening are minimal, interaction with postural control occurs. Correlational analysis revealed that hearing loss was a better predictor than age of speech identification and postural control.

Citations: 0
A Perspective on Auditory Wellness: What It Is, Why It Is Important, and How It Can Be Managed.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273342
Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez

During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., merger of consumer grade hearing aids with consumer grade earphones creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to various direct-to-consumer (DTC) hearing devices available on the market, there are also several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all can be made easily available to a wide range of people. This perspective provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. A broadly available and accessible set of tools that can be made available on a digital platform to aid adults in the assessment and as needed, the improvement, of auditory wellness is discussed.

Citations: 0
Development and Evaluation of a Loudness Validation Method With Natural Signals for Hearing Aid Fitting.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241299778
Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting

Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for this. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward. Various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and the introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. It is a measurement method specifically designed for the situation typically encountered by hearing care professionals, namely, with hearing-impaired individuals in the free field with their hearing aids in place. In combination with the qualitative user feedback that the measurement is fast and that its results are intuitively displayed and easily interpretable, the method fills a gap between existing tools and is well suited to provide concrete guidance and orientation to the hearing care professional in the process of individual gain adjustment.

{"title":"Development and Evaluation of a Loudness Validation Method With Natural Signals for Hearing Aid Fitting.","authors":"Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting","doi":"10.1177/23312165241299778","DOIUrl":"https://doi.org/10.1177/23312165241299778","url":null,"abstract":"<p><p>Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for this. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward. Various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and the introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. It is a measurement method specifically designed for the situation typically encountered by hearing care professionals, namely, with hearing-impaired individuals in the free field with their hearing aids in place. In combination with the qualitative user feedback that the measurement is fast and that its results are intuitively displayed and easily interpretable, the method fills a gap between existing tools and is well suited to provide concrete guidance and orientation to the hearing care professional in the process of individual gain adjustment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241299778"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of Monaural Temporal Electrode Asynchrony and Channel Interactions in Bilateral and Unilateral Cochlear-Implant Stimulation.
IF 2.6 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241271340
Martin J Lindenbeck, Piotr Majdak, Bernhard Laback

Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.
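
The pulse-timing relations described above (100 pulses per second per electrode, a monaural temporal electrode asynchrony applied identically in both ears, and an ITD shifting the whole right-ear pattern) can be sketched as follows; the specific mTEA and ITD values are assumptions for illustration.

```python
# Illustrative pulse timing for dual-electrode, bilateral CI stimulation with an mTEA and an ITD.
import numpy as np

RATE_PPS = 100.0          # pulses per second, per electrode (as in the study)
N_PULSES = 100            # 1 s of stimulation (assumed duration)


def pulse_times(onset_s=0.0, rate_pps=RATE_PPS, n_pulses=N_PULSES):
    return onset_s + np.arange(n_pulses) / rate_pps


def dual_electrode_train(mtea_s, itd_s):
    """Return pulse times (s) for electrodes 1 and 2 in the left and right ear."""
    left = {"el1": pulse_times(0.0), "el2": pulse_times(mtea_s)}      # same mTEA in each ear
    right = {el: times + itd_s for el, times in left.items()}         # whole pattern shifted by the ITD
    return left, right


left, right = dual_electrode_train(mtea_s=0.002, itd_s=400e-6)  # 2-ms mTEA, 400-µs ITD (assumed values)
```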

Citations: 0