
Trends in Hearing: Latest Publications

Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.
IF 2.6 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241306091
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister

Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
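
The temporal pupil analysis summarized above compares dilation between TMR conditions within an analysis window. The sketch below shows that kind of within-subject comparison on simulated, baseline-corrected pupil traces; it uses a conventional paired t-test in place of the Bayesian paired-samples test reported in the abstract, and the sampling rate, window length, and variable names are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fs = 60                              # assumed eye-tracker sampling rate (Hz)
n_subj, n_samples = 12, 4 * fs       # 12 listeners, 4-s analysis window

# Simulated baseline-corrected pupil traces (arbitrary units) for two TMRs;
# the lower TMR is given a slightly larger dilation (more effort) on average.
pupil_low_tmr = 0.30 + 0.05 * rng.standard_normal((n_subj, n_samples))
pupil_high_tmr = 0.25 + 0.05 * rng.standard_normal((n_subj, n_samples))

# Average dilation per listener within the window, then compare conditions.
mean_low = pupil_low_tmr.mean(axis=1)
mean_high = pupil_high_tmr.mean(axis=1)
t_stat, p_val = stats.ttest_rel(mean_low, mean_high)
print(f"paired t({n_subj - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```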

Citations: 0
Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.
IF 2.6 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241309589
Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung

This nationwide retrospective cohort study examines the association between adults with hearing loss (HL) and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group compared to the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
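
Adjusted hazard ratios and Kaplan-Meier comparisons of the kind reported here are typically obtained from a Cox proportional-hazards model. The sketch below fits such a model to simulated cohort data with the lifelines package; the column names, covariates, censoring rule, and effect sizes are assumptions for illustration, not the study's dataset or code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000

# Simulated cohort: hearing-loss indicator plus two adjustment covariates.
hearing_loss = rng.integers(0, 2, n)
age = rng.normal(65, 10, n)
comorbidity = rng.integers(0, 2, n)

# Time to injury, shorter on average for the simulated hearing-loss group,
# administratively censored at 18 years of follow-up.
raw_time = rng.exponential(12, n) / np.exp(0.8 * hearing_loss + 0.02 * (age - 65))
injury = (raw_time <= 18).astype(int)
duration = np.minimum(raw_time, 18)

df = pd.DataFrame({"duration": duration, "injury": injury,
                   "hearing_loss": hearing_loss, "age": age,
                   "comorbidity": comorbidity})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="injury")
print(cph.hazard_ratios_)   # adjusted HRs for hearing_loss, age, comorbidity
```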

Citations: 0
Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations.
IF 2.6 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati

When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed for speech recognition tasks on neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
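
The analysis named above, linear regression with robust standard errors relating speech-recognition scores to neurocognitive predictors, can be sketched with statsmodels as follows. The predictor names, simulated scores, and the HC3 covariance choice are illustrative assumptions, not the authors' actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 97   # total number of listeners in the study

# Simulated neurocognitive predictors and a sentence-recognition score.
df = pd.DataFrame({
    "nonverbal_reasoning": rng.normal(0, 1, n),
    "lexical_access_speed": rng.normal(0, 1, n),
    "working_memory": rng.normal(0, 1, n),
})
df["sentence_score"] = (60 + 5 * df["nonverbal_reasoning"]
                        + 3 * df["lexical_access_speed"]
                        + rng.normal(0, 8, n))

X = sm.add_constant(df[["nonverbal_reasoning",
                        "lexical_access_speed", "working_memory"]])
model = sm.OLS(df["sentence_score"], X).fit(cov_type="HC3")  # robust SEs
print(model.summary())
```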

Citations: 0
Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids.
IF 2.6 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241309301
Huiyong Zhang, Brian C J Moore, Feng Jiang, Mingfang Diao, Fei Ji, Xiaodong Li, Chengshi Zheng

Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction reduces ambient noise with the goal of improving intelligibility and listening comfort and reducing effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may lead to distortion of the amplitude modulation patterns of both the speech and the noise. This paper describes a deep learning method, called Neural-WDRC, for implementing both noise reduction and WDRC, employing a two-stage low-complexity network. The network initially estimates the noise alone and the speech alone. Fast-acting compression is applied to the estimated speech and slow-acting compression to the estimated noise, but with a controllable residual noise level to help the user to perceive natural environmental sounds. Neural-WDRC is frame-based, and the output of the current frame is determined only by the current and preceding frames. Neural-WDRC was compared with conventional slow- and fast-acting compression and with signal-to-noise ratio (SNR)-aware compression using objective measures and listening tests based on normal-hearing participants listening to signals processed to simulate the effects of hearing loss and hearing-impaired participants. The objective measures demonstrated that Neural-WDRC effectively reduced negative interactions of speech and noise in highly non-stationary noise scenarios. The listening tests showed that Neural-WDRC was preferred over the other compression methods for speech in non-stationary noises.
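
As background for the compression half of the method, the sketch below implements a plain frame-based wide dynamic range compressor: a smoothed frame level is estimated with separate attack and release time constants, and gain is reduced above a compression threshold. The threshold, ratio, and time constants are arbitrary illustrative values, and this is a generic WDRC, not the Neural-WDRC network described in the paper.

```python
import numpy as np

def wdrc(x, fs, frame_len=64, threshold_db=50.0, ratio=3.0,
         attack_ms=5.0, release_ms=100.0, ref_db=100.0):
    """Frame-based wide dynamic range compression (illustrative values only)."""
    alpha_a = np.exp(-frame_len / (fs * attack_ms / 1000.0))
    alpha_r = np.exp(-frame_len / (fs * release_ms / 1000.0))
    level_db = -np.inf
    out = np.copy(x)
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        rms_db = ref_db + 10 * np.log10(np.mean(frame ** 2) + 1e-12)
        if np.isinf(level_db):
            level_db = rms_db                        # initialize on first frame
        else:
            alpha = alpha_a if rms_db > level_db else alpha_r  # attack/release
            level_db = alpha * level_db + (1 - alpha) * rms_db
        # Above the threshold, output level grows at 1/ratio of the input level.
        excess = max(level_db - threshold_db, 0.0)
        gain_db = -excess * (1.0 - 1.0 / ratio)
        out[start:start + frame_len] = frame * 10 ** (gain_db / 20)
    return out

fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t) * np.linspace(0.01, 1.0, fs)  # rising level
compressed = wdrc(sig, fs)
```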

Citations: 0
Measuring Speech Discrimination Ability in Sleeping Infants Using fNIRS-A Proof of Principle.
IF 2.6 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241311721
Onn Wah Lee, Demi Gao, Tommy Peng, Julia Wunderlich, Darren Mao, Gautam Balasubramanian, Colette M McKay

This study used functional near-infrared spectroscopy (fNIRS) to measure aspects of the speech discrimination ability of sleeping infants. We examined the morphology of the fNIRS response to three different speech contrasts, namely "Tea/Ba," "Bee/Ba," and "Ga/Ba." Sixteen infants aged between 3 and 13 months old were included in this study and their fNIRS data were recorded during natural sleep. The stimuli were presented using a nonsilence baseline paradigm, where repeated standard stimuli were presented between the novel stimuli blocks without any silence periods. The morphology of fNIRS responses varied between speech contrasts. The data were fit with a model in which the responses were the sum of two independent and concurrent response mechanisms that were derived from previously published fNIRS detection responses. These independent components were an oxyhemoglobin (HbO)-positive early-latency response and an HbO-negative late latency response, hypothesized to be related to an auditory canonical response and a brain arousal response, respectively. The goodness of fit of the model with the data was high with median goodness of fit of 81%. The data showed that both response components had later latency when the left ear was the test ear (p < .05) compared to the right ear and that the negative component, due to brain arousal, was smallest for the most subtle contrast, "Ga/Ba" (p = .003).
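
The response model described above treats each measured fNIRS trace as a weighted sum of two fixed component waveforms, with goodness of fit quantifying how well that sum explains the data. A minimal sketch of such a fit is shown below using ordinary least squares on simulated data; the component shapes, epoch length, and noise level are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 30, 300)                 # 30-s epoch, arbitrary sampling

# Hypothetical component templates: an HbO-positive early-latency response
# and an HbO-negative late-latency (arousal-related) response.
early = np.exp(-0.5 * ((t - 8) / 3) ** 2)
late = -np.exp(-0.5 * ((t - 18) / 5) ** 2)

# Simulated measured response = weighted sum of the components + noise.
measured = 1.2 * early + 0.7 * late + 0.1 * rng.standard_normal(t.size)

# Least-squares estimate of the two component weights.
design = np.column_stack([early, late])
weights, *_ = np.linalg.lstsq(design, measured, rcond=None)
fitted = design @ weights

# Goodness of fit as the proportion of variance explained by the model.
gof = 1 - np.sum((measured - fitted) ** 2) / np.sum((measured - measured.mean()) ** 2)
print(f"weights = {weights.round(2)}, goodness of fit = {gof:.0%}")
```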

Citations: 0
Adaptation to Noise in Spectrotemporal Modulation Detection and Word Recognition
IF 2.7 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-09-14 | DOI: 10.1177/23312165241266322
David López-Ramos, Miriam I. Marrufo-Pérez, Almudena Eustaquio-Martín, Luis E. López-Bascuas, Enrique A. Lopez-Poveda
Noise adaptation is the improvement in auditory function as the signal of interest is delayed in the noise. Here, we investigated if noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, which resulted in a downward-moving ripple. A control experiment was performed to determine if the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was found to be statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB, upward ripple: −0.4 dB). Findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
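
The signal in the modulation-detection tasks was a moving ripple: noise whose spectral envelope is sinusoidal on a log-frequency axis (2 cycles/octave) and drifts over time (10 Hz). One common way to synthesize such a stimulus is to sum many log-spaced tones with a drifting sinusoidal envelope, as in the sketch below; the carrier density, frequency range, and parameter names are assumptions, not the study's stimulus code.

```python
import numpy as np

def moving_ripple(dur=0.2, fs=44100, f_lo=250.0, f_hi=8000.0, n_tones=400,
                  spec_mod=2.0, temp_mod=10.0, depth=1.0, downward=True):
    """Spectrotemporally modulated ripple noise (illustrative).

    spec_mod is in cycles/octave, temp_mod in Hz. With this phase convention,
    a positive temporal term makes the spectral peaks drift toward lower
    frequencies over time, i.e., a downward-moving ripple."""
    rng = np.random.default_rng(4)
    t = np.arange(int(dur * fs)) / fs
    octaves = np.linspace(0.0, np.log2(f_hi / f_lo), n_tones)  # tone positions
    freqs = f_lo * 2.0 ** octaves
    phases = rng.uniform(0, 2 * np.pi, n_tones)                # random carriers
    velocity = temp_mod if downward else -temp_mod
    sig = np.zeros_like(t)
    for f, x, ph in zip(freqs, octaves, phases):
        env = 1.0 + depth * np.sin(2 * np.pi * (spec_mod * x + velocity * t))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))

stim = moving_ripple()   # 200-ms downward-moving ripple, 2 cyc/oct, 10 Hz
```
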
Citations: 0
On the Feasibility of Using Behavioral Listening Effort Test Methods to Evaluate Auditory Performance in Cochlear Implant Users
IF 2.7 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-04-27 | DOI: 10.1177/23312165241240572
Maartje M. E. Hendrikse, Gertjan Dingemanse, André Goedegebure
Realistic outcome measures that reflect everyday hearing challenges are needed to assess hearing aid and cochlear implant (CI) fitting. Literature suggests that listening effort measures may be more sensitive to differences between hearing-device settings than established speech intelligibility measures when speech intelligibility is near maximum. Which method provides the most effective measurement of listening effort for this purpose is currently unclear. This study aimed to investigate the feasibility of two tests for measuring changes in listening effort in CI users due to signal-to-noise ratio (SNR) differences, as would arise from different hearing-device settings. By comparing the effect size of SNR differences on listening effort measures with test–retest differences, the study evaluated the suitability of these tests for clinical use. Nineteen CI users underwent two listening effort tests at two SNRs (+4 and +8 dB relative to individuals’ 50% speech perception threshold). We employed dual-task paradigms—a sentence-final word identification and recall test (SWIRT) and a sentence verification test (SVT)—to assess listening effort at these two SNRs. Our results show a significant difference in listening effort between the SNRs for both test methods, although the effect size was comparable to the test–retest difference, and the sensitivity was not superior to speech intelligibility measures. Thus, the implementations of SVT and SWIRT used in this study are not suitable for clinical use to measure listening effort differences of this magnitude in individual CI users. However, they can be used in research involving CI users to analyze group data.
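
The key comparison above, between the size of the SNR effect on a listening-effort measure and that measure's test-retest difference, can be illustrated as follows with simulated dual-task scores. The score scale, simulated effects, and the smallest-detectable-change criterion are assumptions chosen for illustration, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 19   # number of CI users in the study

# Simulated secondary-task scores (proportion correct) at the two SNRs,
# plus a repeat of the +4 dB condition standing in for a retest session.
snr4 = rng.normal(0.60, 0.10, n)
snr8 = snr4 + rng.normal(0.05, 0.08, n)          # slightly better at +8 dB
snr4_retest = snr4 + rng.normal(0.00, 0.08, n)   # no true change on retest

# Size of the SNR effect (mean paired difference and Cohen's d for pairs).
snr_effect = snr8 - snr4
d_snr = snr_effect.mean() / snr_effect.std(ddof=1)

# Smallest detectable change derived from test-retest variability
# (SDC95 = 1.96 * SD of the retest differences).
retest_diff = snr4_retest - snr4
sdc95 = 1.96 * retest_diff.std(ddof=1)

print(f"mean SNR effect = {snr_effect.mean():.3f} (d = {d_snr:.2f}); "
      f"smallest detectable change = {sdc95:.3f}")
```
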
Citations: 0
Focusing on Positive Listening Experiences Improves Speech Intelligibility in Experienced Hearing Aid Users
IF 2.7 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-04-24 | DOI: 10.1177/23312165241246616
Dina Lelic, Line Louise Aaberg Nielsen, Anja Kofoed Pedersen, Tobias Neher
Negativity bias is a cognitive bias that results in negative events being perceptually more salient than positive ones. For hearing care, this means that hearing aid benefits can potentially be overshadowed by adverse experiences. Research has shown that sustaining focus on positive experiences has the potential to mitigate negativity bias. The purpose of the current study was to investigate whether a positive focus (PF) intervention can improve speech-in-noise abilities for experienced hearing aid users. Thirty participants were randomly allocated to a control or PF group (N = 2 × 15). Prior to hearing aid fitting, all participants filled out the short form of the Speech, Spatial and Qualities of Hearing scale (SSQ12) based on their own hearing aids. At the first visit, they were fitted with study hearing aids, and speech-in-noise testing was performed. Both groups then wore the study hearing aids for two weeks and sent daily text messages reporting hours of hearing aid use to an experimenter. In addition, the PF group was instructed to focus on positive listening experiences and to also report them in the daily text messages. After the 2-week trial, all participants filled out the SSQ12 questionnaire based on the study hearing aids and completed the speech-in-noise testing again. Speech-in-noise performance and SSQ12 Qualities score were improved for the PF group but not for the control group. This finding indicates that the PF intervention can improve subjective and objective hearing aid benefits.
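
The design implied above, two groups measured before and after a two-week trial, is commonly analyzed as a group-by-time mixed ANOVA. A minimal sketch with simulated speech reception thresholds is given below using the pingouin package; the column names, simulated benefit, and the choice of mixed ANOVA are assumptions for illustration, not the authors' reported analysis.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(6)
n_per_group = 15

rows = []
for group in ("control", "positive_focus"):
    for subj in range(n_per_group):
        pre = rng.normal(-4.0, 1.0)                       # SRT in dB SNR
        gain = 0.8 if group == "positive_focus" else 0.1  # simulated benefit
        post = pre - gain + rng.normal(0, 0.5)
        sid = f"{group}_{subj}"
        rows.append({"subject": sid, "group": group, "time": "pre", "srt": pre})
        rows.append({"subject": sid, "group": group, "time": "post", "srt": post})

df = pd.DataFrame(rows)
aov = pg.mixed_anova(data=df, dv="srt", within="time",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])   # group, time, and interaction effects
```
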
Citations: 0
(Why) Do Transparent Hearing Devices Impair Speech Perception in Collocated Noise?
IF 2.7 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-04-17 | DOI: 10.1177/23312165241246597
Florian Denk, Luca Wiederschein, Markus Kemper, Hendrik Husstedt
Hearing aids and other hearing devices should provide the user with a benefit, for example, compensate for effects of a hearing loss or cancel undesired sounds. However, wearing hearing devices can also have negative effects on perception, previously demonstrated mostly for spatial hearing, sound quality and the perception of the own voice. When hearing devices are set to transparency, that is, provide no gain and resemble open-ear listening as well as possible, these side effects can be studied in isolation. In the present work, we conducted a series of experiments that are concerned with the effect of transparent hearing devices on speech perception in a collocated speech-in-noise task. In such a situation, listening through a hearing device is not expected to have any negative effect, since both speech and noise undergo identical processing, such that the signal-to-noise ratio at ear is not altered and spatial effects are irrelevant. However, we found a consistent hearing device disadvantage for speech intelligibility and similar trends for rated listening effort. Several hypotheses for the possible origin for this disadvantage were tested by including several different devices, gain settings and stimulus levels. While effects of self-noise and nonlinear distortions were ruled out, the exact reason for a hearing device disadvantage on speech perception is still unclear. However, a significant relation to auditory model predictions demonstrate that the speech intelligibility disadvantage is related to sound quality, and is most probably caused by insufficient equalization, artifacts of frequency-dependent signal processing and processing delays.
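
The premise stated above, that identical processing of a collocated speech-plus-noise mixture leaves the signal-to-noise ratio at the ear unchanged, can be checked numerically: apply the same gain and delay to both components and compare SNR before and after. The sketch below does this with synthetic signals; it illustrates the premise only and does not model the devices or measurements used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 16000
t = np.arange(fs) / fs

# Collocated target and masker before any device processing.
speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
noise = rng.standard_normal(fs)

def snr_db(target, masker):
    return 10 * np.log10(np.sum(target ** 2) / np.sum(masker ** 2))

# A "transparent" device modeled as a common gain plus a common delay;
# both components of the mixture pass through identical processing.
gain, delay = 0.7, 80   # 80 samples = 5 ms at 16 kHz

def process(x):
    return gain * np.concatenate([np.zeros(delay), x])

print(f"SNR at the input : {snr_db(speech, noise):6.2f} dB")
print(f"SNR at the output: {snr_db(process(speech), process(noise)):6.2f} dB")
```
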
Citations: 0
Remixing Preferences for Western Instrumental Classical Music of Bilateral Cochlear Implant Users
IF 2.7 | Medicine (CAS Tier 2) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-04-13 | DOI: 10.1177/23312165241245219
Jonas Althoff, Tom Gajecki, Waldo Nogueira
For people with profound hearing loss, a cochlear implant (CI) is able to provide access to sounds that support speech perception. With current technology, most CI users obtain very good speech understanding in quiet listening environments. However, many CI users still struggle when listening to music. Efforts have been made to preprocess music for CI users and improve their music enjoyment. This work investigates potential modifications of instrumental music to make it more accessible for CI users. For this purpose, we used two datasets with varying complexity and containing individual tracks of instrumental music. The first dataset contained trios and it was newly created and synthesized for this study. The second dataset contained orchestral music with a large number of instruments. Bilateral CI users and normal hearing listeners were asked to remix the multitracks grouped into melody, bass, accompaniment, and percussion. Remixes could be performed in the amplitude, spatial, and spectral domains. Results showed that CI users preferred tracks being panned toward the right side, especially the percussion component. When CI users were grouped into frequent or occasional music listeners, significant differences in remixing preferences in all domains were observed.
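
Remixing in the amplitude and spatial domains, as the listeners did here, amounts to applying a gain to each instrument group and panning it between the left and right channels. The sketch below builds such a stereo remix from synthetic placeholder tracks using a constant-power panning law; the group gains and pan positions (including percussion panned toward the right) are illustrative assumptions, not the participants' settings.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, dur = 44100, 2.0
n = int(fs * dur)

# Synthetic placeholder tracks for the four groups used in the study.
tracks = {
    "melody": np.sin(2 * np.pi * 440 * np.arange(n) / fs),
    "bass": np.sin(2 * np.pi * 110 * np.arange(n) / fs),
    "accompaniment": np.sin(2 * np.pi * 330 * np.arange(n) / fs),
    "percussion": rng.standard_normal(n) * (np.arange(n) % (fs // 2) < 2000),
}

# Per-group gain (amplitude domain) and pan in [-1, 1] (spatial domain),
# with percussion panned toward the right channel for illustration.
mix_settings = {"melody": (1.0, 0.0), "bass": (0.8, -0.2),
                "accompaniment": (0.6, 0.0), "percussion": (0.9, 0.7)}

stereo = np.zeros((n, 2))
for name, x in tracks.items():
    gain, pan = mix_settings[name]
    theta = (pan + 1) * np.pi / 4             # constant-power panning law
    stereo[:, 0] += gain * np.cos(theta) * x  # left channel
    stereo[:, 1] += gain * np.sin(theta) * x  # right channel

stereo /= np.max(np.abs(stereo))              # normalize to avoid clipping
```
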
Citations: 0