
Trends in Hearing: Latest Publications

Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.
IF 2.6 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241309589
Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung

This nationwide retrospective cohort study examines the association between hearing loss (HL) in adults and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group than in the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
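
A minimal sketch of the kind of survival analysis described above (Cox regression for adjusted hazard ratios, Kaplan-Meier fitting, and a log-rank test), assuming the Python lifelines package; the toy DataFrame and its column names are hypothetical, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical person-level data: follow-up time (years), injury event flag,
# hearing-loss exposure, and age as a covariate for adjustment.
df = pd.DataFrame({
    "followup_years": [3.2, 5.1, 9.0, 2.4, 7.7, 9.0, 8.5, 9.0],
    "injury":         [1,   1,   0,   1,   1,   0,   0,   0],
    "hearing_loss":   [1,   1,   1,   1,   0,   0,   0,   0],
    "age":            [66,  58,  62,  75,  70,  55,  60,  63],
})

# Cox regression: the exp(coef) entry for hearing_loss is the adjusted hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="injury")
print(cph.summary)

# Kaplan-Meier fit for the exposed group and a log-rank test between groups.
hl, ctrl = df[df.hearing_loss == 1], df[df.hearing_loss == 0]
km = KaplanMeierFitter()
km.fit(hl["followup_years"], event_observed=hl["injury"], label="hearing loss")
print(km.survival_function_.tail())

test = logrank_test(hl["followup_years"], ctrl["followup_years"],
                    event_observed_A=hl["injury"], event_observed_B=ctrl["injury"])
print(test.p_value)
```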

Citations: 0
Pupil Responses During Interactive Conversation.
IF 2.6 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-05-14 DOI: 10.1177/23312165251342441
Benjamin Masters, Susan Aliakbaryhosseinabadi, Dorothea Wendt, Ewen N MacDonald

Pupillometry has been used to assess effort in a variety of listening experiments. However, measuring listening effort during conversational interaction remains difficult as it requires a complex overlap of attention and effort directed to both listening and speech planning. This work introduces a method for measuring how the pupil responds consistently to turn-taking over the course of an entire conversation. Pupillary temporal response functions to the so-called conversational state changes are derived and analyzed for consistent differences that exist across people and acoustic environmental conditions. Additional considerations are made to account for changes in the pupil response that could be attributed to eye-gaze behavior. Our findings, based on data collected from 12 normal-hearing pairs of talkers, reveal that the pupil does respond in a time-synchronous manner to turn-taking. Preliminary interpretation suggests that these variations correspond to our expectations around effort direction in conversation.
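
A minimal sketch of one way to derive such an event-locked pupil response, by epoching and baseline-correcting the pupil trace around turn-taking events; the sampling rate, window lengths, and synthetic data are assumptions, not the authors' pipeline.

```python
import numpy as np

fs = 60                                          # pupil sampling rate in Hz (assumed)
pupil = np.random.randn(fs * 300)                # 5 min of (fake) pupil data
event_samples = np.array([12.0, 47.5, 91.2, 130.8, 210.3]) * fs  # turn-taking events (s)

pre, post = int(0.5 * fs), int(3.0 * fs)         # 0.5 s baseline, 3 s response window
epochs = []
for ev in event_samples.astype(int):
    if ev - pre < 0 or ev + post > len(pupil):
        continue                                 # skip events too close to the edges
    seg = pupil[ev - pre: ev + post].copy()
    seg -= seg[:pre].mean()                      # subtract the pre-event baseline
    epochs.append(seg)

# The average across epochs approximates a pupillary temporal response function.
trf = np.mean(epochs, axis=0)
time = np.arange(-pre, post) / fs
print(time.shape, trf.shape)
```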

Citations: 0
Externalization of Virtual Sound Sources With Bone and Air Conduction Stimulation.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-09-17 DOI: 10.1177/23312165251378355
Jie Wang, Huanyong Zheng, Stefan Stenfelt, Qiongyao Qu, Jinqiu Sang, Chengshi Zheng

Current research on sound source externalization primarily focuses on air conduction (AC). As bone conduction (BC) technology advances and BC headphones become more common, the perception of externalization for BC-generated virtual sound sources has emerged as an area of significant interest. However, there remains a shortage of relevant research in this domain. The current study investigates the impact of reverberant sound components on the perception of externalization for BC virtual sound sources, both with the ear open (BC-open) and with the ear canals occluded (BC-blocked). To modify the reverberant components of the Binaural Room Impulse Responses (BRIRs), the BRIRs were either truncated or had their reverberation energy scaled. The experimental findings suggest that the perception of externalization does not significantly differ across the three stimulation modalities: AC, BC-open, and BC-blocked. Across both AC and BC transmission modes, the perception of externalization for virtual sound sources was primarily influenced by the reverberation present in the contralateral ear. The results were consistent between the BC-open and BC-blocked conditions, indicating that air-radiated sound from the BC transducer did not impact the results. Regression analyses indicated that under AC stimulation, sound source externalization ratings exhibited strong linear relationships with the Direct-to-Reverberant Energy Ratio (DRR), Frequency-to-Frequency Variability (FFV), and Interaural Coherence (IC). The results suggest that BC transducers provide a similar degree of sound source externalization as AC headphones.
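
A minimal sketch of the BRIR manipulations and the DRR metric mentioned above, using NumPy; the synthetic impulse response and the 2.5 ms direct-sound window are illustrative assumptions.

```python
import numpy as np

fs = 44100
brir = np.random.randn(fs) * np.exp(-np.linspace(0, 8, fs))  # toy decaying impulse response
direct_len = int(0.0025 * fs)                 # direct part: first ~2.5 ms after onset (assumed)

def drr_db(ir, direct_len):
    """Direct-to-reverberant energy ratio in dB."""
    direct = np.sum(ir[:direct_len] ** 2)
    reverb = np.sum(ir[direct_len:] ** 2)
    return 10 * np.log10(direct / reverb)

def scale_reverb(ir, gain_db, direct_len):
    """Scale only the reverberant tail by gain_db."""
    out = ir.copy()
    out[direct_len:] *= 10 ** (gain_db / 20)
    return out

def truncate_reverb(ir, keep_seconds, fs):
    """Zero out the late reverberation after keep_seconds."""
    out = ir.copy()
    out[int(keep_seconds * fs):] = 0.0
    return out

print(drr_db(brir, direct_len))
print(drr_db(scale_reverb(brir, -6.0, direct_len), direct_len))
print(drr_db(truncate_reverb(brir, 0.05, fs), direct_len))
```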

Citations: 0
Toward an Extended Classification of Noise-Distortion Preferences by Modeling Longitudinal Dynamics of Listening Choices.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-08-07 DOI: 10.1177/23312165251362018
Giulia Angonese, Mareike Buhl, Jonathan A Gößwein, Birger Kollmeier, Andrea Hildebrandt

Individuals have different preferences for setting hearing aid (HA) algorithms that reduce ambient noise but introduce signal distortions. "Noise haters" prefer greater noise reduction, even at the expense of signal quality. "Distortion haters" accept higher noise levels to avoid signal distortion. These preferences have so far been assumed to be stable over time, and individuals were classified on the basis of these stable trait scores. However, the question remains as to how stable individual listening preferences are and whether day-to-day state-related variability needs to be considered as a further criterion for classification. We designed a mobile task to measure noise-distortion preferences over 2 weeks in an ecological momentary assessment study with N = 185 individuals (106 female; mean age 63.1 years, SD 6.5). Latent State-Trait Autoregressive (LST-AR) modeling was used to assess the stability and dynamics of individual listening preferences for signals simulating the effects of noise reduction algorithms, presented in a web browser app. The analysis revealed a significant amount of state-related variance. The model was extended to a mixture LST-AR model for data-driven classification, taking into account state and trait components of listening preferences. In addition to successful identification of noise haters, distortion haters, and a third intermediate class based on longitudinal, outside-of-the-lab data, we further differentiated individuals with different degrees of variability in listening preferences. Individualization of HA fitting could be improved by assessing individual preferences along the noise-distortion trade-off, and the day-to-day variability of these preferences needs to be taken into account for some individuals more than others.
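
A minimal sketch of the intuition behind a latent state-trait autoregressive decomposition: daily preferences are simulated as a stable trait plus an AR(1) state component plus measurement noise. The parameter values are illustrative, and the actual LST-AR model in the study is fit as a latent-variable model rather than simulated like this.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_days = 185, 14
trait = rng.normal(0.0, 1.0, size=n_people)       # stable, person-specific level
phi, state_sd, noise_sd = 0.6, 0.5, 0.3            # AR(1) carry-over and noise parameters

prefs = np.zeros((n_people, n_days))
state = np.zeros(n_people)
for d in range(n_days):
    state = phi * state + rng.normal(0.0, state_sd, size=n_people)   # day-to-day state
    prefs[:, d] = trait + state + rng.normal(0.0, noise_sd, size=n_people)

# Rough check of how much variance is trait-like (between-person) vs state-like (within-person).
between = prefs.mean(axis=1).var()
within = prefs.var(axis=1).mean()
print(f"between-person variance ~ {between:.2f}, within-person variance ~ {within:.2f}")
```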

Citations: 0
Reduced Eye Blinking During Sentence Listening Reflects Increased Cognitive Load in Challenging Auditory Conditions.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-09-05 DOI: 10.1177/23312165251371118
Penelope Coupal, Yue Zhang, Mickael Deroche

While blink analysis has traditionally been conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including with auditory tasks, but its use within the fields of Audiology or Psychoacoustics remains scarce and its interpretation largely speculative. It is hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because it reflects a window of alertness. In Experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, +7, and +14 dB, and in quiet, in a sound-proof room with gaze and luminance controlled (75 lux). In Experiment 2, 28 participants were presented with 120 sentences at only 0 and +14 dB SNR, but in three luminance conditions (dark at 0 lux, medium at 75 lux, bright at 220 lux). Each pupil trace was manually screened for the number of blinks, along with their respective onsets and offsets. Results showed that blink occurrence decreased during sentence presentation, with the reduction becoming more pronounced at more adverse SNRs. Experiment 2 replicated this finding, regardless of luminance level. It is concluded that blinks could serve as an additional physiological correlate of listening effort in simple speech recognition tasks, and that they may be a useful indicator of cognitive load regardless of the modality of the processed information.
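
A minimal sketch of automatic blink counting in a pupil trace, assuming blinks appear as runs of missing (NaN) samples; the sampling rate and NaN convention are assumptions, and the study itself used manual screening, so this only illustrates the underlying idea.

```python
import numpy as np

fs = 60
pupil = np.random.rand(fs * 10) + 3.0      # 10 s of (fake) pupil diameter
pupil[120:132] = np.nan                    # one simulated blink
pupil[400:415] = np.nan                    # another one

# A blink is a contiguous run of missing samples; find run onsets and offsets.
missing = np.isnan(pupil).astype(int)
edges = np.diff(np.concatenate(([0], missing, [0])))
onsets = np.flatnonzero(edges == 1)
offsets = np.flatnonzero(edges == -1) - 1

print(f"{len(onsets)} blinks")
for on, off in zip(onsets, offsets):
    print(f"blink from {on / fs:.2f} s to {off / fs:.2f} s")
```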

Citations: 0
Release from Speech-on-Speech Masking: Additivity of Segregation Cues and Build-Up of Segregation.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-10-16 DOI: 10.1177/23312165251388430
Maike Klingel, Bernhard Laback

Several segregation cues help listeners understand speech in the presence of distractor talkers, most notably differences in talker sex (i.e., differences in fundamental frequency and vocal tract length) and spatial location. It is unclear, however, how these cues work together, namely whether they show additive or even synergistic effects. Furthermore, previous research suggests better performance for target words that occur later in a sentence or sequence. We additionally investigate for which segregation cues or cue combinations this build-up occurs and whether it depends on memory effects. Twenty normal-hearing participants completed a speech-on-speech masking experiment using the OLSA (a German matrix test) speech material. We adaptively measured speech-reception thresholds for different segregation cues (differences in spatial location, fundamental frequency, and talker sex) and response conditions (which word or words had to be reported). The results show better thresholds for single-word reports, reflecting memory constraints for multiple-word reports. We also found additivity of segregation cues for multiple-word reports but sub-additivity for single-word reports. Finally, we observed a build-up of release from speech-on-speech masking that depended on response and cue conditions: no build-up for multiple-word reports and, for single-word reports, continuous build-up except in the easiest condition (different-sex, spatially separated maskers). These results shed further light on how listeners follow a target talker in the presence of competing talkers, i.e., the classical cocktail-party problem, and indicate the potential for performance improvement from enhancing segregation cues in the hearing-impaired.
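
A minimal sketch of an adaptive track of the kind commonly used to measure speech-reception thresholds (a 2-down/1-up rule with the threshold taken from the last reversals); the step size, trial count, and simulated listener are assumptions, and details of the study's procedure may differ.

```python
import random

def simulated_listener(snr_db, srt_true=-8.0):
    """Probability of a correct report rises with SNR around the true SRT."""
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - srt_true) / 4.0))
    return random.random() < p_correct

snr, step = 0.0, 2.0
n_correct, last_direction = 0, None
reversals = []
for _ in range(60):
    if simulated_listener(snr):
        n_correct += 1
        if n_correct == 2:                    # 2-down: make it harder after 2 correct
            if last_direction == "up":
                reversals.append(snr)
            snr -= step
            n_correct, last_direction = 0, "down"
    else:
        if last_direction == "down":          # 1-up: make it easier after any error
            reversals.append(snr)
        snr += step
        n_correct, last_direction = 0, "up"

# Estimate the SRT as the mean SNR at the last reversals (guard against an empty list).
tail = reversals[-6:] if reversals else [snr]
print(f"estimated SRT: {sum(tail) / len(tail):.1f} dB SNR")
```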

Citations: 0
How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model.
IF 2.6 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-03-16 DOI: 10.1177/23312165251317010
Timothy Beechey, Graham Naylor

This paper describes a conceptual model of adaptive responses to adverse auditory conditions with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented along with implications for the interpretation of auditory experiments which seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.
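
As a loose illustration of the negative-feedback idea, the sketch below lets a simulated listener invest more adaptive behavior whenever performance falls below a goal as the environment degrades; all quantities and the update rule are illustrative, not the authors' model.

```python
import numpy as np

goal = 0.9                              # target level of auditory functioning
noise = np.linspace(0.0, 1.0, 20)       # environment gradually getting worse
adaptation, gain = 0.0, 0.5

for n in noise:
    performance = np.clip(1.0 - n + adaptation, 0.0, 1.0)
    error = goal - performance
    adaptation += gain * max(error, 0.0)          # respond only when below the goal
    print(f"noise={n:.2f}  adaptation={adaptation:.2f}  performance={performance:.2f}")
```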

Citations: 0
Auditory Learning and Generalization in Older Adults: Evidence from Voice Discrimination Training.
IF 2.6 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-05-27 DOI: 10.1177/23312165251342436
Nuphar Singer, Yael Zaltz

Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger adults) and control groups (12 older, 12 younger adults). Trained participants completed five sessions: Two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond control group improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.
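
A minimal sketch of generating an F0-cue stimulus pair for a voice-discrimination trial, assuming the librosa package and using a synthetic tone as a stand-in for a recorded voice; the study's actual stimulus manipulation (including the formant/vocal-tract-length cue) is not reproduced here.

```python
import librosa

sr = 22050
reference = librosa.tone(220.0, sr=sr, duration=1.0)    # stand-in "voice" at 220 Hz
delta_semitones = 1.5                                    # F0 difference under test (assumed)

# The comparison stimulus is the reference shifted upward in fundamental frequency.
comparison = librosa.effects.pitch_shift(reference, sr=sr, n_steps=delta_semitones)

print(reference.shape, comparison.shape)
```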

Citations: 0
Digits-In-Noise Hearing Test Using Text-to-Speech and Automatic Speech Recognition: Proof-of-Concept Study.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-10-01 DOI: 10.1177/23312165251367625
Mohsen Fatehifar, Kevin J Munro, Michael A Stone, David Wong, Tim Cootes, Josef Schlittenlacher

This proof-of-concept study evaluated the implementation of a digits-in-noise test we call the 'AI-powered test', which used text-to-speech (TTS) and automatic speech recognition (ASR). Two other digits-in-noise tests formed the baselines for comparison: the 'keyboard-based test', which used the same configurations as the AI-powered test, and the 'independent test', a third-party-sourced test not modified by us. The validity of the AI-powered test was evaluated by measuring its difference from the independent test and comparing it with the baseline, which was the difference between the keyboard-based test and the independent test. The reliability of the AI-powered test was measured by comparing the agreement between two runs of this test, and likewise for the independent test. The study involved 31 participants: 10 with hearing loss and 21 with normal hearing. The mean bias and limits of agreement showed that the agreement between the AI-powered test and the independent test (-1.3 ± 4.9 dB) was similar to that between the keyboard-based test and the independent test (-0.2 ± 4.4 dB), indicating that the addition of TTS and ASR did not have a negative impact. The AI-powered test had a reliability of -1.0 ± 5.7 dB, which was poorer than the baseline reliability (-0.4 ± 3.8 dB), but this improved to -0.9 ± 3.8 dB when outliers were removed, showing that low-error ASR (as shown with the Whisper model) makes the test as reliable as independent tests. These findings suggest that a digits-in-noise test using synthetic stimuli and automatic speech recognition is a viable alternative to traditional tests and could have real-world applications.
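
A minimal sketch of two building blocks of such a test: mixing a spoken digit with noise at a requested SNR, and scoring the response with an ASR model. The Whisper call (left commented out) assumes the openai-whisper package, and the placeholder waveforms here are random noise rather than real digit recordings.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals snr_db."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / noise_power)

fs = 16000
digit = np.random.randn(fs).astype(np.float32) * 0.05    # placeholder "digit" audio (1 s)
babble = np.random.randn(fs).astype(np.float32) * 0.05   # placeholder masking noise
mixture = mix_at_snr(digit, babble, snr_db=-10.0)

# ASR scoring (assumed API of the openai-whisper package; requires a model download
# and a 16 kHz waveform):
# import whisper
# model = whisper.load_model("base")
# response_text = model.transcribe(mixture, language="en")["text"]

print(mixture.shape)
```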

Citations: 0
Bimodal Cochlear Implants: Measurement of the Localization Performance as a Function of Device Latency Difference.
IF 3 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-11-24 DOI: 10.1177/23312165251396658
Rebecca C Felsheim, Sabine Hochmuth, Alina Kleinow, Andreas Radeloff, Mathias Dietz

Bimodal cochlear implant users show poor localization performance. One reason for this is a difference in processing latency between the hearing aid and the cochlear implant side. It has been shown that reducing this latency difference acutely improves the localization performance of bimodal cochlear implant users. However, because both the device latencies and the acoustic hearing ear are frequency dependent, current frequency-independent latency adjustments cannot fully compensate for the differences, leaving open which latency adjustment is best. We therefore measured the localization performance of 11 bimodal cochlear implant users for multiple cochlear implant latencies. Consistent with previous studies, adjusting the interaural latency improved localization in most of our subjects. However, the latency that led to the best localization performance was not necessarily the latency estimated to compensate for the interaural difference at intermediate frequencies (1 kHz). Nine of the 11 subjects localized best with a cochlear implant latency that was slightly shorter than the estimated latency compensation.
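
A minimal sketch of how a device latency difference shifts the effective interaural delay: the implant-side signal is delayed relative to the hearing-aid side, and the net delay is recovered from the cross-correlation peak; the latency values and the noise stimulus are assumptions.

```python
import numpy as np

fs = 44100
stim = np.random.randn(fs // 2)                 # common source signal (0.5 s of noise)

def delay(signal, latency_ms, fs):
    """Delay a signal by latency_ms, keeping its original length."""
    shift = int(round(latency_ms * 1e-3 * fs))
    return np.concatenate((np.zeros(shift), signal))[: len(signal)]

ha_latency_ms, ci_latency_ms = 7.0, 10.5        # assumed device latencies
left_ha = delay(stim, ha_latency_ms, fs)        # acoustic (hearing aid) ear
right_ci = delay(stim, ci_latency_ms, fs)       # cochlear implant ear

# The cross-correlation peak recovers the net interaural delay introduced by the devices.
lags = np.arange(-fs // 100, fs // 100)         # search within roughly ±10 ms
xcorr = [np.dot(left_ha, np.roll(right_ci, -l)) for l in lags]
best = lags[int(np.argmax(xcorr))]
print(f"effective interaural delay ~ {best / fs * 1000:.2f} ms")
```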

Citations: 0