
Trends in Hearing: Latest Publications

Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241306091
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister

Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
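
As a sketch of how the temporal pupillometry analysis described above could be reproduced, the snippet below runs a Bayesian paired-sample t-test at every time sample, comparing pupil dilation between a low and a high TMR condition. This is an illustration under stated assumptions, not the authors' code: the simulated 12-listener arrays, the sampling layout, and the use of pingouin's `ttest` to obtain BF10 are all mine.

```python
import numpy as np
import pingouin as pg  # provides Bayes factors for paired t-tests

# Hypothetical data: baseline-corrected pupil dilation (mm) per subject and
# time sample, for two TMR conditions (12 listeners, 100 samples).
rng = np.random.default_rng(0)
pupil_low_tmr = rng.normal(0.25, 0.05, size=(12, 100))   # harder: larger dilation
pupil_high_tmr = rng.normal(0.20, 0.05, size=(12, 100))  # easier: smaller dilation

def bayesian_timecourse(cond_a, cond_b):
    """Bayes factor (BF10) of a paired t-test at each time sample."""
    bf10 = np.empty(cond_a.shape[1])
    for t in range(cond_a.shape[1]):
        res = pg.ttest(cond_a[:, t], cond_b[:, t], paired=True)
        bf10[t] = float(res["BF10"].iloc[0])
    return bf10

bf = bayesian_timecourse(pupil_low_tmr, pupil_high_tmr)
print("samples with BF10 > 3 (moderate evidence):", int((bf > 3).sum()))
```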

Citations: 0
Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165251320794
Alexina Whitley, Timothy Beechey, Lauren V Hadley

Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.
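
The design above lends itself to a trial-level analysis of recall accuracy by talker-switch and masker condition, with speech-in-noise reception threshold as a listener covariate. The frame below is fabricated toy data with illustrative column names, and the model is a plain OLS sketch; the published analysis may well use mixed-effects models instead.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated trial-level data: one row per second-sentence recall trial.
df = pd.DataFrame({
    "listener":      [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "talker_switch": ["same", "same", "switch", "switch"] * 3,
    "masker":        ["noise", "speech"] * 6,
    "srt_db":        [-4.0] * 4 + [-1.5] * 4 + [2.0] * 4,  # poorer SRT = higher dB
    "accuracy":      [.92, .85, .80, .66, .88, .78, .74, .55, .81, .70, .69, .48],
})

# Condition means: the switch cost and its interaction with masker type.
print(df.groupby(["talker_switch", "masker"])["accuracy"].mean())

# Simple regression sketch; a mixed model with listener as a random effect
# would be the natural choice at full scale.
fit = smf.ols("accuracy ~ talker_switch * masker + srt_db", df).fit()
print(fit.params)
```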

Citations: 0
Social Anxiety, Negative Affect, and Hearing Difficulties in Adults.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165251317925
Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle

Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, have various ways of handling stressful situations, and have diverse communication needs. Therefore, understanding the relationship between auditory and mental health factors is crucial for the holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may have bidirectional effects. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function measures. Participants were 628 adults (408 women, 220 men), ages 19 to 87 years (mean = 43) living in the United States. Results indicated that individuals reporting higher social anxiety and higher negative affect also reported poorer communication function. Multiple linear regression analysis revealed that both negative affect and social anxiety were significant and unique predictors of subjective communication function. Social anxiety and negative affect both significantly, and uniquely, contribute to how much someone feels a hearing loss impacts their daily communication function. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication, in this rapidly growing clinical population.
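
A minimal sketch of the reported multiple linear regression follows: does each predictor contribute uniquely to subjective communication function? The data are simulated and the variable names are mine, not the study's instruments.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey scores (z-scored); names are illustrative only.
rng = np.random.default_rng(1)
n = 628
df = pd.DataFrame({
    "social_anxiety":  rng.normal(0, 1, n),
    "negative_affect": rng.normal(0, 1, n),
    "age":             rng.uniform(19, 87, n),
})
df["hearing_handicap"] = (0.3 * df.social_anxiety
                          + 0.4 * df.negative_affect
                          + rng.normal(0, 1, n))

# Each predictor's coefficient is its unique contribution, holding the
# others constant.
fit = smf.ols("hearing_handicap ~ social_anxiety + negative_affect + age",
              df).fit()
print(fit.summary().tables[1])
```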

Citations: 0
Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241309589
Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung

This nationwide retrospective cohort study examines the association between adults with hearing loss (HL) and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. The injury incidence was significantly higher in the HL group compared to the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
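
The survival analysis in this abstract (adjusted hazard ratios, Kaplan-Meier curves, log-rank test) maps directly onto the lifelines package. The eight-row frame below is fabricated purely to exercise the API; it does not reproduce the study's data or its full covariate set.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Fabricated cohort: duration = years to first injury or censoring,
# event = 1 if an injury occurred.
df = pd.DataFrame({
    "duration":     [9.1, 9.0, 7.8, 3.9, 4.2, 8.7, 2.5, 6.3],
    "event":        [0,   1,   0,   1,   1,   0,   1,   1],
    "hearing_loss": [0,   0,   0,   0,   1,   1,   1,   1],
    "age":          [55,  48,  60,  52,  62,  57,  71,  66],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")  # adjusts for age
print(cph.hazard_ratios_)  # aHR for hearing_loss, analogous to the reported 2.35

hl = df["hearing_loss"] == 1
km = KaplanMeierFitter().fit(df.duration[hl], df.event[hl], label="HL")
print(km.median_survival_time_)
res = logrank_test(df.duration[hl], df.duration[~hl],
                   event_observed_A=df.event[hl],
                   event_observed_B=df.event[~hl])
print(res.p_value)
```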

Citations: 0
Effect of Hearing Aids on Phonation and Perceived Voice Qualities.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165251322064
Johanna Hengen, Inger Lundeborg Hammarström, Stefan Stenfelt

Problems with own-voice sounds are common in hearing aid users. As auditory feedback is used to regulate the voice, it is possible that hearing aid use affects phonation. The aim of this paper is to compare hearing aid users' perception of their own voice with and without hearing aids and any effect on phonation. Eighty-five first-time and 85 experienced hearing aid users together with a control group of 70 completed evaluations of their own recorded and live voice in addition to two external voices. The participants' voice recordings were used for acoustic analysis. The results showed moderate to severe own-voice problems (OVP) in 17.6% of first-time users and 18.8% of experienced users. Hearing condition was a significant predictor of the perception of pitch in external voices and of monotony, lower naturalness, and lower pleasantness in their own live voice. The groups with hearing impairment had a higher mean fundamental frequency (f0) than the control group. Hearing aids decreased the speaking sound pressure level by 2 dB on average. Moreover, acoustic analysis shows a complex relationship between hearing impairment, hearing aids, and phonation and an immediate decrease in speech level when using hearing aids. Our findings support previous literature regarding auditory feedback and voice regulation. The results should motivate clinicians in hearing and voice care to routinely take hearing functions into account when assessing voice problems.
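
For the acoustic side (mean f0 and speaking level, aided versus unaided), the Praat bindings in parselmouth cover what the abstract describes. The file names are hypothetical; `to_pitch()` and `to_intensity()` are standard parselmouth calls, but the study's exact analysis settings are unknown to me, so treat this as a sketch.

```python
import parselmouth  # Praat bindings for Python

def voice_measures(path):
    """Mean f0 (Hz, voiced frames only) and mean intensity (dB) of a recording."""
    snd = parselmouth.Sound(path)
    f0 = snd.to_pitch().selected_array["frequency"]
    f0 = f0[f0 > 0]                      # drop unvoiced frames (f0 == 0)
    intensity = snd.to_intensity()
    return f0.mean(), intensity.values.mean()

# Hypothetical recordings of the same speaker with and without hearing aids.
f0_aided, level_aided = voice_measures("speaker01_aided.wav")
f0_unaided, level_unaided = voice_measures("speaker01_unaided.wav")
print(f"f0 shift: {f0_aided - f0_unaided:+.1f} Hz, "
      f"level shift: {level_aided - level_unaided:+.1f} dB")
```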

Citations: 0
Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati

When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed for speech recognition tasks on neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
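
The abstract's "linear regression analyses with robust standard errors" can be sketched with statsmodels, as below. The simulated frame, the predictor names, and the choice of the HC3 robust covariance estimator are my assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical listener-level frame; variable names are illustrative.
rng = np.random.default_rng(5)
n = 97
df = pd.DataFrame({
    "reasoning":     rng.normal(0, 1, n),   # nonverbal reasoning (z-score)
    "lexical_speed": rng.normal(0, 1, n),   # speed of lexical access
    "wm_capacity":   rng.normal(0, 1, n),   # working memory capacity
})
df["sentence_score"] = (0.4 * df.reasoning + 0.3 * df.lexical_speed
                        + rng.normal(0, 1, n))

# OLS with heteroscedasticity-robust (HC3) standard errors.
fit = smf.ols("sentence_score ~ reasoning + lexical_speed + wm_capacity",
              df).fit(cov_type="HC3")
print(fit.summary().tables[1])
```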

Citations: 0
Repairing Misperceptions of Words Early in a Sentence is More Effortful Than Repairing Later Words, Especially for Listeners With Cochlear Implants.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165251320789
Michael L Smith, Matthew B Winn

The process of repairing misperceptions has been identified as a contributor to effortful listening in people who use cochlear implants (CIs). The current study was designed to examine the relative cost of repairing misperceptions at earlier or later parts of a sentence that contained contextual information that could be used to infer words both predictively and retroactively. Misperceptions were enforced at specific times by replacing single words with noise. Changes in pupil dilation were analyzed to track differences in the timing and duration of effort, comparing listeners with typical hearing (TH) or with CIs. Increases in pupil dilation were time-locked to the moment of the missing word, with longer-lasting increases when the missing word was earlier in the sentence. Compared to listeners with TH, CI listeners showed elevated pupil dilation for longer periods of time after listening, suggesting a lingering effect of effort after sentence offset. When needing to mentally repair missing words, CI listeners also made more mistakes on words elsewhere in the sentence, even though these words were not masked. Changes in effort based on the position of the missing word were not evident in basic measures like peak pupil dilation and only emerged when the full-time course was analyzed, suggesting the timing analysis adds new information to our understanding of listening effort. These results demonstrate that some mistakes are more costly than others and incur different levels of mental effort to resolve the mistake, underscoring the information lost when characterizing speech perception with simple measures like percent-correct scores.
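
The key analytical point above, that effort differences only emerge when the full time course is analyzed, implies an epoching step like the sketch below: baseline-correct each trial to a pre-word window and time-lock it to the missing word's onset before averaging. The sampling rate, window indices, and array layout are assumptions, not the authors' pipeline.

```python
import numpy as np

# Hypothetical pupil traces: trials x samples at 60 Hz; the noise-replaced
# word starts at sample 120 (2 s into the sentence).
def timelock(pupil, onset=120, baseline=(90, 120)):
    """Subtract each trial's pre-word baseline, then epoch from word onset."""
    base = pupil[:, baseline[0]:baseline[1]].mean(axis=1, keepdims=True)
    return pupil[:, onset:] - base

rng = np.random.default_rng(2)
trials = rng.normal(4.0, 0.1, size=(40, 360))  # 40 trials, 6 s each
epochs = timelock(trials)
mean_course = epochs.mean(axis=0)  # group-average dilation after the gap
print(mean_course.shape, float(mean_course.max()))
```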

Citations: 0
Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241309301
Huiyong Zhang, Brian C J Moore, Feng Jiang, Mingfang Diao, Fei Ji, Xiaodong Li, Chengshi Zheng

Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction reduces ambient noise with the goal of improving intelligibility and listening comfort and reducing effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may lead to distortion of the amplitude modulation patterns of both the speech and the noise. This paper describes a deep learning method, called Neural-WDRC, for implementing both noise reduction and WDRC, employing a two-stage low-complexity network. The network initially estimates the noise alone and the speech alone. Fast-acting compression is applied to the estimated speech and slow-acting compression to the estimated noise, but with a controllable residual noise level to help the user to perceive natural environmental sounds. Neural-WDRC is frame-based, and the output of the current frame is determined only by the current and preceding frames. Neural-WDRC was compared with conventional slow- and fast-acting compression and with signal-to-noise ratio (SNR)-aware compression using objective measures and listening tests based on normal-hearing participants listening to signals processed to simulate the effects of hearing loss and hearing-impaired participants. The objective measures demonstrated that Neural-WDRC effectively reduced negative interactions of speech and noise in highly non-stationary noise scenarios. The listening tests showed that Neural-WDRC was preferred over the other compression methods for speech in non-stationary noises.
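
The compression half of the pipeline can be illustrated with a single-band WDRC: an envelope follower with separate attack and release time constants feeding a static gain curve. The paper sets the fast/slow distinction per stream (fast-acting on the estimated speech, slow-acting on the estimated noise); in the sketch below that distinction is just the `release_ms` argument. All parameter values are illustrative, and the Neural-WDRC network itself is not reproduced here.

```python
import numpy as np

def wdrc(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=100.0):
    """Minimal single-band WDRC: envelope follower + static compression curve.
    Short release_ms ~ fast-acting compression; long release_ms ~ slow-acting."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for n, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel   # track onsets fast, decay slowly
        level = coeff * level + (1.0 - coeff) * s
        env[n] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-8))
    # Above threshold, gain pulls the level toward threshold by 1 - 1/ratio.
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio), 0.0)
    return x * 10.0 ** (gain_db / 20.0)

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) * (t < 0.5)   # loud burst, then silence
speech_stream = wdrc(x, fs, release_ms=30.0)        # fast-acting, for speech
noise_stream = wdrc(x, fs, release_ms=500.0)        # slow-acting, for noise
```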

Citations: 0
Measuring Speech Discrimination Ability in Sleeping Infants Using fNIRS-A Proof of Principle.
IF 2.6 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241311721
Onn Wah Lee, Demi Gao, Tommy Peng, Julia Wunderlich, Darren Mao, Gautam Balasubramanian, Colette M McKay

This study used functional near-infrared spectroscopy (fNIRS) to measure aspects of the speech discrimination ability of sleeping infants. We examined the morphology of the fNIRS response to three different speech contrasts, namely "Tea/Ba," "Bee/Ba," and "Ga/Ba." Sixteen infants aged between 3 and 13 months old were included in this study and their fNIRS data were recorded during natural sleep. The stimuli were presented using a nonsilence baseline paradigm, where repeated standard stimuli were presented between the novel stimuli blocks without any silence periods. The morphology of fNIRS responses varied between speech contrasts. The data were fit with a model in which the responses were the sum of two independent and concurrent response mechanisms that were derived from previously published fNIRS detection responses. These independent components were an oxyhemoglobin (HbO)-positive early-latency response and an HbO-negative late latency response, hypothesized to be related to an auditory canonical response and a brain arousal response, respectively. The goodness of fit of the model with the data was high with median goodness of fit of 81%. The data showed that both response components had later latency when the left ear was the test ear (p < .05) compared to the right ear and that the negative component, due to brain arousal, was smallest for the most subtle contrast, "Ga/Ba" (p = .003).
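
The stated model, each response as the sum of an HbO-positive early component and an HbO-negative late component, can be sketched as a two-weight least-squares fit. The Gaussian component shapes, latencies, and the percent-variance-explained goodness-of-fit measure below are my assumptions; the authors derived their component shapes from previously published fNIRS responses.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 30, 300)  # seconds after block onset

def gauss(t, peak, width):
    return np.exp(-0.5 * ((t - peak) / width) ** 2)

early = gauss(t, 6.0, 2.0)    # positive early-latency template (assumed shape)
late = -gauss(t, 15.0, 5.0)   # negative late-latency "arousal" template

def model(t, a, b):
    """Weighted sum of the two fixed, concurrent component shapes."""
    return a * early + b * late

rng = np.random.default_rng(3)
data = model(t, 1.2, 0.8) + rng.normal(0, 0.1, t.size)  # synthetic HbO trace

(a, b), _ = curve_fit(model, t, data, p0=(1.0, 1.0))
resid = data - model(t, a, b)
gof = 100.0 * (1.0 - resid.var() / data.var())  # percent variance explained
print(f"weights a={a:.2f}, b={b:.2f}; goodness of fit ~ {gof:.0f}%")
```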

Citations: 0
Adaptation to Noise in Spectrotemporal Modulation Detection and Word Recognition
IF 2.7 CAS Region 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-09-14 DOI: 10.1177/23312165241266322
David López-Ramos, Miriam I. Marrufo-Pérez, Almudena Eustaquio-Martín, Luis E. López-Bascuas, Enrique A. Lopez-Poveda
Noise adaptation is the improvement in auditory function as the signal of interest is delayed in the noise. Here, we investigated if noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, which resulted in a downward-moving ripple. A control experiment was performed to determine if the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was found to be statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB, upward ripple: −0.4 dB). Findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
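
A moving ripple of the kind described (two cycles per octave, 10 Hz, downward-moving) is commonly built as a sum of log-spaced tones whose amplitudes follow a sinusoid that drifts across log-frequency; the numpy sketch below uses that construction. The synthesis details (tone count, phase randomization, depth) are my assumptions rather than the authors' stimulus code, and the final adaptation line simply restates the abstract's definition with hypothetical thresholds.

```python
import numpy as np

def ripple(fs=44100, dur=0.2, f_lo=250.0, f_hi=8000.0, n_tones=200,
           smod=2.0, tmod=10.0, depth=0.9):
    """Spectrotemporal ripple: smod cycles/octave drifting at tmod Hz.
    With this sign convention the spectral peaks move downward in frequency;
    negate tmod for an upward-moving ripple."""
    t = np.arange(int(fs * dur)) / fs
    freqs = np.logspace(np.log2(f_lo), np.log2(f_hi), n_tones, base=2.0)
    octs = np.log2(freqs / f_lo)
    rng = np.random.default_rng(4)
    x = np.zeros_like(t)
    for f, o in zip(freqs, octs):
        amp = 1.0 + depth * np.sin(2.0 * np.pi * (smod * o + tmod * t))
        x += amp * np.sin(2.0 * np.pi * f * t + rng.uniform(0.0, 2.0 * np.pi))
    return x / np.abs(x).max()

stim = ripple()  # 200 ms downward ripple, 2 cycles/octave, 10 Hz

# Adaptation as defined in the abstract: early-delay threshold minus late-delay.
thr_early_db, thr_late_db = -6.0, -8.1   # hypothetical thresholds
adaptation_db = thr_early_db - thr_late_db
```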
Citations: 0