
Latest Publications from Trends in Hearing

The Effect of Temporal Misalignment Between Acoustic and Simulated Electric Signals on the Time Compression Thresholds of Normal-Hearing Listeners.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-11-24 DOI: 10.1177/23312165251397699
Qi Gao, Lena L N Wong, Fei Chen

This study investigated the effect of temporal misalignment between acoustic and simulated electric signals on the ability to process fast speech in normal-hearing listeners. The within-ear integration of acoustic and electric hearing was simulated, mimicking the electric-acoustic stimulation (EAS) condition, where cochlear implant users receive acoustic input at low frequencies and electric stimulation at high frequencies in the same ear. Time-compression thresholds (TCTs), defined as the 50% correct performance for time-compressed sentences, were adaptively measured in quiet and in speech-spectrum noise (SSN) as well as amplitude-modulated noise (AMN) at 4 dB and 10 dB signal-to-noise ratio (SNR). Temporal misalignment was introduced by delaying the acoustic or the simulated electric signals, which were generated using a low-pass filter (cutoff frequency: 600 Hz) and a five-channel noise vocoder, respectively. Listeners showed significant benefits from the addition of low-frequency acoustic signals in terms of TCTs, regardless of temporal misalignment. Within the range from 0 ms to ±30 ms, temporal misalignment decreased listeners' TCTs, and its effect interacted with SNR such that the adverse impact of misalignment was more pronounced at higher SNR levels. When misalignment was limited to within ±7 ms, which is closer to the clinically relevant range, its effect disappeared. In conclusion, while temporal misalignment negatively affects the ability of listeners with simulated EAS hearing to process fast sentences in Mandarin, its effect is negligible when it is close to a clinically relevant range. Future research should validate these findings in real EAS users.
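
As a rough illustration of the signal chain described above (a 600-Hz low-pass acoustic branch, a five-channel noise vocoder for the simulated electric branch, and a relative delay between the two), here is a minimal Python sketch. It is not the authors' processing code: the sampling rate, vocoder band edges, envelope cutoff, and placeholder input signal are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def lowpass(x, cutoff, fs, order=4):
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(x, fs, edges):
    """Five-channel noise vocoder: the envelope of each analysis band modulates band-limited noise."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi, fs)
        env = lowpass(np.abs(hilbert(band)), 50, fs)          # 50-Hz envelope cutoff (assumed)
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * carrier
    return out

def simulate_eas(speech, fs, delay_ms=0.0):
    """Acoustic branch: 600-Hz low-pass. Electric branch: 5-channel noise vocoder.
    A positive delay_ms delays the vocoded (electric) branch relative to the acoustic one."""
    acoustic = lowpass(speech, 600, fs)
    electric = noise_vocode(speech, fs, edges=np.geomspace(600, 7000, 6))  # band edges assumed
    shift = int(round(delay_ms * 1e-3 * fs))
    delayed = np.zeros_like(electric)
    if shift >= 0:
        delayed[shift:] = electric[:len(electric) - shift]
    else:
        delayed[:shift] = electric[-shift:]
    return acoustic + delayed

fs = 16000
speech = np.random.default_rng(1).standard_normal(fs)   # placeholder 1-s "speech" signal
misaligned = simulate_eas(speech, fs, delay_ms=30.0)     # 30-ms electric delay, within the ±30 ms range probed
```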

Citations: 0
Perception of Recorded Music With Hearing Aids: Compression Differentially Affects Musical Scene Analysis and Musical Sound Quality.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-08-25 DOI: 10.1177/23312165251368669
Robin Hake, Michel Bürgel, Christophe Lesimple, Matthias Vormann, Kirsten C Wagener, Volker Kuehnel, Kai Siedenburg

Hearing aids have traditionally been designed to facilitate speech perception. With regard to music perception, previous work indicates that hearing aid users frequently complain about music sound quality. Yet, the effects of hearing aid amplification on musical perception abilities are largely unknown. This study aimed to investigate the effects of hearing aid amplification and dynamic range compression (DRC) settings on musical scene analysis (MSA) abilities and sound quality ratings (SQR) using polyphonic music recordings. Additionally, speech reception thresholds in noise (SRT) were measured. Thirty-three hearing aid users with moderate to severe hearing loss participated in three conditions: unaided, and aided with either slow or fast DRC settings. Overall, MSA abilities, SQR and SRT significantly improved with the use of hearing aids compared to the unaided condition. Yet, differences were observed regarding the choice of compression settings. Fast DRC led to better MSA performance, reflecting enhanced selective listening in musical mixtures, while slow DRC elicited more favorable SQR. Despite these improvements, variability in amplification benefit across DRC settings and tasks remained considerable, with some individuals showing limited or no improvement. These findings highlight a trade-off between scene transparency (indexed by MSA) and perceived sound quality, with individual differences emerging as a key factor in shaping amplification outcomes. Our results underscore the potential benefits of hearing aids for music perception and indicate the need for personalized fitting strategies tailored to task-specific demands.
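
The fast versus slow DRC contrast manipulated here can be illustrated with a single-band compressor in which only the attack and release time constants differ. This is a minimal sketch, not the hearing aids' actual processing; the threshold, ratio, and time constants are illustrative assumptions.

```python
import numpy as np

def drc(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Feed-forward compressor: smooth the instantaneous level with attack/release
    constants, then reduce gain above threshold by (1 - 1/ratio) dB per dB of overshoot."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    smoothed = np.empty_like(level_db)
    state = level_db[0]
    for n, level in enumerate(level_db):
        coeff = a_att if level > state else a_rel
        state = coeff * state + (1.0 - coeff) * level
        smoothed[n] = state
    gain_db = -np.maximum(smoothed - threshold_db, 0.0) * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) * np.linspace(0.01, 1.0, fs)
fast = drc(tone, fs, attack_ms=5.0, release_ms=50.0)     # "fast" DRC (assumed constants)
slow = drc(tone, fs, attack_ms=20.0, release_ms=800.0)   # "slow" DRC (assumed constants)
```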

Citations: 0
Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations.
IF 2.6 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati

When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive measures. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
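
The analysis named above (linear regression with heteroscedasticity-robust standard errors) can be sketched with statsmodels on synthetic data; the predictor names, effect sizes, and outcome are placeholders, not the study's dataset or model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 97                                      # total sample size reported in the abstract
predictors = rng.standard_normal((n, 3))    # placeholders, e.g., nonverbal reasoning,
                                            # speed of lexical access, vocabulary knowledge
speech_score = 0.4 * predictors[:, 0] + 0.2 * predictors[:, 1] + rng.standard_normal(n)

X = sm.add_constant(predictors)
fit = sm.OLS(speech_score, X).fit(cov_type="HC3")   # heteroscedasticity-robust standard errors
print(fit.summary())
```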

Citations: 0
Spectral Weighting of Monaural Cues for Auditory Localization in Sagittal Planes.
IF 2.6 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-03-18 DOI: 10.1177/23312165251317027
Pedro Lladó, Piotr Majdak, Roberto Barumerli, Robert Baumgartner

Localization of sound sources in sagittal planes significantly relies on monaural spectral cues. These cues are primarily derived from the direction-specific filtering of the pinnae. The contribution of specific frequency regions to the cue evaluation has not been fully clarified. To this end, we analyzed how different spectral weighting schemes contribute to the explanatory power of a sagittal-plane localization model in response to wideband, flat-spectrum stimuli. Each weighting scheme emphasized the contribution of spectral cues within well-defined frequency bands, enabling us to assess their impact on the predictions of individual patterns of localization responses. By means of Bayesian model selection, we compared five model variants representing various spectral weights. Our results indicate a preference for the weighting schemes emphasizing the contribution of frequencies above 8 kHz, suggesting that, in the auditory system, spectral cue evaluation is upweighted in that frequency region. While various potential explanations are discussed, we conclude that special attention should be put on this high-frequency region in spatial-audio applications aiming at the best localization performance.
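
The core idea of weighting spectral cues by frequency region can be illustrated as a weighted template-matching distance. This is only a sketch of the concept, not the sagittal-plane localization model used in the study; the cue frequency grid, the 4:1 weight contrast, and the synthetic spectra are assumptions.

```python
import numpy as np

freqs = np.linspace(700, 18000, 64)   # assumed frequency grid for the spectral cues (Hz)

def weighted_distance(target_db, template_db, emphasis_band):
    """Squared spectral distance with extra weight inside one frequency band."""
    lo, hi = emphasis_band
    w = np.where((freqs >= lo) & (freqs <= hi), 1.0, 0.25)   # 4:1 emphasis (assumed)
    w = w / w.sum()
    return float(np.sum(w * (target_db - template_db) ** 2))

rng = np.random.default_rng(0)
target = rng.normal(0.0, 5.0, freqs.size)              # placeholder directional spectrum (dB)
template = target + rng.normal(0.0, 2.0, freqs.size)
d_high = weighted_distance(target, template, (8000, 18000))   # emphasize cues above 8 kHz
d_low = weighted_distance(target, template, (700, 8000))      # emphasize cues below 8 kHz
```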

Citations: 0
Voice Familiarization Training Improves Speech Intelligibility and Reduces Listening Effort.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-12-08 DOI: 10.1177/23312165251401318
Freja Baxter, Harriet J Smith, Emma Holmes

Understanding speech among competing speech poses a substantial challenge. In these environments, familiar voices-including naturally familiar (e.g., friends, partners) and lab-trained voices-are more intelligible than unfamiliar voices. Yet, whether familiar voices also require less effort to understand is currently unknown. We trained 20 participants to become familiar with three voices, then tested listening effort during a speech intelligibility task. During familiarization and training, participants were exposed to three talkers for different lengths of time, either speaking 88, 166, or 478 sentences ("Least Familiar," "Moderately Familiar," or "Most Familiar" voice, respectively). During each trial of the speech intelligibility task, two competing sentences were presented at a target-to-masker ratio (TMR) of -6 or +3 dB. Participants reported target sentences that were spoken by trained or by novel, unfamiliar talkers. We assessed effort using self-reported ratings and physiologically, using pupil dilation. We found that self-report scores were more sensitive than pupil dilation to differences in TMR, with lower self-reported effort at +3 than -6 dB TMR. The two measures may also be differentially sensitive to the extent of training. We found lower self-reported effort for all three trained voices over unfamiliar voices, with no differences among the trained voices, whereas pupil dilation was only lower for the voice that had been trained for the longest. Thus, both self-report scores and pupil dilation showed advantages for the voice that was trained for the longest (∼1 h), but self-report scores additionally showed reduced effort even following relatively short durations of training (<10 min).
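
Presenting two competing sentences at a fixed target-to-masker ratio amounts to RMS-based scaling of the masker before mixing; a minimal sketch with placeholder signals follows (the actual stimuli were recorded sentences).

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so that 20*log10(rms(target)/rms(masker)) equals tmr_db, then sum."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    masker_scaled = masker * (rms(target) / rms(masker)) * 10.0 ** (-tmr_db / 20.0)
    return target + masker_scaled

fs = 16000
rng = np.random.default_rng(0)
target = rng.standard_normal(fs)    # placeholder for the target sentence
masker = rng.standard_normal(fs)    # placeholder for the competing sentence
trial_hard = mix_at_tmr(target, masker, -6.0)   # -6 dB TMR condition
trial_easy = mix_at_tmr(target, masker, 3.0)    # +3 dB TMR condition
```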

Citations: 0
Cochlear Tuning in Early Aging Estimated with Three Methods.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-07-29 DOI: 10.1177/23312165251364675
Courtney Coburn Glavin, Sumitrajit Dhar

Age-related hearing loss (ARHL) currently affects over 20 million adults in the U.S. and its prevalence is expected to increase as the population ages. However, little is known about the earliest manifestations of ARHL, including its influence on auditory function beyond the threshold of sensation. This work explores the effects of early aging on frequency selectivity (i.e., "tuning"), a critical feature of normal hearing function. Tuning is estimated using both behavioral and physiological measures-fast psychophysical tuning curves (fPTC), distortion product otoacoustic emission level ratio functions (DPOAE LRFs), and stimulus-frequency OAE (SFOAE) phase gradient delay. All three measures were selected because they have high potential for clinical translation but have not been compared directly in the same sample of ears. Results indicate that there may be subtle changes in tuning during early aging, even in ears with clinically normal audiometric thresholds. Additionally, there are notable differences in tuning estimates derived from the three measures. Psychophysical tuning estimates are highly variable and statistically significantly different from OAE-derived tuning estimates, suggesting that behavioral tuning is uniquely influenced by factors not affecting OAE-based tuning. Across all measures, there is considerable individual variability that warrants future investigation. Collectively, this work suggests that age-related auditory decline begins in relatively young ears (<60 years) and in the absence of traditionally defined "hearing loss." These findings suggest the potential benefit of characterizing ARHL beyond threshold and establishing a gold standard for measuring frequency selectivity in humans.
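
Of the three estimates, the SFOAE phase-gradient delay is the most compact to sketch: it is the negative slope of unwrapped emission phase against frequency. The synthetic phase curve below, built from an assumed 8-ms delay, only demonstrates the computation and is not measured OAE data.

```python
import numpy as np

def phase_gradient_delay(freqs_hz, phase_rad):
    """Delay (s) = -(1 / 2*pi) * d(phase)/d(frequency), using unwrapped phase in radians."""
    slope = np.gradient(np.unwrap(phase_rad), freqs_hz)   # radians per Hz
    return -slope / (2.0 * np.pi)

freqs = np.linspace(1000, 4000, 301)            # probe frequencies (Hz), assumed
true_delay_s = 0.008                            # 8-ms delay used to synthesize the example
phase = -2.0 * np.pi * true_delay_s * freqs     # phase that such a delay would produce
delay_estimate = phase_gradient_delay(freqs, phase)   # ≈ 0.008 s across the range
```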

Citations: 0
Listening Effort for Soft Speech in Quiet.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-08-18 DOI: 10.1177/23312165251370006
Hendrik Husstedt, Jennifer Schmidt, Luca Wiederschein, Robert Wiedenbeck, Markus Kemper, Florian Denk

In addition to speech intelligibility, listening effort has emerged as a critical indicator of hearing performance. It can be defined as the effort experienced or invested in solving an auditory task. Subjective, behavioral, and physiological methods have been employed to assess listening effort. While previous studies have predominantly evaluated listening effort at clearly audible levels, such as in speech-in-noise conditions, we present findings from a study investigating listening effort for soft speech in quiet. Twenty young adults with normal hearing participated in speech intelligibility testing (OLSA), adaptive listening effort scaling (ACALES), and pupillometry. Experienced effort decreased with increasing speech level and "no effort" was reached at 40 dB sound pressure level (SPL). The difference between levels rated with "extreme effort" and "no effort" was, on average, 20.6 dB SPL. Thus, speech must be presented well above the speech-recognition threshold in quiet to achieve effortless listening. These results prompted a follow-up experiment involving 18 additional participants, who completed OLSA and ACALES tests with hearing threshold-simulating noise at conversational levels. Comparing the results of the main and follow-up experiments suggests that the observations in quiet cannot be fully attributed to the masking effects of internal noise but likely also reflect cognitive processes that are not yet fully understood. These findings have important implications, particularly regarding the benefits of amplification for soft sounds. We propose that the concept of a threshold for effortless listening has been overlooked and should be prioritized in future research, especially in the context of soft speech in quiet environments.
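
Measuring effort for soft speech requires presenting stimuli at nominal dB SPL targets. A minimal sketch of RMS-based level scaling follows; the full-scale calibration constant and placeholder stimulus are assumptions, not the study's calibration.

```python
import numpy as np

FULL_SCALE_DB_SPL = 100.0   # assumed SPL produced by a digital RMS of 1.0 after calibration

def scale_to_spl(x, target_db_spl):
    """Scale a signal so its nominal presentation level equals target_db_spl."""
    rms = np.sqrt(np.mean(x ** 2))
    current_db_spl = FULL_SCALE_DB_SPL + 20.0 * np.log10(rms)
    return x * 10.0 ** ((target_db_spl - current_db_spl) / 20.0)

fs = 16000
speech = np.random.default_rng(0).standard_normal(fs) * 0.1   # placeholder stimulus
soft_speech = scale_to_spl(speech, 40.0)   # 40 dB SPL: the level rated "no effort" on average
```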

Citations: 0
Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids.
IF 2.6 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 DOI: 10.1177/23312165241309301
Huiyong Zhang, Brian C J Moore, Feng Jiang, Mingfang Diao, Fei Ji, Xiaodong Li, Chengshi Zheng

Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction reduces ambient noise with the goal of improving intelligibility and listening comfort and reducing effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may lead to distortion of the amplitude modulation patterns of both the speech and the noise. This paper describes a deep learning method, called Neural-WDRC, for implementing both noise reduction and WDRC, employing a two-stage low-complexity network. The network initially estimates the noise alone and the speech alone. Fast-acting compression is applied to the estimated speech and slow-acting compression to the estimated noise, but with a controllable residual noise level to help the user to perceive natural environmental sounds. Neural-WDRC is frame-based, and the output of the current frame is determined only by the current and preceding frames. Neural-WDRC was compared with conventional slow- and fast-acting compression and with signal-to-noise ratio (SNR)-aware compression using objective measures and listening tests, both with normal-hearing participants listening to signals processed to simulate the effects of hearing loss and with hearing-impaired participants. The objective measures demonstrated that Neural-WDRC effectively reduced negative interactions of speech and noise in highly non-stationary noise scenarios. The listening tests showed that Neural-WDRC was preferred over the other compression methods for speech in non-stationary noises.
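
The recombination stage described above (fast-acting compression on the speech estimate, slow-acting compression on the noise estimate, and an adjustable residual-noise level) can be sketched as follows. The network that produces the two estimates is not shown, and the compressor parameters, smoothing time constants, and placeholder signals are assumptions rather than the Neural-WDRC implementation.

```python
import numpy as np
from scipy.signal import lfilter

def compress(x, fs, tau_ms, threshold_db=-40.0, ratio=3.0):
    """One-pole envelope smoothing with time constant tau_ms, then static gain reduction."""
    a = np.exp(-1.0 / (tau_ms * 1e-3 * fs))
    env = lfilter([1.0 - a], [1.0, -a], np.abs(x)) + 1e-12
    gain_db = -np.maximum(20.0 * np.log10(env) - threshold_db, 0.0) * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def recombine(speech_est, noise_est, fs, residual_noise_db=-12.0):
    """Fast compression on the speech estimate, slow compression on the noise estimate,
    then recombination with a controllable residual-noise attenuation."""
    speech_c = compress(speech_est, fs, tau_ms=10.0)    # fast-acting branch (assumed)
    noise_c = compress(noise_est, fs, tau_ms=500.0)     # slow-acting branch (assumed)
    return speech_c + noise_c * 10.0 ** (residual_noise_db / 20.0)

fs = 16000
rng = np.random.default_rng(0)
speech_est = rng.standard_normal(fs) * 0.2    # placeholder for the network's speech estimate
noise_est = rng.standard_normal(fs) * 0.05    # placeholder for the network's noise estimate
output = recombine(speech_est, noise_est, fs)
```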

Citations: 0
A Prospective, Multicentre Case-Control Trial Examining Factors That Explain Variable Clinical Performance in Post Lingual Adult CI Recipients.
IF 2.6 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-06-27 DOI: 10.1177/23312165251347138
Pam Dawson, Amanda Fullerton, Harish Krishnamoorthi, Kerrie Plant, Robert Cowan, Nadine Buczak, Christopher Long, Chris J James, Fergio Sismono, Andreas Büchner

This study investigated which of a range of factors could explain performance in two distinct groups of experienced, adult cochlear implant recipients differentiated by performance on words in quiet: 72 with poorer word scores versus 77 with better word scores. Tests measured the potential contribution of sound processor mapping, electrode placement, neural health, impedance, cognitive, and patient-related factors in predicting performance. A systematically measured sound processor MAP was compared to the subject's walk-in MAP. Electrode placement included modiolar distance, basal and apical insertion angle, and presence of scalar translocation. Neural health measurements included bipolar thresholds, polarity effect using asymmetrical pulses, and evoked compound action potential (ECAP) measures such as the interphase gap (IPG) effect, total refractory time, and panoramic ECAP. Impedance measurements included trans impedance matrix and four-point impedance. Cognitive tests comprised vocabulary ability, the Stroop test, and the Symbol Digits Modality Test. Performance was measured with words in quiet and sentence in noise tests and basic auditory sensitivity measures including phoneme discrimination in noise and quiet, amplitude modulation detection thresholds and quick spectral modulation detection. A range of predictor variables accounted for between 33% and 60% of the variability in performance outcomes. Multivariable regression analyses showed four key factors that were consistently predictive of poorer performance across several outcomes: substantially underfitted sound processor MAP thresholds, higher average bipolar thresholds, greater total refractory time, and greater IPG offset. Scalar translocation, cognitive variables, and other patient related factors were also significant predictors across more than one performance outcome.
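
As a sketch of the case-control framing (modelling membership in the poorer-scoring group from several candidate predictors), the following uses synthetic data; the predictor names, effect sizes, and the logistic specification are placeholders, not the study's measurements or final multivariable models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 149                            # 72 poorer + 77 better performers
X = rng.standard_normal((n, 4))    # placeholders for, e.g., MAP-threshold misfit, bipolar
                                   # threshold, total refractory time, IPG offset
logits = 0.8 * X[:, 0] + 0.5 * X[:, 2]
poorer_group = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = sm.Logit(poorer_group, sm.add_constant(X)).fit(disp=0)
print(fit.summary())
```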

这项研究调查了哪些因素可以解释两组不同的有经验的成年人工耳蜗受者在安静环境下的表现:72人的单词得分较低,77人的单词得分较高。测试测量了声音处理器映射、电极放置、神经健康、阻抗、认知和患者相关因素在预测性能方面的潜在贡献。系统测量的声音处理器MAP与受试者的步入式MAP进行了比较。电极放置包括模摩尔距离、基底和根尖插入角度以及标量移位的存在。神经健康测量包括双极阈值,使用不对称脉冲的极性效应,以及诱发复合动作电位(ECAP)测量,如间期间隙(IPG)效应,总耐火时间和全景ECAP。阻抗测量包括跨阻抗矩阵和四点阻抗。认知测试包括词汇能力、Stroop测试和符号数字情态测试。通过安静测试中的单词和噪音测试中的句子以及基本的听觉灵敏度测试,包括噪音和安静测试中的音素识别、幅度调制检测阈值和快速频谱调制检测。一系列预测变量占绩效结果可变性的33%至60%。多变量回归分析显示,四个关键因素在几个结果中一致预测较差的表现:声音处理器MAP阈值明显不合适,平均双极阈值较高,总难治时间较长,IPG偏移较大。标量易位、认知变量和其他患者相关因素也是多个表现结果的重要预测因素。
{"title":"A Prospective, Multicentre Case-Control Trial Examining Factors That Explain Variable Clinical Performance in Post Lingual Adult CI Recipients.","authors":"Pam Dawson, Amanda Fullerton, Harish Krishnamoorthi, Kerrie Plant, Robert Cowan, Nadine Buczak, Christopher Long, Chris J James, Fergio Sismono, Andreas Büchner","doi":"10.1177/23312165251347138","DOIUrl":"10.1177/23312165251347138","url":null,"abstract":"<p><p>This study investigated which of a range of factors could explain performance in two distinct groups of experienced, adult cochlear implant recipients differentiated by performance on words in quiet: 72 with poorer word scores versus 77 with better word scores. Tests measured the potential contribution of sound processor mapping, electrode placement, neural health, impedance, cognitive, and patient-related factors in predicting performance. A systematically measured sound processor MAP was compared to the subject's walk-in MAP. Electrode placement included modiolar distance, basal and apical insertion angle, and presence of scalar translocation. Neural health measurements included bipolar thresholds, polarity effect using asymmetrical pulses, and evoked compound action potential (ECAP) measures such as the interphase gap (IPG) effect, total refractory time, and panoramic ECAP. Impedance measurements included trans impedance matrix and four-point impedance. Cognitive tests comprised vocabulary ability, the Stroop test, and the Symbol Digits Modality Test. Performance was measured with words in quiet and sentence in noise tests and basic auditory sensitivity measures including phoneme discrimination in noise and quiet, amplitude modulation detection thresholds and quick spectral modulation detection. A range of predictor variables accounted for between 33% and 60% of the variability in performance outcomes. Multivariable regression analyses showed four key factors that were consistently predictive of poorer performance across several outcomes: substantially underfitted sound processor MAP thresholds, higher average bipolar thresholds, greater total refractory time, and greater IPG offset. Scalar translocation, cognitive variables, and other patient related factors were also significant predictors across more than one performance outcome.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251347138"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205208/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144508936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hearing Aid Use is Associated with Faster Visual Lexical Decision.
IF 3 Zone 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-01-01 Epub Date: 2025-09-19 DOI: 10.1177/23312165251375892
Ruijing Ning, Carine Signoret, Emil Holmer, Henrik Danielsson

This study investigates the impact of hearing aid (HA) use on visual lexical decision (LD) performance in individuals with hearing loss. We hypothesize that HA use benefits phonological processing and leads to faster and more accurate visual LD. We compared the visual LD performance among three groups: 92 short-term HA users (<5 years), 98 long-term HA users, and 55 nonusers, while controlling for hearing level, age, and years of education. Results showed that, compared with non-HA users, HA users had significantly faster reaction times in visual LD; specifically, long-term HA use was associated with a smaller difference in reaction time between pseudowords and nonwords. These results suggest that HA use is associated with faster visual word recognition, potentially reflecting enhanced cognitive functions beyond auditory processing. These findings point to possible cognitive advantages linked to HA use.

Citations: 0