Pub Date: 2024-01-01 | DOI: 10.1177/23312165241286456
Nicolas Wallaert, Antoine Perry, Hadrien Jean, Gwenaelle Creff, Benoit Godey, Nihaad Paraouty
To date, pure-tone audiometry remains the gold standard for clinical auditory testing. However, pure-tone audiometry is time-consuming and only provides a discrete estimate of hearing acuity. Here, we aim to address these two main drawbacks by developing a machine learning (ML)-based approach for fully automated bone-conduction (BC) audiometry tests with forehead vibrator placement. Study 1 examines the occlusion effects when the headphones are positioned on both ears during BC forehead testing. Study 2 describes the ML-based approach for BC audiometry, with automated contralateral masking rules, compensation for occlusion effects and forehead-mastoid corrections. Next, the performance of ML-audiometry is examined in comparison to manual and conventional BC audiometry with mastoid placement. Finally, Study 3 examines the test-retest reliability of ML-audiometry. Our results show no significant performance difference between automated ML-audiometry and manual conventional audiometry. High test-retest reliability is achieved with the automated ML-audiometry. Together, our findings demonstrate the performance and reliability of the automated ML-based BC audiometry for both normal-hearing and hearing-impaired adult listeners with mild to severe hearing losses.
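The corrections described for Study 2 can be sketched as a simple lookup-and-adjust step. The correction tables below are invented placeholders for illustration, not the study's actual values:

```python
# Hypothetical forehead-to-mastoid corrections, dB by frequency (Hz).
FOREHEAD_MASTOID_DB = {500: 12.0, 1000: 8.5, 2000: 11.5, 4000: 8.0}
# Hypothetical occlusion-effect compensation, dB, for headphones on both ears.
OCCLUSION_DB = {500: 4.0, 1000: 1.0, 2000: 0.0, 4000: 0.0}

def mastoid_equivalent_threshold(raw_forehead_db, freq_hz, ears_occluded=True):
    """Map a raw forehead BC threshold to a mastoid-equivalent threshold.

    Forehead thresholds are poorer (higher) than mastoid ones, so the
    placement correction is subtracted; occluding the ears artificially
    improves (lowers) low-frequency BC thresholds, so that gain is added back.
    """
    level = raw_forehead_db - FOREHEAD_MASTOID_DB[freq_hz]
    if ears_occluded:
        level += OCCLUSION_DB[freq_hz]
    return level
```

The direction of each adjustment follows the standard audiological conventions named in the docstring; the magnitudes are placeholders only.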
Title: Performance and Reliability Evaluation of an Automated Bone-Conduction Audiometry Using Machine Learning. (Trends in Hearing, vol. 28)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241262517
Daniel Oberfeld, Katharina Staab, Florian Kattner, Wolfgang Ellermeier
Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association to SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.
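The predictor analysis can be illustrated with an ordinary least-squares regression on simulated data. All variables and coefficients here are synthetic, chosen only to mirror the sign pattern the abstract reports (larger memory span helps; higher IPD thresholds, asymmetry, and noise sensitivity hurt):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67  # same sample size as the study
# Columns: memory span, IPD threshold, audiometric asymmetry, noise sensitivity.
X = rng.normal(size=(n, 4))
beta_true = np.array([0.8, -0.5, -0.3, -0.2])  # illustrative effect sizes
y = X @ beta_true + rng.normal(scale=0.1, size=n)  # simulated speech score

Xd = np.column_stack([np.ones(n), X])  # prepend an intercept column
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
```

With low residual noise the fitted coefficients recover the simulated ones closely; in real data, comparing standardized coefficients like these is one way to rank predictors of speech-in-speech performance.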
Title: Is Recognition of Speech in Noise Related to Memory Disruption Caused by Irrelevant Sound? (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11273587/pdf/)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241293787
Nicholas R Haywood, David McAlpine, Deborah Vickers, Brian Roberts
Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim with headphone-presented pure tones differing in a related form of interaural configuration, interaural phase differences (ΔIPD), and/or in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA- repetitions (A and B = 80-ms tones, "-" = 160-ms silence), and listeners reported whether integration or segregation was heard. Envelope shape was varied but remained constant across all tones within a trial. Envelopes were either quasi-trapezoidal or had a fast attack and slow release (FA-SR) or vice versa (SA-FR). The FA-SR envelope caused more segregation than SA-FR in a task where only ΔIPD cues were present, but not in a corresponding ΔF-only task. In experiment 2, the interstimulus interval (ISI) between FA-SR tones was varied (0-60 ms). ΔF-based segregation decreased with increasing ISI, whereas ΔIPD-based segregation increased. This suggests that binaural temporal integration may limit segregation at short ISIs. In another task, ΔF and ΔIPD cues were presented alone or in combination. Here, ΔIPD-based segregation was greatly reduced, suggesting that it is highly sensitive to experimental context. Experiments 1-2 demonstrate that ΔIPD can promote segregation in optimized stimuli/tasks. Experiment 3 employed a task requiring integration for good performance. Listeners detected a delay on the final four B tones of an 8 × ABA- sequence. Although performance worsened with increasing ΔF, increasing ΔIPD had only a marginal impact. This suggests that, even in stimuli optimized for ΔIPD-based segregation, listeners remained mostly able to disregard ΔIPD when segregation was detrimental to performance.
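The 5 × ABA- stimulus construction (80-ms tones, a 160-ms silent gap, fast-attack/slow-release envelopes, ΔIPD applied to the B tones) can be sketched as follows. The sample rate, the specific ramp durations, and the equal-and-opposite phase split across ears are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

FS = 48000  # sample rate in Hz (assumed)

def tone(freq, dur_ms, ipd_rad=0.0, attack_ms=10, release_ms=50):
    """One pure tone with a fast-attack/slow-release envelope; the IPD is
    split symmetrically between the left and right channels."""
    t = np.arange(int(FS * dur_ms / 1000)) / FS
    env = np.ones_like(t)
    na, nr = int(FS * attack_ms / 1000), int(FS * release_ms / 1000)
    env[:na] = np.linspace(0.0, 1.0, na)   # fast attack
    env[-nr:] = np.linspace(1.0, 0.0, nr)  # slow release
    left = env * np.sin(2 * np.pi * freq * t + ipd_rad / 2)
    right = env * np.sin(2 * np.pi * freq * t - ipd_rad / 2)
    return left, right

def aba_sequence(f_a, f_b, ipd_b, reps=5):
    """reps x (A B A -): 80-ms tones, 160-ms gap, as in experiment 1."""
    gap = np.zeros(int(FS * 0.160))
    l_parts, r_parts = [], []
    for _ in range(reps):
        for freq, ipd in ((f_a, 0.0), (f_b, ipd_b), (f_a, 0.0)):
            l, r = tone(freq, 80, ipd)
            l_parts.append(l)
            r_parts.append(r)
        l_parts.append(gap)
        r_parts.append(gap)
    return np.concatenate(l_parts), np.concatenate(r_parts)
```

Each repetition lasts 3 × 80 ms + 160 ms = 400 ms, so five repetitions give a 2-s sequence.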
Title: Factors Influencing Stream Segregation Based on Interaural Phase Difference Cues. (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629429/pdf/)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241292205
Nasser-Eddine Monir, Paul Magron, Romain Serizel
In the intricate acoustic landscapes where speech intelligibility is challenged by noise and reverberation, multichannel speech enhancement emerges as a promising solution for individuals with hearing loss. Such algorithms are commonly evaluated at the utterance scale. However, this approach overlooks the granular acoustic nuances revealed by phoneme-specific analysis, potentially obscuring key insights into their performance. This paper presents an in-depth phoneme-scale evaluation of three state-of-the-art multichannel speech enhancement algorithms. These algorithms (filter-and-sum network, minimum variance distortionless response, and Tango) are evaluated extensively across different noise conditions and spatial setups, employing realistic acoustic simulations with measured room impulse responses and leveraging the diversity offered by multiple microphones in a binaural hearing setup. The study emphasizes fine-grained phoneme-scale analysis, revealing that while some phonemes, such as plosives, are heavily affected by environmental acoustics and difficult for the algorithms to handle, others, such as nasals and sibilants, see substantial improvements after enhancement. These investigations demonstrate important improvements in phoneme clarity in noisy conditions, with insights that could drive the development of more personalized and phoneme-aware hearing aid technologies. Additionally, while this study provides extensive data on the physical metrics of processed speech, these metrics do not necessarily reflect human perception of speech, and the impact of the findings would have to be investigated through listening tests.
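Of the three algorithms, the minimum variance distortionless response (MVDR) beamformer has a compact closed form: minimize output noise power subject to unit gain in the target direction. A single-frequency-bin toy version (two microphones, identity noise covariance; not the paper's full pipeline) looks like this:

```python
import numpy as np

def mvdr_weights(steering, noise_cov):
    """w = R^-1 d / (d^H R^-1 d): minimum noise power, distortionless target."""
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)

# Two microphones, broadside target (equal phase at both mics),
# spatially white (uncorrelated) noise.
d = np.array([1.0, 1.0], dtype=complex)  # steering vector
R = np.eye(2, dtype=complex)             # noise covariance
w = mvdr_weights(d, R)
```

With white noise the MVDR solution reduces to a simple average of the microphones; colored or directional noise covariances steer nulls toward interferers while the distortionless constraint w^H d = 1 still holds.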
Title: A Phoneme-Scale Assessment of Multichannel Speech Enhancement Algorithms. (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638999/pdf/)
Decoding speech envelopes from electroencephalogram (EEG) signals holds potential as a research tool for objectively assessing auditory processing, which could contribute to future developments in hearing loss diagnosis. However, current methods struggle to meet both high accuracy and interpretability. We propose a deep learning model called the auditory decoding transformer (ADT) network for speech envelope reconstruction from EEG signals to address these issues. The ADT network uses spatio-temporal convolution for feature extraction, followed by a transformer decoder to decode the speech envelopes. Through anticausal masking, the ADT considers only the current and future EEG features to match the natural relationship of speech and EEG. Performance evaluation shows that the ADT network achieves average reconstruction scores of 0.168 and 0.167 on the SparrKULee and DTU datasets, respectively, rivaling those of other nonlinear models. Furthermore, by visualizing the weights of the spatio-temporal convolution layer as time-domain filters and brain topographies, combined with an ablation study of the temporal convolution kernels, we analyze the behavioral patterns of the ADT network in decoding speech envelopes. The results indicate that low- (0.5-8 Hz) and high-frequency (14-32 Hz) EEG signals are more critical for envelope reconstruction and that the active brain regions are primarily distributed bilaterally in the auditory cortex, consistent with previous research. Visualization of attention scores further validated previous research. In summary, the ADT network balances high performance and interpretability, making it a promising tool for studying neural speech envelope tracking.
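The anticausal masking idea (each position attends only to current and future EEG frames) is the mirror image of a standard causal decoder mask. A minimal sketch, independent of the actual ADT implementation:

```python
import numpy as np

def anticausal_mask(seq_len):
    """Boolean mask, True where attention is BLOCKED: past positions j < i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool), k=-1)

def masked_softmax(scores, mask):
    """Softmax over each row after setting blocked positions to -inf."""
    scores = np.where(mask, -np.inf, scores)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

m = anticausal_mask(4)
attn = masked_softmax(np.zeros((4, 4)), m)  # uniform scores for illustration
```

Row i of `attn` places weight only on frames i..N-1, matching the stated rationale that the speech envelope at time t is reflected in the EEG at t and later.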
Title: ADT Network: A Novel Nonlinear Method for Decoding Speech Envelopes From EEG Signals.
Authors: Ruixiang Liu, Chang Liu, Dan Cui, Huan Zhang, Xinmeng Xu, Yuxin Duan, Yihu Chao, Xianzheng Sha, Limin Sun, Xiulan Ma, Shuo Li, Shijie Chang
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241282872 (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11489951/pdf/)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241261490
Saskia Ibelings, Thomas Brand, Esther Ruigendijk, Inga Holube
Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was the development of a novel speech-recognition test that combines concepts of different speech-recognition tests to reduce training effects and allows for a large set of speech material. The new test consists of four different words per trial in a meaningful construct with a fixed structure, the so-called phrases. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations, and eliminating duplications of (sub-)phrases, a total number of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, measured speech-recognition scores for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) for each phrase varying up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. Summarizing, the principle of creating speech material in a modular system has many potential applications.
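The reported SRT and slope map directly onto a logistic psychometric function: a slope of 15 percentage points per dB at the midpoint corresponds to a logistic rate of k = 4 × 0.15 = 0.6 per dB. A minimal sketch, assuming the slope is quoted at the 50% point:

```python
import math

def psychometric(snr_db, srt_db=-9.1, slope_pp_per_db=15.0):
    """Proportion of phrases recognized at a given SNR.

    For p(x) = 1 / (1 + exp(-k(x - srt))), the slope at the midpoint is k/4,
    so k = 4 * slope; with 15 pp/dB this gives k = 0.6 per dB.
    """
    k = 4.0 * slope_pp_per_db / 100.0
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt_db)))
```

Such a function is what adaptive procedures fit when estimating a phrase's SRT, and the steep slope is what makes the test efficient: performance moves from ~27% to ~73% over a 2-dB span around the SRT.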
Title: Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech. (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11273571/pdf/)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241260621
Mira Van Wilderode, Nathan Van Humbeeck, Ralf Krampe, Astrid van Wieringen
While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task), and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that even when cognitive control demands for listening are minimal, interaction with postural control occurs. Correlational analysis revealed that hearing loss was a better predictor than age of speech identification and postural control.
Title: Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults. (Trends in Hearing, vol. 28; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11282555/pdf/)
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273342
Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez
During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., merger of consumer grade hearing aids with consumer grade earphones creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to various direct-to-consumer (DTC) hearing devices available on the market, there are also several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all can be made easily available to a wide range of people. This perspective provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. A broadly available and accessible set of tools that can be made available on a digital platform to aid adults in the assessment and as needed, the improvement, of auditory wellness is discussed.
{"title":"A Perspective on Auditory Wellness: What It Is, Why It Is Important, and How It Can Be Managed.","authors":"Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez","doi":"10.1177/23312165241273342","DOIUrl":"10.1177/23312165241273342","url":null,"abstract":"<p><p>During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., the merger of consumer-grade hearing aids with consumer-grade earphones, creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to the various direct-to-consumer (DTC) hearing devices available on the market, there are also several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all of these can easily be made available to a wide range of people. This <i>perspective</i> provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. 
A broadly available and accessible set of tools that can be made available on a digital platform to aid adults in the assessment and as needed, the improvement, of auditory wellness is discussed.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241273342"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01; DOI: 10.1177/23312165241299778
Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting
Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for this. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward. Various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and the introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. It is a measurement method specifically designed for the situation typically encountered by hearing care professionals, namely, with hearing-impaired individuals in the free field with their hearing aids in place. In combination with the qualitative user feedback that the measurement is fast and that its results are intuitively displayed and easily interpretable, the method fills a gap between existing tools and is well suited to provide concrete guidance and orientation to the hearing care professional in the process of individual gain adjustment.
{"title":"Development and Evaluation of a Loudness Validation Method With Natural Signals for Hearing Aid Fitting.","authors":"Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting","doi":"10.1177/23312165241299778","DOIUrl":"https://doi.org/10.1177/23312165241299778","url":null,"abstract":"<p><p>Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for this. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward. Various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and the introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. It is a measurement method specifically designed for the situation typically encountered by hearing care professionals, namely, with hearing-impaired individuals in the free field with their hearing aids in place. 
In combination with the qualitative user feedback that the measurement is fast and that its results are intuitively displayed and easily interpretable, the method fills a gap between existing tools and is well suited to provide concrete guidance and orientation to the hearing care professional in the process of individual gain adjustment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241299778"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01; DOI: 10.1177/23312165241271340
Martin J Lindenbeck, Piotr Majdak, Bernhard Laback
Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.
{"title":"Effects of Monaural Temporal Electrode Asynchrony and Channel Interactions in Bilateral and Unilateral Cochlear-Implant Stimulation.","authors":"Martin J Lindenbeck, Piotr Majdak, Bernhard Laback","doi":"10.1177/23312165241271340","DOIUrl":"10.1177/23312165241271340","url":null,"abstract":"<p><p>Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. 
Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241271340"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11382250/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}