Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution. 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of the ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to the specific information conveyed by natural soundscapes, whether it operates throughout the life span, and whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.
"Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments." Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165231212032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10658775/pdf/
Pub Date: 2023-01-01. DOI: 10.1177/23312165231181757
Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche
Auditory memory is an important everyday skill that is evaluated increasingly often in clinical settings, as there is now greater recognition of the cost of hearing loss to cognitive systems. Testing often involves reading a list of unrelated items aloud, but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally-hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) on a novel protocol characterizing the effects of suprasegmental properties in speech, namely pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to work eventually with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (within about one minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.
"Grouping by Time and Pitch Facilitates Free but Not Cued Recall for Word Lists in Normally-Hearing Listeners." Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165231181757. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a6/25/10.1177_23312165231181757.PMC10286184.pdf
Pub Date: 2023-01-01. DOI: 10.1177/23312165231201020
Peter Lokša, Norbert Kopčo
The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and that the visual signals are converted to the head-centered frame before inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) the additional presence of visual signals in an eye-centered frame and (2) eye-gaze-direction-dependent attenuation of the VAE when the eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of the VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.
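As a toy illustration of the basic model version described above (not the authors' implementation; the function names and the single adaptation-rate parameter are ours), the head-centered conversion and one adaptation step might be sketched as:

```python
def to_head_centered(visual_eye_deg, gaze_deg):
    # An eye-centered visual azimuth plus the gaze direction gives the
    # head-centered azimuth of the visual signal.
    return visual_eye_deg + gaze_deg

def adapt_auditory_map(audio_deg, visual_eye_deg, gaze_deg, rate=0.5):
    # One ventriloquism training step: shift the head-centered auditory
    # representation a fraction of the way toward the head-centered
    # position of the discrepant visual signal.
    visual_head = to_head_centered(visual_eye_deg, gaze_deg)
    return audio_deg + rate * (visual_head - audio_deg)

# Sound at 10 degrees, light at 15 degrees eye-centered with gaze at 0 degrees:
# the auditory estimate moves halfway toward the visual location.
shifted = adapt_auditory_map(10.0, 15.0, 0.0)  # 12.5
```

The key point of the sketch is that the induced shift depends on where the visual signal lands after the RF conversion, which is why gaze direction enters the picture at all.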
"Toward a Unified Theory of the Reference Frame of the Ventriloquism Aftereffect." Peter Lokša, Norbert Kopčo. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165231201020. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/ff/13/10.1177_23312165231201020.PMC10505348.pdf
Pub Date: 2023-01-01. DOI: 10.1177/23312165221148035
Alberte B Seeberg, Niels T Haumann, Andreas Højlund, Anne S F Andersen, Kathleen F Faulkner, Elvira Brattico, Peter Vuust, Bjørn Petersen
Cochlear implants (CIs) are optimized for speech perception but poor at conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre, as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high even shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.
"Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users." Alberte B Seeberg, Niels T Haumann, Andreas Højlund, Anne S F Andersen, Kathleen F Faulkner, Elvira Brattico, Peter Vuust, Bjørn Petersen. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165221148035. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/0a/8a/10.1177_23312165221148035.PMC9830578.pdf
Pub Date: 2023-01-01. DOI: 10.1177/23312165231157255
Megan Knoetze, Vinaya Manchaiah, Bopane Mothemela, De Wet Swanepoel
This systematic review examined the audiological and nonaudiological factors that influence hearing help-seeking and hearing aid uptake in adults with hearing loss, based on the literature published during the last decade. Peer-reviewed articles published between January 2011 and February 2022 were identified through systematic searches in the electronic databases CINAHL, PsycINFO, and MEDLINE. The review was conducted and reported according to the PRISMA protocol. Forty-two articles met the inclusion criteria. Seventy hearing help-seeking factors (42 audiological and 28 nonaudiological) and 159 hearing aid uptake factors (93 audiological and 66 nonaudiological) were investigated, with many factors reported only once (10/70 and 62/159, respectively). Hearing aid uptake had some strong predictors (e.g., hearing sensitivity), while others showed conflicting results (e.g., self-reported health). Hearing help-seeking had clear nonpredictive factors (e.g., education) and conflicting factors (e.g., self-reported health). New factors included cognitive anxiety, associated with increased help-seeking and hearing aid uptake, and urban residency and access to financial support, associated with hearing aid uptake. Most studies were rated as having a low level of evidence (67%) and fair quality (86%). Effective promotion of hearing help-seeking requires more research evidence. Investigating factors with conflicting results and limited evidence is important to clarify what factors support help-seeking and hearing aid uptake in adults with hearing loss. These findings can inform future research and hearing health promotion and rehabilitation practices.
"Factors Influencing Hearing Help-Seeking and Hearing Aid Uptake in Adults: A Systematic Review of the Past Decade." Megan Knoetze, Vinaya Manchaiah, Bopane Mothemela, De Wet Swanepoel. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165231157255. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/5a/a2/10.1177_23312165231157255.PMC9940236.pdf
Pub Date: 2023-01-01. DOI: 10.1177/23312165231205713
Biao Chen, Ying Shi, Ying Kong, Jingyuan Chen, Lifang Zhang, Yongxin Li, John J. Galvin, Qian-Jie Fu
In contrast to normal-hearing (NH) listeners, cochlear implant (CI) users typically show poorer speech recognition thresholds (SRTs) with dynamic maskers than with speech-spectrum noise (SSN). The effectiveness of different masker types may depend on their acoustic and linguistic characteristics. The goal of the present study was to evaluate the effectiveness of different masker types with varying acoustic and linguistic properties in CI and NH listeners. SRTs were measured with nine maskers, including SSN, dynamic nonspeech maskers, and speech maskers with or without lexical content. Results showed that CI users performed significantly poorer than NH listeners with all maskers. NH listeners were much more sensitive to masker type than were CI users. Relative to SSN, NH listeners experienced significant masking release for most maskers, which could be well explained by the glimpse proportion, especially for maskers containing similar cues related to fundamental frequency or lexical content. In contrast, CI users generally experienced negative masking release. There was significant intercorrelation among the maskers for CI users' SRTs but much less so for NH listeners' SRTs. Principal component analysis showed that one factor explained 72% of the variance in CI users' SRTs but only 55% in NH listeners' SRTs across all maskers. Taken together, the results suggest that SRTs in SSN largely accounted for the variability in CI users' SRTs with dynamic maskers. Unlike NH listeners, CI users appear to be more susceptible to energetic masking and do not experience a release from masking with dynamic envelopes or speech maskers.
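The glimpse proportion invoked above is commonly defined as the fraction of time-frequency units in which the target exceeds the masker by some local SNR criterion. A minimal sketch of that definition (the criterion value, array shapes, and function name here are our assumptions, not the study's) is:

```python
import numpy as np

def glimpse_proportion(target_power, masker_power, criterion_db=3.0):
    # Fraction of time-frequency units (rows = frequency bands,
    # cols = time frames) where the local target-to-masker ratio
    # meets or exceeds the criterion.
    snr_db = 10.0 * np.log10(target_power / masker_power)
    return float(np.mean(snr_db >= criterion_db))

# A target 10 dB above the masker everywhere is fully "glimpsed" ...
full = glimpse_proportion(np.full((32, 100), 10.0), np.ones((32, 100)))  # 1.0
# ... while fluctuating random powers yield a proportion between 0 and 1.
rng = np.random.default_rng(0)
partial = glimpse_proportion(rng.uniform(0.5, 2.0, (32, 100)),
                             rng.uniform(0.5, 2.0, (32, 100)))
```

The intuition matching the abstract: dynamic maskers leave momentary high-SNR "glimpses" that NH listeners exploit for masking release, whereas CI processing appears to prevent CI users from benefiting from them.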
"Susceptibility to Steady Noise Largely Explains Susceptibility to Dynamic Maskers in Cochlear Implant Users, but not in Normal-Hearing Listeners." Biao Chen, Ying Shi, Ying Kong, Jingyuan Chen, Lifang Zhang, Yongxin Li, John J. Galvin, Qian-Jie Fu. Trends in Hearing, 2023. DOI: 10.1177/23312165231205713.
Pub Date: 2023-01-01. DOI: 10.1177/23312165231189596
Ibrahim Almufarrij, Harvey Dillon, Benjamin Adams, Aneela Greval, Kevin J Munro
Hearing aid verification with real-ear measurement (REM) is recommended in clinical practice. However, improvements over time in the accuracy of manufacturers' initial fits mean that the benefit of routine REM for new adult users is unclear. This registered, double-blinded, randomized, mixed-methods clinical trial aimed to (i) determine whether new adult hearing aid users prefer the initial fit or the real-ear fit and (ii) investigate the reasons for their preferences. New adult hearing aid users (n = 45) were each fitted with two programs, the initial fit and the real-ear fit, both with adjustments based on immediate feedback from the patient. Participants were asked to complete daily paired comparisons of the two programs with a magnitude estimation of the preference, one each for clarity and comfort in quiet and in noise, as well as overall preference. The results revealed gain adjustment requests that were few in number and small in magnitude. Deviation from NAL-NL2 targets (after adjustment, for a 65 dB SPL input) was close to zero, except at high frequencies, where real-ear fits were around 3 dB closer to target. There was no difference in clarity ratings between programs, but comfort ratings favored the initial fit. Overall, 10 participants (22%) expressed a preference for the real-ear fit. Reasons for preference were primarily based on comfort with the initial fit and clarity with the real-ear fit. It may be acceptable to fit new adult users with mild-to-moderate hearing loss without REMs, if the primary outcome of interest is user preference. It remains to be seen whether the findings generalize to other fitting software, other outcome measures, and more severe hearing loss.
"Listening Preferences of New Adult Hearing Aid Users: A Registered, Double-Blind, Randomized, Mixed-Methods Clinical Trial of Initial Versus Real-Ear Fit." Ibrahim Almufarrij, Harvey Dillon, Benjamin Adams, Aneela Greval, Kevin J Munro. Trends in Hearing, vol. 27, 2023. DOI: 10.1177/23312165231189596. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10637150/pdf/
"Hearing Asymmetry Biases Spatial Hearing in Bimodal Cochlear-Implant Users Despite Bilateral Low-Frequency Hearing Preservation."
Snandan Sharma, Lucas H M Mens, Ad F M Snik, A John van Opstal, Marc M van Wanrooij
Trends in Hearing, 27 (2023). DOI: 10.1177/23312165221143907
Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices) but was also induced by independently switching devices on or off. Using both devices, hearing was near-symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Sound-localization performance was generally poor, and localization was typically biased strongly toward the better-hearing ear. Hearing asymmetry was a good predictor of these biases. Notably, even when hearing was symmetric, a preferential bias toward the ear with the hearing aid emerged. We discuss how the frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes. We speculate that this inconsistency may prevent accurate sound localization even after long-term exposure to the hearing asymmetry.
"Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content."
Eleanor E Harding, Etienne Gaudrain, Imke J Hrycyk, Robert L Harris, Barbara Tillmann, Bert Maat, Rolien H Free, Deniz Başkent
Trends in Hearing, 27 (2023). DOI: 10.1177/23312165221141142
Previous research on music emotion perception by cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), but it remains unclear how other properties of the temporal content contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music - often not well perceived by CI users - reportedly conveys emotional valence (positive/negative), it remains unclear how the quality of the spectral content contributes to valence perception. The current study therefore used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders varied in carrier (sinewave or noise; primarily modulating temporal information) and filter order (low or high; primarily modulating spectral information). Emotion categorization was above chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were transmitted comparably in non-vocoded and vocoded conditions, indicating that even the degraded temporal content successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult for both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI signal may immediately benefit music emotion perception in CI users.
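The vocoder manipulation described in this abstract can be sketched as a generic channel vocoder: split the signal into frequency bands, extract each band's temporal envelope, and re-impose it on a sinewave or band-limited noise carrier. The band count, frequency range (100-7000 Hz), and Butterworth filter order below are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(signal, fs, n_bands=8, carrier="sine", order=4):
    """Channel-vocode `signal`: split into log-spaced bands, extract each
    band's temporal envelope (Hilbert magnitude), and modulate a carrier
    (sinewave at the band centre, or band-filtered noise) with it."""
    edges = np.geomspace(100, 7000, n_bands + 1)  # band edges in Hz (assumed)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))               # temporal envelope
        if carrier == "sine":
            fc = np.sqrt(lo * hi)                 # geometric band centre
            carr = np.sin(2 * np.pi * fc * np.arange(len(signal)) / fs)
        else:
            carr = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carr                         # re-impose envelope
    return out
```

In this sketch, the carrier choice controls the fidelity of the temporal fine structure (sinewave carriers transmit envelopes more cleanly than intrinsically fluctuating noise carriers), while the filter order controls how sharply bands are separated, i.e., the spectral resolution.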
"Plasticity After Hearing Rehabilitation in the Aging Brain."
Diane S Lazard, Keith B Doelling, Luc H Arnal
Trends in Hearing, 27 (2023). DOI: 10.1177/23312165231156412
Age-related hearing loss (presbycusis) is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and with dementia. It is generally considered a natural consequence of inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or reverse maladaptive plasticity, the extent of such neuroplastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2,200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a detrimental effect at 24 months post-implantation. Furthermore, among older subjects (>67 years old), each additional year of age significantly increased the likelihood of degraded performance after 2 years of CI use relative to younger patients. A secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation that could account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent deleterious processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions should be considered to potentiate the (re)activation of auditory brain networks.