Pub Date: 2024-01-01 | DOI: 10.1177/23312165241276435
Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using Various Maskers.
Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster
In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at the SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an automatic speech recognizer (ASR) and compared to the results of a human examiner; the agreement between the corresponding speech-recognition scores was very high (intraclass correlation coefficient r = .993). As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using the ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs than the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show that a behavioral measure, VRT, derived directly from participants' verbal responses during speech audiometry, can serve as a proxy for SR-LE.
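As an illustration of how the two behavioral measures can be computed from recognizer output, here is a minimal Python sketch assuming the ASR returns word-level timestamps; the data layout and the words-per-second definition of RSR are assumptions for illustration (the study may, e.g., count syllables).

```python
# Minimal sketch of deriving VRT and RSR from ASR word-level timestamps.
# Hypothetical data layout: we only assume each recognized word comes with
# (word, onset_s, offset_s) relative to the start of the recording, and that
# the stimulus offset time is known.

def verbal_response_time(words, stimulus_offset_s):
    """VRT: delay between end of the stimulus and onset of the first response word."""
    if not words:
        return None
    first_onset = min(onset for _, onset, _ in words)
    return first_onset - stimulus_offset_s

def response_speech_rate(words):
    """RSR: words per second over the spoken response (word count / response duration)."""
    if not words:
        return None
    onsets = [onset for _, onset, _ in words]
    offsets = [offset for _, _, offset in words]
    duration = max(offsets) - min(onsets)
    return len(words) / duration if duration > 0 else None

# Example: a five-word response starting 0.8 s after the stimulus ends
words = [("sie", 3.3, 3.5), ("kauft", 3.6, 3.9), ("drei", 4.0, 4.3),
         ("nasse", 4.4, 4.7), ("Sessel", 4.8, 5.2)]
print(verbal_response_time(words, stimulus_offset_s=2.5))  # 0.8 s
print(response_speech_rate(words))                          # ~2.6 words/s
```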
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241231685
Editorial: Cochlear Implants and Music.
Deborah A Vickers, Brian C J Moore
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241229057
Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition?
Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent
A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., a clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses without a human supervisor, using the open-source automatic speech recognition toolkit Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used to evaluate the transcripts decoded by Kaldi-NL. Study 1 evaluated Kaldi-NL performance through its word error rate (WER): the summed digit decoding errors in the transcript as a percentage of the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with decoding errors in three triplets per participant on average. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
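A standard Levenshtein-based digit WER, as a sketch of the kind of scoring described (the study's exact alignment rules may differ):

```python
# Illustrative computation of a digit-level word error rate (WER), assuming
# WER = (substitutions + deletions + insertions) / number of reference digits.
# This is a textbook edit-distance sketch, not the authors' implementation.

def digit_wer(reference, hypothesis):
    """Edit distance between digit sequences, normalized by reference length."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = minimal edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # deletions
    for j in range(n + 1):
        dp[0][j] = j          # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[m][n] / m

# Spoken triplet "5 2 8" decoded as "5 6 8": one substitution out of three digits
print(digit_wer(["5", "2", "8"], ["5", "6", "8"]))  # 0.333...
```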
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241229880
Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants.
Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented via excitation-inhibition interactions between signals from the left and right ears, varying with the location of the sound in space, as instantiated in the lateral superior olive of the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
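As a toy illustration of the excitation-inhibition idea (not a physiological model from the review), one can write an LSO-like unit whose output rate depends on the interaural level difference:

```python
# Toy sketch of an excitation-inhibition (EI) unit like those in the lateral
# superior olive (LSO): firing rate grows with excitation from the ipsilateral
# ear and shrinks with inhibition from the contralateral ear, so the output
# varies systematically with interaural level difference (ILD).
# Parameters (gain, max_rate) are illustrative, not physiological fits.

import math

def ei_rate(ipsi_level_db, contra_level_db, gain=0.5, max_rate=200.0):
    """Sigmoidal rate response of one EI unit to an ILD (ipsi minus contra)."""
    ild = ipsi_level_db - contra_level_db
    return max_rate / (1.0 + math.exp(-gain * ild))

# Sound on the ipsilateral side (+10 dB ILD) drives the unit strongly;
# sound on the contralateral side (-10 dB ILD) suppresses it.
for ild in (-10, 0, 10):
    print(ild, round(ei_rate(60 + ild / 2, 60 - ild / 2), 1))
```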
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241298606
Estimating Pitch Information From Simulated Cochlear Implant Signals With Deep Neural Networks.
Takanori Ashihara, Shigeto Furukawa, Makio Kashino
Cochlear implant (CI) users, even with substantial speech comprehension, generally have poor sensitivity to pitch information (or fundamental frequency, F0). This insensitivity is often attributed to limited spectral and temporal resolution in the CI signals. However, pitch sensitivity varies markedly among individuals, and some users exhibit fairly good sensitivity. This indicates that the CI signal contains sufficient information about F0, and that users' sensitivity is predominantly limited by other physiological conditions such as neuroplasticity or neural health. We estimated the upper limit of F0 information that a CI signal can convey by decoding F0 from simulated CI signals (multi-channel pulsatile signals) with a deep neural network model (referred to as the CI model). We varied the number of electrode channels and the pulse rate, which should respectively affect the spectral and temporal resolutions of the stimulus representations. The F0-estimation performance generally improved with an increasing number of channels and pulse rate. For sounds presented under quiet conditions, the model performance was at best comparable to that of a control waveform model, which received raw-waveform inputs. Under conditions in which background noise was imposed, the performance of the CI model generally degraded to a greater degree than that of the waveform model. The pulse rate had a particularly large effect on predicted performance. These observations indicate that the CI signal contains some information for predicting F0, which is sufficient particularly for targets under quiet conditions. The temporal resolution (represented as pulse rate) plays a critical role in pitch representation under noisy conditions.
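For readers unfamiliar with the stimulus format, a minimal sketch of a multi-channel pulsatile CI simulation follows (band-split, envelope extraction, envelope sampling at the pulse rate). Filter design, channel spacing, and all parameter values are illustrative assumptions, not the authors' signal chain.

```python
# Minimal sketch of a multi-channel pulsatile "CI signal": band-split the
# waveform, extract each band's envelope, then sample the envelopes at a
# fixed pulse rate. All parameter values here are illustrative.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ci_simulation(x, fs, n_channels=8, pulse_rate=900.0, f_lo=100.0, f_hi=6000.0):
    """Return an (n_channels, n_pulses) array of envelope samples."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    pulse_idx = (np.arange(0, len(x) / fs, 1.0 / pulse_rate) * fs).astype(int)
    out = np.zeros((n_channels, len(pulse_idx)))
    for ch in range(n_channels):
        b, a = butter(2, [edges[ch], edges[ch + 1]], btype="band", fs=fs)
        band = filtfilt(b, a, x)
        env = np.abs(hilbert(band))    # Hilbert envelope of the band
        out[ch] = env[pulse_idx]       # "pulses": envelope at pulse instants
    return out

# Example: a 220 Hz harmonic complex; F0 appears as envelope modulation
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = sum(np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))
print(ci_simulation(x, fs).shape)  # (8, 450)
```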
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241305058
The Effect of Collaborative Triadic Conversations in Noise on Decision-Making in a General-Knowledge Task.
Ingvi Örnolfsson, Axel Ahrens, Torsten Dau, Tobias May
Collaboration is a key element of many communicative interactions. Analyzing the effect of collaborative interaction on subsequent decision-making tasks offers the potential to quantitatively evaluate criteria that are indicative of successful communication. While many studies have explored how collaboration aids decision-making, little is known about how communicative barriers, such as loud background noise or hearing impairment, affect this process. This study investigated how collaborative triadic conversations held in different background noise levels affected the decision-making of individual group members in a subsequent individual task. Thirty normal-hearing participants were recruited and organized into triads. First, each participant answered a series of binary general knowledge questions and provided a confidence rating along with each response. The questions were then discussed in triads in either loud (78 dB) or soft (48 dB) background noise. Participants then answered the same questions individually again. Three decision-making measures - stay/switch behavior, decision convergence, and voting strategy - were used to assess if and how participants adjusted their initial decisions after the conversations. The results revealed an interaction between initial confidence rating and noise level: participants were more likely to modify their decisions towards high-confidence prior decisions, and this effect was more pronounced when the conversations had taken place in loud noise. We speculate that this may be because low-confidence opinions are less likely to be voiced in noisy environments compared to high-confidence opinions. The findings demonstrate that decision-making tasks can be designed for conversation studies with groups of more than two participants, and that such tasks can be used to explore how communicative barriers impact subsequent decision-making of individual group members.
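To make the first of those measures concrete, here is a small sketch of tabulating stay/switch behavior by prior confidence; the data fields and the high/low confidence split are hypothetical, not the study's coding scheme.

```python
# Illustrative tabulation of stay/switch behavior as a function of prior
# confidence. Field names and the confidence cut-off are assumptions.

def stay_switch_rates(trials):
    """trials: list of dicts with 'pre', 'post' (binary answers) and
    'confidence' (e.g., a 1-6 rating). Returns switch rate per confidence bin."""
    bins = {}
    for t in trials:
        key = "high" if t["confidence"] >= 4 else "low"
        n_switch, n_total = bins.get(key, (0, 0))
        bins[key] = (n_switch + (t["pre"] != t["post"]), n_total + 1)
    return {k: s / n for k, (s, n) in bins.items()}

trials = [
    {"pre": "A", "post": "A", "confidence": 6},
    {"pre": "A", "post": "B", "confidence": 2},
    {"pre": "B", "post": "B", "confidence": 5},
    {"pre": "B", "post": "A", "confidence": 1},
    {"pre": "A", "post": "A", "confidence": 3},
]
print(stay_switch_rates(trials))  # {'high': 0.0, 'low': 0.666...}
```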
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241287092
Sound Localization in Single-Sided Deafness; Outcomes of a Randomized Controlled Trial on the Comparison Between Cochlear Implantation, Bone Conduction Devices, and Contralateral Routing of Signals Hearing Aids.
Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit
There is currently a lack of prospective studies comparing multiple treatment options for single-sided deafness (SSD) in terms of long-term sound localization outcomes. This randomized controlled trial (RCT) aims to compare the objective and subjective sound localization abilities of SSD patients treated with a cochlear implant (CI), a bone conduction device (BCD), a contralateral routing of signals (CROS) hearing aid, or no treatment after two years of follow-up. About 120 eligible patients were randomized to cochlear implantation or to a trial period with first a BCD on a headband, then a CROS (or vice versa). After the trial periods, participants opted for a surgically implanted BCD, a CROS, or no treatment. Sound localization accuracy (in three configurations, calculated as percentage correct and root-mean squared error in degrees) and subjective spatial hearing (subscale of the Speech, Spatial and Qualities of hearing (SSQ) questionnaire) were assessed at baseline and after 24 months of follow-up. At the start of follow-up, 28 participants were implanted with a CI, 25 with a BCD, 34 chose a CROS, and 26 opted for no treatment. Participants in the CI group showed better sound localization accuracy and subjective spatial hearing compared to participants in the BCD, CROS, and no-treatment groups at 24 months. Participants in the CI and CROS groups showed improved subjective spatial hearing at 24 months compared to baseline. To conclude, CI outperformed the BCD, CROS, and no-treatment groups in terms of sound localization accuracy and subjective spatial hearing in SSD patients. TRIAL REGISTRATION Netherlands Trial Register (https://onderzoekmetmensen.nl): NL4457, CINGLE trial.
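For reference, the two reported localization metrics can be computed as follows; the tolerance defining a "correct" response is an assumed value, not taken from the trial protocol.

```python
# Quick sketch of the two localization metrics mentioned above: percentage
# correct and root-mean-squared (RMS) error in degrees between presented and
# indicated source angles. The 10° tolerance for "correct" is an assumption.

import math

def localization_metrics(presented_deg, responded_deg, tolerance_deg=10.0):
    errors = [r - p for p, r in zip(presented_deg, responded_deg)]
    pct_correct = 100.0 * sum(abs(e) <= tolerance_deg for e in errors) / len(errors)
    rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))
    return pct_correct, rms_error

presented = [-60, -30, 0, 30, 60]
responded = [-45, -30, 10, 30, 90]
print(localization_metrics(presented, responded))  # (60.0, ≈15.7)
```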
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241287622
Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment.
Vanessa Frei, Raffael Schmitt, Martin Meyer, Nathalie Giroud
Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, but the extent of this visual benefit varies considerably across listeners, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and their associations with WMC. Healthy older adults with hearing impairment (pure-tone average thresholds of 31.85-57 dB, N = 67) listened to sentences in babble noise in audio-only, visual-only, and audio-visual speech modalities and performed a pattern-matching and a comprehension task while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity, and real-world listening tasks. Furthermore, we found that visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post hoc, we investigated potential relationships between behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.
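As a simplified illustration of envelope tracking (the study's actual analysis pipeline is not reproduced here), one common approach correlates the EEG with the speech envelope across a range of time lags:

```python
# Sketch of one simple way to quantify "neural tracking of the speech
# envelope": correlate an EEG channel with the broadband speech envelope at
# a range of positive lags and take the peak. Real analyses (e.g., temporal
# response functions with ridge regression) are more involved; the sampling
# rate and lag range here are illustrative.

import numpy as np

def envelope_tracking(eeg, envelope, fs, max_lag_ms=300):
    """Peak Pearson correlation between EEG and stimulus envelope over
    positive lags (EEG lagging the stimulus); returns (lag_ms, r)."""
    max_lag = int(max_lag_ms * fs / 1000)
    best = (0.0, -1.0)
    for lag in range(max_lag + 1):
        x = envelope[: len(envelope) - lag]
        y = eeg[lag : lag + len(x)]
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag * 1000.0 / fs, r)
    return best

# Synthetic check: "EEG" is the envelope delayed by ~100 ms plus noise
fs = 64
rng = np.random.default_rng(0)
env = rng.standard_normal(fs * 60)
delay = int(0.1 * fs)
eeg = np.concatenate([np.zeros(delay), env])[: len(env)] + 0.5 * rng.standard_normal(len(env))
print(envelope_tracking(eeg, env, fs))  # lag ≈ 94 ms (6 samples), high r
```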
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273391
Global, Regional, and National Burdens of Hearing Loss for Children and Adolescents from 1990 to 2019: A Trend Analysis.
Kan Chen, Bo Yang, Xiaoyan Yue, He Mi, Jianjun Leng, Lujie Li, Haoyu Wang, Yaxin Lai
This study presents a comprehensive analysis of global, regional, and national trends in the burden of hearing loss (HL) among children and adolescents from 1990 to 2019, using data from the Global Burden of Disease study. Over this period, there was a general decline in HL prevalence and years lived with disability (YLDs) globally, with average annual percentage changes (AAPCs) of -0.03% (95% uncertainty interval [UI], -0.04% to -0.01%; p = 0.001) and -0.23% (95% UI, -0.25% to -0.20%; p < 0.001). Males exhibited higher rates of HL prevalence and YLDs than females. Mild and moderate HL were the most common categories across all age groups, but the highest proportion of YLDs was associated with profound HL [22.23% (95% UI, 8.63%-57.53%)]. Among females aged 15-19 years, the prevalence and YLD rates for moderate HL rose, with AAPCs of 0.14% (95% UI, 0.06%-0.22%; p = 0.001) and 0.13% (95% UI, 0.08%-0.18%; p < 0.001). This increase is primarily attributed to age-related and other HL (such as environmental, lifestyle factors, and occupational noise exposure) and otitis media, highlighting the need for targeted research and interventions for this demographic. Southeast Asia and Western Sub-Saharan Africa bore the heaviest HL burden, while High-income North America showed lower HL prevalence and YLD rates but a slight increasing trend in recent years, with AAPCs of 0.13% (95% UI, 0.1%-0.16%; p < 0.001) and 0.08% (95% UI, 0.04% to 0.12%; p < 0.001). Additionally, the analysis revealed a significant negative correlation between sociodemographic index (SDI) and both HL prevalence (r = -0.74; p < 0.001) and YLD (r = -0.76; p < 0.001) rates. However, the changes in HL trends were not significantly correlated with SDI, suggesting that factors beyond economic development, such as policies and cultural practices, also affect HL. Despite the overall optimistic trend, this study emphasizes the continued need to focus on specific high-risk groups and regions to further reduce the HL burden and enhance the quality of life for affected children and adolescents.
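The AAPC metric can be illustrated with the standard log-linear model (a single-segment sketch; the GBD-style analysis additionally segments the trend with joinpoint regression):

```python
# Worked sketch of an average annual percentage change (AAPC), assuming the
# common log-linear model: fit ln(rate) = a + b*year by least squares, then
# AAPC = 100 * (exp(b) - 1). Single-segment case only, for illustration.

import math

def aapc(years, rates):
    """AAPC in percent from a log-linear fit of rate on calendar year."""
    n = len(years)
    xs = [y - years[0] for y in years]          # center years for stability
    ys = [math.log(r) for r in rates]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    return 100.0 * (math.exp(b) - 1.0)

# A rate declining by 0.23% per year recovers an AAPC near -0.23
years = list(range(1990, 2020))
rates = [1000.0 * (1 - 0.0023) ** (y - 1990) for y in years]
print(round(aapc(years, rates), 3))  # ≈ -0.230
```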
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241230947
Auditory Spatial Bisection of Blind and Normally Sighted Individuals in Free Field and Virtual Acoustics.
Stefanie Goicke, Florian Denk, Tim Jürgens
Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
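As an illustration of how bisection thresholds are read off a psychometric function, here is a coarse maximum-likelihood logistic fit; the threshold convention (50%-to-75% distance) and all numbers are assumptions for illustration, not the study's fitting procedure.

```python
# Sketch of extracting a spatial bisection threshold from a psychometric
# function: fit a logistic to the proportion of one response alternative
# versus angular offset, then derive a precision measure from the slope.
# The coarse ML grid search and the threshold convention are illustrative.

import math

def fit_logistic(offsets_deg, n_chose, n_trials):
    """Coarse ML grid search for a logistic psychometric function;
    returns (midpoint_deg, slope_per_deg)."""
    def log_lik(mid, slope):
        total = 0.0
        for x, k, n in zip(offsets_deg, n_chose, n_trials):
            p = 1.0 / (1.0 + math.exp(-slope * (x - mid)))
            p = min(max(p, 1e-9), 1.0 - 1e-9)   # guard log(0)
            total += k * math.log(p) + (n - k) * math.log(1.0 - p)
        return total
    grid = [(m / 10.0, s / 10.0) for m in range(-200, 201) for s in range(1, 21)]
    return max(grid, key=lambda ms: log_lik(*ms))

# Fake observer whose responses flip a few degrees right of center
offsets = [-24, -16, -8, 0, 8, 16, 24]
chose   = [  1,   2,  4,  8, 14, 18, 20]
trials  = [20] * 7
mid, slope = fit_logistic(offsets, chose, trials)
threshold = math.log(3) / slope   # one convention: 50%-to-75% distance
print(round(mid, 1), round(threshold, 1))  # midpoint ≈ 3-4°, threshold ≈ 11°
```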