Clinical Feasibility and Familiarization Effects of Device Delay Mismatch Compensation in Bimodal CI/HA Users.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231171987
Julian Angermeier, Werner Hemmert, Stefan Zirn
Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to different processing latencies of the two devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for the device delay mismatch, and thereby for the mismatch in auditory nerve stimulation, can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated whether this fitting parameter can be readily used in clinical settings and determined the effects of familiarization with a compensated device delay mismatch over a period of 3-4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error improved by 18%, although this improvement did not reach statistical significance. The effects were acute and did not improve further after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
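For readers who want to reproduce the two summary measures named in this abstract, the sketch below shows how localization bias (mean signed error) and RMS error are typically computed from per-trial data; the azimuths and responses are made up, and the sign convention (negative = CI side) is only an assumption for illustration.

```python
import numpy as np

# Hypothetical per-trial data: loudspeaker azimuths and a listener's response azimuths (degrees).
# Negative azimuths are assumed to lie on the CI side; this sign convention is illustrative only.
target = np.array([-60, -30, 0, 30, 60, -60, -30, 0, 30, 60], dtype=float)
response = np.array([-70, -45, -10, 20, 45, -65, -40, -5, 25, 50], dtype=float)

signed_error = response - target                  # per-trial signed localization error
bias = signed_error.mean()                        # localization bias: mean signed error (0° = no lateral bias)
rms_error = np.sqrt((signed_error ** 2).mean())   # RMS localization error

print(f"bias = {bias:.1f} deg, RMS error = {rms_error:.1f} deg")
```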
{"title":"Clinical Feasibility and Familiarization Effects of Device Delay Mismatch Compensation in Bimodal CI/HA Users.","authors":"Julian Angermeier, Werner Hemmert, Stefan Zirn","doi":"10.1177/23312165231171987","DOIUrl":"https://doi.org/10.1177/23312165231171987","url":null,"abstract":"<p><p>Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to different processing latencies of both devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for this auditory nerve stimulation mismatch by compensating for the device delay mismatch can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated if this fitting parameter can be readily used in clinical settings and determined the effects of familiarization to a compensated device delay mismatch over a period of 3-4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without a compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error was improved by 18% with this improvement not reaching statistical significance. The effects were acute and did not further improve after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231171987"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10196534/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9886415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231184982
Brian C J Moore, Josef Schlittenlacher
The diagnosis of noise-induced hearing loss (NIHL) is based on three requirements: a history of exposure to noise with the potential to cause hearing loss; the absence of known causes of hearing loss other than noise exposure; and the presence of certain features in the audiogram. All current methods for diagnosing NIHL have involved examination of the typical features of the audiograms of noise-exposed individuals and the formulation of quantitative rules for the identification of those features. This article describes an alternative approach based on the use of multilayer perceptrons (MLPs). The approach was applied to databases containing the ages and audiograms of individuals claiming compensation for NIHL sustained during military service (M-NIHL), who were assumed mostly to have M-NIHL, and control databases with no known exposure to intense sounds. The MLPs were trained so as to classify individuals as belonging to the exposed or control group based on their audiograms and ages, thereby automatically identifying the features of the audiogram that provide optimal classification. Two databases (noise exposed and nonexposed) were used for training and validation of the MLPs and two independent databases were used for evaluation and further analyses. The best-performing MLP was one trained to identify whether or not an individual had M-NIHL based on age and the audiogram for both ears. This achieved a sensitivity of 0.986 and a specificity of 0.902, giving an overall accuracy markedly higher than for previous methods.
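As a rough, hypothetical illustration of the classification setup described above (not the authors' implementation or data), a multilayer perceptron can be trained on age plus left- and right-ear audiometric thresholds and scored by sensitivity and specificity; the feature layout, network size, and simulated audiograms below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_per_group = 500

def simulate(threshold_shift_db):
    """Placeholder features: age plus 8 audiometric thresholds per ear (both ears = 16 values)."""
    age = rng.uniform(20, 65, n_per_group)
    thresholds = rng.normal(15 + threshold_shift_db, 10, (n_per_group, 16))
    return np.column_stack([age, thresholds])

X = np.vstack([simulate(0.0), simulate(20.0)])            # controls, then simulated "noise-exposed"
y = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```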
{"title":"Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks.","authors":"Brian C J Moore, Josef Schlittenlacher","doi":"10.1177/23312165231184982","DOIUrl":"10.1177/23312165231184982","url":null,"abstract":"<p><p>The diagnosis of noise-induced hearing loss (NIHL) is based on three requirements: a history of exposure to noise with the potential to cause hearing loss; the absence of known causes of hearing loss other than noise exposure; and the presence of certain features in the audiogram. All current methods for diagnosing NIHL have involved examination of the typical features of the audiograms of noise-exposed individuals and the formulation of quantitative rules for the identification of those features. This article describes an alternative approach based on the use of multilayer perceptrons (MLPs). The approach was applied to databases containing the ages and audiograms of individuals claiming compensation for NIHL sustained during military service (M-NIHL), who were assumed mostly to have M-NIHL, and control databases with no known exposure to intense sounds. The MLPs were trained so as to classify individuals as belonging to the exposed or control group based on their audiograms and ages, thereby automatically identifying the features of the audiogram that provide optimal classification. Two databases (noise exposed and nonexposed) were used for training and validation of the MLPs and two independent databases were used for evaluation and further analyses. The best-performing MLP was one trained to identify whether or not an individual had M-NIHL based on age and the audiogram for both ears. This achieved a sensitivity of 0.986 and a specificity of 0.902, giving an overall accuracy markedly higher than for previous methods.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231184982"},"PeriodicalIF":2.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10408324/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10318915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visually biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165221076681
Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace
The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date measuring the McGurk effect in this population and the first that tests the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result concordant with the SIFI, where pairing a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
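A minimal way to score McGurk susceptibility from such trials is the proportion of fused percepts; the sketch below uses invented responses and omits the unisensory error correction applied in the study.

```python
from collections import Counter

# Hypothetical responses from one participant on McGurk trials
# (auditory "ba" dubbed onto visual "ga"); labels are illustrative only.
responses = ["da", "ba", "tha", "da", "ba", "da", "ga", "da", "tha", "ba"]

fused = {"da", "tha"}                     # percepts counted as audiovisual fusion
counts = Counter(responses)
fusion_rate = sum(counts[r] for r in fused) / len(responses)
print(f"raw fusion rate = {fusion_rate:.2f}")   # the study additionally corrects this value
                                                # using unisensory (auditory-only / visual-only) trials
```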
{"title":"Visually biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions.","authors":"Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace","doi":"10.1177/23312165221076681","DOIUrl":"10.1177/23312165221076681","url":null,"abstract":"<p><p>The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to-date measuring the McGurk effect in this population and the first that tests the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme \"ba\" dubbed onto the viseme \"ga\"), we found that 55 CI users (87%) reported a fused percept of \"da\" or \"tha\" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls-a result that was concordant with results from the SIFI where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221076681"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/6d/d6/10.1177_23312165221076681.PMC10334005.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9763744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feasibility of Diagnosing Dead Regions Using Auditory Steady-State Responses to an Exponentially Amplitude Modulated Tone in Threshold Equalizing Notched Noise, Assessed Using Normal-Hearing Participants.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231173234
Emanuele Perugia, Frederic Marmel, Karolina Kluk
The aim of this study was to assess the feasibility of using electrophysiological auditory steady-state response (ASSR) masking for detecting dead regions (DRs). Fifteen normally hearing adults were tested using behavioral and electrophysiological tasks. In the electrophysiological task, ASSRs were recorded to a 2 kHz exponentially amplitude-modulated tone (AM2) presented within a notched threshold equalizing noise (TEN) whose center frequency (CFNOTCH) varied. We hypothesized that, in the absence of DRs, ASSR amplitudes would be largest for CFNOTCH at or near the signal frequency. In the presence of a DR at the signal frequency, the largest ASSR amplitude would occur at a frequency (fmax) far away from the signal frequency. The AM2 and the TEN were presented at 60 and 75 dB SPL, respectively. In the behavioral task, for the same maskers as above, the masker level at which an AM tone and a pure tone could just be distinguished, denoted AM2ML, was determined for low (10 dB above absolute AM2 threshold) and high (60 dB SPL) signal levels. We also hypothesized that the value of fmax would be similar for both techniques. The ASSR fmax values obtained from grand average ASSR amplitudes, but not from individual amplitudes, were consistent with our hypotheses. The agreement between the behavioral fmax and the ASSR fmax was poor. The within-session ASSR-amplitude repeatability was good for AM2 alone, but poor for AM2 in notched TEN. The ASSR-amplitude variability between and within participants seems to be a major roadblock to developing our approach into an effective DR detection method.
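The ASSR amplitude itself is commonly obtained from the spectrum of the time-averaged epochs at the modulation frequency. The sketch below illustrates that step on synthetic data; the sampling rate, modulation rate, and epoch count are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np

fs = 1000.0                          # EEG sampling rate (Hz), assumed
f_mod = 40.0                         # modulation rate driving the ASSR, assumed for illustration
n_epochs, epoch_len = 200, int(fs)   # 1-s epochs

rng = np.random.default_rng(1)
t = np.arange(epoch_len) / fs
# Synthetic data: a small response at the modulation rate buried in noise, repeated across epochs.
epochs = 0.2 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 5.0, (n_epochs, epoch_len))

avg = epochs.mean(axis=0)                        # time-domain averaging improves SNR
spectrum = np.fft.rfft(avg) / epoch_len * 2      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(epoch_len, 1 / fs)
bin_idx = np.argmin(np.abs(freqs - f_mod))
print(f"ASSR amplitude at {f_mod:.0f} Hz = {np.abs(spectrum[bin_idx]):.3f} (arbitrary units)")
```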
{"title":"Feasibility of Diagnosing Dead Regions Using Auditory Steady-State Responses to an Exponentially Amplitude Modulated Tone in Threshold Equalizing Notched Noise, Assessed Using Normal-Hearing Participants.","authors":"Emanuele Perugia, Frederic Marmel, Karolina Kluk","doi":"10.1177/23312165231173234","DOIUrl":"10.1177/23312165231173234","url":null,"abstract":"<p><p>The aim of this study was to assess feasibility of using electrophysiological auditory steady-state response (ASSR) masking for detecting dead regions (DRs). Fifteen normally hearing adults were tested using behavioral and electrophysiological tasks. In the electrophysiological task, ASSRs were recorded to a 2 kHz exponentially amplitude-modulated tone (AM2) presented within a notched threshold equalizing noise (TEN) whose center frequency (CF<sub>NOTCH</sub>) varied. We hypothesized that, in the absence of DRs, ASSR amplitudes would be largest for CF<sub>NOTCH</sub> at/or near the signal frequency. In the presence of a DR at the signal frequency, the largest ASSR amplitude would occur at a frequency (<i>f<sub>max</sub></i>) far away from the signal frequency. The AM2 and the TEN were presented at 60 and 75 dB SPL, respectively. In the behavioral task, for the same maskers as above, the masker level at which an AM and a pure tone could just be distinguished, denoted AM2ML, was determined, for low (10 dB above absolute AM2 threshold) and high (60 dB SPL) signal levels. We also hypothesized that the value of <i>f<sub>max</sub></i> would be similar for both techniques. The ASSR <i>f<sub>max</sub></i> values obtained from grand average ASSR amplitudes, but not from individual amplitudes, were consistent with our hypotheses. The agreement between the behavioral <i>f<sub>max</sub></i> and ASSR <i>f<sub>max</sub></i> was poor. The within-session ASSR-amplitude repeatability was good for AM2 alone, but poor for AM2 in notched TEN. The ASSR-amplitude variability between and within participants seems to be a major roadblock to developing our approach into an effective DR detection method.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231173234"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10336760/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9775441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech Intelligibility in Reverberation is Reduced During Self-Rotation.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231188619
Ľuboš Hládek, Bernhard U Seeber
Speech intelligibility in cocktail party situations has traditionally been studied with stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated whether people would rotate to improve speech intelligibility, and we asked whether knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations, 0°, ±90°, or 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people stood still without visual location cues. Participants' self-orientation undershot the target location, and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.
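The model-based prediction mentioned at the end can be illustrated by reading a static spatial-unmasking curve (benefit versus target angle relative to the head) at the orientations participants actually adopted; the curve values in the sketch below are placeholders, not data from this study.

```python
import numpy as np

# Placeholder static spatial-unmasking curve: SRT benefit (dB) re. co-located target/masker,
# as a function of target azimuth relative to the listener's head (masker fixed at 0°).
angles = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)
benefit_db = np.array([0.0, 3.0, 6.0, 8.0, 6.5, 3.5, 1.0])   # illustrative values only

def predicted_benefit(head_to_target_deg):
    """Interpolate the static curve at the orientation a participant actually adopted.
    Assumes the angle is expressed in the range -180..180 degrees."""
    return np.interp(abs(head_to_target_deg), angles, benefit_db)

# A participant who undershoots a 90° target by 20° is predicted to gain less benefit:
print(f"{predicted_benefit(70):.1f} dB instead of {predicted_benefit(90):.1f} dB")
```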
{"title":"Speech Intelligibility in Reverberation is Reduced During Self-Rotation.","authors":"Ľuboš Hládek, Bernhard U Seeber","doi":"10.1177/23312165231188619","DOIUrl":"10.1177/23312165231188619","url":null,"abstract":"<p><p>Speech intelligibility in cocktail party situations has been traditionally studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated if people would rotate to improve speech intelligibility, and we asked if knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations: 0°, ± 90°, 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of the individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated the participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231188619"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10363862/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9872318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modified T2 Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231154035
Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson
The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided (i.e., the stimuli are presented through subjects' hearing aid(s)) CAEP detection in infants with hearing loss. Methods include the conventional Hotelling's T2 test, various modified q-sample statistics, and two novel variants of T2 statistics, which were designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, and simulated signals. The highest test sensitivities were observed for the modified T2 statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T2 test, which showed low detection rates for ensemble sizes <80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T2 and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.
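For reference, the conventional one-sample Hotelling's T2 test on an ensemble of epochs can be written in a few lines; the feature choice (mean voltage in a few post-stimulus time bins) and the synthetic epochs below are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.stats import f

def hotelling_t2_pvalue(features):
    """One-sample Hotelling's T2 test of H0: mean feature vector = 0.
    features: (n_epochs, n_features) array, e.g., mean voltage in a few post-stimulus time bins."""
    n, p = features.shape
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)            # sample covariance (n - 1 denominator)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = (n - p) / (p * (n - 1)) * t2           # T2 transformed to an F statistic
    return 1.0 - f.cdf(f_stat, p, n - p)

rng = np.random.default_rng(2)
n_epochs, n_bins = 60, 9
# Synthetic epochs: a small, consistent deflection in the middle bins plus noise.
signal = np.concatenate([np.zeros(3), np.full(3, 0.5), np.zeros(3)])
epochs = signal + rng.normal(0, 1.0, (n_epochs, n_bins))
print(f"p = {hotelling_t2_pvalue(epochs):.4f}")     # small p -> response detected
```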
{"title":"Modified T<sup>2</sup> Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants.","authors":"Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson","doi":"10.1177/23312165231154035","DOIUrl":"10.1177/23312165231154035","url":null,"abstract":"<p><p>The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided (i.e., the stimuli are presented through subjects' hearing aid(s)) CAEP detection in infants with hearing loss. Methods include the conventional Hotellings T<sup>2</sup> test, various modified q-sample statistics, and two novel variants of T<sup>2</sup> statistics, which were designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, and simulated signals. The highest test sensitivities were observed for the modified T<sup>2</sup> statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T<sup>2</sup> test, which showed low detection rates for ensemble sizes <80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T<sup>2</sup> and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231154035"},"PeriodicalIF":2.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9974628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10828646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capturing Visual Attention With Perturbed Auditory Spatial Cues.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231182289
Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani
Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CI) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.
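In this capture-of-visual-attention paradigm, orienting is usually quantified as the reaction-time advantage for visual targets appearing at the location of the preceding, task-irrelevant sound; a minimal scoring sketch with invented reaction times follows.

```python
import numpy as np

# Hypothetical reaction times (ms) for visual targets appearing at the same
# location as the preceding sound (valid) vs. the opposite location (invalid).
rt_valid = np.array([412, 398, 430, 405, 417, 401], dtype=float)
rt_invalid = np.array([446, 439, 452, 428, 441, 450], dtype=float)

cueing_effect = rt_invalid.mean() - rt_valid.mean()
print(f"audio-visual orienting (cueing effect) = {cueing_effect:.0f} ms")
# A positive effect indicates that the lateralized sound captured visual attention.
```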
{"title":"Capturing Visual Attention With Perturbed Auditory Spatial Cues.","authors":"Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani","doi":"10.1177/23312165231182289","DOIUrl":"10.1177/23312165231182289","url":null,"abstract":"<p><p>Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues-resulting from cochlear implants (CI) or unilateral hearing loss (uHL)-allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (<i>N</i> = 20), unilateral CI users (<i>N</i> = 20), and individuals with uHL (<i>N</i> = 20). For comparison, we also included a group of normal-hearing (NH, <i>N</i> = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual-attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231182289"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/84/a2/10.1177_23312165231182289.PMC10467228.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10127241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165221148022
Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira
Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.
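Two of the manipulations described, selecting fewer bands per frame and compressing the electric dynamic range, can be sketched as a toy n-of-m channel selection followed by a logarithmic map into per-electrode current ranges; the envelopes, threshold, and comfort levels below are placeholders, not the parameters used in the study.

```python
import numpy as np

def select_and_compress(envelopes, n_select, T, C):
    """Toy n-of-m channel selection plus log compression into the electric dynamic range.
    envelopes: per-band acoustic envelope values for one analysis frame.
    T, C: per-electrode threshold and comfort levels (arbitrary current units)."""
    env = np.asarray(envelopes, dtype=float)
    selected = np.argsort(env)[-n_select:]                 # keep the n largest-energy bands
    out = np.zeros_like(env)
    norm = env[selected] / env[selected].max()             # normalize kept envelopes to (0, 1]
    out[selected] = T[selected] + (C[selected] - T[selected]) * (
        np.log1p(255 * norm) / np.log(256))                # log-shaped amplitude mapping
    return out

m = 12                                                     # number of analysis bands (placeholder)
envelopes = np.abs(np.random.default_rng(3).normal(0, 1, m))
T = np.full(m, 100.0)                                      # placeholder threshold levels
C = np.full(m, 200.0)                                      # placeholder comfort levels
print(select_and_compress(envelopes, n_select=6, T=T, C=C).round(1))
```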
{"title":"Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users.","authors":"Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira","doi":"10.1177/23312165221148022","DOIUrl":"https://doi.org/10.1177/23312165221148022","url":null,"abstract":"<p><p>Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221148022"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a4/9b/10.1177_23312165221148022.PMC9837293.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10746839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231153280
Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt
Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, repeat the sentence-final word after each presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No) on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, SNR (-4 dB versus +1 dB) increased the middle and late component values. Increasing memory demands (Recall) progressively increased the trial baseline and steepened the decrease of the late component's values. The trial baseline increased most steeply in the +1 dB SNR condition with recall. The findings suggest that adding a recall task to the auditory task alters effort allocation for listening. Listeners dynamically re-allocate effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.
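Trial baseline and time-specific components are typically scored from each trial's pupil trace as the pre-stimulus mean and the mean baseline-corrected dilation in successive post-onset windows; the window boundaries and synthetic trace below are illustrative assumptions, not those estimated in the study.

```python
import numpy as np

fs = 60.0                                   # eye-tracker sampling rate (Hz), assumed
rng = np.random.default_rng(4)
t = np.arange(-1.0, 6.0, 1 / fs)            # one trial: 1 s pre-stimulus, 6 s post-onset
# Synthetic pupil trace (mm): slow dilation peaking mid-trial plus measurement noise.
trace = 3.0 + 0.3 * np.exp(-((t - 2.5) ** 2) / 2.0) + rng.normal(0, 0.02, t.size)

baseline = trace[t < 0].mean()              # trial baseline: mean pre-stimulus pupil size
rel = trace - baseline                      # baseline-corrected dilation

windows = {"early": (0.0, 1.5), "middle": (1.5, 3.5), "late": (3.5, 6.0)}   # placeholder windows (s)
components = {name: rel[(t >= lo) & (t < hi)].mean() for name, (lo, hi) in windows.items()}
print(f"baseline = {baseline:.2f} mm, " +
      ", ".join(f"{k} = {v:.3f} mm" for k, v in components.items()))
```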
{"title":"Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise.","authors":"Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt","doi":"10.1177/23312165231153280","DOIUrl":"https://doi.org/10.1177/23312165231153280","url":null,"abstract":"<p><p>Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, and repeat the sentence-final word after presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No), on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, SNR (-4 dB versus +1 dB SNR) increased the middle and late component values. Increasing memory demands (Recall) progressively increased trial baseline and steepened decrease of the late component's values. Trial baseline increased most steeply in the condition of +1 dB SNR with recall. The findings suggest that adding a recall to the auditory task alters effort allocation for listening. Listeners are dynamically re-allocating effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231153280"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/85/b7/10.1177_23312165231153280.PMC10028670.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9514033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments.
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231212032
Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur
Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution, 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to the specific information conveyed by natural soundscapes, whether it operates throughout the life span or whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and urban green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.
{"title":"Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments.","authors":"Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur","doi":"10.1177/23312165231212032","DOIUrl":"10.1177/23312165231212032","url":null,"abstract":"<p><p>Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes \"natural soundscapes,\" that is the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). <i>Frontiers in Ecology and Evolution. 10:</i> 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of \"human auditory ecology,\" focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, whether it operate throughout the life span or whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses back to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231212032"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10658775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}