A major goal of hearing-device provision is to improve communication in daily life. However, there is still a large gap between users' daily-life aided listening experience and hearing-aid benefit as assessed with established speech reception measurements in the laboratory and clinical practice. For a more realistic assessment, hearing-aid provision needs to be tested in suitable acoustic environments. In this study, using virtual acoustics, we developed complex acoustic scenarios to measure the speech-intelligibility and listening-effort benefit obtained from hearing-aid amplification and signal-enhancement strategies. Measurements were conducted using the participants' own devices and a research hearing aid, the Portable Hearing Laboratory (PHL). On the PHL, in addition to amplification, a monaural and a binaural directional filter, as well as a spectral filter, were employed. We assessed the benefit from different signal-enhancement strategies at the group and the individual level. At the group level, signal enhancement including directional filtering provided a higher hearing-aid benefit in terms of speech intelligibility in challenging acoustic scenarios than amplification alone or amplification combined with spectral filtering. However, we found no difference between monaural and binaural signal enhancement. At the individual level, we found large differences in hearing-aid benefit between participants. While some participants benefitted from signal-enhancement algorithms, others benefitted from amplification alone, with additional signal enhancement having a detrimental effect. This shows the importance of individually selecting signal-enhancement strategies as part of the hearing-aid fitting process.
This study investigated the morphology of the functional near-infrared spectroscopy (fNIRS) response to speech sounds measured from 16 sleeping infants and how it changes with repeated stimulus presentation. We observed a positive peak followed by a wide negative trough, with the latter being most evident in early epochs. We argue that the overall response morphology captures the effects of two simultaneous but independent response mechanisms that are both activated at stimulus onset: one being the obligatory response of the auditory system to a sound stimulus, and the other being a neural suppression effect induced by the arousal system. Because the two effects behave differently across repeated epochs, it is possible to separate them mathematically and use fNIRS to study factors that affect the development and activation of the arousal system in infants. The results also imply that standard fNIRS analysis techniques need to be adjusted to take into account that multiple brain systems may be activated simultaneously and that the response to a stimulus is not necessarily stationary.
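To illustrate the kind of mathematical separation described above, the sketch below assumes a deliberately simple model in which each epoch is the sum of a stationary auditory response and an arousal component whose weight decays across epochs; the two components can then be recovered by per-timepoint least squares. The model form, decay constant, and waveforms are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np

# Toy model (an assumption, not the study's method):
#   y_k(t) = a(t) + w_k * s(t) + noise,   w_k = exp(-k / tau)
# a(t): stationary auditory response; s(t): arousal-driven suppression
rng = np.random.default_rng(0)
n_epochs, n_samples = 40, 200
t = np.linspace(0, 20, n_samples)                     # seconds

a_true = np.exp(-((t - 6) ** 2) / 4)                  # positive peak
s_true = -0.8 * np.exp(-((t - 10) ** 2) / 16)         # wide negative trough
w = np.exp(-np.arange(n_epochs) / 10)                 # arousal fades across epochs
Y = a_true + np.outer(w, s_true) + 0.2 * rng.standard_normal((n_epochs, n_samples))

# Regress each time point on [1, w_k] to separate the two components
X = np.column_stack([np.ones(n_epochs), w])
(a_hat, s_hat), *_ = np.linalg.lstsq(X, Y, rcond=None)

print("max error, auditory component:", round(float(np.abs(a_hat - a_true).max()), 2))
print("max error, arousal component: ", round(float(np.abs(s_hat - s_true).max()), 2))
```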
The objective of this project was to establish cutoff scores on the tinnitus subscale of the Tinnitus and Hearing Survey (THS) using a large sample of United States service members (SMs), with the end goal of guiding clinical referrals for tinnitus evaluation. A total of 4,589 SMs undergoing annual audiometric surveillance were prospectively recruited to complete the THS tinnitus subscale (THS-T). A subset of 1,304 participants also completed the Tinnitus Functional Index (TFI). The original 5-point response scale of the THS (THS-T16) was modified to an 11-point scale (THS-T40) for some participants to align with the response scale of the TFI. Age, sex, hearing loss, and self-reported tinnitus bother were also recorded. The THS-T was relatively insensitive to hearing loss, but self-reported bothersome tinnitus was significantly associated with the THS-T40 score. Receiver operating characteristic analysis was used to determine cutoff scores on the THS-T that aligned with recommended cutoff values for clinical intervention on the TFI. A cutoff of 9 on the THS-T40 aligns with a TFI cutoff of 25, indicating that a patient may need intervention for tinnitus. A cutoff of 15 aligns with a TFI cutoff of 50, indicating that more aggressive intervention for tinnitus is warranted. For hearing conservation programs and primary care clinics, the THS-T is a viable tool to identify patients with tinnitus complaints warranting clinical evaluation. The THS-T40 cutoff scores of 9 and 15 provide clinical reference points to guide referrals to audiology.
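As a hedged illustration of how such ROC-based cutoffs can be derived, the sketch below finds the THS-T40 score that best separates patients above versus below a TFI cutoff of 25 in synthetic data. The data are fabricated for illustration, and the Youden index is an assumed selection rule; the study's actual alignment criterion may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic data standing in for the real sample (an assumption)
rng = np.random.default_rng(1)
n = 1304
tfi = np.clip(rng.normal(30, 20, n), 0, 100)            # TFI scores: 0-100
ths = np.clip(0.35 * tfi + rng.normal(0, 4, n), 0, 40)  # THS-T40 scores: 0-40

needs_intervention = (tfi >= 25).astype(int)            # TFI cutoff of 25
fpr, tpr, thresholds = roc_curve(needs_intervention, ths)
best = np.argmax(tpr - fpr)                             # Youden's J (assumed rule)
print(f"suggested THS-T40 cutoff: {thresholds[best]:.1f}")
```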
The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become increasingly important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
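The logic of the "floor" argument can be made concrete with a toy power-summation model, sketched below: the eardrum signal is the power sum of a direct (vent-transmitted) path, which ANC attenuates, and an amplified path whose SNR the device has improved, and an assumed sigmoid stands in for an intelligibility metric. All path gains, the ANC attenuation, and the sigmoid are illustrative assumptions rather than the authors' model.

```python
import numpy as np

def db_to_pow(db):
    return 10.0 ** (db / 10.0)

def eardrum_snr(env_snr_db, snr_gain_db, anc_atten_db, amp_gain_db=0.0):
    """Effective SNR (dB) after power-summing the direct and amplified paths."""
    speech = db_to_pow(0.0 - anc_atten_db) + db_to_pow(amp_gain_db)
    noise = (db_to_pow(-env_snr_db - anc_atten_db)                # direct-path noise
             + db_to_pow(amp_gain_db - env_snr_db - snr_gain_db)) # amplified-path noise
    return 10.0 * np.log10(speech / noise)

def intelligibility(snr_db):
    """Sigmoid stand-in for an intelligibility metric (an assumption)."""
    return 1.0 / (1.0 + np.exp(-0.5 * (snr_db + 2.0)))

env_snr = 0.0    # environmental SNR, dB (illustrative)
snr_gain = 6.0   # SNR improvement in the amplified path, dB (illustrative)
with_anc = intelligibility(eardrum_snr(env_snr, snr_gain, anc_atten_db=15.0))
no_anc = intelligibility(eardrum_snr(env_snr, snr_gain, anc_atten_db=0.0))
print(f"ANC Benefit: {100 * (with_anc - no_anc):.1f} percentage points")
```

Even this crude sketch reproduces the qualitative result: with no ANC, the unattenuated direct-path noise dominates at the eardrum and the amplified path's SNR improvement is largely wasted, whereas attenuating the direct path lets that improvement through.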
This study investigated sound localization abilities in patients with bilateral conductive and/or mixed hearing loss (BCHL) when listening with either one or two middle ear implants (MEIs). Sound localization was measured by asking patients to point as quickly and accurately as possible with a head-mounted LED in the perceived sound direction. Loudspeakers, positioned around the listener within a range of +73°/-73° in the horizontal plane, were not visible to the patients. Broadband (500 Hz-20 kHz) noise bursts (150 ms), roved over a 20-dB range in 10-dB steps, were presented. MEIs stimulate the ipsilateral cochlea only, and therefore the localization response was not affected by crosstalk. Sound localization was better with bilateral MEIs than in the unilateral left and unilateral right conditions. Good sound localization performance was found in the bilaterally aided hearing condition in four patients. In two patients, localization abilities equaled normal-hearing performance. Interestingly, in the unaided condition, when both devices were turned off, subjects could still localize the stimuli presented at the highest sound level. Comparison with data from patients implanted bilaterally with bone-conduction devices demonstrated that localization abilities with MEIs were superior. The measurements demonstrate that patients with BCHL, who use remnant binaural cues in the unaided condition, are able to process binaural cues when listening with bilateral MEIs. We conclude that implantation with two MEIs, each stimulating only the ipsilateral cochlea without crosstalk to the contralateral cochlea, can result in good sound localization abilities, and that this topic warrants further investigation.
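For context, a common way to quantify head-pointing performance in such studies is to regress response azimuth on target azimuth: a slope (gain) near 1 with a small residual error indicates accurate localization. The sketch below applies this to synthetic data and illustrates the general approach only, not the authors' exact analysis.

```python
import numpy as np

# Synthetic pointing data (an assumption): targets span the +73/-73 deg range
rng = np.random.default_rng(2)
targets = rng.uniform(-73, 73, 120)                 # loudspeaker azimuths, deg
responses = 0.9 * targets + rng.normal(0, 8, 120)   # simulated pointing responses

# Stimulus-response regression: gain, bias, and residual error
slope, intercept = np.polyfit(targets, responses, 1)
rmse = np.sqrt(np.mean((responses - (slope * targets + intercept)) ** 2))
print(f"response gain: {slope:.2f}, bias: {intercept:.1f} deg, RMSE: {rmse:.1f} deg")
```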
During continuous speech perception, endogenous neural activity becomes time-locked to acoustic stimulus features, such as the speech amplitude envelope. This speech-brain coupling can be decoded using non-invasive brain imaging techniques, including electroencephalography (EEG). Neural decoding may have clinical use as an objective measure of stimulus encoding by the brain, for example during cochlear implant listening, wherein the speech signal is severely spectrally degraded. Yet, interplay between acoustic and linguistic factors may lead to top-down modulation of perception, thereby complicating audiological applications. To address this ambiguity, we assessed neural decoding of the speech envelope under spectral degradation with EEG in acoustically hearing listeners (n = 38; 18-35 years old) using vocoded speech. We dissociated sensory encoding from higher-order processing by employing intelligible (English) and non-intelligible (Dutch) stimuli, with auditory attention sustained using a repeated-phrase detection task. Subject-specific and group decoders were trained to reconstruct the speech envelope from held-out EEG data, with decoder significance determined via random permutation testing. Whereas speech envelope reconstruction did not vary with spectral resolution, intelligible speech was associated with better decoding accuracy in general. Results were similar across subject-specific and group analyses, with less consistent effects of spectral degradation in group decoding. Permutation tests revealed possible differences in decoder statistical significance by experimental condition. In general, while robust neural decoding was observed at the individual and group level, variability within participants would most likely prevent the clinical use of such a measure to differentiate levels of spectral degradation and intelligibility on an individual basis.
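A minimal sketch of backward (stimulus-reconstruction) decoding with a permutation test is given below, assuming preprocessed, co-sampled EEG and envelope signals. The lag window, ridge regularization, circular-shift null, and the synthetic data are all assumptions; published pipelines (e.g., the mTRF framework) differ in detail.

```python
import numpy as np

def lagged(eeg, lags):
    """Stack time-lagged copies of each EEG channel as regressors.
    np.roll wraps at the edges, which is acceptable for this sketch."""
    T, C = eeg.shape
    X = np.empty((T, C * len(lags)))
    for i, lag in enumerate(lags):
        X[:, i * C:(i + 1) * C] = np.roll(eeg, lag, axis=0)
    return X

def fit_decoder(eeg, env, lags, lam=1e3):
    """Ridge regression mapping lagged EEG to the speech envelope."""
    X = lagged(eeg, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env)

rng = np.random.default_rng(3)
fs = 64
T = fs * 120                                           # 2 min of synthetic data
eeg = rng.standard_normal((T, 32))
env = 0.3 * eeg[:, 0] + rng.standard_normal(T)         # envelope "leaks" into ch 0
lags = range(0, fs // 2)                               # 0-500 ms of lags

half = T // 2                                          # train on first half
w = fit_decoder(eeg[:half], env[:half], lags)
rec = lagged(eeg[half:], lags) @ w                     # reconstructed envelope
r_obs = np.corrcoef(rec, env[half:])[0, 1]

# Null distribution from circularly shifted test envelopes
null = [np.corrcoef(rec, np.roll(env[half:], rng.integers(fs, half - fs)))[0, 1]
        for _ in range(200)]
p = np.mean(np.abs(np.array(null)) >= abs(r_obs))
print(f"reconstruction accuracy r = {r_obs:.3f}, permutation p = {p:.3f}")
```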
Many older adults live with some form of hearing loss and have difficulty understanding speech in the presence of background sound. Experiences resulting from such difficulties include increased listening effort and fatigue. Social interactions may become less appealing in the context of such experiences, and age-related hearing loss is associated with an increased risk of social isolation and associated negative psychosocial health outcomes. However, the precise relationship between age-related hearing loss and social isolation is not well described. Here, we review the literature and synthesize existing work from different domains to propose a framework with three conceptual anchor stages to describe the relation between hearing loss and social isolation: within-situation disengagement from listening, social withdrawal, and social isolation. We describe the distinct characteristics of each stage and suggest potential interventions to mitigate negative impacts of hearing loss on social lives and health. We close by outlining potential implications for researchers and clinicians.
An objective method for assessing speech audibility is essential to evaluate hearing aid benefit in children who are unable to participate in hearing tests. With consonant-vowel syllables, brainstem-dominant responses elicited at the voice fundamental frequency have proven successful for assessing audibility. This study aimed to harness the neural activity elicited by the slow envelope of the same repetitive consonant-vowel syllables to assess audibility. In adults and children with normal hearing, and in children with hearing loss wearing hearing aids, neural activity elicited by the stimulus /su∫i/ or /sa∫i/ presented at 55-75 dB SPL was analyzed using the temporal response function approach. No-stimulus runs or a very low stimulus level (15 dB SPL) were used to simulate inaudible conditions in adults and children with normal hearing. Both groups of children demonstrated higher response amplitudes than adults. Detectability (sensitivity; true positive rate) ranged between 80.1% and 100%, and did not vary by group or stimulus level but varied by stimulus, with /sa∫i/ achieving 100% detectability at 65 dB SPL. The average minimum time needed to detect a response ranged between 3.7 and 6.4 min across stimuli and listener groups, with the shortest times recorded for the stimulus /sa∫i/ and in children with hearing loss. Specificity was >94.9%. Responses to the slow envelope of non-meaningful consonant-vowel syllables can be used to distinguish audible from inaudible speech with sufficient accuracy within clinically feasible test times. Such responses can increase the clinical usefulness of existing objective approaches to evaluating hearing aid benefit.
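For readers unfamiliar with the approach, the sketch below shows a forward temporal response function (TRF) in its simplest form: ridge-regularized regression from the slow stimulus envelope onto an EEG channel over a window of lags. The synthetic data, lag window, and regularization constant are assumptions; real analyses typically rely on dedicated toolboxes (e.g., the mTRF-Toolbox) and add response-detection statistics on top of the TRF estimate.

```python
import numpy as np

def trf(env, eeg, lags, lam=1e2):
    """Estimate TRF weights: ridge regression of EEG on the lagged envelope.
    np.roll wraps at the edges, which is acceptable for this sketch."""
    X = np.column_stack([np.roll(env, lag) for lag in lags])
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

rng = np.random.default_rng(4)
fs = 128
env = np.abs(rng.standard_normal(fs * 300))            # 5 min slow-envelope proxy
kernel = np.exp(-np.arange(fs // 2) / 20) * np.sin(np.arange(fs // 2) / 5)
eeg = np.convolve(env, kernel)[:len(env)] + 2 * rng.standard_normal(len(env))

lags = range(0, fs // 2)                               # 0-500 ms of lags
weights = trf(env, eeg, lags)
peak_ms = 1000 * int(np.argmax(np.abs(weights))) / fs
print(f"estimated TRF peak latency: {peak_ms:.0f} ms")
```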
Exposure to intense low-frequency sounds, for example inside tanks and armoured vehicles, can lead to noise-induced hearing loss (NIHL) with a variable audiometric pattern, including low- and mid-frequency hearing loss. It is not known how well existing methods for diagnosing NIHL apply in such cases. Here, the audiograms of 68 military personnel (mostly veterans) who had been exposed to intense low-frequency noise (together with other types of noise) and who had low-frequency hearing loss (defined as a pure-tone average loss at 0.25, 0.5 and 1 kHz of ≥20 dB) were used to assess the sensitivity of three diagnostic methods. The first was the method of Coles, Lutman and Buffin (CLB), which depends on the identification of a notch or bulge in the audiogram near 4 kHz. The other two methods are specifically intended for diagnosing NIHL sustained during military service: the rM-NIHL method, which depends on the identification of a notch or bulge in the audiogram near 4 kHz and/or a hearing loss at high frequencies greater than expected from age alone, and the MLP(18) method, which is based on a multi-layer perceptron. The proportion of individuals receiving a positive diagnosis for either or both ears, which provides an approximate measure of sensitivity, was 0.40 for the CLB method, 0.79 for the rM-NIHL method and 1.0 for the MLP(18) method. It is concluded that the MLP(18) method is suitable for diagnosing NIHL sustained during military service whether or not the exposure includes intense low-frequency sounds.
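To give a flavour of the notch-based criteria, the sketch below screens audiograms for a 4-kHz-region notch using a simplified rule: the threshold at 3, 4, or 6 kHz must be at least 10 dB worse than at 1-2 kHz and than at 8 kHz. This is loosely inspired by the CLB guidelines but deliberately simplified; the actual CLB, rM-NIHL, and MLP(18) procedures involve further conditions and, in the last case, a trained multi-layer perceptron.

```python
def has_notch(audiogram, depth=10):
    """Simplified 4-kHz-region notch screen (illustrative, not the CLB rule).
    audiogram: dict mapping frequency in kHz to hearing threshold in dB HL."""
    low = min(audiogram[1], audiogram[2])              # low-frequency reference
    for f in (3, 4, 6):
        # Notch: worse than the low frequencies AND recovering toward 8 kHz
        if audiogram[f] - low >= depth and audiogram[f] - audiogram[8] >= depth:
            return True
    return False

ears = [
    {1: 10, 2: 15, 3: 30, 4: 45, 6: 35, 8: 20},  # notched: positive screen
    {1: 20, 2: 25, 3: 30, 4: 35, 6: 40, 8: 45},  # sloping loss: negative screen
]
flags = [has_notch(ear) for ear in ears]
print("positive screens:", flags)
print("proportion positive (sensitivity-style estimate):", sum(flags) / len(flags))
```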