Cochlear tuning, and hence auditory frequency selectivity, is thought to change in noisy environments through activation of the medial olivocochlear reflex (MOCR). In humans, auditory frequency selectivity is often assessed using psychoacoustical tuning curves (PTCs), plots of the level required for pure-tone maskers to just mask a fixed-level pure-tone probe, as a function of masker frequency. Sometimes, however, the stimuli used to measure a PTC are long enough to activate the MOCR by themselves and thus affect the PTC. Here, PTCs for probe frequencies of 500 Hz and 4 kHz were measured in forward masking using short maskers (30 ms) and probes (10 ms) to minimize activation of the MOCR by the maskers or the probes. PTCs were also measured in the presence of long (300 ms) ipsilateral, contralateral, and bilateral broadband noise precursors to investigate the effect of the ipsilateral, contralateral, and bilateral MOCR on PTC tuning. Four listeners with normal hearing participated in the experiments. At 500 Hz, ipsilateral and bilateral precursors sharpened the PTCs by decreasing the thresholds for maskers with frequencies at or near the probe frequency, with minimal effects on thresholds for maskers remote in frequency from the probe. At 4 kHz, by contrast, ipsilateral and bilateral precursors barely affected thresholds for maskers near the probe frequency but broadened PTCs by reducing thresholds for maskers far from the probe. Contralateral precursors barely affected PTCs. An existing computational model was used to interpret the results. The model suggested that, despite the apparent differences, the pattern of results is consistent with the ipsilateral and bilateral MOCR inhibiting the cochlear gain similarly at the two probe frequencies and more strongly than the contralateral MOCR.
It has long been known that environmental conditions, particularly during development, affect morphological and functional properties of the brain, including sensory systems; manipulating the environment thus represents a viable way to explore experience-dependent plasticity of the brain and of sensory systems. In this review, we summarize our experience with the effects of an acoustically enriched environment (AEE), consisting of spectrally and temporally modulated complex sounds, applied during the first weeks of postnatal development in rats, and compare it with related knowledge from the literature. Compared with controls, rats exposed to AEE showed differences in dendritic length, spine number, and spine density in neurons of several parts of the auditory system. AEE exposure permanently influenced the neuronal representation of sound frequency and intensity, resulting in lower excitatory thresholds, increased frequency selectivity, and steeper rate-intensity functions. These changes were present in neurons of both the inferior colliculus and the auditory cortex (AC). In addition, AEE changed the responsiveness of AC neurons to frequency-modulated and, to a lesser extent, amplitude-modulated stimuli. Rearing rat pups in AEE also increased the reliability of the acoustic responses of AC neurons, affecting both rate and temporal codes. At the level of individual spikes, the discharge patterns of individual neurons showed a higher degree of similarity across stimulus repetitions. Behaviorally, rearing pups in AEE improved frequency resolution and gap-detection ability under conditions of degraded stimulus clarity. Altogether, these results show that exposure to AEE during the critical developmental period influences frequency and temporal processing in the auditory system, and that these changes persist into adulthood.
The results may inform the interpretation of the effects of applying an enriched acoustic environment in human neonatal medicine, especially in the care of preterm-born children.
The genes Ocm (encoding oncomodulin) and Slc26a5 (encoding prestin) are expressed strongly in outer hair cells, and both are involved in deafness in mice. However, it is not clear whether they influence each other's expression. In this study, we characterise the auditory phenotype resulting from two new mouse alleles, Ocmtm1e and Slc26a5tm1Cre. Each mutation leads to the absence of detectable mRNA transcribed from the mutant allele, but there was no evidence that oncomodulin regulates the expression of prestin or vice versa. The two mutants show distinctive patterns of auditory dysfunction. Ocmtm1e homozygotes have normal auditory brainstem response thresholds at 4 weeks old followed by progressive hearing loss starting at high frequencies, while heterozygotes show largely normal thresholds until 6 months of age, when signs of threshold elevation are detected. In contrast, Slc26a5tm1Cre homozygotes have stable but raised thresholds across all frequencies tested, 3 to 42 kHz, at least from 4 to 8 weeks old, while heterozygotes have raised thresholds at high frequencies. Distortion product otoacoustic emissions and cochlear microphonics show deficits similar to auditory brainstem responses in both mutants, suggesting that the origin of hearing impairment is in the outer hair cells. Endocochlear potentials are normal in the two mutants. Scanning electron microscopy revealed normal development of hair cells in Ocmtm1e homozygotes but scattered outer hair cell loss even at 4 weeks old, when thresholds appeared normal, indicating that there is no direct relationship between the number of outer hair cells present and auditory thresholds.
The middle-ear muscle reflex (MEMR) and medial olivocochlear reflex (MOCR) modify peripheral auditory function, which may reduce masking and improve speech-in-noise (SIN) recognition. Previous work and our pilot data suggest that the two reflexes respond differently to static versus dynamic noise elicitors. However, little is known about how the two reflexes work in tandem to contribute to SIN recognition. We hypothesized that SIN recognition would be significantly correlated with the strength of the MEMR and with the strength of the MOCR. Additionally, we hypothesized that SIN recognition would be best when both reflexes were activated. A total of 43 healthy, normal-hearing adults met the inclusion/exclusion criteria (35 females, age range: 19–29 years). MEMR strength was assessed using wideband absorbance. MOCR strength was assessed using transient-evoked otoacoustic emissions. SIN recognition was assessed using a modified version of the QuickSIN. All measurements were made with and without two types of contralateral noise elicitors (steady and pulsed) at two levels (50 and 65 dB SPL). Steady noise was used to primarily elicit the MOCR and pulsed noise was used to elicit both reflexes. Two baseline conditions without a contralateral elicitor were also obtained. Results revealed differences in how the MEMR and MOCR responded to elicitor type and level. Contrary to hypotheses, SIN recognition was not significantly improved in the presence of any contralateral elicitors relative to the baseline conditions. Additionally, there were no significant correlations between MEMR strength and SIN recognition, or between MOCR strength and SIN recognition. MEMR and MOCR strength were significantly correlated for pulsed noise elicitors but not steady noise elicitors. Results suggest no association between SIN recognition and the MEMR or MOCR, at least as measured and analyzed in this study. 
SIN recognition may have been influenced by factors not accounted for in this study, such as contextual cues, warranting further study.
The detection of novel, low-probability events in the environment is critical for survival. To perform this vital task, our brain continuously builds and updates a model of the outside world, an extensively studied phenomenon commonly referred to as predictive coding. Predictive coding posits that the brain continuously extracts regularities from the environment to generate predictions. These predictions are then used to suppress neuronal responses to redundant information, filtering out those inputs and thereby automatically enhancing the remaining, unexpected inputs.
We have recently described the ability of auditory neurons to generate predictions about expected sensory inputs by detecting their absence in an oddball paradigm using omitted tones as deviants. Here, we studied the responses of individual neurons to omitted tones by presenting sequences of repetitive pure tones with either random or periodic omissions, at both fast and slow rates, to inferior colliculus and auditory cortex neurons of anesthetized rats. Our goal was to determine whether these predictions depend on specific stimulus features. Results showed that omitted tones could be detected at both fast (8 Hz) and slow (2 Hz) repetition rates, with detection being more robust in the non-lemniscal auditory pathway.
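The omission paradigm described above, a repetitive tone train with either periodic or random omissions, can be sketched as a simple stimulus schedule generator. This is an illustrative sketch, not the authors' stimulus code; the function names and the default omission probability are assumptions.

```python
import numpy as np

def omission_schedule(n_tones, p_omit=0.1, mode="random", period=10, seed=0):
    """Return a boolean array: True = tone presented, False = omitted.

    mode="periodic" omits every `period`-th position; mode="random"
    omits each position independently with probability `p_omit`."""
    rng = np.random.default_rng(seed)
    present = np.ones(n_tones, dtype=bool)
    if mode == "periodic":
        present[period - 1::period] = False
    else:
        present[rng.random(n_tones) < p_omit] = False
    return present

def onset_times(present, rate_hz):
    """Onset times (in seconds) of the presented tones at a given repetition rate."""
    return np.flatnonzero(present) / rate_hz
```

Comparing responses time-locked to the omitted positions in the periodic versus random schedules is what separates a true prediction signal from simple adaptation to the repeated tone.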
Several studies suggest that hearing loss results in changes in the balance between inhibition and excitation in the inferior colliculus (IC). The IC is an integral nucleus within the auditory brainstem: the majority of ascending pathways from the lateral lemniscus (LL), superior olivary complex (SOC), and cochlear nucleus (CN) synapse in the IC before projecting to the thalamus and cortex. Many of these ascending projections provide inhibitory innervation to neurons within the IC. However, the nature and distribution of this inhibitory input have only been partially elucidated in the rat. The inhibitory neurotransmitter gamma-aminobutyric acid (GABA), from the ventral nucleus of the lateral lemniscus (VNLL), provides the primary inhibitory input to the IC of the rat, with GABA from other lemniscal and SOC nuclei providing lesser but still prominent innervation.
There is evidence that hearing-related conditions can result in dysfunction of IC neurons. These changes may be mediated in part by changes in GABA inputs to IC neurons. We have previously used gene microarrays to study deafness-related changes in gene expression in the IC and found significant changes in GAD as well as in the GABA transporters and GABA receptors (Holt, 2005). This is consistent with reports of age- and trauma-related changes in GABA (Bledsoe et al., 1995; Mossop et al., 2000; Salvi et al., 2000). Here, ototoxic lesions of the cochlea produced a permanent threshold shift. The number, intensity, and density of GABA-positive axon terminals in the IC were compared in normal-hearing and deafened rats. While the number of GABA-immunolabeled puncta differed only minimally between groups, the intensity of labeling was significantly reduced. The ultrastructural localization and distribution of labeling were also examined. In deafened animals, the number of immunogold particles was reduced by 78% in axodendritic and 82% in axosomatic GABAergic puncta. The affected puncta were primarily associated with small IC neurons. These results suggest that reduced inhibition of IC neurons contributes to the increased neuronal excitability observed in the IC following noise- or drug-induced hearing loss. Whether these deafness-diminished inhibitory inputs originate from sources intrinsic or extrinsic to the central nucleus of the IC (CNIC) awaits further study.
Over the last decade, multiple studies have shown that hearing-impaired listeners’ speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise “waves” (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the “normalized Contrast Level” (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. 
Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same setup, adding only a few minutes to the process.
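The post-processing step described above, refining a tracked threshold by fitting a logistic function to the collected detect/no-detect data, can be illustrated with a generic maximum-likelihood fit. This is a minimal sketch of the general technique, not the ACT's actual fitting procedure; the grid ranges and function name are assumptions.

```python
import numpy as np

def fit_logistic_threshold(levels, responses, slopes=np.linspace(0.5, 10, 40)):
    """Maximum-likelihood fit of p(detect) = 1 / (1 + exp(-(x - t)/s))
    over a grid of candidate thresholds t and slopes s.

    levels    : stimulus levels presented on each trial
    responses : 1 = detected, 0 = not detected
    Returns (threshold, slope) maximizing the Bernoulli log-likelihood."""
    levels = np.asarray(levels, float)
    responses = np.asarray(responses, float)
    ts = np.linspace(levels.min(), levels.max(), 200)
    best, best_ll = None, -np.inf
    for s in slopes:
        for t in ts:
            p = 1.0 / (1.0 + np.exp(-(levels - t) / s))
            p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
            ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
            if ll > best_ll:
                best_ll, best = ll, (t, s)
    return best
```

A grid search is used here for transparency; in practice a gradient-based optimizer over the two parameters would serve equally well.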
Auditory spatial attention detection (ASAD) seeks to determine which speaker in a surround sound field a listener is focusing on, based on the listener's brain biosignals. Although existing studies have achieved ASAD from a single-trial electroencephalogram (EEG), large inter-subject variability makes them perform poorly in cross-subject scenarios. Moreover, most ASAD methods do not take full advantage of the topological relationships between EEG channels, which are crucial for high-quality ASAD. Recently, some advanced studies have introduced graph-based brain topology modeling into ASAD, but how to calculate edge weights in a graph to better capture actual brain connectivity warrants further investigation. To address these issues, we propose a new ASAD method in this paper. First, we model a multi-channel EEG segment as a graph, where differential entropy serves as the node feature and a static adjacency matrix is generated from inter-channel mutual information to quantify brain functional connectivity. Then, EEG graphs from different subjects are encoded into a shared embedding space through a total variation graph neural network. Meanwhile, feature distribution alignment based on multi-kernel maximum mean discrepancy is adopted to learn subject-invariant patterns. Note that, for privacy preservation, we align the EEG embeddings of different subjects to reference distributions rather than to each other. A series of experiments on open datasets demonstrates that the proposed model outperforms state-of-the-art ASAD models in cross-subject scenarios with relatively low computational complexity, and that feature distribution alignment improves the generalizability of the proposed model to new subjects.
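The graph-construction step described above, differential entropy as the node feature and inter-channel mutual information as edge weights, can be sketched as follows. This is a minimal illustration assuming Gaussian signals for the entropy and a histogram estimator for the mutual information; it is not the authors' implementation, and the function names and bin count are assumptions.

```python
import numpy as np

def differential_entropy(x):
    """Differential entropy under a Gaussian assumption:
    h = 0.5 * ln(2 * pi * e * var(x)), in nats."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information (nats) between two channels."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells (0 * log 0 = 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def adjacency(eeg, bins=16):
    """Static adjacency matrix: pairwise MI between all EEG channels.

    eeg: array of shape (n_channels, n_samples)."""
    n = eeg.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = mutual_information(eeg[i], eeg[j], bins)
    return A
```

Unlike correlation, the MI-based edge weights are non-negative and capture non-linear dependencies between channels, which is one plausible motivation for this choice of connectivity measure.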
Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are transmitted well through the CI. Still, the gradual improvement of rhythm perception after CI switch-on has not yet been characterized with neurophysiological measures. To fill this gap, we reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median experience of 7 years; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as an Alberti bass) for 35 min. Applying frequency tagging, we aimed to estimate the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses, approaching the responses of NH controls. We found an increase in the frequency-tagged amplitudes after only 3 months of CI use. This increase in neural synchronization may reflect an early adaptation to CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not merely reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time) but also showed non-linear transformations that appeared to enhance relevant periodicities of the Alberti bass.
Our findings provide neurophysiological evidence of a gradual adaptation to the CI, noticeable already after three months and resulting, after extended CI use, in brain processing of the spectrotemporal features of musical rhythms that approaches that of NH listeners.
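The core idea of frequency tagging, reading the amplitude spectrum of the neural response at the stimulus periodicities, can be sketched as follows. This is a minimal illustration, not the analysis pipeline used in these studies; the function name and sampling parameters are assumptions.

```python
import numpy as np

def tagged_amplitudes(signal, fs, freqs):
    """Single-sided amplitude spectrum of `signal` evaluated at the tagged frequencies.

    Assumes the recording spans an integer number of cycles of each tagged
    frequency, so that each falls on an exact FFT bin (standard practice in
    frequency-tagging analyses to avoid spectral leakage). The 2/n scaling
    is valid for non-DC, non-Nyquist bins."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n * 2
    bin_hz = fs / n  # frequency resolution of the spectrum
    return {f: spec[int(round(f / bin_hz))] for f in freqs}
```

In practice the tagged amplitudes are usually compared against the mean of neighboring bins to express them as signal-to-noise ratios before comparing groups.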