Pub Date: 2026-02-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.heares.2025.109523
Laura Keur-Huizinga, Nicole A. Huizinga, Adriana A. Zekveld, Niek J. Versfeld, Eco J.C. de Geus, Sophia E. Kramer
Hard-of-hearing individuals experience increased levels of fatigue and listening effort. In experimental settings, psychophysiological responses to auditory demand manipulations are considered to be influenced by listening effort. This study investigated the effects of hearing acuity and occupational need-for-recovery (NFR) on trial-level psychophysiological activity during speech perception. A total of 125 normal-hearing and hard-of-hearing participants (88 females and 37 males, 37–72 years old) completed speech reception threshold tasks. Outcome measures included baseline pupil size (BPS), mean pupil dilation (MPD), skin conductance response (SCR) amplitudes, and respiratory sinus arrhythmia (RSA). First, the interaction between time-on-task and hearing acuity was analyzed for the outcome measures. A second analysis included a subsample of 82 participants and tested for interactions between NFR, time-on-task, and hearing acuity. Overall, BPS, MPD, and SCR amplitude decreased over time-on-task, whereas RSA increased, as expected. Higher NFR had opposite effects on BPS and MPD depending on hearing acuity: when hearing acuity was better, higher NFR was associated with a decrease in the pupil measures, whereas when hearing acuity was worse, higher NFR was associated with an increase. Participants with worse hearing combined with higher NFR showed a relatively stable BPS and SCR amplitude over time-on-task, indicative of (preparatory) compensatory activity. Additionally, RSA increased with worse hearing in the employed subsample, but was not sensitive to NFR. The effects of NFR, specifically on the pupil measures, were strongest in those with poorer hearing, suggesting higher vulnerability to daily-life fatigue.
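The crossover pattern described above can be sketched as a linear model with an NFR × hearing-acuity interaction term. The function names and all coefficients below are invented for illustration; they are not estimates from the study.

```python
# Hypothetical sketch: with an interaction term, the marginal effect of
# need-for-recovery (NFR) on a pupil measure flips sign with hearing acuity.
# All coefficients are invented illustration values, not study estimates.

def predicted_pupil(nfr, pta, b0=0.0, b_nfr=-0.10, b_pta=0.02, b_int=0.005):
    """Linear model: pupil ~ NFR + PTA + NFR:PTA.

    nfr: need-for-recovery score (higher = more recovery needed)
    pta: pure-tone average in dB HL (higher = worse hearing)
    """
    return b0 + b_nfr * nfr + b_pta * pta + b_int * nfr * pta

def nfr_slope(pta, b_nfr=-0.10, b_int=0.005):
    """Marginal NFR effect at a given hearing level: d(pupil)/d(NFR)."""
    return b_nfr + b_int * pta

# With these illustrative coefficients, the NFR effect is negative at a
# better-hearing PTA of 10 dB HL and positive at a worse-hearing 40 dB HL.
```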
Title: "Hearing acuity and need for recovery affect time-on-task effects on psychophysiological activity during listening" (Hearing Research, Vol. 471, Article 109523)
Pub Date: 2026-02-01 | Epub Date: 2026-01-06 | DOI: 10.1016/j.heares.2026.109527
Lukas Graf, David Stauske, Mohammad Ghoncheh, Andreas Arnold, Hamidreza Mojallal, Christof Stieger, Hannes Maier
Introduction
Various couplers exist for the Vibrant Soundbridge implant to connect the actuator to a middle ear structure. While the surgical indication mainly distinguishes between purely sensorineural hearing loss and mixed hearing loss, and audiological indication ranges are designed accordingly, we believe that the biomechanical distinction between forward and reverse cochlear stimulation is more meaningful. To evaluate this, we apply all common couplers in temporal bone experiments and correct the measurements for adequate comparability.
Material and methods
The study was conducted at two research labs, each with n = 10 cadaveric temporal bones. Laser Doppler velocity measurements on the stapes were used for direct comparison between the coupling methods. For this comparison, reverse cochlear stimulation requires a correction factor to account for pressure loss through a third window. This factor was calculated by measuring intracochlear pressure differences in the same specimens.
Results
Compared to the stapes LDV measurement, reverse stimulation requires a correction factor that decreases toward higher frequencies: approximately +22.4 dB between 125–250 Hz, +7.9 dB between 300–800 Hz, and +4.8 dB above 1000 Hz. Taking this into account, there is still a considerable difference between the forward and reverse stimulation methods of 15–22 dB in the frequency range <2000 Hz and approximately 7 dB >2000 Hz in favor of forward stimulation.
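As a worked illustration, the band-wise correction quoted above can be applied to a reverse-stimulation level in dB. The band edges between the quoted ranges (250–300 Hz and 800–1000 Hz) and the example input level are assumptions for this sketch.

```python
# Sketch of applying the reported reverse-stimulation correction factors.
# Band boundaries between the quoted ranges are assumed; values in dB.

def correction_db(freq_hz):
    """Approximate correction factor per the reported frequency bands."""
    if freq_hz <= 250:
        return 22.4   # ~125-250 Hz
    elif freq_hz <= 800:
        return 7.9    # ~300-800 Hz
    else:
        return 4.8    # above ~1000 Hz

def corrected_level(level_db, freq_hz):
    """Reverse-stimulation level after adding the band correction."""
    return level_db + correction_db(freq_hz)

# Example: a 60 dB measurement at 125 Hz corrects to about 82.4 dB.
```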
Conclusions
Forward couplings, including direct stapes coupling, are mechanically superior to reverse stimulation. The stapes head coupler performs 7–25 dB better than the round window coupler and similarly to, or even better than, incus couplers.
Title: "Experimental comparison between forward and reverse Vibrant Soundbridge cochlear stimulation" (Hearing Research, Vol. 471, Article 109527)
Pub Date: 2026-02-01 | Epub Date: 2026-01-11 | DOI: 10.1016/j.heares.2026.109539
Iris Van de Ryck, Nicolas Heintz, Iustina Rotaru, Simon Geirnaert, Alexander Bertrand, Tom Francart
Objectives
Auditory attention decoding (AAD) refers to the process of identifying which sound source a listener is attending to, based on neural recordings, such as electroencephalography (EEG). Most AAD studies use a competing speaker paradigm where two continuously active speech signals are simultaneously presented, in which the participant is instructed to attend to one speaker and ignore the other speaker. However, such a competing two-speaker scenario is uncommon in real life, as speakers typically take turns rather than speaking simultaneously. In this paper, we argue that decoding attention to conversations (rather than individual speakers) is a more relevant paradigm for testing AAD algorithms. In such a conversation-tracking paradigm, the AAD algorithm focuses on switching between entire conversations, resulting in less frequent attention shifts (ignoring turn-taking within conversations), thereby allowing for more relaxed constraints on the decision time.
Design
To test AAD performance in such a conversation-tracking paradigm, we simulated a challenging restaurant scenario with three simultaneous two-speaker conversations: podcasts presented in front of the listener and at the back left and back right of the room. We conducted an EEG experiment on 20 normal-hearing participants to compare the performance of AAD in the commonly used competing-speaker paradigm with two speakers versus the conversation-tracking paradigm with two or three conversations, each containing two turn-taking speakers.
Results
We found that AAD, using stimulus decoding, worked well under all experimental conditions, and that the accuracy was not influenced by the direction of attention, the proximity to the target conversation, or the presence of within-trial attention switches (versus a condition with sustained attention). Given the challenging scenario, we assessed the participants’ listening experience and found a correlation between the neural decoding performance and the perceived listening effort and self-reported speech intelligibility. To gain insight into their speech intelligibility in our setup, participants performed a speech-in-noise test (Flemish matrix sentence test), but we did not find a correlation between speech intelligibility performance and AAD performance.
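The stimulus-decoding selection step can be illustrated with a toy correlation classifier. The signals below are synthetic, and the decoder output is simulated as a noisy copy of the attended envelope; this sketches only the envelope-correlation decision, not the authors' EEG pipeline.

```python
import numpy as np

# Toy correlation-based selection for auditory attention decoding (AAD):
# the candidate whose envelope correlates best with the reconstructed
# envelope is taken as the attended conversation. Synthetic data only.

rng = np.random.default_rng(0)
n_samples = 2000
env_a = rng.standard_normal(n_samples)   # envelope, conversation A
env_b = rng.standard_normal(n_samples)   # envelope, conversation B

# Simulated decoder output: tracks conversation A with added noise.
reconstructed = env_a + 0.5 * rng.standard_normal(n_samples)

def pick_attended(recon, envelopes):
    """Return the index of the envelope most correlated with recon."""
    corrs = [np.corrcoef(recon, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs))

attended = pick_attended(reconstructed, [env_a, env_b])  # index 0 here
```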
Title: "EEG-based decoding of auditory attention to conversations with turn-taking speakers" (Hearing Research, Vol. 471, Article 109539)
Pub Date: 2026-02-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.heares.2025.109522
Carter M. Smith, Natalia Van Esch, Nichole E. Scheerer
Decreased sound tolerance (DST) is an encompassing term for conditions marked by a reduced tolerance to everyday sounds. Misophonia, sensitivity to specific trigger sounds that cue aversive responses, is one DST subtype. Hyperacusis, another DST subtype, occurs when people are irritated by general sounds that are not bothersome to others. Research suggests that those with DST face heightened mental health challenges. Psychometrically validated measures aligned with the recent misophonia consensus definition have not assessed the relationship between misophonia and mental health. There is also a dearth of DST-mental health research in Canadian universities. Here, 2095 Canadian undergraduate students completed DST and mental health questionnaires. We explored the relationship between anxiety and depression and DST. We found strong, positive correlations between DST symptoms and mental health difficulties. These findings highlight DST’s detrimental effects and the need for future research on strategies for managing and treating DST in post-secondary institutions.
Title: "Anxiety and depression among Canadian undergraduates with decreased sound tolerance" (Hearing Research, Vol. 471, Article 109522)
Pub Date: 2026-02-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.heares.2025.109520
Nina Chimienti, Chenjun Shi, Lily Kassis, Razzane Zaghloul, Man Do, Jitao Zhang, Xiying Guan
Cochlear macromechanics and micromechanics both rely on the mechanical properties of the cells and membranes in the organ of Corti (OC). These components’ stiffness has been investigated primarily using contact-based tests, which require the organ or cells to be removed from the cochlea. This approach is not only challenging but may alter the cells’ stiffness as they are moved into a non-native environment. Recently, optical Brillouin microscopy has emerged as a promising tool for quantifying the mechanical properties of biological specimens. This contact-free modality raises the possibility that the stiffness of the OC cells and other components can be measured in situ and even in vivo. In the present study, we validated the feasibility of in situ Brillouin measurement of the OC cells’ stiffness using fixed mouse cochleae. The results demonstrate that Brillouin microscopy has sufficient penetration depth and mechanical sensitivity to probe the OC, allowing us to differentiate the stiffness between the bone, spiral ligament, and cells; the longitudinal modulus obtained from the experiment varies between different types of OC cells in a way expected from the cells’ cytoskeletal composition. This pilot study paves the way for future application of Brillouin microscopy to quantify the stiffness of OC constituents in situ in living cochleae.
Title: "Non-contact optical stiffness measurements in the organ of Corti in mice" (Hearing Research, Vol. 471, Article 109520)
Pub Date: 2026-02-01 | Epub Date: 2026-01-02 | DOI: 10.1016/j.heares.2026.109525
David Stauske, Hamidreza Mojallal, Nils Prenzler, Hannes Maier, Mohammad Ghoncheh
Objective
Our experimental study evaluated the efficiency of sound transmission with the Floating Mass Transducer (FMT) in forward (incus short process, SP) and reverse (round window, RW) stimulation modes using different coupling configurations.
Methods
Using laser Doppler vibrometry (LDV) at the stapes and intracochlear pressure difference (ICPD), the equivalent sound pressure level output was determined according to the ASTM standard. Coupling configurations included the Vibroplasty-SP-Coupler, the Vibroplasty-RW-Coupler, and the research RW-Precision-Coupler. Reverse stimulation was studied with both intact and disrupted ossicular chains.
Results
In forward stimulation with coupling to the short process (SP) of the incus, output levels increased from 200 Hz to ∼1.26 kHz and remained constant up to 8 kHz. LDV and ICPD yielded similar results in forward stimulation. In reverse stimulation, both RW coupling methods showed resonance peaks at 1.5–2 kHz, though output amplitudes were ∼10–12 dB lower than in forward stimulation. The RW-Precision-Coupler produced higher output with less variability than the Vibroplasty-RW-Coupler. In the mid-frequency range (0.6–1.5 kHz), the output levels measured with LDV and ICPD were similar for forward and reverse stimulation, but ICPD indicated higher outputs at low and high frequencies. Variability was greater in reverse stimulation, while intact versus disrupted ossicular chains showed no significant differences in reverse stimulation.
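The equivalent-output idea can be sketched as follows, assuming the common temporal-bone convention of referencing actuator-driven stapes velocity to the velocity produced by a known acoustic stimulus; the function name and numeric values are invented examples, not data from this study.

```python
import math

# Sketch of an equivalent sound pressure level: the actuator-driven stapes
# velocity is referenced to the velocity produced by a known acoustic input.
# Convention and example values are assumptions for illustration.

def equivalent_spl_db(v_actuator, v_acoustic, acoustic_spl_db):
    """Equivalent SPL = stimulus SPL + 20*log10(v_actuator / v_acoustic)."""
    return acoustic_spl_db + 20.0 * math.log10(v_actuator / v_acoustic)

# Example: doubling the acoustically referenced velocity adds about 6 dB.
```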
Conclusion
LDV is well-established for assessing forward stimulation, while ICPD is more accurate in reverse stimulation. Despite lower overall output, RW stimulation frequency characteristics are preserved, supporting its clinical relevance when forward stimulation is not feasible.
Title: "Preclinical methods to determine output of the floating mass transducer in forward and reverse stimulation" (Hearing Research, Vol. 471, Article 109525)
Pub Date: 2026-02-01 | Epub Date: 2026-01-03 | DOI: 10.1016/j.heares.2026.109526
Shichu Sun, Shuai Cheng, Shifu Li, Miao Zhao, Shiqi Jing, Yuhua Wang, You Zhou
Salicylate reliably induces tinnitus, yet its systemic effects on the central auditory and limbic systems remain incompletely characterized. Through integrated transcriptomic and metabolomic profiling of the rat cochlear nucleus and hippocampus, we observed pronounced region-specific remodeling following chronic tinnitus-inducing salicylate treatment. All differential and enrichment analyses were filtered using a nominal p-value cutoff (p < 0.05) without multiple-testing correction; thus, the findings should be interpreted as exploratory. We identified 150 differentially expressed genes (DEGs) and 70 differentially expressed metabolites (DEMs) in the cochlear nucleus, and 550 DEGs alongside 71 DEMs in the hippocampus. In the cochlear nucleus, DEGs were enriched in neuroactive ligand-receptor interaction, cell adhesion, TNF signaling, ABC transporters, and Hippo signaling pathways. Concurrently, DEMs were enriched in cholesterol metabolism, choline metabolism, aldosterone and cortisol synthesis, primary bile acid biosynthesis, and vitamin digestion and absorption. Multi-omics integration highlighted a synergistic network involving bile secretion, cholesterol metabolism, and ABC transporters. In the hippocampus, DEGs were associated with extracellular matrix (ECM)-receptor interaction, phagosome, apoptosis, PI3K-Akt signaling pathway, focal adhesion, Hippo signaling pathway, fatty acid elongation, and proteoglycans in cancer. DEMs were enriched in choline metabolism, glycerophospholipid metabolism, cholesterol metabolism, vitamin digestion and absorption, retrograde endocannabinoid signaling, primary bile acid biosynthesis, linoleic acid metabolism, alpha-linolenic acid metabolism, and phenylalanine metabolism. Integrative analysis revealed correlated networks involving primary bile acid biosynthesis, bile secretion, cholesterol metabolism, ABC transporters, and choline metabolism.
These findings provide a comprehensive view of the neurobiological mechanisms underlying salicylate-induced tinnitus, demonstrating robust region-specific remodeling within auditory and limbic structures. Our results suggest chronic salicylate exposure disrupts critical bioenergetic and signaling pathways, contributing to aberrant neural excitability in the auditory pathway and cognitive-affective impairments mediated by the hippocampus.
Title: "Chronic salicylate administration induces transcriptomic and metabolomic remodeling in the rat cochlear nucleus and hippocampus" (Hearing Research, Vol. 471, Article 109526)
Pub Date : 2026-02-01Epub Date: 2026-01-07DOI: 10.1016/j.heares.2026.109529
Yan Wang , Chanyuan Zhang , Xiaohui Ma , He Zhao , Qi Wang , Limei Cui , Liang Chen , Yan Sun
Background
Age-related hearing loss (ARHL) is a prevalent neurodegenerative condition commonly linked to aging and chronic inflammation. However, there is currently a lack of substantial proteomic evidence elucidating the underlying mechanisms within the brain.
Methods
The study employed proteomic techniques to identify proteins as biomarkers for ARHL and to investigate and predict their underlying pathogenic mechanisms and potential intervention targets.
Results
In studying the impact of aging on ARHL, we found significant expression of several hearing-related proteins, including Mbp, Mag, Plp1, Orm1, Orm2, Tubb2b, Tuba3aa, and Tuba4a. These proteins were enriched in the ceramide-related pathway (CAMS), the sphingolipid pathway, and the microtubule transport pathway. Further investigation into the impact of chronic neuroinflammation, particularly the activation of microglia, revealed an improvement in hearing following the inhibition of microglial activation. Additionally, two proteins significantly associated with hearing, Mag and Orm1, were found to be expressed in the cochlear nucleus and enriched in the CAMs and sphingolipid pathways.
Conclusions
In conclusion, we predict that aging may hinder the microtubule transport pathway, affect the CAMS and acidic glycoprotein pathways, and alter the differential expression of proteins, thereby contributing to the occurrence and development of ARHL. After the inhibition of microglia, key proteins of the CAMS and acidic glycoprotein pathways appeared among the differentially expressed proteins, suggesting that aging may promote ARHL by affecting myelin stripping by microglia.
{"title":"Investigating the mechanisms of ageing and neuroinflammation in age-related hearing loss: A proteomic analysis","authors":"Yan Wang , Chanyuan Zhang , Xiaohui Ma , He Zhao , Qi Wang , Limei Cui , Liang Chen , Yan Sun","doi":"10.1016/j.heares.2026.109529","DOIUrl":"10.1016/j.heares.2026.109529","url":null,"abstract":"<div><h3>Background</h3><div>Age-related hearing loss (ARHL) is a prevalent neurodegenerative condition commonly linked to aging and chronic inflammation. However, there is currently a lack of substantial proteomic evidence elucidating the underlying mechanisms within the brain.</div></div><div><h3>Methods</h3><div>The study employed proteomic techniques to identify proteins as biomarkers for ARHL and to investigate and predict their underlying pathogenic mechanisms and potential intervention targets.</div></div><div><h3>Results</h3><div>In studying the impact of aging on ARHL, we found significant expression of several hearing-related proteins, including Mbp, Mag, Plp1, Orm1, Orm2, Tubb2b, Tuba3aa, and Tuba4a. These proteins were enriched in the ceramide-related pathway (CAMS), the sphingolipid pathway, and the microtubule transport pathway. Further investigation into the impact of chronic neuroinflammation, particularly the activation of microglia, revealed an improvement in hearing following the inhibition of microglial activation. Additionally, two proteins significantly associated with hearing, Mag and Orm1, were found to be expressed in the cochlear nucleus and enriched in the CAMs and sphingolipid pathways.</div></div><div><h3>Conclusions</h3><div>In conclusion, we predict that aging may hinder the microtubule transport pathway, affect the CAMS and acidic glycoprotein pathways, and alter the differential expression of proteins, thereby contributing to the occurrence and development of ARHL. 
After the inhibition of microglia, key proteins of the CAMS and acidic glycoprotein pathways appeared among the differentially expressed proteins, suggesting that aging may promote ARHL by affecting myelin stripping by microglia.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"471 ","pages":"Article 109529"},"PeriodicalIF":2.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145951457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-01Epub Date: 2026-01-09DOI: 10.1016/j.heares.2026.109536
David K. Ryugo , Satoshi Nishitani
Age-related hearing loss impairs speech understanding for socialization and music appreciation for enjoyment, both of which compromise quality of life and can lead to cognitive decline. It has previously been shown that, even while standard audiometric hearing thresholds remain normal over time, speech understanding in noise can become more difficult and tinnitus can emerge. These specific hearing difficulties are not revealed by standard audiograms but have been attributed to the loss of high-threshold auditory nerve fibers caused by the disappearance of terminal endings under inner hair cells. This loss can be measured as a reduction of evoked activity in the auditory nerve and as atrophy of the central auditory nerve endings in the anteroventral cochlear nucleus called endbulbs of Held. In the present study, we used age-graded cohorts of mice to compare hearing loss with the structure of auditory nerve synapses using serial-section electron microscopy. We demonstrated a pathologic expansion and flattening of auditory nerve synapses against spherical bushy cells in the rostral anteroventral cochlear nucleus in older mice with hearing loss. These changes portend impairments in sound processing and emphasize the importance of identifying “hidden” hearing loss for potential rehabilitation.
{"title":"Ageing and the auditory nerve: Hearing sensitivity and endbulb synapses","authors":"David K. Ryugo , Satoshi Nishitani","doi":"10.1016/j.heares.2026.109536","DOIUrl":"10.1016/j.heares.2026.109536","url":null,"abstract":"<div><div>Age-related hearing loss impairs speech understanding for socialization and music appreciation for enjoyment, both of which compromise quality of life and can lead to cognitive decline. It has previously been shown that, even while standard audiometric hearing thresholds remain normal over time, speech understanding in noise can become more difficult and tinnitus can emerge. These specific hearing difficulties are not revealed by standard audiograms but have been attributed to the loss of high-threshold auditory nerve fibers caused by the disappearance of terminal endings under inner hair cells. This loss can be measured as a reduction of evoked activity in the auditory nerve and as atrophy of the central auditory nerve endings in the anteroventral cochlear nucleus called endbulbs of Held. In the present study, we used age-graded cohorts of mice to compare hearing loss with the structure of auditory nerve synapses using serial-section electron microscopy. We demonstrated a pathologic expansion and flattening of auditory nerve synapses against spherical bushy cells in the rostral anteroventral cochlear nucleus in older mice with hearing loss. 
These changes portend impairments in sound processing and emphasize the importance of identifying “hidden” hearing loss for potential rehabilitation.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"471 ","pages":"Article 109536"},"PeriodicalIF":2.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146010090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-01Epub Date: 2025-12-26DOI: 10.1016/j.heares.2025.109521
Shagun Ajmera , Rafay A. Khan , Gibbeum Kim , Namitha Jain , Ariana Castro , Howard Berenbaum , Fatima T. Husain
Misophonia and loudness hyperacusis are debilitating sound intolerance conditions marked by extreme emotional and physiological responses to everyday sounds. Although frequently co-occurring, their distinct neural correlates remain poorly delineated. In an exploratory data-driven analysis, we identified neural-connectivity-based markers of misophonia among cortical and subcortical networks in the brain using resting-state fMRI data. We leveraged an optimized and cross-validated machine learning framework to sift through >85 thousand functional connections and to evaluate the detectability of misophonia, in isolation and when comorbid with hyperacusis. Participants were rigorously categorized using structured interviews into misophonia-only (MI), misophonia with hyperacusis (MH), and control (CTR) groups. Classifier models trained on individual functional connectivity distinguished both MI and MH from CTR, with 63 % and 67 % test prediction accuracy, respectively. Core misophonia-related alterations consistently emerged across both groups, particularly in the salience, somatomotor, and frontoparietal control networks, implying disruptions in emotion regulation, motor inhibition, and attentional control, respectively. Specific to misophonia-only were connectivity abnormalities in the basal ganglia and subcortex, suggesting a neural dissociation between the MI and MH conditions. In contrast, connectivity trends unique to MH revealed networks implicated in higher-order visual processing, likely reflecting hyperacusis-linked processes. These findings offer a refined neurobiological dissociation between misophonia and hyperacusis and underscore the importance of careful diagnostic separation in both research and clinical contexts. By isolating misophonia-relevant brain networks, our results provide actionable insight into the development of precise, neuroscience-informed interventions. 
In particular, they support psychology-based therapy to target dysfunctional connectivity in salience and control circuits for treating misophonia.
{"title":"Altered intrinsic brain connectivity in misophonia, with and without hyperacusis","authors":"Shagun Ajmera , Rafay A. Khan , Gibbeum Kim , Namitha Jain , Ariana Castro , Howard Berenbaum , Fatima T. Husain","doi":"10.1016/j.heares.2025.109521","DOIUrl":"10.1016/j.heares.2025.109521","url":null,"abstract":"<div><div>Misophonia and loudness hyperacusis are debilitating sound intolerance conditions marked by extreme emotional and physiological responses to everyday sounds. Although frequently co-occurring, their distinct neural correlates remain poorly delineated. In an exploratory data-driven analysis, we identified neural-connectivity-based markers of misophonia among cortical and subcortical networks in the brain using resting-state fMRI data. We leveraged an optimized and cross-validated machine learning framework to sift through >85 thousand functional connections and to evaluate the detectability of misophonia, in isolation and when comorbid with hyperacusis. Participants were rigorously categorized using structured interviews into misophonia-only (MI), misophonia with hyperacusis (MH), and control (CTR) groups. Classifier models trained on individual functional connectivity distinguished both MI and MH from CTR, with 63 % and 67 % test prediction accuracy, respectively. Core misophonia-related alterations consistently emerged across both groups, particularly in the salience, somatomotor, and frontoparietal control networks, implying disruptions in emotion regulation, motor inhibition, and attentional control, respectively. Specific to misophonia-only were connectivity abnormalities in the basal ganglia and subcortex, suggesting a neural dissociation between the MI and MH conditions. In contrast, connectivity trends unique to MH revealed networks implicated in higher-order visual processing, likely reflecting hyperacusis-linked processes. 
These findings offer a refined neurobiological dissociation between misophonia and hyperacusis and underscore the importance of careful diagnostic separation in both research and clinical contexts. By isolating misophonia-relevant brain networks, our results provide actionable insight into the development of precise neuroscience-informed interventions. In particular, they support psychology-based therapy to target dysfunctional connectivity in salience and control circuits for treating misophonia.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"471 ","pages":"Article 109521"},"PeriodicalIF":2.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145847620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}