Pub Date: 2026-01-01. Epub Date: 2025-12-11. DOI: 10.1016/j.heares.2025.109507
Matthew D. Sergison, Shani Poleg, Nathaniel T. Greene, Achim Klug, Daniel J. Tollin
The Mongolian gerbil is a common model organism for studying the neural and behavioral mechanisms of binaural and spatial hearing, largely because of its ability to hear lower frequencies better than other rodents and thus utilize both interaural time and level difference cues for sound localization. Prior spatial hearing studies in gerbils have relied on operant conditioning paradigms, which require large amounts of time-consuming training and testing across the multiple tasks needed for a comprehensive assessment of spatial hearing ability (including temporal processing, spatial acuity, and spatial unmasking). This limits the ability of researchers to thoroughly assess behavioral performance in a population of animals. In this study, we used prepulse inhibition of the acoustic startle reflex (PPI) to extensively assess spatial hearing and temporal processing abilities in a population of gerbils of both sexes. Results show that gerbils inhibit a startle response to a brief loud sound based on prepulse acoustical cues consisting of (1) a temporal gap in ongoing sounds, (2) a change in sound source location, and (3) a target sound in the presence of a masker. In each test, the magnitude of the suppression of startle increased monotonically as a function of the magnitude of the acoustical prepulse, not unlike a psychometric function, from which threshold performance could be measured. In each task, thresholds measured using PPI matched those acquired using operant conditioning methods.
Spatial hearing and temporal processing ability of the Mongolian gerbil (Meriones unguiculatus) measured using prepulse inhibition of acoustic startle. Hearing Research, vol. 470, Article 109507.
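The threshold analysis described above, in which startle suppression grows monotonically with prepulse magnitude and a threshold is read off a psychometric-like function, can be sketched numerically. This is a minimal illustration with hypothetical gap-detection data; the 50 % criterion and the linear interpolation stand in for whatever fitting procedure the study actually used:

```python
import numpy as np

def percent_inhibition(prepulse_amplitude, startle_alone_amplitude):
    # Suppression of the startle reflex expressed as percent inhibition:
    # 100 * (1 - startle amplitude with prepulse / startle amplitude alone)
    return 100.0 * (1.0 - prepulse_amplitude / startle_alone_amplitude)

def interpolate_threshold(cue_magnitudes, inhibition, criterion=50.0):
    # Linearly interpolate the cue magnitude at which percent inhibition
    # crosses the criterion (a stand-in for a fitted psychometric function).
    # np.interp requires the inhibition values to be increasing.
    cues = np.asarray(cue_magnitudes, dtype=float)
    inh = np.asarray(inhibition, dtype=float)
    return float(np.interp(criterion, inh, cues))

# Hypothetical data: inhibition grows with the gap duration (ms) of the prepulse
gaps = [1, 2, 4, 8, 16]
inhibition = [5.0, 15.0, 40.0, 70.0, 85.0]
gap_threshold_ms = interpolate_threshold(gaps, inhibition)
```

With these made-up values the 50 % criterion falls between the 4 ms and 8 ms gaps, so the interpolated threshold is roughly 5.3 ms.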
Pub Date: 2026-01-01. Epub Date: 2025-12-18. DOI: 10.1016/j.heares.2025.109513
Eser Sendesen, Hasan Colak, İrem Sendesen
Tinnitus is a complex clinical condition that lacks a well-established treatment because its underlying mechanisms remain poorly understood. This study investigated the long-term efficacy of a personalized 10 Hz amplitude-modulated (AM) sound enrichment protocol in comparison with an unmodulated (UM) protocol. Seventy-one participants with chronic tonal tinnitus were assigned to either a modulated group (MG, n = 27), which underwent a 10 Hz AM sound complex, or an unmodulated group (UMG, n = 44), which underwent an unmodulated sound complex. Stimuli were individually customized by spectrally shaping a noise band to compensate for each participant’s audiometric hearing loss and by increasing energy around their specific tinnitus pitch. Outcomes included the Tinnitus Handicap Inventory (THI), tinnitus loudness level (TLL), and minimum masking level (MML), assessed at baseline and at 1, 3, and 6 months. Both groups showed significant improvements in THI, TLL, and MML over 6 months (p < .001). However, the MG demonstrated a significantly greater reduction in MML compared with the UMG (F(1,69) = 4.001, p = .049). A higher proportion of participants in the MG reported complete tinnitus suppression (18.51%) compared with the UMG (4.54%). Customized sound enrichment is an effective long-term treatment for tinnitus associated with hearing loss. Incorporating 10 Hz amplitude modulation provides an additional benefit by reducing MML, suggesting that patients become less sensitive to tinnitus perception. These findings highlight the importance of an individualized approach and support the use of modulated stimuli such as 10 Hz AM sound in long-term treatment protocols.
Sound enrichment therapy is more effective for long-term tinnitus suppression with 10 Hz amplitude modulation. Hearing Research, vol. 470, Article 109513.
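The core stimulus manipulation in the study above, sinusoidal 10 Hz amplitude modulation of a noise carrier, can be sketched in a few lines. This is a minimal illustration only: the sampling rate and modulation depth are arbitrary choices, and the study's patient-specific spectral shaping around the tinnitus pitch is not reproduced here.

```python
import numpy as np

def am_noise(duration_s, fs, mod_rate_hz=10.0, mod_depth=1.0, seed=0):
    # Gaussian noise carrier multiplied by a sinusoidal envelope.
    # With mod_depth = 1 the envelope swings between 0 and 2;
    # mod_depth = 0 reproduces the unmodulated carrier.
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    carrier = rng.standard_normal(n)
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    return envelope * carrier

stimulus = am_noise(duration_s=0.5, fs=8000)  # 0.5 s of 10 Hz AM noise
```

In practice the carrier would be band-limited and spectrally shaped to the listener's audiogram before the envelope is applied; the multiplication step itself is unchanged.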
Pub Date: 2026-01-01. Epub Date: 2025-12-10. DOI: 10.1016/j.heares.2025.109506
Lei Zhou, Chunyan Li, Na Shen, Keguang Chen, Huaili Jiang, Miaolin Feng, Menglong Zhao, Chi Cheng, Xinsheng Huang
Objective
To establish a high-fidelity finite element method (FEM) model of the human inner ear and explore the biomechanical effects of superior semicircular canal dehiscence (SCD) on both cochlear and vestibular function.
Methods
A detailed FEM model of the entire human ear was reconstructed from high-resolution computed tomography (CT) data. The model was validated through comparison with established experimental data, including basilar membrane (BM) displacement patterns, cochlear tonotopy, inner ear impedance, and middle-ear transfer function. After validation, the model was adapted to simulate SCD.
Results
The simulated outcomes were consistent with published in-vitro and in-vivo findings, indicating the accuracy of the model. The introduction of SCD resulted in attenuated BM displacement, a marked reduction in cochlear impedance, and an increase in vestibular sensitivity to air-conducted stimuli.
Conclusion
This study developed and validated a whole-ear FEM model demonstrating that SCD produces low-frequency conductive hearing loss and enhances vestibular sound responses. These findings provide explanations for clinical symptoms and VEMP findings, while also revealing the influence of intracranial pressure. Collectively, this model serves as a valuable tool for advancing pathophysiological and diagnostic research.
Whole-ear finite element analysis of superior semicircular canal dehiscence and its impact on inner-ear responses. Hearing Research, vol. 470, Article 109506.
Pub Date: 2026-01-01. Epub Date: 2025-12-09. DOI: 10.1016/j.heares.2025.109505
Alexander Huber, Bastian Baselt, Ivo Dobrev, Lukas Prochazka, Flurin Pfiffner, Dominik Etter, Nicole Peter-Siegrist, Christof Röösli, Jae Hoon Sim, Merlin Schär
Accurate experimental measurement of middle ear mechanics is critical for both basic auditory research and clinical applications. Although numerous experimental studies have characterized middle ear function, structured guidance for selecting appropriate measurement techniques is limited, which can result in suboptimal experimental designs.
In this article, we present a systematic, three-phase framework for method selection in middle ear research. Phase 1 defines project-specific parameters based on the research question, Phase 2 maps these parameters to relevant physical quantities, and Phase 3 identifies suitable techniques from a methods toolbox using a “Zurich Measurement Assessment Chart” (ZMAC). The ZMAC visualizes the performance of methods across multiple criteria. The article includes a methods toolbox that offers a structured guide to the wide range of techniques available for studying middle ear mechanics. The toolbox is organized into major measurement domains such as static and dynamic motion, geometry and microstructure, pressure and force, and clinical assessments. Each method is presented in a standardized format that summarizes core principles, use cases, advantages and limitations, and future developments, enabling researchers to efficiently translate project-specific parameters into practical implementation. Furthermore, the ZMAC contributes to improved reproducibility and more consistent standardization across laboratories.
Middle ear measurements are inherently challenging due to the extremely small amplitudes, forces, and pressures involved, which evolve rapidly and thus demand high temporal resolution. No single technique provides a universal solution. Instead, method selection must be tailored to the research objective, carefully balancing strengths and limitations in relation to the specific research question. Looking forward, advances in middle ear research are expected from multimodal, miniaturized, and artificial intelligence (AI)-assisted approaches linking structure and mechanics to patient-centered outcomes and therapeutic benefit.
Methods matter: Current and future practices for middle ear mechanics laboratories. Hearing Research, vol. 470, Article 109505.
Cisplatin is a widely used chemotherapeutic agent, but its clinical utility is limited by dose-dependent ototoxicity, causing irreversible sensorineural hearing loss and significantly impairing quality of life, especially in pediatric patients. This review systematically examines the molecular mechanisms underlying cisplatin-induced ototoxicity and evaluates both current and emerging preventive strategies. We find that the central pathological process involves a self-perpetuating cycle of oxidative stress and immune-inflammatory responses within the cochlea, ultimately triggering the programmed death of hair cells. We critically appraise current pharmacological interventions, noting that while antioxidants, anti-inflammatory agents, and targeted delivery strategies demonstrate partial protection, their efficacy is constrained by single-target approaches, trade-offs between efficacy and safety, and interpatient variability. In contrast, emerging strategies (including nanotechnology-based drug delivery, gene therapy, epigenetic modulation, stem cell transplantation, and artificial intelligence-driven personalized interventions) offer multi-mechanistic, targeted, and potentially more effective alternatives. These emerging strategies, grounded in a detailed understanding of the core mechanisms, highlight the need for integrative, precision-focused otoprotective approaches and provide a theoretical foundation to guide future translational research.
Bridging the gap: mechanisms and novel translational strategies to prevent cisplatin-induced ototoxicity. Jie Bai, Wenjia Wang, Zeming Fu, Jingpu Yang, Yingyuan Guo, Guofang Guan. DOI: 10.1016/j.heares.2025.109487. Hearing Research, vol. 469, Article 109487.
Misophonia is characterized by intense emotional responses to specific sounds, yet its neurophysiological basis remains unclear. This study investigated auditory cortical processing using multichannel auditory late latency responses (ALLR). ALLR recordings were obtained from 30 participants (15 with misophonia and 15 controls). Latencies and amplitudes of the P1-N1-P2 peaks were analyzed at Fz, Cz, and Pz, along with scalp topography. Results showed significantly earlier latencies and reduced N1 amplitudes in the misophonia group across all sites (Fz, Cz, and Pz), indicating heightened cortical activity. Topographical analysis revealed distinct scalp patterns: the misophonia group showed centro-parietal distributions, contrasting with the fronto-central patterns exhibited in controls. These findings suggest altered early-auditory processing and atypical cortical activation in individuals with misophonia, supporting its neurophysiological basis. The reduced N1 amplitude may represent a neurophysiological biomarker, while multichannel ALLR could serve as an objective index for diagnosis and treatment monitoring in future clinical applications.
Multichannel auditory cortical responses in misophonia: A neurophysiological investigation. Kamalakannan Karupaiah, Rakesh Trinesh, Ajith Kumar Uppunda, Prashanth Prabhu. DOI: 10.1016/j.heares.2025.109458. Hearing Research, vol. 468, Article 109458.
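The N1 latency and amplitude comparisons above rest on peak picking within a latency window of the averaged ERP. A minimal sketch follows; the synthetic P1-N1-P2 waveform and the 70–150 ms search window are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

def n1_peak(times_ms, erp_uv, window_ms=(70.0, 150.0)):
    # N1 is taken as the most negative deflection inside the latency
    # window; return its latency (ms) and amplitude (microvolts).
    times = np.asarray(times_ms, dtype=float)
    erp = np.asarray(erp_uv, dtype=float)
    idx = np.flatnonzero((times >= window_ms[0]) & (times <= window_ms[1]))
    peak = idx[np.argmin(erp[idx])]
    return float(times[peak]), float(erp[peak])

# Synthetic P1-N1-P2 complex: Gaussian deflections at 50, 100, and 180 ms
t = np.arange(0.0, 300.0, 1.0)
wave = (0.8 * np.exp(-(t - 50.0) ** 2 / 200.0)
        - 1.5 * np.exp(-(t - 100.0) ** 2 / 300.0)
        + 1.0 * np.exp(-(t - 180.0) ** 2 / 500.0))
n1_latency_ms, n1_amplitude_uv = n1_peak(t, wave)
```

P1 and P2 would be extracted the same way with positive-going windows and np.argmax in place of np.argmin.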
Pub Date: 2025-12-01. Epub Date: 2025-11-06. DOI: 10.1016/j.heares.2025.109468
Xin Zhou, Xiaonan Wu, Suwei Ma, Qingxuan Cui, Linyi Xie, Fen Xiong, Guohui Chen, Jin Li, Mengtao Song, Lan Lan, Dayong Wang, Qiuju Wang
Auditory neuropathy (AN) is a complex auditory disorder characterized by disproportionately poor speech discrimination despite preserved auditory sensitivity, substantially impacting daily communication and overall quality of life. This study conducted comprehensive audiological and high-density electroencephalography (EEG) measurements, in resting and auditory task states, in 21 AN subjects, 21 age-, gender-, and hearing threshold-matched sensorineural hearing loss (SNHL) subjects, and 21 age- and gender-matched normal hearing (NH) subjects. The topological network attributes, microstates, event-related potentials (ERP), cortical lateralization, and phase-locking value (PLV) functional connectivity strength of the EEG, along with their correlations with audiological indicators, were compared among the three groups. In the resting state, the global field power (GFP) of microstate A differed significantly after FDR correction, with SNHL showing higher GFP (3.23 (2.46–3.93) μV) than AN (2.37 (2.08–3.08) μV) and NH (2.38 (2.08–2.63) μV). The transition probabilities (TP) from microstate A to B and from B to C were higher in SNHL than NH (both corrected P = 0.011). During task processing, N1 amplitude was lower in SNHL than NH (corrected P = 0.023), while N1 latency was shorter in AN than SNHL (corrected P = 0.006) and was correlated with low-frequency PTA (correlation coefficient = 0.362, corrected P = 0.020). AN additionally exhibited left-hemispheric lateralization (corrected P < 0.05). Source localization revealed greater cortical activation in SNHL than in AN and NH, predominantly in the superior frontal gyrus (SNHL > NH: P = 0.00020, t0.05 = 3.692, and SNHL > AN: P = 0.01140, t0.05 = -3.794). Collectively, these findings demonstrate that AN exhibits unique neural compensation patterns distinct from SNHL, supporting cortical reorganization mechanisms specific to neural dyssynchrony rather than simple auditory input reduction.
Characteristic cortical alterations in auditory neuropathy: An EEG study. Hearing Research, vol. 468, Article 109468.
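Of the EEG measures listed above, the phase-locking value has a compact definition: the magnitude of the trial- or time-averaged unit phasor of the phase difference between two signals. A minimal sketch with synthetic phases; extraction of instantaneous phase from the raw EEG (e.g. via a Hilbert transform after band-pass filtering) is assumed to have happened upstream.

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    # PLV = |mean(exp(i * (phi_a - phi_b)))|: 1 for a perfectly
    # consistent phase relationship, near 0 for unrelated phases.
    dphi = np.asarray(phase_a, dtype=float) - np.asarray(phase_b, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.linspace(0.0, 1.0, 1000)
phi = 2.0 * np.pi * 10.0 * t                  # phase of a 10 Hz oscillation
locked = phase_locking_value(phi, phi + 0.8)  # constant lag: PLV is 1
rng = np.random.default_rng(0)
unlocked = phase_locking_value(rng.uniform(0, 2 * np.pi, 1000),
                               rng.uniform(0, 2 * np.pi, 1000))
```

Note that a constant phase lag still yields a PLV of 1; the measure quantifies consistency of the phase difference, not its absence.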
Pub Date: 2025-12-01. Epub Date: 2025-08-05. DOI: 10.1016/j.heares.2025.109377
Ted W. Cranford, Margaret A. Morris, Petr Krysl, John A. Hildebrand
Baleen whales produce and receive underwater sounds with wavelengths much longer than their bodies, despite having ears approximately the size of a human fist. How do they hear long wavelength sounds with relatively small ears? In 2015, we produced a computational model to simulate low-frequency hearing in the fin whale. That study predicted bone conduction as the most likely hearing mechanism. The whale’s enormous skull acts as an external ear, capturing sounds like an acoustic antenna and transmitting them to other parts of the ear.
In the current study, we tested the bone conduction hypothesis with physical, vibroacoustic experiments using partially denuded gray whale skulls. These experiments confirmed that long-wavelength sounds excite skull vibrations, which are amplified and transferred to the dynamic components of each bony ear, known as the tympanoperiotic complex. These dynamic components include the bony pedicles, tympanic bullae, and middle ear ossicles; their vibrations result in displacement of fluid within the cochlea of the inner ear.
The pedicles, thin flexible bones that suspend the bullae from the periotic, are important components of this mechanism, amplifying the low-frequency vibrations from the skull. We contend that this skull-driven pathway of sound reception and amplification within the bony ear complexes is key to understanding low-frequency hearing capabilities and mysticete natural history.
{"title":"Colossal ears? How baleen whales hear low-frequency sound","authors":"Ted W. Cranford , Margaret A. Morris , Petr Krysl , John A. Hildebrand","doi":"10.1016/j.heares.2025.109377","DOIUrl":"10.1016/j.heares.2025.109377","url":null,"abstract":"<div><div>Baleen whales produce and receive underwater sounds with wavelengths much longer than their bodies, despite having ears approximately the size of a human fist. How do they hear long wavelength sounds with relatively small ears? In 2015, we produced a computational model to simulate low-frequency hearing in the fin whale. That study predicted bone conduction as the most likely hearing mechanism. The whale’s enormous skull acts as an external ear, capturing sounds like an acoustic antenna and transmitting them to other parts of the ear.</div><div>In the current study, we tested the bone conduction hypothesis with physical, vibroacoustic experiments using partially denuded gray whale skulls. These experiments <em>validated</em> that long wavelength sounds excite skull vibrations, which are amplified and transferred to the dynamic components of each bony ear, known as the tympanoperiotic complex. Vibrations of the dynamic components include the bony pedicles, tympanic bullae and middle ear ossicles, resulting in displacement of fluid within the cochlea of the inner ear.</div><div>The pedicles are important components of this mechanism, thin flexible bones that suspend the bullae from the periotic, amplifying the low-frequency vibrations from the skull. 
We contend that this skull-driven pathway of sound reception and amplification within the bony ear complexes is key to understanding low-frequency hearing capabilities and mysticete natural history.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"468 ","pages":"Article 109377"},"PeriodicalIF":2.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145388881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01Epub Date: 2025-09-27DOI: 10.1016/j.heares.2025.109437
Katherine Bak , Lena Darakjian , Frank A. Russo , M. Kathleen Pichora-Fuller , Jennifer L. Campos
Age-related hearing loss may increase listening difficulties in challenging listening conditions (e.g., speech-in-noise), limiting cognitive resources available to perform common, complex multitasking behaviours like listening while driving. Older adults with hearing loss may compensate by increasing prefrontal cortex (PFC) activation in response to multitasking demands. Few realistic, controlled studies have examined how competing attentional demands of listening while driving affect performance and brain activation, and how these patterns may differ between older adults with and without audiometric hearing loss. This study examined dual-task costs and neural activation levels during a listening-while-driving task in 28 older adults with normal hearing (M<sub>age</sub> = 71.79 years) and 22 older adults with hearing loss (M<sub>age</sub> = 74.00 years) using functional near-infrared spectroscopy (fNIRS). Participants completed a driving task in a high-fidelity driving simulator under simpler (Rural) and more complex (City) conditions and the Connected Speech Test (CST) at +4 dB and 0 dB signal-to-noise ratios (SNR; easier and harder listening, respectively). They also performed both tasks simultaneously to examine dual-task costs. fNIRS was recorded during all conditions. Results demonstrated that older adults with hearing loss showed poorer listening accuracy, poorer driving performance, and greater oxygenation concentration in the PFC than those with normal hearing. Both groups showed poorer listening and driving performance in the dual-task compared to the single-task conditions, with the greatest dual-task costs observed during the most difficult condition (City driving, 0 dB SNR). Broadly, these findings could inform strategies to optimize vehicle acoustics and reduce auditory distractions, thereby supporting driving performance in challenging driving conditions.
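Dual-task costs of the kind this study reports are conventionally expressed as the proportional drop from single-task to dual-task performance. A minimal sketch of that standard formula (the function name and example scores are illustrative, not values from the study):

```python
def dual_task_cost(single: float, dual: float) -> float:
    """Percent dual-task cost; positive values mean worse dual-task performance."""
    if single == 0:
        raise ValueError("single-task score must be nonzero")
    return 100.0 * (single - dual) / single

# e.g., listening accuracy dropping from 90% alone to 72% while driving:
print(dual_task_cost(90.0, 72.0))  # 20.0
```

Computing the cost relative to each participant's own single-task baseline lets groups with different baseline abilities be compared on the added burden of multitasking.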
{"title":"Dual-task costs of listening while driving in older adults with and without audiometric hearing loss: Behavioural and neurophysiological outcomes","authors":"Katherine Bak , Lena Darakjian , Frank A. Russo , M. Kathleen Pichora-Fuller , Jennifer L. Campos","doi":"10.1016/j.heares.2025.109437","DOIUrl":"10.1016/j.heares.2025.109437","url":null,"abstract":"<div><div>Age-related hearing loss may increase listening difficulties in challenging listening conditions (e.g., speech-in-noise), limiting cognitive resources available to perform common, complex multitasking behaviours like listening while driving. Older adults with hearing loss may compensate by increasing prefrontal cortex (PFC) activation in response to multitasking demands. Few realistic, controlled studies have examined how competing attentional demands of listening while driving affect performance and brain activation, and how these patterns may differ between older adults with and without audiometric hearing loss. This study examined dual-task costs and neural activation levels during a listening-while-driving task in 28 older adults with normal hearing (Mage = 71.79 years) and 22 older adults with hearing loss (Mage=74.00 years) using functional near-infrared spectroscopy (fNIRS). Participants completed a driving task in a high-fidelity driving simulator under simpler (Rural) and more complex (City) conditions and the Connected Speech Test (CST) at +4 dB and 0 dB signal-to-noise ratios (SNR; easier and harder listening respectively). They also performed both tasks simultaneously to examine dual-task costs. fNIRS was recorded during all conditions. Results demonstrated that older adults with hearing loss showed poorer listening accuracy, poorer driving performance, and greater oxygenation concentration in the PFC than those with normal hearing. 
Both groups showed poorer listening and driving performance in the dual-task compared to the single-task conditions, with the greatest dual-task costs observed during the most difficult condition (City driving, 0 dB SNR). Broadly, these findings could inform strategies to optimize vehicle acoustics and reduce auditory distractions, thereby supporting driving performance in challenging driving conditions.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"468 ","pages":"Article 109437"},"PeriodicalIF":2.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145344967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01Epub Date: 2025-11-08DOI: 10.1016/j.heares.2025.109470
Sara Cacciato-Salcedo , Ana B. Lao-Rodríguez , Manuel S. Malmierca
Prenatal exposure to valproic acid (VPA) provides a well-established rodent model of autism, yet its effects on auditory brainstem/midbrain processing across sex and development remain poorly understood. We recorded click-evoked auditory brainstem responses (ABRs) in Long–Evans rats that received prenatal VPA (400 mg/kg, gestational day 12) and in matched controls at prepubertal (postnatal days 30–45) and adult (postnatal days 65–120) stages under urethane anesthesia. We analyzed peak amplitudes, latencies, inter-peak intervals, and amplitude ratios across sound levels. Auditory thresholds remained comparable among groups. In controls, females showed larger amplitudes for waves I–II, shorter latencies for waves I, II, and IV, and steeper amplitude–intensity slopes for waves II, III, and V than males, indicating stronger level-dependent recruitment. Maturation enhanced early brainstem and midbrain responses by increasing amplitude growth (wave II) and shortening latencies (waves II–V), with effects more pronounced in females. Prenatal VPA exposure reduced wave II amplitude and delayed early peaks (I–III) in females, accompanied by elevated amplitude ratios, whereas in males it mainly affected later responses by reducing amplitudes for waves III–V and prolonging inter-peak latencies (I–III, III–V). These findings show that sex, age, and prenatal VPA exposure distinctly shape auditory brainstem/midbrain function.
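The amplitude–intensity slopes compared above can be estimated with an ordinary least-squares fit of wave amplitude against click level. A sketch using NumPy (the function name and the toy data are invented for illustration; the study's actual fitting procedure is not specified in the abstract):

```python
import numpy as np

def amplitude_intensity_slope(levels_db, amplitudes_uv):
    """Least-squares slope of ABR wave amplitude (uV) vs. click level (dB)."""
    slope, _intercept = np.polyfit(levels_db, amplitudes_uv, 1)
    return slope

levels = [40, 50, 60, 70, 80]      # dB SPL (hypothetical)
amps = [0.2, 0.4, 0.6, 0.8, 1.0]   # uV, perfectly linear toy data
print(amplitude_intensity_slope(levels, amps))  # ~0.02 uV per dB
```

A steeper slope means the response grows faster with sound level, which is the sense in which the abstract describes "stronger level-dependent recruitment" in control females.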
{"title":"Sex- and age-specific effects on auditory brainstem responses in the valproic acid-induced rat model of autism","authors":"Sara Cacciato-Salcedo , Ana B. Lao-Rodríguez , Manuel S. Malmierca","doi":"10.1016/j.heares.2025.109470","DOIUrl":"10.1016/j.heares.2025.109470","url":null,"abstract":"<div><div>Prenatal exposure to valproic acid (VPA) provides a well-established rodent model of autism, yet its effects on auditory brainstem/midbrain processing across sex and development remain elusive. We recorded click-evoked auditory brainstem responses (ABRs) in Long–Evans rats that received prenatal VPA (400 mg/kg, gestational day 12) and in matched controls at prepubertal (postnatal days 30–45) and adult (65–120) stages under urethane anesthesia. We analyzed peak amplitudes, latencies, inter-peak intervals, and amplitude ratios across sound levels. Auditory thresholds remained comparable among groups. In controls, females showed larger amplitudes for waves I–II, shorter latencies for waves I, II, and IV, and steeper amplitude–intensity slopes for waves II, III, and V than males, indicating stronger level-dependent recruitment. Maturation enhanced early brainstem and midbrain responses by increasing amplitude growth (wave II) and shortening latencies (waves II–V), with effects more pronounced in females. Prenatal VPA exposure reduced wave II amplitude and delayed early peaks (I–III) in females, accompanied by elevated amplitude ratios, whereas in males it mainly affected later responses by reducing amplitudes for waves III–V and prolonging inter-peak latencies (I–III, III–V). 
These findings show that sex, age, and prenatal VPA exposure distinctly shape auditory brainstem/midbrain function.</div></div>","PeriodicalId":12881,"journal":{"name":"Hearing Research","volume":"468 ","pages":"Article 109470"},"PeriodicalIF":2.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145512225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}