Cisplatin is a widely used chemotherapeutic agent, but its clinical utility is limited by dose-dependent ototoxicity, which causes irreversible sensorineural hearing loss and significantly impairs quality of life, especially in pediatric patients. This review systematically examines the molecular mechanisms underlying cisplatin-induced ototoxicity and evaluates both current and emerging preventive strategies. We find that the central pathological process involves a self-perpetuating cycle of oxidative stress and immune-inflammatory responses within the cochlea, ultimately triggering the programmed death of hair cells. We critically appraise current pharmacological interventions, noting that while antioxidants, anti-inflammatory agents, and targeted delivery strategies demonstrate partial protection, their efficacy is constrained by single-target approaches, trade-offs between efficacy and safety, and interpatient variability. In contrast, emerging strategies—including nanotechnology-based drug delivery, gene therapy, epigenetic modulation, stem cell transplantation, and artificial intelligence-driven personalized interventions—offer multi-mechanistic, targeted, and potentially more effective alternatives. These emerging strategies, grounded in a detailed understanding of the core mechanisms, highlight the need for integrative, precision-focused otoprotective strategies and provide a theoretical foundation to guide future translational research.
"Bridging the gap: mechanisms and novel translational strategies to prevent cisplatin-induced ototoxicity" — Jie Bai, Wenjia Wang, Zeming Fu, Jingpu Yang, Yingyuan Guo, Guofang Guan. Hearing Research, vol. 469, Article 109487 (2026-01-01). DOI: 10.1016/j.heares.2025.109487
Misophonia is characterized by intense emotional responses to specific sounds, yet its neurophysiological basis remains unclear. This study investigated auditory cortical processing using multichannel auditory late latency responses (ALLR). ALLR recordings were obtained from 30 participants (15 with misophonia and 15 controls). Latencies and amplitudes of the P1-N1-P2 peaks were analyzed at Fz, Cz, and Pz, along with scalp topography. Results showed significantly earlier latencies and reduced N1 amplitudes in the misophonia group across all sites (Fz, Cz, and Pz), indicating heightened cortical activity. Topographical analysis revealed distinct scalp patterns: the misophonia group showed centro-parietal distributions, contrasting with the fronto-central patterns exhibited by controls. These findings suggest altered early auditory processing and atypical cortical activation in individuals with misophonia, supporting its neurophysiological basis. The reduced N1 amplitude may represent a neurophysiological biomarker, while multichannel ALLR could serve as an objective index for diagnosis and treatment monitoring in future clinical applications.
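The P1-N1-P2 analysis described above reduces to locating signed extrema of the averaged waveform inside conventional latency windows. A minimal sketch of N1 extraction — not the authors' pipeline; the synthetic waveform, sampling rate, and 70–150 ms window are illustrative assumptions:

```python
import numpy as np

def n1_peak(erp_uv, fs_hz, window_ms=(70.0, 150.0)):
    """Return (latency_ms, amplitude_uv) of the N1 trough: the most
    negative sample inside a conventional adult N1 latency window."""
    t_ms = np.arange(len(erp_uv)) / fs_hz * 1000.0
    in_window = (t_ms >= window_ms[0]) & (t_ms <= window_ms[1])
    idx = int(np.argmin(np.where(in_window, erp_uv, np.inf)))
    return t_ms[idx], erp_uv[idx]

# Synthetic averaged ERP: a -3 uV Gaussian trough centred at 100 ms
fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 0.3, 1.0 / fs)  # 300 ms epoch
erp = -3.0 * np.exp(-((t - 0.100) ** 2) / (2 * 0.010 ** 2))
latency_ms, amplitude_uv = n1_peak(erp, fs)
```

Group comparisons like those reported (earlier latency, smaller amplitude) would then be run on the per-participant values returned by such a function.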
"Multichannel auditory cortical responses in misophonia: A neurophysiological investigation" — Kamalakannan Karupaiah, Rakesh Trinesh, Ajith Kumar Uppunda, Prashanth Prabhu. Hearing Research, vol. 468, Article 109458 (2025-12-01). DOI: 10.1016/j.heares.2025.109458
Pub Date: 2025-12-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.heares.2025.109468
Xin Zhou , Xiaonan Wu , Suwei Ma , Qingxuan Cui , Linyi Xie , Fen Xiong , Guohui Chen , Jin Li , Mengtao Song , Lan Lan , Dayong Wang , Qiuju Wang
Auditory neuropathy (AN) is a complex auditory disorder characterized by disproportionately poor speech discrimination despite preserved auditory sensitivity, substantially impacting daily communication and overall quality of life. This study conducted comprehensive audiological measurements and high-density electroencephalography (EEG) measurements in resting and auditory task states on 21 AN, 21 age-, gender-, and hearing threshold-matched sensorineural hearing loss (SNHL), and 21 age- and gender-matched normal hearing (NH) subjects. The topological network attributes, microstates, event-related potentials (ERP), cortical lateralization, phase-locking value (PLV) functional connectivity strength of EEG, and correlations with audiological indicators were compared among three groups. The results showed that in the resting state, the global field power (GFP) of microstate A differed significantly after FDR correction, with SNHL showing higher GFP 3.23 (2.46–3.93) μV than AN 2.37 (2.08–3.08) μV and NH 2.38 (2.08–2.63) μV. The transition probability (TP) from microstate A to B and from B to C were higher in SNHL than NH (both P after correction = 0.011). During task processing, N1 amplitude was lower in SNHL than NH (P after correction = 0.023), while N1 latency was shorter in AN than SNHL (P after correction = 0.006) and was correlated with low-frequency PTA (correlation coefficient = 0.362, P after correction = 0.020). AN additionally exhibited left-hemispheric lateralization (P after correction < 0.05). Source localization revealed greater cortical activation in SNHL than in AN and NH, predominantly in the superior frontal gyrus (SNHL > NH: P = 0.00020, t0.05 = 3.692, and SNHL > AN: P = 0.01140, t0.05 = -3.794). Collectively, these findings demonstrate that AN exhibits unique neural compensation patterns distinct from SNHL, supporting cortical reorganization mechanisms specific to neural dyssynchrony rather than simple auditory input reduction.
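Among the EEG measures compared above, the phase-locking value has a compact definition: the magnitude of the time-averaged unit phasor of the phase difference between two channels. A toy illustration with hypothetical signals (in real pipelines the instantaneous phase is usually obtained from the Hilbert analytic signal or a wavelet transform, not constructed directly):

```python
import numpy as np

def plv(phase_x, phase_y):
    """Phase-locking value: |mean over time of exp(i*(phase_x - phase_y))|.

    1 means a constant phase relation across time; values near 0 mean the
    phase difference is uniformly scattered (no locking)."""
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 500
t = np.arange(0, 2, 1 / fs)
ph_a = 2 * np.pi * 10 * t                 # 10 Hz oscillation
ph_b = ph_a + 0.5                         # same rhythm, fixed lag -> PLV = 1
rng = np.random.default_rng(0)
ph_c = rng.uniform(0, 2 * np.pi, t.size)  # unrelated phases -> PLV near 0
locked, unlocked = plv(ph_a, ph_b), plv(ph_a, ph_c)
```

A constant phase lag leaves the phasor fixed, so its mean has magnitude 1; random phases scatter the phasors around the unit circle and the mean shrinks toward 0 at roughly 1/sqrt(N).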
"Characteristic cortical alterations in auditory neuropathy: An EEG study" — Hearing Research, vol. 468, Article 109468.
Pub Date: 2025-12-01 | Epub Date: 2025-09-27 | DOI: 10.1016/j.heares.2025.109437
Katherine Bak , Lena Darakjian , Frank A. Russo , M. Kathleen Pichora-Fuller , Jennifer L. Campos
Age-related hearing loss may increase listening difficulties in challenging listening conditions (e.g., speech-in-noise), limiting the cognitive resources available to perform common, complex multitasking behaviours like listening while driving. Older adults with hearing loss may compensate by increasing prefrontal cortex (PFC) activation in response to multitasking demands. Few realistic, controlled studies have examined how the competing attentional demands of listening while driving affect performance and brain activation, and how these patterns may differ between older adults with and without audiometric hearing loss. This study examined dual-task costs and neural activation levels during a listening-while-driving task in 28 older adults with normal hearing (Mage = 71.79 years) and 22 older adults with hearing loss (Mage = 74.00 years) using functional near-infrared spectroscopy (fNIRS). Participants completed a driving task in a high-fidelity driving simulator under simpler (Rural) and more complex (City) conditions and the Connected Speech Test (CST) at +4 dB and 0 dB signal-to-noise ratios (SNR; easier and harder listening, respectively). They also performed both tasks simultaneously to examine dual-task costs. fNIRS was recorded during all conditions. Results demonstrated that older adults with hearing loss showed poorer listening accuracy, poorer driving performance, and greater oxygenation concentration in the PFC than those with normal hearing. Both groups showed poorer listening and driving performance in the dual-task compared to the single-task conditions, with the greatest dual-task costs observed during the most difficult condition (City driving, 0 dB SNR). Broadly, these findings could inform strategies to optimize vehicle acoustics and reduce auditory distractions, thereby supporting driving performance in challenging driving conditions.
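Dual-task cost in studies of this kind is commonly computed as the proportional drop from single-task to dual-task performance. A sketch with made-up scores — the formula is a standard convention in the dual-task literature, not taken from this paper:

```python
def dual_task_cost(single_score, dual_score):
    """Proportional dual-task cost in percent, for scores where higher is
    better; positive values mean performance worsened under dual-task load."""
    return (single_score - dual_score) / single_score * 100.0

# Hypothetical keyword-recognition accuracy: 90 % alone, 72 % while driving
listening_cost = dual_task_cost(90.0, 72.0)
```

The same expression is applied to each task (here listening; analogously driving metrics such as lane deviation, with the sign convention flipped for scores where lower is better), so costs are comparable across tasks with different units.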
"Dual-task costs of listening while driving in older adults with and without audiometric hearing loss: Behavioural and neurophysiological outcomes" — Hearing Research, vol. 468, Article 109437.
Pub Date: 2025-12-01 | Epub Date: 2025-08-05 | DOI: 10.1016/j.heares.2025.109377
Ted W. Cranford , Margaret A. Morris , Petr Krysl , John A. Hildebrand
Baleen whales produce and receive underwater sounds with wavelengths much longer than their bodies, despite having ears approximately the size of a human fist. How do they hear long wavelength sounds with relatively small ears? In 2015, we produced a computational model to simulate low-frequency hearing in the fin whale. That study predicted bone conduction as the most likely hearing mechanism. The whale’s enormous skull acts as an external ear, capturing sounds like an acoustic antenna and transmitting them to other parts of the ear.
In the current study, we tested the bone conduction hypothesis with physical, vibroacoustic experiments using partially denuded gray whale skulls. These experiments validated that long wavelength sounds excite skull vibrations, which are amplified and transferred to the dynamic components of each bony ear, known as the tympanoperiotic complex. Vibrations of the dynamic components include the bony pedicles, tympanic bullae and middle ear ossicles, resulting in displacement of fluid within the cochlea of the inner ear.
The pedicles are important components of this mechanism, thin flexible bones that suspend the bullae from the periotic, amplifying the low-frequency vibrations from the skull. We contend that this skull-driven pathway of sound reception and amplification within the bony ear complexes is key to understanding low-frequency hearing capabilities and mysticete natural history.
"Colossal ears? How baleen whales hear low-frequency sound" — Hearing Research, vol. 468, Article 109377.
Pub Date: 2025-12-01 | Epub Date: 2025-11-08 | DOI: 10.1016/j.heares.2025.109470
Sara Cacciato-Salcedo , Ana B. Lao-Rodríguez , Manuel S. Malmierca
Prenatal exposure to valproic acid (VPA) provides a well-established rodent model of autism, yet its effects on auditory brainstem/midbrain processing across sex and development remain elusive. We recorded click-evoked auditory brainstem responses (ABRs) in Long–Evans rats that received prenatal VPA (400 mg/kg, gestational day 12) and in matched controls at prepubertal (postnatal days 30–45) and adult (65–120) stages under urethane anesthesia. We analyzed peak amplitudes, latencies, inter-peak intervals, and amplitude ratios across sound levels. Auditory thresholds remained comparable among groups. In controls, females showed larger amplitudes for waves I–II, shorter latencies for waves I, II, and IV, and steeper amplitude–intensity slopes for waves II, III, and V than males, indicating stronger level-dependent recruitment. Maturation enhanced early brainstem and midbrain responses by increasing amplitude growth (wave II) and shortening latencies (waves II–V), with effects more pronounced in females. Prenatal VPA exposure reduced wave II amplitude and delayed early peaks (I–III) in females, accompanied by elevated amplitude ratios, whereas in males it mainly affected later responses by reducing amplitudes for waves III–V and prolonging inter-peak latencies (I–III, III–V). These findings show that sex, age, and prenatal VPA exposure distinctly shape auditory brainstem/midbrain function.
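The amplitude–intensity slopes compared above are simply least-squares fits of wave amplitude against stimulus level. A sketch with hypothetical wave II data (the values are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical wave II amplitudes (uV) measured across click levels (dB SPL)
levels_db = np.array([40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
amp_uv = np.array([0.8, 1.1, 1.5, 1.9, 2.2, 2.6])

# Slope of the least-squares line, in uV per dB: a steeper slope indicates
# stronger level-dependent recruitment, the contrast reported between groups
slope, intercept = np.polyfit(levels_db, amp_uv, 1)
```

Group and sex effects on "amplitude growth" are then tested on these per-animal slopes rather than on raw amplitudes at a single level.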
"Sex- and age-specific effects on auditory brainstem responses in the valproic acid-induced rat model of autism" — Hearing Research, vol. 468, Article 109470.
Pub Date: 2025-12-01 | Epub Date: 2025-10-28 | DOI: 10.1016/j.heares.2025.109465
Pierre H. Bourez , Guillaume T. Vallet , Nathalie Gosselin , François Bergeron , Philippe Fournier
Noise interfering with everyday activities is a common experience in many daily soundscapes. However, for individuals with intolerance to loud sounds, a condition called hyperacusis, these soundscapes can have a severe impact, greatly impairing lifestyle habits such as work, hobbies and social interactions. Yet, there is little experimental evidence documenting the functional impact of noise in these individuals. This study aims to validate a novel, ecologically relevant task designed to measure the functional impact of noise during a common daily activity: reading. Forty-nine participants (29 controls, 20 with hyperacusis) read a book excerpt while exposed to four different soundscapes. The sound level was gradually increased until participants reported that the noise interfered with their reading ability (called annoyance level), and then further increased until it became uncomfortable (called discomfort level). Participants then performed a 2-back cognitive task both in silence and in noise calibrated to their individual annoyance threshold. On average, individuals with hyperacusis reached these thresholds at sound levels 13 dB LAeq lower than controls. However, at their respective annoyance thresholds, both groups showed similar performance decrements (−3 %) in noise versus quiet. These findings support the validity of a novel ecological measure that integrates subjective annoyance thresholds with cognitive performance on a behavioral task, offering a reproducible approach to quantify the functional impact of hyperacusis.
"An immersive ecological measure of noise-induced functional interference in adults with hyperacusis" — Hearing Research, vol. 468, Article 109465.
Pub Date: 2025-12-01 | Epub Date: 2025-10-15 | DOI: 10.1016/j.heares.2025.109452
Nicolas Dauman , Soly I. Erlandsson , Alain Londero , Marc Fagelson
Background
Previous studies on noise intolerance have recommended further research into the lived experience of hyperacusis in order to better understand the specific needs and strategies of those affected. We conducted an inductive study in which individuals with hyperacusis shared their experiences.
Objective
To build a theoretical framework, based on the perspective of individuals with hyperacusis, that incorporates the behavior patterns underlying their participation in social contexts.
Method
The study population (N=29) included 12 females and 17 males (mean age 45 years) who had severe hyperacusis (mean HQ score 31). Open-ended interviews were conducted, and classic Grounded Theory (GT) was applied for the analysis of collected data.
Results
Participants’ main concern was related to intrusive feelings of being trapped by noise. In order to overcome noise intrusion, they expressed the need to alternate between a) moving away from challenging sound environments and b) setting clear goals in social participation. Participants’ ability to take initiative in the auditory scene, and to put aversive bodily sensations into perspective, contributed to their ability to stay in noisy environments. Furthermore, access to restorative environments (e.g. walking in the forest or by the sea) helped them replenish the ability to remain attentive in noisy environments.
Conclusions
Considering the needs expressed by patients with hyperacusis contributes to clinical management by identifying the adaptive behaviors underlying social participation. Furthermore, hypotheses about barriers and facilitators to sound exposure can be tested in future research.
"Perspectives on self-management of noise intrusion in daily living with hyperacusis" — Hearing Research, vol. 468, Article 109452.
Pub Date: 2025-12-01 · Epub Date: 2025-10-04 · DOI: 10.1016/j.heares.2025.109444
Iris Van de Ryck, Nicolas Heintz, Iustina Rotaru, Debora Fieberg, Alexander Bertrand, Tom Francart
Objectives
Auditory Attention Decoding (AAD) is a technique that uses brain signals to determine which of several concurrent sounds a listener is attending to. Most current studies do not consider the effects of the speech material used or the sex of the listener. We investigated how AAD performance is affected by factors related to the speaker (the sex of the speaker, the background noise level, and same- versus mixed-sex conditions) and to the listener (the sex of the listener).
Design
Forty-two young adults with normal hearing participated in the study. They listened to two competing speakers and were instructed to attend to one and ignore the other, whilst electroencephalography (EEG) and electrooculography (EOG) were recorded. Background noise was introduced in half of the conditions, and AAD performance was compared across eight experimental conditions.
Results
A significant main effect of speaker sex was found: a male target and/or a male masker yielded higher AAD performance than a female speaker, whose fundamental frequency (F0) is higher. These effects were, however, small and therefore likely clinically irrelevant.
Conclusion
While none of the factors investigated in this study had substantial effects, including diverse and realistic training scenarios remains a valuable way to guard against potential influences from other factors.
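AAD as described in this abstract is commonly implemented as a backward "stimulus reconstruction" model: a linear decoder maps the multichannel EEG to an estimate of the attended speech envelope, and the decoded speaker is the one whose true envelope correlates best with the reconstruction. The sketch below illustrates that general idea under stated assumptions (a simple ridge-regularized least-squares decoder, synthetic data, no time lags); `train_decoder` and `decode_attention` are hypothetical names, not the authors' pipeline.

```python
import numpy as np

def train_decoder(eeg, envelope, lam=1e2):
    """Fit a ridge-regularized spatial decoder mapping EEG channels to the
    attended speech envelope. eeg: (samples, channels); envelope: (samples,)."""
    n_ch = eeg.shape[1]
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_ch), eeg.T @ envelope)

def decode_attention(eeg, env_a, env_b, w):
    """Reconstruct the envelope from EEG with decoder w, then pick the
    speaker whose true envelope correlates best with the reconstruction."""
    rec = eeg @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

In practice such decoders also include temporal lags (a few hundred milliseconds of EEG context per sample) and are trained per subject; this sketch keeps only the spatial-filtering core.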
"Effects of speaker and listener sex on auditory attention decoding performance" — Iris Van de Ryck, Nicolas Heintz, Iustina Rotaru, Debora Fieberg, Alexander Bertrand, Tom Francart. Hearing Research, vol. 468, Article 109444. DOI: 10.1016/j.heares.2025.109444
Pub Date: 2025-12-01 · Epub Date: 2025-10-09 · DOI: 10.1016/j.heares.2025.109446
Haoze Zhang, Zhenhao Fu, Jingcheng Zhou, Yulin Ding, Xiaolong Li, Mengyuan Guo, Shiming Yang, Fangyuan Wang, Zhaohui Hou
The Eustachian tube, a conduit linking the tympanic cavity to the nasopharynx, is difficult to observe directly because of its concealed anatomical position, so its pressure dynamics remain poorly characterized; moreover, computational models have not yet accurately replicated its intricate structure. We propose that simplifying the Eustachian tube’s structure may be a crucial step toward elucidating the mechanisms underlying intraluminal pressure variation. In the present study, simplified models were constructed from CT scans of patients with a patulous Eustachian tube. These models captured the tube’s key morphological features: a blind-ended tubular structure with a sealed tympanic orifice, an open pharyngeal orifice, and a deformable central segment. Particle image velocimetry (PIV) was used to visualize flow field alterations within the lumen during the transition from a closed to an open state under various simulated middle ear pressure conditions. Three phenomena were observed: (1) bidirectional pumping at the onset of intraluminal negative pressure, characterized by simultaneous suction from both sides toward the center; (2) variation of this pumping behavior under different middle ear pressure conditions; and (3) vortex generation at the tympanic orifice upon tubal opening under middle ear negative pressure. These findings provide novel insights into the functional mechanics of the Eustachian tube and offer supporting evidence for the surgical rationale of myringotomy with grommet insertion in patients with otitis media with effusion (OME).
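PIV, the measurement technique used in this study, estimates a flow field by cross-correlating small interrogation windows between successive images of seeded flow: the offset of the correlation peak gives the local particle displacement, hence velocity. A minimal FFT-based sketch of that core step, under assumptions (single window pair, integer-pixel displacement, `piv_displacement` is an illustrative helper, not the authors' code):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the (row, col) displacement between two interrogation
    windows by locating the peak of their 2-D cross-correlation,
    computed via FFT for efficiency."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation: peak index equals the shift of b w.r.t. a
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices beyond half the window size into negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```

Real PIV processing repeats this over a grid of windows (with sub-pixel peak fitting and outlier rejection) to build the full vector field shown in flow visualizations.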
"Flow field dynamics in the pumping function of eustachian tube under varied middle ear pressure states" — Haoze Zhang, Zhenhao Fu, Jingcheng Zhou, Yulin Ding, Xiaolong Li, Mengyuan Guo, Shiming Yang, Fangyuan Wang, Zhaohui Hou. Hearing Research, vol. 468, Article 109446. DOI: 10.1016/j.heares.2025.109446