Objective: This paper presents the design and structure of an open-access audiological dataset created to support hearing aid algorithm development and model-based audiology. It provides comprehensive perceptual measures for individuals with normal hearing and hearing loss, with and without hearing aids.
Design: The dataset includes pure-tone audiometry, otoscopy, coupler measurements, the Loudness Validation Method (LVM), tone-in-noise detection, categorical loudness scaling, speech recognition tests (Göttingen Sentence Test, GÖSA; Oldenburg Sentence Test, OLSA), listening effort (Adaptive Categorical Listening Effort Scaling, ACALES), and self-reported hearing and functioning (HEAR-COMMAND Tool). The OLSA and ACALES were conducted both in standard spatial setups and in four virtual acoustic scenes, with aided and unaided conditions for hearing aid users. Data are organised according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
Study sample: Seventy-six participants.
Results: The release includes database documentation, measurement details, raw data, metadata, and structured SQL files. Sample outcomes for individuals with moderate hearing loss are reported here for the OLSA, ACALES, GÖSA, LVM, and audiometry.
Conclusions: This dataset enables cross-methodological analysis and provides simulated acoustic scenes for evaluating hearing loss. By combining standardised and novel measures, it offers a baseline resource for model-based audiology research and hearing aid benefit assessment.