Automating Speech Audiometry in Quiet and in Noise Using a Deep Neural Network

Hadrien Jean, Nicolas Wallaert, Antoine Dreumont, Gwenaelle Creff, Benoit Godey, Nihaad Paraouty

Biology (Basel), 14(2), published 2025-02-12. DOI: 10.3390/biology14020191. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851792/pdf/
Abstract
In addition to pure-tone audiometry tests and electrophysiological tests, a comprehensive hearing evaluation includes assessing a subject's ability to understand speech in quiet and in noise. In fact, speech audiometry tests are commonly used in clinical practice; however, they are time-consuming as they require manual scoring by a hearing professional. To address this issue, we developed an automated speech recognition (ASR) system for scoring subject responses at the phonetic level. The ASR was built using a deep neural network and trained with pre-recorded French speech materials: Lafon's cochlear lists and Dodelé logatoms. Next, we tested the performance and reliability of the ASR in clinical settings with both normal-hearing and hearing-impaired listeners. Our findings indicate that the ASR's performance is statistically similar to manual scoring by expert hearing professionals, both in quiet and in noisy conditions. Moreover, the test-retest reliability of the automated scoring closely matches that of manual scoring. Together, our results validate the use of this deep neural network in both clinical and research contexts for conducting speech audiometry tests in quiet and in noise.
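The abstract does not give implementation details, but the idea of scoring subject responses "at the phonetic level" can be illustrated with a minimal sketch: compare the phoneme sequence transcribed by an ASR model against the target word's phonemes and credit the matched phonemes. The alignment rule, phoneme inventory, and function names below are illustrative assumptions, not the authors' published method.

```python
# Hypothetical sketch of phoneme-level scoring downstream of an ASR
# transcription. This is NOT the authors' implementation; the scoring
# rule and data layout are assumptions made for illustration only.
from difflib import SequenceMatcher


def phoneme_score(target: list[str], response: list[str]) -> float:
    """Fraction of target phonemes credited as correctly repeated.

    Uses a longest-common-subsequence style alignment so that an inserted
    or deleted phoneme in the response does not shift all later phonemes.
    """
    matcher = SequenceMatcher(a=target, b=response, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(target) if target else 0.0


def list_score(items: list[tuple[list[str], list[str]]]) -> float:
    """Average phoneme score (in percent) over one speech-audiometry list."""
    return 100.0 * sum(phoneme_score(t, r) for t, r in items) / len(items)


if __name__ == "__main__":
    # Example: target word /bato/ ("bateau"); the subject repeats /pato/.
    target = ["b", "a", "t", "o"]
    response = ["p", "a", "t", "o"]
    print(f"phoneme score: {phoneme_score(target, response):.2f}")  # 0.75
```

In practice the response phonemes would come from the deep-neural-network ASR's output for the subject's recorded repetition, and list-level percent-correct scores would then feed the speech audiometry results in quiet or in noise.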
Journal Introduction
Biology (ISSN 2079-7737) is an international, peer-reviewed, rapid-turnaround open access journal of biological science published online by MDPI. It publishes reviews, research papers, and communications in all areas of biology and at the interface of related disciplines. Its aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible, with no restriction on paper length. Full experimental details must be provided so that the results can be reproduced; electronic files with full details of the experimental procedures that cannot be published in the usual way can be deposited as supplementary material.