Integrated Vocal Deviation Index (IVDI): A Machine Learning Model to Classifier of the General Grade of Vocal Deviation
Luiz Medeiros Araujo Lima-Filho, Leonardo Wanderley Lopes, Telmo de Menezes E Silva Filho
Journal of Voice, published online 2024-11-25. DOI: 10.1016/j.jvoice.2024.11.002
Abstract
Objective: To develop a multiparametric index based on machine learning (ML) to predict and classify the overall degree of vocal deviation (GG).
Method: The sample consisted of 300 dysphonic and nondysphonic participants of both sexes. The two speech tasks were the sustained vowel [a] and connected speech (counting from 1 to 10). Five speech-language pathologists performed the auditory-perceptual judgment (APJ) of the GG and of the degrees of roughness (GR), breathiness (GB), instability (GI), and strain (GS). We extracted 47 acoustic measurements from these tasks. The APJ results and the acoustic measurements were used to develop the multiparametric index. We used mean absolute error, root mean square error, and the coefficient of determination (R²) to select the best ML model for predicting GG, and feature importance to select the best set of variables for the index. After the GG was classified as nondysphonic, mild, moderate, or severe, the final model was validated using accuracy, sensitivity, specificity, predictive values, likelihood ratios, F1-score, and weighted kappa.
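The model-selection step described above (comparing candidate regressors by MAE, RMSE, and R², then ranking predictors by feature importance) could be reproduced along the lines of the following minimal sketch. It assumes scikit-learn, a hypothetical feature matrix X holding the 47 acoustic measures plus the APJ ratings, and a continuous GG target y; it is an illustration of the workflow, not the authors' code.

```python
# Sketch of GG model evaluation and feature ranking (assumed setup, not the authors' implementation).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_validate


def evaluate_gg_model(X: pd.DataFrame, y: np.ndarray) -> dict:
    """Score a gradient boosting regressor with MAE, RMSE, and R² via 5-fold CV."""
    model = GradientBoostingRegressor(random_state=42)
    scores = cross_validate(
        model, X, y, cv=5,
        scoring=("neg_mean_absolute_error",
                 "neg_root_mean_squared_error",
                 "r2"),
    )
    return {
        "MAE": -scores["test_neg_mean_absolute_error"].mean(),
        "RMSE": -scores["test_neg_root_mean_squared_error"].mean(),
        "R2": scores["test_r2"].mean(),
    }


def rank_features(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """Rank candidate predictors by impurity-based feature importance."""
    model = GradientBoostingRegressor(random_state=42).fit(X, y)
    return pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
```

The same scoring loop would be repeated for each candidate ML model, with the top-ranked variables retained for the final index.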
Results: The gradient boosting model showed the best performance among the ML models. Eight features were selected for the model: four acoustic measures (jitterLoc, smoothed cepstral peak prominence, mean harmonics-to-noise ratio (HNRmean), and correlation) and four APJ measures (GR, GB, GS, and GI). The final model correctly classified 93.75% of participants and obtained a weighted kappa of 0.9374, demonstrating excellent performance.
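The validation metrics reported above could be computed as in the sketch below, assuming scikit-learn and hypothetical arrays y_true and y_pred containing the four GG classes (nondysphonic, mild, moderate, severe). The kappa weighting scheme is an assumption, since the abstract does not specify it.

```python
# Sketch of the validation metrics for the four-class GG classifier (assumed setup).
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)


def validate_classifier(y_true, y_pred) -> dict:
    """Summarize agreement between rated (y_true) and predicted (y_pred) GG classes."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        # Weighting scheme (quadratic) is an assumption; the paper reports 0.9374.
        "weighted_kappa": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```

Sensitivity, specificity, predictive values, and likelihood ratios per class would follow from the confusion matrix.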
Conclusion: The Integrated Vocal Deviation Index (IVDI) comprises four acoustic measures and four auditory-perceptual measures and showed excellent performance in classifying voices according to GG.
Journal Description:
The Journal of Voice is widely regarded as the world's premier journal for voice medicine and research. This peer-reviewed publication is listed in Index Medicus and is indexed by the Institute for Scientific Information. The journal contains articles written by experts throughout the world on all topics in voice sciences, voice medicine and surgery, and speech-language pathologists' management of voice-related problems. The journal includes clinical articles, clinical research, and laboratory research. Members of the Foundation receive the journal as a benefit of membership.