{"title":"基于人工智能的语音信号用于COVID-19诊断","authors":"Aseel Alfaidi, Abdullah Alshahrani, Maha Aljohani","doi":"10.1145/3584202.3584247","DOIUrl":null,"url":null,"abstract":"The speech signal has numerous features that represent the characteristics of a specific language and recognize emotions. It also contains information that can be used to identify the mental, psychological, and physical states of the speaker. Recently, the acoustic analysis of speech signals offers a practical, automated, and scalable method for medical diagnosis and monitoring symptoms of many diseases. In this paper, we explore the deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the performance of the acoustic features and COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology consists of the pre-trained Visual Geometry Group (VGG-16) model based on Mel spectrogram images to extract deep audio features. In addition to the K-means algorithm that determines effective features, followed by a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate the proposed methodology’s capability to classify COVID-19 and NOT COVID-19 from acoustic features compared to COVID-19 symptoms, achieving an accuracy of 97%. 
The experimental results show that the proposed method remarkably improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.","PeriodicalId":438341,"journal":{"name":"Proceedings of the 6th International Conference on Future Networks & Distributed Systems","volume":"447 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence-based Speech Signal for COVID-19 Diagnostics\",\"authors\":\"Aseel Alfaidi, Abdullah Alshahrani, Maha Aljohani\",\"doi\":\"10.1145/3584202.3584247\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The speech signal has numerous features that represent the characteristics of a specific language and recognize emotions. It also contains information that can be used to identify the mental, psychological, and physical states of the speaker. Recently, the acoustic analysis of speech signals offers a practical, automated, and scalable method for medical diagnosis and monitoring symptoms of many diseases. In this paper, we explore the deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the performance of the acoustic features and COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology consists of the pre-trained Visual Geometry Group (VGG-16) model based on Mel spectrogram images to extract deep audio features. In addition to the K-means algorithm that determines effective features, followed by a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate the proposed methodology’s capability to classify COVID-19 and NOT COVID-19 from acoustic features compared to COVID-19 symptoms, achieving an accuracy of 97%. 
The experimental results show that the proposed method remarkably improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.\",\"PeriodicalId\":438341,\"journal\":{\"name\":\"Proceedings of the 6th International Conference on Future Networks & Distributed Systems\",\"volume\":\"447 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 6th International Conference on Future Networks & Distributed Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3584202.3584247\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Future Networks & Distributed Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3584202.3584247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial Intelligence-based Speech Signal for COVID-19 Diagnostics
The speech signal carries numerous features that characterize a specific language and convey emotion. It also contains information that can be used to identify the mental, psychological, and physical state of the speaker. Recently, acoustic analysis of speech signals has offered a practical, automated, and scalable method for medical diagnosis and for monitoring the symptoms of many diseases. In this paper, we explore deep acoustic features extracted from confirmed positive and negative COVID-19 cases and compare the diagnostic performance of these acoustic features with that of reported COVID-19 symptoms. The proposed methodology uses a pre-trained Visual Geometry Group (VGG-16) model applied to Mel spectrogram images to extract deep audio features, a K-means algorithm to select effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate that the proposed methodology can distinguish COVID-19 from non-COVID-19 cases using acoustic features, achieving an accuracy of 97%, which exceeds the performance obtained from COVID-19 symptoms alone. The experimental results also show that the proposed method markedly improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.
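The final stage of the pipeline described above, a GA-SVM classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: a simple genetic algorithm searches over the SVM hyperparameters C and gamma, and a synthetic dataset stands in for the deep audio features that the real pipeline would extract from Mel spectrograms with VGG-16. All function names and GA settings (population size, elitism count, mutation scale) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the selected deep audio features; the paper's
# pipeline would produce these via VGG-16 on Mel spectrogram images,
# followed by K-means feature selection.
X, y = make_classification(n_samples=200, n_features=32, random_state=0)

rng = np.random.default_rng(0)


def fitness(genome):
    """Mean cross-validated accuracy of an SVM with the encoded C and gamma."""
    C, gamma = 10.0 ** genome  # genome stores log10(C) and log10(gamma)
    clf = SVC(C=C, gamma=gamma)
    return cross_val_score(clf, X, y, cv=3).mean()


# Simple genetic algorithm: each individual is a (log10 C, log10 gamma) pair.
pop = rng.uniform(-3.0, 3.0, size=(10, 2))
for generation in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]  # elitism: keep the 4 fittest
    # Offspring: clone random parents, then mutate with Gaussian noise.
    children = parents[rng.integers(0, 4, size=6)] + rng.normal(0.0, 0.3, size=(6, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best log10(C), log10(gamma):", best)
```

In practice the GA would typically also encode the feature subset or kernel choice, and the fitness would be evaluated on a held-out validation split rather than the full training set.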