Artificial Intelligence-based Speech Signal for COVID-19 Diagnostics

Aseel Alfaidi, Abdullah Alshahrani, Maha Aljohani
Proceedings of the 6th International Conference on Future Networks & Distributed Systems, 2022. DOI: 10.1145/3584202.3584247
{"title":"基于人工智能的语音信号用于COVID-19诊断","authors":"Aseel Alfaidi, Abdullah Alshahrani, Maha Aljohani","doi":"10.1145/3584202.3584247","DOIUrl":null,"url":null,"abstract":"The speech signal has numerous features that represent the characteristics of a specific language and recognize emotions. It also contains information that can be used to identify the mental, psychological, and physical states of the speaker. Recently, the acoustic analysis of speech signals offers a practical, automated, and scalable method for medical diagnosis and monitoring symptoms of many diseases. In this paper, we explore the deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the performance of the acoustic features and COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology consists of the pre-trained Visual Geometry Group (VGG-16) model based on Mel spectrogram images to extract deep audio features. In addition to the K-means algorithm that determines effective features, followed by a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate the proposed methodology’s capability to classify COVID-19 and NOT COVID-19 from acoustic features compared to COVID-19 symptoms, achieving an accuracy of 97%. The experimental results show that the proposed method remarkably improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.","PeriodicalId":438341,"journal":{"name":"Proceedings of the 6th International Conference on Future Networks & Distributed Systems","volume":"447 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence-based Speech Signal for COVID-19 Diagnostics\",\"authors\":\"Aseel Alfaidi, Abdullah Alshahrani, Maha Aljohani\",\"doi\":\"10.1145/3584202.3584247\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The speech signal has numerous features that represent the characteristics of a specific language and recognize emotions. It also contains information that can be used to identify the mental, psychological, and physical states of the speaker. Recently, the acoustic analysis of speech signals offers a practical, automated, and scalable method for medical diagnosis and monitoring symptoms of many diseases. In this paper, we explore the deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the performance of the acoustic features and COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology consists of the pre-trained Visual Geometry Group (VGG-16) model based on Mel spectrogram images to extract deep audio features. In addition to the K-means algorithm that determines effective features, followed by a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate the proposed methodology’s capability to classify COVID-19 and NOT COVID-19 from acoustic features compared to COVID-19 symptoms, achieving an accuracy of 97%. 
The experimental results show that the proposed method remarkably improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.\",\"PeriodicalId\":438341,\"journal\":{\"name\":\"Proceedings of the 6th International Conference on Future Networks & Distributed Systems\",\"volume\":\"447 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 6th International Conference on Future Networks & Distributed Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3584202.3584247\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Future Networks & Distributed Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3584202.3584247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The speech signal has numerous features that represent the characteristics of a specific language and convey emotion. It also carries information that can be used to identify the mental, psychological, and physical state of the speaker. Recently, acoustic analysis of speech signals has emerged as a practical, automated, and scalable method for medical diagnosis and for monitoring the symptoms of many diseases. In this paper, we explore deep acoustic features extracted from confirmed positive and negative COVID-19 cases and compare the acoustic features with COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology uses a pre-trained Visual Geometry Group (VGG-16) model applied to Mel spectrogram images to extract deep audio features, a K-means algorithm to determine effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate that the proposed methodology can classify COVID-19 and non-COVID-19 cases from acoustic features, compared with COVID-19 symptoms, achieving an accuracy of 97%. The results also show that the proposed method markedly improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies.
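To make the described pipeline concrete, the sketch below illustrates the feature-extraction stage: each recording is converted to a log-Mel spectrogram image and passed through a pre-trained VGG-16 network whose pooled convolutional output serves as the deep feature vector. This is a minimal sketch assuming TensorFlow/Keras and librosa; the sampling rate, image size, pooling choice, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

def mel_spectrogram_image(wav_path, sr=16000, n_mels=128, size=(224, 224)):
    """Load a recording and render its log-Mel spectrogram as a 3-channel image."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Scale to 0-255 and stack into 3 channels so the image matches VGG-16's expected input.
    img = 255.0 * (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    img = np.stack([img] * 3, axis=-1)
    return tf.image.resize(img, size).numpy()  # assumed 224x224 input, the standard VGG-16 size

# Pre-trained VGG-16 without its classification head; global average pooling gives a 512-d vector.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(wav_path):
    """Return the deep audio feature vector for one recording."""
    img = preprocess_input(mel_spectrogram_image(wav_path)[np.newaxis, ...])
    return extractor.predict(img, verbose=0)[0]  # shape (512,)
```

The selection and classification stages could then look like the following sketch: K-means groups the VGG-16 feature dimensions and one representative dimension per cluster is kept (one plausible reading of "the K-means algorithm that determines effective features"), after which a small genetic algorithm searches the SVM's C and gamma by cross-validated accuracy. The GA design, parameter ranges, and cluster-representative rule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_features(X, n_clusters=64, random_state=0):
    """Cluster the feature columns and keep the column closest to each cluster centre."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X.T)
    keep = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dist)])
    return np.sort(keep)

def ga_svm(X, y, pop_size=20, generations=15, seed=0):
    """Evolve (log10 C, log10 gamma) pairs, scoring each candidate by 5-fold CV accuracy."""
    rng = np.random.default_rng(seed)
    population = rng.uniform([-2.0, -5.0], [4.0, 1.0], size=(pop_size, 2))

    def fitness(ind):
        clf = SVC(C=10 ** ind[0], gamma=10 ** ind[1], kernel="rbf")
        return cross_val_score(clf, X, y, cv=5).mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]        # keep the fitter half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        children += rng.normal(0.0, 0.3, children.shape)                 # Gaussian mutation
        population = np.vstack([parents, children])

    best = population[np.argmax([fitness(ind) for ind in population])]
    return SVC(C=10 ** best[0], gamma=10 ** best[1], kernel="rbf").fit(X, y)
```

In use, the stages would be chained: deep_features builds one row of the feature matrix per recording, select_features reduces its columns, and ga_svm fits the final classifier on the reduced matrix.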