The USTC System for ADReSS-M Challenge
Authors: Kangdi Mei, Xinyun Ding, Yinlong Liu, Zhiqiang Guo, Feiyang Xu, Xin Li, Tuya Naren, Jiahong Yuan, Zhenhua Ling
DOI: 10.1109/ICASSP49357.2023.10094714
Published in: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023-06-04
Citations: 1
Abstract
This paper describes our submission to the ICASSP 2023 Signal Processing Grand Challenge (SPGC), which focuses on multilingual Alzheimer’s disease (AD) recognition through spontaneous speech. Our approaches include using a variety of acoustic features and silence-related information for AD detection and Mini-Mental State Examination (MMSE) score prediction, and fine-tuning wav2vec2.0 models on speech in various frequency bands for AD detection. Our overall results on the test data outperform the baseline provided by the organizers, achieving 73.9% accuracy in AD detection by fine-tuning our bilingual wav2vec2.0 pre-trained model on speech in the 0-1000 Hz frequency band, and 4.610 RMSE (r = 0.565) in MMSE prediction through the fusion of eGeMAPS and silence features.