Yihan Zhou;Xiaokang Yu;Zixi Huang;Feierdun Palati;Zeyu Zhao;Zihan He;Yuan Feng;Yuxi Luo
{"title":"基于增强视听线索的多模态融合注意网络抑郁水平识别","authors":"Yihan Zhou;Xiaokang Yu;Zixi Huang;Feierdun Palati;Zeyu Zhao;Zihan He;Yuan Feng;Yuxi Luo","doi":"10.1109/ACCESS.2025.3545587","DOIUrl":null,"url":null,"abstract":"In recent years, substantial research has focused on automated systems for assessing depression levels using different types of data, such as audio and visual inputs. However, signals recorded from individuals with depression can be influenced by external factors, such as the recording equipment and environment, making it essential to create a system that is resilient to these interferences to maintain accuracy. This study introduces a fused-attention model for evaluating depression severity using enhanced multi-modal data inputs. Applying several pre-trained advanced models, this article incorporates audiovisual sequences with augmentation. The framework includes two novel components, which we term as the FIE and VIE blocks, for extracting detailed facial and vocal features. The FIE block utilizes ResNet-18 to enhance the feature representation of video frames and integrates two types of attention mechanisms to capture spatial-temporal patterns. Meanwhile, the VIE block processes the Mel spectrogram of the audio signal, followed by an optimized Swin transformer block to extract auditory features. The model demonstrates strong performance, accurately identifying depression severity in 3-second audiovisual sequences with an 81.4% accuracy rate on the AVEC2014 dataset, and achieves a Kappa score of 0.731 and an MF1 index of 0.798. Furthermore, it shows high resilience to noise, underscoring its ability to mitigate the effects of recording equipment and environmental conditions in depression level estimation.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"37913-37923"},"PeriodicalIF":3.6000,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10904116","citationCount":"0","resultStr":"{\"title\":\"Multi-Modal Fused-Attention Network for Depression Level Recognition Based on Enhanced Audiovisual Cues\",\"authors\":\"Yihan Zhou;Xiaokang Yu;Zixi Huang;Feierdun Palati;Zeyu Zhao;Zihan He;Yuan Feng;Yuxi Luo\",\"doi\":\"10.1109/ACCESS.2025.3545587\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, substantial research has focused on automated systems for assessing depression levels using different types of data, such as audio and visual inputs. However, signals recorded from individuals with depression can be influenced by external factors, such as the recording equipment and environment, making it essential to create a system that is resilient to these interferences to maintain accuracy. This study introduces a fused-attention model for evaluating depression severity using enhanced multi-modal data inputs. Applying several pre-trained advanced models, this article incorporates audiovisual sequences with augmentation. The framework includes two novel components, which we term as the FIE and VIE blocks, for extracting detailed facial and vocal features. The FIE block utilizes ResNet-18 to enhance the feature representation of video frames and integrates two types of attention mechanisms to capture spatial-temporal patterns. Meanwhile, the VIE block processes the Mel spectrogram of the audio signal, followed by an optimized Swin transformer block to extract auditory features. 
The model demonstrates strong performance, accurately identifying depression severity in 3-second audiovisual sequences with an 81.4% accuracy rate on the AVEC2014 dataset, and achieves a Kappa score of 0.731 and an MF1 index of 0.798. Furthermore, it shows high resilience to noise, underscoring its ability to mitigate the effects of recording equipment and environmental conditions in depression level estimation.\",\"PeriodicalId\":13079,\"journal\":{\"name\":\"IEEE Access\",\"volume\":\"13 \",\"pages\":\"37913-37923\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-02-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10904116\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Access\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10904116/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10904116/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Multi-Modal Fused-Attention Network for Depression Level Recognition Based on Enhanced Audiovisual Cues
In recent years, substantial research has focused on automated systems for assessing depression levels using different types of data, such as audio and visual inputs. However, signals recorded from individuals with depression can be influenced by external factors, such as the recording equipment and environment, making it essential to create a system that is resilient to such interference in order to maintain accuracy. This study introduces a fused-attention model for evaluating depression severity using enhanced multi-modal data inputs. Building on several advanced pre-trained models, the framework processes augmented audiovisual sequences. It includes two novel components, termed the FIE and VIE blocks, for extracting detailed facial and vocal features. The FIE block uses ResNet-18 to enhance the feature representation of video frames and integrates two types of attention mechanisms to capture spatiotemporal patterns. Meanwhile, the VIE block processes the Mel spectrogram of the audio signal, followed by an optimized Swin transformer block, to extract auditory features. The model demonstrates strong performance, identifying depression severity in 3-second audiovisual sequences with 81.4% accuracy on the AVEC2014 dataset, a Kappa score of 0.731, and an MF1 index of 0.798. Furthermore, it shows high resilience to noise, underscoring its ability to mitigate the effects of recording equipment and environmental conditions in depression level estimation.
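To make the described pipeline concrete, below is a minimal PyTorch sketch of the general architecture outlined in the abstract: a ResNet-18 frame encoder with attention (standing in for the FIE block), a transformer encoder over Mel-spectrogram frames (standing in for the optimized Swin transformer in the VIE block), and an attention-based fusion head. All class names, dimensions, the number of severity classes, and the choice of a generic transformer encoder are illustrative assumptions, not the authors' implementation; data augmentation and the pre-trained weights used in the paper are omitted.

```python
# Hypothetical sketch of the fused-attention pipeline described in the abstract.
# Names (FIEBlock, VIEBlock, FusedAttentionNet) and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class FIEBlock(nn.Module):
    """Facial feature extraction: per-frame ResNet-18 features, reweighted by a
    channel-attention gate and then attended across time."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.channel_attn = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 8), nn.ReLU(),
            nn.Linear(feat_dim // 8, feat_dim), nn.Sigmoid())
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.cnn(frames.flatten(0, 1)).flatten(1)    # (B*T, 512)
        x = x * self.channel_attn(x)                     # gate feature channels
        x = x.view(b, t, -1)
        x, _ = self.temporal_attn(x, x, x)               # attend across frames
        return x.mean(dim=1)                             # (B, 512)


class VIEBlock(nn.Module):
    """Vocal feature extraction: a lightweight transformer encoder over
    Mel-spectrogram frames, used here in place of the optimized Swin block."""
    def __init__(self, n_mels=64, feat_dim=512):
        super().__init__()
        self.proj = nn.Linear(n_mels, feat_dim)
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, mel):                          # mel: (B, T_frames, n_mels)
        x = self.encoder(self.proj(mel))
        return x.mean(dim=1)                         # (B, 512)


class FusedAttentionNet(nn.Module):
    """Fuse the two modality embeddings with attention and classify severity."""
    def __init__(self, feat_dim=512, num_classes=4):  # num_classes is an assumption
        super().__init__()
        self.fie = FIEBlock(feat_dim)
        self.vie = VIEBlock(feat_dim=feat_dim)
        self.fusion_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames, mel):
        v, a = self.fie(frames), self.vie(mel)        # (B, 512) each
        tokens = torch.stack([v, a], dim=1)           # (B, 2, 512)
        fused, _ = self.fusion_attn(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1))           # class logits


# Usage on a dummy 3-second clip: 24 video frames and ~300 Mel-spectrogram frames.
model = FusedAttentionNet()
logits = model(torch.randn(2, 24, 3, 112, 112), torch.randn(2, 300, 64))
print(logits.shape)  # torch.Size([2, 4])
```

The sketch only illustrates how spatial/channel attention, temporal attention, and cross-modal fusion could be wired together; the reported results depend on the paper's specific augmentation, pre-training, and Swin-based design.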
IEEE Access · Computer Science, Information Systems · Engineering, Electrical & Electronic
CiteScore
9.80
Self-citation rate
7.70%
Articles published
6673
Review time
6 weeks
Journal introduction:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, or interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.