Stimulus-Response Pattern: The Core of Robust Cross-Stimulus Facial Depression Recognition
Zhenyu Liu; Shimao Zhang; Bailin Chen; Gang Li; Qiongqiong Chen; Zhijie Ding; Xin Zhang; Bin Hu
IEEE Transactions on Affective Computing, vol. 16, no. 2, pp. 1146-1158
DOI: 10.1109/TAFFC.2024.3496524
Published: 2024-11-12
URL: https://ieeexplore.ieee.org/document/10750917/
Citations: 0
Abstract
Facial depression recognition is currently an active research topic. Mainstream methods focus on designing deep models that extract differences in facial movements between depressed patients and healthy people. However, these differences change when the stimulus source to which subjects are exposed changes, leading to performance degradation in cross-stimulus situations and limiting the practical application of this technology. We argue that depressed patients exhibit behavioral characteristics different from those of healthy people because they have a specific, stable pattern of responding to stimuli. Therefore, we incorporate stimuli into the modeling process for the first time and employ deep networks to learn stable representations between stimulus and response. Specifically, we propose a deep modeling framework that learns a subject's stimulus-response pattern through the interaction between the stimulus videos and the subject's facial movements. We constructed a balanced depression dataset of 364 individuals with three different stimulus videos to verify the effectiveness of our method. The results show that our method achieves state-of-the-art accuracy and the best generalization performance in depression recognition. This stimulus-response pattern modeling provides a new perspective for recognizing depression.
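The abstract does not specify the architecture, but the described interaction between stimulus-video features and facial-movement features could, for illustration only, be sketched as a cross-attention step. Everything below (the `cross_attention` function, feature shapes, and dimensions) is a hypothetical assumption, not the authors' actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(response, stimulus):
    """Fuse the subject's response features with the stimulus features.

    response: (T_r, d) frame-level facial-movement features (queries)
    stimulus: (T_s, d) frame-level stimulus-video features (keys/values)
    Returns a (T_r, d) representation of the response conditioned on the
    stimulus -- i.e., how the subject reacts to what they are watching,
    rather than the raw facial behavior alone.
    """
    d = response.shape[-1]
    scores = response @ stimulus.T / np.sqrt(d)   # (T_r, T_s) similarities
    weights = softmax(scores, axis=-1)            # attention over stimulus frames
    return weights @ stimulus                     # stimulus-aware response features

rng = np.random.default_rng(0)
face = rng.standard_normal((30, 16))  # 30 facial frames, 16-dim features
stim = rng.standard_normal((45, 16))  # 45 stimulus frames, 16-dim features
fused = cross_attention(face, stim)
print(fused.shape)  # (30, 16)
```

A representation of this kind, pooled over time and fed to a classifier, would remain tied to the stimulus, which is the intuition behind the paper's claimed cross-stimulus robustness.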
About the Journal
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.