Stimulus-Response Pattern: The Core of Robust Cross-Stimulus Facial Depression Recognition

IEEE Transactions on Affective Computing · Pub Date: 2024-11-12 · DOI: 10.1109/TAFFC.2024.3496524 · Impact Factor: 9.8 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 2 (Computer Science)
Zhenyu Liu;Shimao Zhang;Bailin Chen;Gang Li;Qiongqiong Chen;Zhijie Ding;Xin Zhang;Bin Hu
Volume 16, Issue 2, pp. 1146-1158. Available at: https://ieeexplore.ieee.org/document/10750917/
Citation count: 0

Abstract

Facial depression recognition is an active research topic. Mainstream methods focus on designing deep models that extract differences in facial movement between depressed patients and healthy people. However, these differences change when the stimulus source the subjects are exposed to changes, which degrades performance in cross-stimulus settings and limits the practical application of the technology. We argue that depressed patients exhibit behavioral characteristics different from those of healthy people because they respond to stimuli in a specific, stable pattern. We therefore incorporate the stimuli themselves into the modeling process for the first time and employ deep networks to learn stable representations linking stimulus and response. Specifically, we propose a deep modeling framework that learns a subject's stimulus-response pattern from the interaction between the stimulus videos and the subject's facial movements. To verify the effectiveness of our method, we constructed a balanced depression dataset of 364 individuals, each recorded under three different stimulus videos. The results show that our method achieves state-of-the-art accuracy and the best generalization performance in depression recognition. Stimulus-response pattern modeling thus provides a new perspective on recognizing depression.
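The abstract does not specify how the interaction between stimulus videos and facial movements is computed. As a loose, hypothetical sketch (not the authors' architecture), one common way to model interaction between a stimulus sequence and a response sequence is cross-attention, shown here in plain NumPy; all names, shapes, and dimensions are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stimulus_response_attention(stimulus, response):
    """Cross-attention: facial-response frames (queries) attend over
    stimulus-video frames (keys/values), yielding a stimulus-conditioned
    representation of the subject's response. Both inputs are (T, d)."""
    d = stimulus.shape[-1]
    scores = response @ stimulus.T / np.sqrt(d)   # (T_resp, T_stim)
    weights = softmax(scores, axis=-1)            # each row sums to 1
    return weights @ stimulus                     # (T_resp, d)

rng = np.random.default_rng(0)
stim = rng.normal(size=(8, 16))    # 8 stimulus-video frames, 16-dim features
resp = rng.normal(size=(10, 16))   # 10 facial-movement frames
fused = stimulus_response_attention(stim, resp)
print(fused.shape)  # (10, 16)
```

In such a design, each facial-movement frame is re-expressed in terms of the stimulus content it co-occurs with, which is one plausible way a "stable stimulus-response pattern" could be made explicit to a downstream classifier.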
Source journal: IEEE Transactions on Affective Computing (Computer Science, Artificial Intelligence; Computer Science, Cybernetics). CiteScore: 15.00 · Self-citation rate: 6.20% · Articles per year: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal:
Personality Traits and Demographics Analysis in Online Mental Health Discourse
EEG-Based Emotion Classification Using Deep Capsule Networks for Subject-Independent and Dependent Scenarios
Nasal Dominance and Nostril Breathing Variability: Potential Biomarkers of Acute Stress
Charting the Unspoken: Causal Inference-Guided LLM Augmentation for Emotion Recognition in Conversation
R2G$^{3}$Net: A Novel Hierarchical Spatial-Temporal Neural Network With a Regional-to-Global Fusion Mechanism for Multimodal Emotion Recognition