Mobile Virtual Assistant for Multi-Modal Depression-Level Stratification

IEEE Transactions on Affective Computing · IF 9.8 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 2 (Computer Science) · Volume 16, Issue 2, pp. 611-623 · Pub Date: 2024-08-28 · DOI: 10.1109/TAFFC.2024.3451114
Eric Hsiao-Kuang Wu;Ting-Yu Gao;Chia-Ru Chung;Chun-Chuan Chen;Chia-Fen Tsai;Shih-Ching Yeh
{"title":"Mobile Virtual Assistant for Multi-Modal Depression-Level Stratification","authors":"Eric Hsiao-Kuang Wu;Ting-Yu Gao;Chia-Ru Chung;Chun-Chuan Chen;Chia-Fen Tsai;Shih-Ching Yeh","doi":"10.1109/TAFFC.2024.3451114","DOIUrl":null,"url":null,"abstract":"Depression not only afflicts hundreds of millions of people but also contributes to a global disability and healthcare burden. The primary method of diagnosing depression relies on the judgment of medical professionals in clinical interviews with patients, which is subjective and time-consuming. Recent studies have demonstrated that text, audio, facial attributes, heart rate, and eye movement could be utilized for depression-level stratification. In this paper, we construct a virtual assistant for automatic depression-level stratification on mobile devices that can actively guide users through voice dialogue and change conversation content using emotion perception. During the conversation, features from text, audio, facial attributes, heart rate, and eye movement are extracted for multi-modal depression-level stratification. We utilize a feature-level fusion framework to integrate five modalities and the deep neural network to classify the varying levels of depression, which include healthy, mild, moderate, or severe depression, as well as bipolar disorder (formerly called manic depression). With outcome data from 168 subjects, experimental results reveal that the total accuracy of feature-level fusion with five modal features achieves the highest accuracy of 90.26 percent.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 2","pages":"611-623"},"PeriodicalIF":9.8000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10654566/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Depression not only afflicts hundreds of millions of people but also contributes to the global disability and healthcare burden. The primary method of diagnosing depression relies on the judgment of medical professionals during clinical interviews with patients, which is subjective and time-consuming. Recent studies have demonstrated that text, audio, facial attributes, heart rate, and eye movement can be used for depression-level stratification. In this paper, we construct a virtual assistant for automatic depression-level stratification on mobile devices that actively guides users through voice dialogue and adapts the conversation content using emotion perception. During the conversation, features from text, audio, facial attributes, heart rate, and eye movement are extracted for multi-modal depression-level stratification. We use a feature-level fusion framework to integrate the five modalities and a deep neural network to classify depression level into five categories: healthy, mild depression, moderate depression, severe depression, and bipolar disorder (formerly called manic depression). With data from 168 subjects, experimental results show that feature-level fusion of all five modalities achieves the highest accuracy, 90.26 percent.
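To make the fusion step concrete, the sketch below shows one way feature-level (early) fusion of the five modalities could feed a small deep neural network for five-class stratification. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the feature dimensions in MODALITY_DIMS, the layer sizes, and the FusionClassifier module are all hypothetical.

```python
# Minimal sketch (not the paper's code): concatenate per-modality feature
# vectors (feature-level fusion) and classify into five depression levels
# with a small fully connected network. All dimensions are assumptions.
import torch
import torch.nn as nn

# Assumed (hypothetical) per-modality feature dimensions.
MODALITY_DIMS = {"text": 128, "audio": 64, "face": 64, "heart_rate": 8, "eye": 16}
NUM_CLASSES = 5  # healthy, mild, moderate, severe, bipolar disorder


class FusionClassifier(nn.Module):
    def __init__(self, modality_dims, num_classes):
        super().__init__()
        self.modalities = list(modality_dims)          # fixed concatenation order
        fused_dim = sum(modality_dims.values())        # feature-level (early) fusion
        self.net = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, features):
        # features: dict mapping modality name -> (batch, dim) tensor
        fused = torch.cat([features[m] for m in self.modalities], dim=-1)
        return self.net(fused)  # class logits, shape (batch, num_classes)


if __name__ == "__main__":
    model = FusionClassifier(MODALITY_DIMS, NUM_CLASSES)
    batch = {m: torch.randn(4, d) for m, d in MODALITY_DIMS.items()}
    print(model(batch).shape)  # torch.Size([4, 5])
```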
Source Journal
IEEE Transactions on Affective Computing
Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published per year: 174
Journal Introduction: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest Articles from This Journal
CMD$^{3}$: Cross-Modal Decoupled Deformable Distillation for EEG-fNIRS Fusion
Multi-scale Dynamic Temporal Network with Graph Matching Domain Adaptation for Cross-Subject EEG Emotion Recognition
Toward a Realistic Encoding Model of Auditory Affective Understanding in the Brain
Disentangled Instrumentalization Learning for Dialogue Emotion Detection
2025 Reviewers List*