Image Encoding and Fusion of Multi-Modal Data Enhance Depression Diagnosis in Parkinson's Disease Patients

IEEE Transactions on Affective Computing · Impact Factor 9.8 · CAS Zone 2 (Computer Science) · Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Publication date: 2024-06-24 · DOI: 10.1109/TAFFC.2024.3418415
Jian Li;Yuliang Zhao;Huawei Zhang;Wayne Jason Li;Changzeng Fu;Chao Lian;Peng Shan
Volume 16, Issue 1, pp. 145-160 · Citations: 0 · Full text: https://ieeexplore.ieee.org/document/10570295/

Abstract

Diagnosing depression in individuals with Parkinson's Disease (PD) using multimodal fusion techniques is a significant research area. The primary challenge lies in building a robust fusion framework that effectively handles the heterogeneity among modalities. However, previous studies focused primarily on interactions between heterogeneous data while neglecting the structural similarities among isomorphic data, causing a substantial loss of feature information when merging heterogeneous data. In this study, we introduce a multi-modal data image encoding and fusion approach for diagnosing depression in PD patients. Additionally, we propose a multi-modal dataset encompassing motion, facial expression, and audio data. First, we design an RGB and sparse coding method to encode the multi-modal data, achieving an isomorphic transformation of multi-modal information and extracting feature information from lower-dimensional spaces. We then introduce a Spatial-Temporal Network (STN) to fuse the three types of encoded images, incorporating Relation Global Attention (RGA) to enhance feature extraction and leverage all encoded-image location feature nodes for balanced decision attention. Finally, recognizing the limitations of traditional machine learning algorithms in handling multiple tasks in medical diagnosis, we establish a multi-task weighted loss function to achieve depression identification and severity prediction through Multi-Task Learning (MTL).
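The abstract does not detail how the RGB encoding works, but the stated goal, an isomorphic transformation that brings motion, facial-expression, and audio streams into a common form, can be illustrated with a minimal sketch. Here, purely as an assumption for illustration, each 1-D modality signal is resampled to a fixed length, min-max normalized, and mapped onto one channel of a single RGB image; the function name and the resampling scheme are hypothetical, not taken from the paper.

```python
import numpy as np

def encode_signals_as_rgb(motion, face, audio, size=64):
    """Hypothetical sketch: map three 1-D modality signals onto the R, G,
    and B channels of one image so heterogeneous streams become isomorphic
    (identical shape and value range) before fusion."""
    def to_channel(x):
        x = np.asarray(x, dtype=float)
        # Linearly resample to size*size points so every stream has equal length.
        idx = np.linspace(0, len(x) - 1, size * size)
        x = np.interp(idx, np.arange(len(x)), x)
        # Min-max normalize to [0, 255] and reshape into a square channel.
        x = (x - x.min()) / (x.max() - x.min() + 1e-8)
        return (x * 255).astype(np.uint8).reshape(size, size)

    # Stack the three channels into a (size, size, 3) RGB image.
    return np.stack([to_channel(motion), to_channel(face), to_channel(audio)],
                    axis=-1)
```

Whatever the paper's actual encoding, the key property this sketch illustrates is that three streams of different lengths and units come out as one fixed-shape image that a standard vision backbone can consume.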
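The multi-task weighted loss described at the end of the abstract can likewise be sketched. The exact weighting scheme is not given, so the following assumes a fixed weighted sum of binary cross-entropy (depression identification) and mean-squared error (severity prediction); the function name and the weights are illustrative assumptions.

```python
import numpy as np

def multi_task_loss(p_depressed, y_class, sev_pred, y_sev,
                    w_cls=0.6, w_reg=0.4):
    """Illustrative multi-task weighted loss (weights are assumed, not the
    paper's): binary cross-entropy for depression identification plus MSE
    for severity prediction, combined as a fixed weighted sum."""
    eps = 1e-8  # guards log(0)
    # Classification term: binary cross-entropy over predicted probabilities.
    bce = -np.mean(y_class * np.log(p_depressed + eps)
                   + (1 - y_class) * np.log(1 - p_depressed + eps))
    # Regression term: mean-squared error on predicted severity scores.
    mse = np.mean((sev_pred - y_sev) ** 2)
    return w_cls * bce + w_reg * mse
```

In practice such weights are often tuned or learned (e.g., via uncertainty weighting), which is one way a weighted loss lets a single network balance identification against severity prediction.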
Source journal: IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00 · Self-citation rate: 6.20% · Articles per year: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. Surveys of existing work that provide new perspectives on the historical and future directions of this field are also welcome.