Dynamic Emotion-Dependent Network With Relational Subgraph Interaction for Multimodal Emotion Recognition

IF 9.8 | JCR Q1, Computer Science, Artificial Intelligence | CAS Region 2, Computer Science | IEEE Transactions on Affective Computing | Pub Date: 2024-09-16 | DOI: 10.1109/TAFFC.2024.3461148
Ye Wang;Wei Zhang;Ke Liu;Wei Wu;Feng Hu;Hong Yu;Guoyin Wang
Volume: 16, Issue: 2, Pages: 712-725
Full text: https://ieeexplore.ieee.org/document/10680310/
Citations: 0

Abstract

Multimodal Emotion Recognition in Conversations (MERC) is an important topic in human-computer interaction. In the MERC task, conversations exhibit dynamic emotional dependency, including inter-speaker and intra-speaker emotional dependency, both of which are vital to understanding the content. However, current research primarily integrates these two emotional dependencies into one unified module, limiting the accuracy of MERC. In this paper, we propose a dynamic emotion-dependent network with relational subgraph interaction, named DEDNet. DEDNet introduces relational subgraphs to separately model the two emotional dependencies, enabling structured learning paths for utterances based on distinct emotional dependency types. Specifically, nodes represent the utterances at different moments in the conversation, while edges define the emotional dependency and temporal relationships between nodes. To explicitly capture the differences between these two emotional dependencies, distinct subgraphs are designed for comprehensive representations. Furthermore, we propose an incremental interactive strategy that sequentially leverages the two emotional dependencies to learn the changes in dependency relationships. We find that modeling inter-speaker emotional dependency better identifies negative emotions, while modeling intra-speaker emotional dependency better recognizes positive emotions. Experimental results demonstrate that our model outperforms current state-of-the-art methods on three benchmark datasets: IEMOCAP, MELD, and DailyDialog.
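The core graph construction described above (utterances as nodes; edges typed by whether two connected utterances come from the same speaker) can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: the function name, the fixed temporal window, and the edge-pair representation are all assumptions made for this sketch.

```python
# Hedged sketch of the relational-subgraph idea from the abstract:
# nodes are utterances in conversation order; each utterance j links back
# to its recent predecessors i (temporal relation), and the edge type is
# "intra-speaker" when utterances i and j share a speaker, "inter-speaker"
# otherwise. Names and the `window` parameter are illustrative assumptions.

def build_relational_subgraphs(speakers, window=2):
    """Return (intra_edges, inter_edges) as lists of (i, j) pairs with i < j.

    speakers: one speaker label per utterance, in conversation order.
    window:   how many preceding utterances each utterance connects to.
    """
    intra, inter = [], []
    for j in range(len(speakers)):
        for i in range(max(0, j - window), j):
            if speakers[i] == speakers[j]:
                intra.append((i, j))   # intra-speaker emotional dependency
            else:
                inter.append((i, j))   # inter-speaker emotional dependency
    return intra, inter

# Usage: a 4-turn dialogue alternating between speakers A and B.
intra, inter = build_relational_subgraphs(["A", "B", "A", "B"], window=2)
print(intra)  # [(0, 2), (1, 3)]
print(inter)  # [(0, 1), (1, 2), (2, 3)]
```

Keeping the two edge sets separate is what allows each dependency type to be processed by its own subgraph module, rather than mixing both into one unified adjacency as in prior work.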
Source Journal

IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00
Self-citation rate: 6.20%
Articles per year: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. Surveys of existing work that provide new perspectives on the historical and future directions of this field are also welcome.
Latest Articles in This Journal

Graph-Based Representation Learning with Beta Uncertainty for Enhanced Multimodal Emotion Recognition
SpotFormer: Multi-Scale Spatio-Temporal Transformer for Facial Expression Spotting
Weakly Supervised Learning for Facial Affective Behavior Analysis: A Review
CWEFS: Brain Volume Conduction Effects Inspired Channel-Wise EEG Feature Selection for Multi-Dimensional Emotion Recognition
LES-Talker: Fine-Grained Emotion Editing for Talking Head Generation in Linear Emotion Space