Multimodal Sentiment Analysis With Mutual Information-Based Disentangled Representation Learning

IEEE Transactions on Affective Computing · Volume 16, Issue 3, pp. 1606-1617
Impact Factor: 9.8 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Publication date: 2025-01-15 · DOI: 10.1109/TAFFC.2025.3529732
Hao Sun; Ziwei Niu; Hongyi Wang; Xinyao Yu; Jiaqing Liu; Yen-Wei Chen; Lanfen Lin
Full text: https://ieeexplore.ieee.org/document/10842969/
Citations: 0

Abstract

Multimodal sentiment analysis seeks to utilize various types of signals to identify underlying emotions and sentiments. A key challenge in this field lies in multimodal representation learning, which aims to develop effective methods for integrating multimodal features into cohesive representations. Recent advancements include two notable approaches: one focuses on decomposing multimodal features into modality-invariant and -specific components, while the other emphasizes the use of mutual information to enhance the fusion of modalities. Both strategies have demonstrated effectiveness and yielded remarkable results. In this paper, we propose a novel learning framework that combines the strengths of these two approaches, termed mutual information-based disentangled multimodal representation learning. Our approach involves estimating different types of information during feature extraction and fusion stages. Specifically, we quantitatively assess and adjust the proportions of modality-invariant, -specific, and -complementary information during feature extraction. Subsequently, during fusion, we evaluate the amount of information retained by each modality in the fused representation. We employ mutual information or conditional mutual information to estimate each type of information content. By reconciling the proportions of these different types of information, our approach achieves state-of-the-art performance on popular sentiment analysis benchmarks, including CMU-MOSI and CMU-MOSEI.
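The framework quantifies each information type with mutual information (MI) or conditional MI between modality representations. As a purely illustrative aid, the sketch below shows one common way such a quantity can be estimated in practice: a MINE-style neural estimator (Donsker-Varadhan lower bound) of the MI between two modality embeddings. The class name `MINECritic`, the embedding dimensions, and the toy usage are assumptions made for illustration; they are not taken from the paper, which may rely on a different estimator (e.g., CLUB- or InfoNCE-style bounds) and additionally uses conditional MI terms.

```python
# Minimal, illustrative sketch (not the authors' code): estimating mutual
# information between two modality embeddings with a MINE-style neural
# estimator (Donsker-Varadhan lower bound).
import math

import torch
import torch.nn as nn


class MINECritic(nn.Module):
    """Scores joint vs. shuffled (marginal) pairs of modality embeddings."""

    def __init__(self, dim_x: int, dim_y: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)


def mine_mi_lower_bound(critic: MINECritic,
                        x: torch.Tensor,
                        y: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan estimate: E_joint[T] - log E_marginals[exp(T)]."""
    joint = critic(x, y).mean()
    # Shuffle y along the batch dimension to approximate samples from the
    # product of marginals p(x)p(y).
    y_shuffled = y[torch.randperm(y.size(0))]
    log_mean_exp = torch.logsumexp(critic(x, y_shuffled), dim=0) - math.log(y.size(0))
    return joint - log_mean_exp


if __name__ == "__main__":
    # Toy usage: hypothetical text and audio embeddings for a batch of 32 utterances.
    text_emb = torch.randn(32, 64)
    audio_emb = torch.randn(32, 48)
    critic = MINECritic(dim_x=64, dim_y=48)
    mi_estimate = mine_mi_lower_bound(critic, text_emb, audio_emb)
    print(f"MI lower-bound estimate (untrained critic): {mi_estimate.item():.4f}")
```

In practice the critic would first be trained to maximize this bound before the estimate is read off; analogous MI or conditional-MI estimators could then serve as the regularization terms that balance modality-invariant, -specific, and -complementary information during extraction and fusion.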
Source journal

IEEE Transactions on Affective Computing (Computer Science, Artificial Intelligence / Computer Science, Cybernetics)
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published: 174
About the journal: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles from this journal

SMA-EL: A Minimal 1-cycle Construction Algorithm with Simplicial Maps Annotation and Edge Loss for Emotional Brain Networks Analysis
NeuroPhysio: Explainable EEG-Based AI for Affective and Cognitive Load Recognition in Digital Health-Integrated Learning Systems
SiaTalker: Siamese Emotion Injection for Coarse-to-Fine Speech-Driven 3D Facial Animation
TPFN: A Text-Guided Progressive Fusion Network for Multimodal Sentiment Analysis
Physiological Network of Emotion