CSMF-SPC: Multimodal Sentiment Analysis Model with Effective Context Semantic Modality Fusion and Sentiment Polarity Correction

IF 3.7 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pattern Analysis and Applications | Pub Date: 2024-08-23 | DOI: 10.1007/s10044-024-01320-w
Yuqiang Li, Wenxuan Weng, Chun Liu, Lin Li
{"title":"CSMF-SPC:具有有效语境语义模态融合和情感极性校正功能的多模态情感分析模型","authors":"Yuqiang Li, Wenxuan Weng, Chun Liu, Lin Li","doi":"10.1007/s10044-024-01320-w","DOIUrl":null,"url":null,"abstract":"<p>Multimodal sentiment analysis focuses on the fusion of multiple modalities. However, modality representation learning is a key step for better modality fusion, so how to fully learn the sentiment information of non-text modalities is a problem worth exploring. In addition, how to further improve the accuracy of sentiment polarity prediction is also a work to be studied. To solve the above problems, we propose a multimodal sentiment analysis model with effective context semantic modality fusion and sentiment polarity correction (CSMF-SPC). Firstly, we design a low-rank multimodal fusion network based on context semantic modality (CSM-LRMFN). CSM-LRMFN uses the bi-directional long short-term memory network to extract the context semantic features of non-text modalities, and the BERT to extract the features of text modality. Then, CSM-LRMFN adopts a low-rank multimodal fusion method to fully extract the interaction information among modalities with contextual semantics. Different from previous studies, to improve the accuracy of sentiment polarity prediction, we design a weight self-adjusting sentiment polarity penalty loss function, which makes the model learn more sentiment features that are conducive to model prediction through backpropagation. Finally, a series of comparative experiments are conducted on the CMU-MOSI and CMU-MOSEI datasets. Compared with the current representative models, CSMF-SPC achieves better experimental results. Among them, the Acc-2 (including zero) metric is increased by 1.41% and 1.58% on the word-aligned and unaligned CMU-MOSI datasets respectively; it is improved by 1.50% and 2.14% respectively on the CMU-MOSEI dataset, which indicates that the improvement of CSMF-SPC is effective.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"109 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CSMF-SPC: Multimodal Sentiment Analysis Model with Effective Context Semantic Modality Fusion and Sentiment Polarity Correction\",\"authors\":\"Yuqiang Li, Wenxuan Weng, Chun Liu, Lin Li\",\"doi\":\"10.1007/s10044-024-01320-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Multimodal sentiment analysis focuses on the fusion of multiple modalities. However, modality representation learning is a key step for better modality fusion, so how to fully learn the sentiment information of non-text modalities is a problem worth exploring. In addition, how to further improve the accuracy of sentiment polarity prediction is also a work to be studied. To solve the above problems, we propose a multimodal sentiment analysis model with effective context semantic modality fusion and sentiment polarity correction (CSMF-SPC). Firstly, we design a low-rank multimodal fusion network based on context semantic modality (CSM-LRMFN). CSM-LRMFN uses the bi-directional long short-term memory network to extract the context semantic features of non-text modalities, and the BERT to extract the features of text modality. Then, CSM-LRMFN adopts a low-rank multimodal fusion method to fully extract the interaction information among modalities with contextual semantics. 
Different from previous studies, to improve the accuracy of sentiment polarity prediction, we design a weight self-adjusting sentiment polarity penalty loss function, which makes the model learn more sentiment features that are conducive to model prediction through backpropagation. Finally, a series of comparative experiments are conducted on the CMU-MOSI and CMU-MOSEI datasets. Compared with the current representative models, CSMF-SPC achieves better experimental results. Among them, the Acc-2 (including zero) metric is increased by 1.41% and 1.58% on the word-aligned and unaligned CMU-MOSI datasets respectively; it is improved by 1.50% and 2.14% respectively on the CMU-MOSEI dataset, which indicates that the improvement of CSMF-SPC is effective.</p>\",\"PeriodicalId\":54639,\"journal\":{\"name\":\"Pattern Analysis and Applications\",\"volume\":\"109 1\",\"pages\":\"\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Analysis and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10044-024-01320-w\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Analysis and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10044-024-01320-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Multimodal sentiment analysis focuses on the fusion of multiple modalities. However, modality representation learning is a key step toward better modality fusion, so how to fully learn the sentiment information of non-text modalities is a problem worth exploring. In addition, how to further improve the accuracy of sentiment polarity prediction also remains to be studied. To solve the above problems, we propose a multimodal sentiment analysis model with effective context semantic modality fusion and sentiment polarity correction (CSMF-SPC). First, we design a low-rank multimodal fusion network based on context semantic modality (CSM-LRMFN). CSM-LRMFN uses a bi-directional long short-term memory network to extract context semantic features of the non-text modalities, and BERT to extract features of the text modality. CSM-LRMFN then adopts a low-rank multimodal fusion method to fully extract the interaction information among modalities with contextual semantics. Unlike previous studies, to improve the accuracy of sentiment polarity prediction we design a weight self-adjusting sentiment polarity penalty loss function, which lets the model learn, through backpropagation, more sentiment features that are conducive to prediction. Finally, a series of comparative experiments is conducted on the CMU-MOSI and CMU-MOSEI datasets. Compared with current representative models, CSMF-SPC achieves better experimental results. In particular, the Acc-2 (including zero) metric increases by 1.41% and 1.58% on the word-aligned and unaligned CMU-MOSI datasets respectively, and by 1.50% and 2.14% respectively on CMU-MOSEI, which indicates that the improvements in CSMF-SPC are effective.
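
The abstract names two components that are easier to grasp with a concrete picture: low-rank fusion of the context-aware modality features, and a polarity-penalty term added to the regression loss. The sketch below illustrates both ideas in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the layer sizes, the names LowRankFusion and polarity_penalty_loss, and the fixed penalty weight alpha are placeholders, and the paper's self-adjusting weighting rule is not reproduced here.

import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """LMF-style fusion: each modality feature is projected by a rank-r factor,
    the projections are multiplied element-wise, and the rank dimension is
    collapsed with learned weights."""
    def __init__(self, dims, rank=4, out_dim=64):
        super().__init__()
        # One factor per modality; the +1 row handles the appended constant 1.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.05) for d in dims]
        )
        self.rank_weights = nn.Parameter(torch.ones(rank))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, feats):
        # feats: list of (batch, d_m) tensors, e.g. [text, audio, visual]
        batch = feats[0].size(0)
        fused = None
        for z, w in zip(feats, self.factors):
            z1 = torch.cat([z, z.new_ones(batch, 1)], dim=1)   # (batch, d_m + 1)
            proj = torch.einsum('bd,rdo->rbo', z1, w)          # (rank, batch, out_dim)
            fused = proj if fused is None else fused * proj    # element-wise interaction
        fused = (self.rank_weights.view(-1, 1, 1) * fused).sum(dim=0)
        return fused + self.bias                               # (batch, out_dim)

def polarity_penalty_loss(pred, target, alpha=0.5):
    """L1 regression loss plus an extra penalty on samples whose predicted
    polarity (sign) disagrees with the label; alpha is a fixed stand-in for
    the paper's self-adjusting weight."""
    base = (pred - target).abs().mean()
    mismatch = (torch.sign(pred) != torch.sign(target)).float()
    penalty = (mismatch * (pred - target).abs()).mean()
    return base + alpha * penalty

# Example: fuse 768-d text (BERT) features with 32-d audio and visual BiLSTM features.
fusion = LowRankFusion(dims=[768, 32, 32], rank=4, out_dim=64)
head = nn.Linear(64, 1)
feats = [torch.randn(8, 768), torch.randn(8, 32), torch.randn(8, 32)]
score = head(fusion(feats)).squeeze(-1)                        # (8,) sentiment scores
loss = polarity_penalty_loss(score, torch.randn(8))

The design intent illustrated here is that the sign-mismatch term adds extra gradient pressure only on samples whose predicted polarity is wrong, which is the behaviour the abstract attributes to the sentiment polarity correction component.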

Source journal: Pattern Analysis and Applications (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 7.40
Self-citation rate: 2.60%
Annual articles: 76
Review time: 13.5 months
About the journal: The journal publishes high-quality articles in areas of fundamental research in intelligent pattern analysis and applications in computer science and engineering. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition, and uncertainty management in applied pattern recognition are particularly solicited.
Latest articles in this journal:
K-BEST subspace clustering: kernel-friendly block-diagonal embedded and similarity-preserving transformed subspace clustering
Research on decoupled adaptive graph convolution networks based on skeleton data for action recognition
Hidden Markov models with multivariate bounded asymmetric student's t-mixture model emissions
YOLOv7-GCM: a detection algorithm for creek waste based on improved YOLOv7 model
LDC-PP-YOLOE: a lightweight model for detecting and counting citrus fruit