TCIP: Network with topology capture and incongruity perception for sarcasm detection

Information Fusion · IF 14.7 · CAS Region 1, Computer Science · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-01-02 · DOI: 10.1016/j.inffus.2024.102918
Ling Gao, Nan Sheng, Yiming Liu, Hao Xu
Citations: 0

Abstract

Multimodal sarcasm detection is a pivotal visual-linguistic task that aims to identify incongruity between the stated purpose of the text and the underlying meaning of data in other modalities. Existing works are dedicated to learning unimodal embeddings and fusing multimodal information; nonetheless, they neglect the importance of topology and of the incongruity between modalities for sarcasm detection. We therefore propose a novel multimodal sarcasm detection network that incorporates multimodal topology capture and incongruity perception (TCIP). A text single-modal graph, a visual single-modal graph, and a visual–text heterogeneous graph are first established, whose nodes contain visual elements and text elements. The association matrix of the heterogeneous graph encapsulates visual–visual associations, text–text associations, and visual–text associations. Subsequently, TCIP applies graph convolutional networks to the single-modal graphs and the heterogeneous graph to capture text topology information, visual topology information, and multimodal topology information. Furthermore, we pull together multimodal embeddings exhibiting consistent distributions and push away those with inconsistent distributions. TCIP finally feeds the fused embedding into a classifier to detect sarcasm within visual–text pairs. Experimental results on the multimodal sarcasm detection benchmarks and the multimodal science question answering dataset demonstrate the exceptional performance of TCIP.
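The pipeline the abstract describes (a heterogeneous graph over text and visual nodes, graph convolution over its block association matrix, and a pull/push objective separating consistent from inconsistent pairs) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the node counts, the placeholder associations, and the margin-based loss are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 text tokens, 3 visual regions, 8-dim embeddings.
n_t, n_v, d = 4, 3, 8
H_text = rng.normal(size=(n_t, d))
H_vis = rng.normal(size=(n_v, d))

# Block association matrix of the heterogeneous graph:
#   [ A_tt  A_tv ]
#   [ A_tv' A_vv ]
A_tt = np.ones((n_t, n_t))                 # text–text associations (placeholder: fully connected)
A_vv = np.ones((n_v, n_v))                 # visual–visual associations (placeholder)
A_tv = (rng.random((n_t, n_v)) > 0.5)      # visual–text associations (placeholder)
A = np.block([[A_tt, A_tv], [A_tv.T, A_vv]]).astype(float)

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, symmetric normalisation, linear + ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

H = np.vstack([H_text, H_vis])             # stacked heterogeneous node features, shape (7, 8)
W = rng.normal(size=(d, d)) * 0.1
H_out = gcn_layer(A, H, W)                 # multimodal topology embedding, shape (7, 8)

def incongruity_loss(z_t, z_v, consistent, margin=1.0):
    """Contrastive-style sketch: pull consistent pairs together, push inconsistent ones apart."""
    dist = np.linalg.norm(z_t - z_v)
    return dist**2 if consistent else max(0.0, margin - dist)**2
```

The same single-layer convolution would be applied to the text-only and visual-only adjacency blocks to obtain the single-modal topology embeddings, and the fused node embeddings would then be pooled and passed to a classifier.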
Source journal: Information Fusion (Engineering & Technology: Computer Science, Theory & Methods)
- CiteScore: 33.20
- Self-citation rate: 4.30%
- Articles per year: 161
- Review time: 7.9 months
Journal overview: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.