{"title":"TCIP: Network with topology capture and incongruity perception for sarcasm detection","authors":"Ling Gao, Nan Sheng, Yiming Liu, Hao Xu","doi":"10.1016/j.inffus.2024.102918","DOIUrl":null,"url":null,"abstract":"Multimodal sarcasm detection is a pivotal visual-linguistic task that aims to identify incongruity between the text purpose and the underlying meaning of other modal data. Existing works are dedicated to the learning of unimodal embeddings and the fusion of multimodal information. Nonetheless, they neglect the importance of topology and incongruity between multimodal information for sarcasm detection. Therefore, we propose a novel multimodal sarcasm detection network that incorporates multimodal topology capture and incongruity perception (TCIP). A text single-mode graph, a visual single-mode graph, and a visual–text heterogeneous graph are first established, where nodes contain visual elements and text elements. The association matrix of the heterogeneous graph encapsulates visual–visual associations, text–text associations, and visual–text associations. Subsequently, TCIP learns single-modal graphs and a heterogeneous graph based on graph convolutional networks to capture text topology information, visual topology information, and multimodal topology information. Furthermore, we pull together multimodal embeddings exhibiting consistent distributions and push away those with inconsistent distributions. TCIP finally feeds the fused embedding into a classifier to detect sarcasm results within visual–text pairs. Experimental results conducted on the multimodal sarcasm detection benchmarks and the multimodal science question answering dataset demonstrate the exceptional performance of TCIP.","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"2 1","pages":""},"PeriodicalIF":14.7000,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.inffus.2024.102918","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Multimodal sarcasm detection is a pivotal visual-linguistic task that aims to identify the incongruity between the intent of a text and the underlying meaning of the accompanying modality. Existing works are dedicated to learning unimodal embeddings and fusing multimodal information. However, they neglect the importance of the topology of, and the incongruity between, the modalities for sarcasm detection. We therefore propose a novel multimodal sarcasm detection network with multimodal topology capture and incongruity perception (TCIP). A text single-modal graph, a visual single-modal graph, and a visual–text heterogeneous graph are first established, whose nodes contain visual and text elements. The association matrix of the heterogeneous graph encapsulates visual–visual, text–text, and visual–text associations. TCIP then learns over the single-modal graphs and the heterogeneous graph with graph convolutional networks to capture text, visual, and multimodal topology information. Furthermore, it pulls together multimodal embeddings exhibiting consistent distributions and pushes away those with inconsistent distributions. Finally, TCIP feeds the fused embedding into a classifier to detect sarcasm within visual–text pairs. Experiments on multimodal sarcasm detection benchmarks and a multimodal science question answering dataset demonstrate the exceptional performance of TCIP.
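To make the pipeline described above concrete, below is a minimal PyTorch sketch of a TCIP-style forward pass. It is not the authors' implementation: the module names (GCNLayer, TCIPSketch, incongruity_loss), the feature dimensions, the mean-pooling graph readout, and the margin-based contrastive reading of the "pull together / push away" objective are all assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    # One graph-convolution step: H' = ReLU(A_hat @ H @ W), where A_hat
    # is the symmetrically normalized association matrix with self-loops.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        d = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)           # D^{-1/2}
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)
        return F.relu(self.linear(a_hat @ h))

class TCIPSketch(nn.Module):
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.text_gcn = GCNLayer(dim, dim)    # text single-modal graph
        self.vis_gcn = GCNLayer(dim, dim)     # visual single-modal graph
        self.hetero_gcn = GCNLayer(dim, dim)  # visual-text heterogeneous graph
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, text_h, text_adj, vis_h, vis_adj, hetero_adj):
        t = self.text_gcn(text_h, text_adj)   # text topology information
        v = self.vis_gcn(vis_h, vis_adj)      # visual topology information
        # Stack text and visual nodes; hetero_adj encodes text-text,
        # visual-visual, and visual-text associations in one matrix.
        m = self.hetero_gcn(torch.cat([t, v], dim=0), hetero_adj)
        t_emb, v_emb, m_emb = t.mean(0), v.mean(0), m.mean(0)  # readout
        fused = torch.cat([t_emb, v_emb, m_emb], dim=-1)
        return self.classifier(fused), t_emb, v_emb

def incongruity_loss(t_emb, v_emb, label, margin=1.0):
    # Contrastive reading of "pull together / push away" (an assumption):
    # consistent (non-sarcastic, label 0) pairs are pulled together,
    # inconsistent (sarcastic, label 1) pairs are pushed past a margin.
    dist = F.pairwise_distance(t_emb.unsqueeze(0), v_emb.unsqueeze(0))
    consistent = (label == 0).float()
    return (consistent * dist.pow(2)
            + (1 - consistent) * F.relu(margin - dist).pow(2)).mean()

# Example with random node features and fully connected graphs:
model = TCIPSketch()
t_h, v_h = torch.randn(6, 128), torch.randn(4, 128)  # 6 text, 4 visual nodes
t_adj, v_adj, h_adj = torch.ones(6, 6), torch.ones(4, 4), torch.ones(10, 10)
logits, t_emb, v_emb = model(t_h, t_adj, v_h, v_adj, h_adj)
loss = (F.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
        + incongruity_loss(t_emb, v_emb, torch.tensor(1)))

Stacking text and visual nodes into one node set lets a single association matrix carry all three edge types (text–text, visual–visual, visual–text), matching the abstract's description of the heterogeneous graph.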
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.