MRAN: Multimodal relationship-aware attention network for fake news detection
Hongyu Yang, Jinjiao Zhang, Liang Zhang, Xiang Cheng, Ze Hu
Computer Standards & Interfaces, Volume 89, Article 103822 (published 2023-12-27; IF 4.1, JCR Q1, Computer Science, Hardware & Architecture)
DOI: 10.1016/j.csi.2023.103822
https://www.sciencedirect.com/science/article/pii/S0920548923001034
Existing multimodal fake news detection methods struggle to jointly capture the intra-modal and cross-modal correlations between image regions and text fragments. They also lack comprehensive hierarchical semantic mining for text. These limitations lead to ineffective use of multimodal information and degrade detection performance. To address these issues, we propose a multimodal relationship-aware attention network (MRAN), which consists of three main steps. First, a multi-level encoding network extracts hierarchical semantic feature representations of the text, while the visual feature extractor VGG19 learns image feature representations. Second, the captured text and image representations are fed into the relationship-aware attention network, which generates high-order fusion features by computing both the similarity between information segments within each modality and the cross-modal similarity. Finally, the fusion features are passed to a fake news detector, which classifies the news as real or fake. Experimental results on three benchmark datasets demonstrate the effectiveness of MRAN, highlighting its strong detection performance.
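The similarity-then-attend idea the abstract describes can be illustrated with a minimal numpy sketch. This is not the paper's exact formulation (the actual MRAN layers, dimensions, and scoring functions are not given in the abstract); all function and variable names here are hypothetical, and scaled dot-product similarity is assumed purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_fusion(text, image):
    """Illustrative fusion of text-fragment and image-region features.

    text:  (n_t, d) array of text-fragment features
    image: (n_v, d) array of image-region features
    Returns fused features of shape (n_t, d).
    """
    d = text.shape[1]

    # Intra-modal similarity: how strongly text fragments relate to each other.
    intra = softmax(text @ text.T / np.sqrt(d))
    text_ctx = intra @ text            # context-enriched text features

    # Cross-modal similarity: align text fragments with image regions.
    cross = softmax(text_ctx @ image.T / np.sqrt(d))
    visual_ctx = cross @ image         # text-aligned visual features

    # "High-order" fusion here is a simple residual sum; the paper's
    # actual combination operator may differ.
    return text_ctx + visual_ctx
```

A downstream fake news detector would then pool these fused features and apply a binary classifier; that stage is omitted here.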
About the journal:
The quality of software, well-defined interfaces (hardware and software), the process of digitalisation, and accepted standards in these fields are essential for building and exploiting complex computing, communication, multimedia and measuring systems. Standards can simplify the design and construction of individual hardware and software components and help to ensure satisfactory interworking.
Computer Standards & Interfaces is an international journal dealing specifically with these topics.
The journal
• Provides information about activities and progress on the definition of computer standards, software quality, interfaces and methods, at national, European and international levels
• Publishes critical comments on standards and standards activities
• Disseminates users' experiences and case studies in the application and exploitation of established or emerging standards, interfaces and methods
• Offers a forum for discussion on actual projects, standards, interfaces and methods by recognised experts
• Stimulates relevant research by providing a specialised refereed medium.