SARD: Fake news detection based on CLIP contrastive learning and multimodal semantic alignment

IF 5.2 · JCR Q1 (Computer Science, Information Systems) · CAS Tier 2 (Computer Science) · Journal of King Saud University - Computer and Information Sciences · Pub Date: 2024-08-14 · DOI: 10.1016/j.jksuci.2024.102160

Abstract

The automatic detection of multimodal fake news can effectively identify potential risks in cyberspace. Most existing multimodal fake news detection methods focus on fully exploiting the textual and visual features of news content, neglecting the news social-context features that play an important role in improving fake news detection. To this end, we propose a new fake news detection method based on CLIP contrastive learning and multimodal semantic alignment (SARD). SARD leverages cutting-edge multimodal learning techniques, such as CLIP, and robust cross-modal contrastive learning to integrate features of news-oriented heterogeneous information networks (N-HIN) with multi-level textual and visual features into a unified framework for the first time. This framework not only achieves cross-modal alignment between deep textual and visual features but also considers cross-modal associations and semantic alignment across different modalities. Furthermore, SARD enhances fake news detection by aligning semantic features between news content and N-HIN features, an aspect largely overlooked by existing methods. We test and evaluate SARD on three real-world datasets. Experimental results demonstrate that SARD significantly outperforms twelve state-of-the-art competitors, with average improvements of 2.89% in Mac.F1 score and 2.13% in accuracy over the leading baseline models across the three datasets.
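The CLIP-style cross-modal contrastive learning the abstract refers to can be illustrated with a minimal sketch. The symmetric InfoNCE loss below is a generic illustration of how paired text and image embeddings are aligned in a shared space; it is not the authors' SARD implementation, and the function name, batch shape, and temperature value are assumptions for the example.

```python
import numpy as np

def clip_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    text_emb, image_emb: (N, d) arrays; row i of each is a matched pair.
    Returns the mean of the text->image and image->text cross-entropies.
    """
    # L2-normalize so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature      # (N, N) similarity matrix
    labels = np.arange(len(logits))       # matched pairs sit on the diagonal

    def cross_entropy(l):
        # numerically stable log-softmax per row, then pick the diagonal entry
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the text->image and image->text directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each matched text/image pair together while pushing apart mismatched pairs in the batch, which is the alignment mechanism CLIP popularized and that SARD builds on.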

Source journal: CiteScore 10.50 · Self-citation rate 8.70% · Articles published: 656 · Review time: 29 days
About the journal: In 2022 the Journal of King Saud University - Computer and Information Sciences became an author-paid open access journal. Authors who submitted their manuscript after October 31st 2021 are asked to pay an Article Processing Charge (APC) after acceptance of their paper to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal that covers all aspects of both the foundations of computer science and its practical applications.
Latest articles from this journal:
- Heterogeneous emotional contagion of the cyber–physical society
- A novel edge intelligence-based solution for safer footpath navigation of visually impaired using computer vision
- Improving embedding-based link prediction performance using clustering
- A sharding blockchain protocol for enhanced scalability and performance optimization through account transaction reconfiguration
- RAPID: Robust multi-pAtch masker using channel-wise Pooled varIance with two-stage patch Detection