FakeCLIP: Multimodal Fake Caption Detection with Mixed Languages for Explainable Visualization
Christian Nathaniel Purwanto, Joan Santoso, Po-Ruey Lei, Hui-Kuo Yang, Wen-Chih Peng
2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI), November 2021
DOI: 10.1109/taai54685.2021.00010
Citations: 1
Abstract
Existing fake news research relies on news propagation or news metadata. Waiting for a propagation structure to grow large enough wastes time, and hoping for reliable metadata is also futile because all such data can be forged. The most natural way for a human to verify news is through the content itself. On social media, most circulating news takes a minimal form: an image and its text caption. We propose FakeCLIP to examine whether a caption truly describes the corresponding image. To the best of our knowledge, we are the first to tackle fake news through a fake caption approach. We also identify the mixed-language problem, in which a single text can contain several different languages mixed together. We provide explainable visualization for intuitive reasoning about which part contains fake information. Moreover, we consider the alignment between what appears in the image and what is discussed in the text caption, while showing the fake signal over both. Our proposed method outperforms the current state-of-the-art on Twitter datasets by 11.1%.
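The core idea of fake caption detection via image-text alignment can be sketched in a few lines. The snippet below is an illustrative toy example, not the paper's actual model: `classify_caption`, the toy embedding vectors, and the threshold value are all hypothetical stand-ins for CLIP-style encoder outputs and a learned decision boundary.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_caption(image_emb, caption_emb, threshold=0.5):
    # Flag a caption as fake when its embedding is poorly aligned
    # with the image embedding (threshold is an assumed value here).
    score = cosine_similarity(image_emb, caption_emb)
    label = "real" if score >= threshold else "fake"
    return label, score

# Toy vectors standing in for image/text encoder outputs.
image_emb = np.array([0.9, 0.1, 0.2])
matching_caption = np.array([0.8, 0.2, 0.1])     # points the same way
mismatched_caption = np.array([-0.1, 0.9, 0.4])  # points elsewhere

print(classify_caption(image_emb, matching_caption))    # high score -> "real"
print(classify_caption(image_emb, mismatched_caption))  # low score -> "fake"
```

In practice the embeddings would come from a multimodal encoder trained so that matching image-caption pairs score high, and the per-token similarity map would drive the explainable visualization the abstract describes.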