MMIEA: Multi-modal Interaction Entity Alignment model for knowledge graphs

Bin Zhu, Meng Wu, Yunpeng Hong, Yi Chen, Bo Xie, Fei Liu, Chenyang Bu, Weiping Ding

Information Fusion, Volume 100, Article 101935. Published 2023-07-31. DOI: 10.1016/j.inffus.2023.101935
Citations: 0
Abstract
Fusing data from different sources to improve decision making in smart cities has received increasing attention. Data collected through sensors usually exist in multi-modal forms, such as numerical values, images, and texts, so designing models that handle multi-modal data plays an important role in this field. Meanwhile, security and privacy issues cannot be ignored, as leaks of big data may create opportunities for criminals. To address these challenges, we focus on multi-modal entity alignment for knowledge graphs and propose the Multi-Modal Interaction Entity Alignment model (MMIEA). The model is designed to fuse data from different modalities while preserving privacy: it never needs to transmit the raw data of each modality, only its vector representation. Specifically, we introduce and improve the BERT-INT model for the entity alignment task in multi-modal knowledge graphs. Experimental results on two commonly used multi-modal datasets show that our method outperforms 17 algorithms, including nine multi-modal entity alignment methods.
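To make the privacy argument concrete, below is a minimal Python sketch of embedding-based multi-modal entity alignment: each modality is encoded locally into a vector, the per-modality vectors are fused, and only the fused representation is compared across knowledge graphs. All names here (encode, fuse, the toy hash-based encoder, the sample entities) are illustrative assumptions, not the actual MMIEA or BERT-INT implementation.

```python
# Illustrative sketch only: embedding-based multi-modal entity alignment.
# None of these names come from the MMIEA paper; a real system would use
# learned encoders (e.g., BERT for text, a CNN for images) instead of the
# toy hash-based encoder below.
import hashlib
import numpy as np

DIM = 8  # toy embedding dimension

def encode(raw: str, dim: int = DIM) -> np.ndarray:
    """Stand-in per-modality encoder: maps raw data to a deterministic
    unit vector. Only this vector leaves the data owner, never `raw`."""
    seed = int.from_bytes(hashlib.sha256(raw.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def fuse(modalities: dict) -> np.ndarray:
    """Fuse per-modality embeddings by simple averaging (one of many
    possible fusion choices)."""
    vecs = [encode(f"{name}:{value}") for name, value in modalities.items()]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

# Two toy knowledge graphs; each entity carries several modalities.
kg1 = {"Paris": {"text": "capital of France", "value": "2.1e6"}}
kg2 = {"paris_fr": {"text": "capital of France", "value": "2.1e6"},
       "paris_tx": {"text": "city in Texas", "value": "2.6e4"}}

emb1 = {e: fuse(m) for e, m in kg1.items()}
emb2 = {e: fuse(m) for e, m in kg2.items()}

# Align each entity in kg1 to its most similar entity in kg2 by cosine
# similarity (vectors are unit-normalized, so the dot product suffices).
for e1, v1 in emb1.items():
    best = max(emb2, key=lambda e2: float(v1 @ emb2[e2]))
    print(f"{e1} -> {best}")
```

In MMIEA's actual setting the encoders are trained and the interaction between modalities is modeled explicitly; the point of this sketch is only that alignment can be computed from exchanged embeddings without ever sharing the underlying values, images, or texts.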
Journal Introduction
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, and multi-process information fusion, fostering collaboration among the diverse disciplines that drive its progress. It is the leading outlet for sharing research and development in this field, with a focus on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.