{"title":"Cross-modal Retrieval of Archives based on Principal Affinity Representation","authors":"Xiaoqing Yang, Yuelong Zhu, Jun Feng, Jiamin Lu","doi":"10.1109/CCCI52664.2021.9583202","DOIUrl":null,"url":null,"abstract":"The development of information technology has resulted in an exponential increase of archive information. Using cross-modal retrieval can achieve mutual retrieval of data like image and text. Aside from the former progresses, it is still challenging to mine both inter-modal connection and the intrinsic semantic associations of cross-modal data. In this paper, we propose a method to achieve an accurate and effective cross-modal retrieval. It uniformly represents heterogeneous data through the principal affinity representation algorithm based on a hybrid kernel function. To improve the accuracy of retrieval, we first employ an adaptive nearest neighbor search method to dynamically decide the retrieval radius. The search method is then combined with the existing tree structure-based retrieval algorithm to find the nearest neighbor points efficiently. The experimental results show our algorithms have a certain improvement in efficiency and accuracy of cross-modal retrieval.","PeriodicalId":136382,"journal":{"name":"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCCI52664.2021.9583202","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The development of information technology has resulted in an exponential increase in archive information. Cross-modal retrieval enables mutual retrieval across heterogeneous data such as images and text. Despite recent progress, it remains challenging to mine both the inter-modal connections and the intrinsic semantic associations of cross-modal data. In this paper, we propose a method for accurate and effective cross-modal retrieval. It uniformly represents heterogeneous data through a principal affinity representation algorithm based on a hybrid kernel function. To improve retrieval accuracy, we first employ an adaptive nearest neighbor search method that dynamically determines the retrieval radius. This search method is then combined with an existing tree structure-based retrieval algorithm to find nearest neighbor points efficiently. Experimental results show that our algorithms improve both the efficiency and the accuracy of cross-modal retrieval.
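The paper itself does not include an implementation, but the two ingredients named in the abstract can be illustrated with a minimal sketch. The sketch below assumes the hybrid kernel is a weighted mix of an RBF and a polynomial kernel, and that the adaptive search picks its radius from the distance to the query's k-th nearest neighbor in a KD-tree; all function names, parameters, and the specific kernel combination are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def hybrid_kernel(X, Y, alpha=0.5, gamma=0.1, degree=2, coef0=1.0):
    """Assumed hybrid kernel: a weighted mix of RBF and polynomial kernels.

    K(x, y) = alpha * exp(-gamma * ||x - y||^2)
            + (1 - alpha) * (x . y + coef0) ** degree
    """
    # Pairwise squared Euclidean distances between rows of X and Y.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    rbf = np.exp(-gamma * sq_dists)
    poly = (X @ Y.T + coef0) ** degree
    return alpha * rbf + (1.0 - alpha) * poly

def adaptive_radius_search(embeddings, query, k=10, scale=1.2):
    """Sketch of an adaptive nearest-neighbor search over a tree index:
    set the retrieval radius from the query's local density (the distance
    to its k-th neighbor, inflated by `scale`), then return every point
    inside that ball."""
    tree = cKDTree(embeddings)
    dists, _ = tree.query(query, k=k)   # distances to the k nearest points
    radius = scale * dists[-1]          # data-dependent retrieval radius
    return tree.query_ball_point(query, r=radius)
```

Under these assumptions the radius expands in sparse regions of the common representation space and shrinks in dense ones, which matches the abstract's goal of dynamically deciding the retrieval radius, while the tree index keeps the neighbor lookup efficient.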