{"title":"Joint feature fusion hashing for cross-modal retrieval","authors":"Yuxia Cao","doi":"10.1007/s13042-024-02309-x","DOIUrl":null,"url":null,"abstract":"<p>Cross-modal hashing retrieval maps data from different modalities into a common low-dimensional hash code space, enabling fast and efficient retrieval. Recently, there has been a growing interest in the cross-modal hashing retrieval approach. Nonetheless, a significant number of current methodologies overlook the influence of semantically rich features on retrieval performance. In addition, class attribute embedding is often forgotten in cross-modal feature fusion, which is crucial for learning more discriminative hash codes. To meet these challenges, we put forward a novel method, namely joint feature fusion hashing (JFFH) for cross-modal retrieval. Specifically, we use the fast language image pre-training model as the feature coding module of cross-modal data. To more effectively mitigate semantic disparities between modalities, we introduce a multimodal contrastive learning loss to strengthen the interaction between modalities and improve the semantic representation of modalities. In addition, we extract class attribute features as class embedding and integrate them with cross-modal features to enhance the semantic relationship within the fused features. To better capture both inter-modal and intra-modal dependencies as well as semantic relevance, we integrate the self-attention mechanism into the multi-modal fusion transformer encoder to facilitate efficient feature fusion. Besides, we apply label-wise high-level semantic similarity and feature-wise low-level semantic similarity to enhance the discrimination of hash codes. Our JFFH method shows better retrieval performance in large-scale cross-modal retrieval.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":"7 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Machine Learning and Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s13042-024-02309-x","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Cross-modal hashing retrieval maps data from different modalities into a common low-dimensional hash code space, enabling fast and efficient retrieval. Interest in cross-modal hashing retrieval has grown considerably in recent years. Nonetheless, many current methods overlook the influence of semantically rich features on retrieval performance. In addition, class attribute embedding, which is crucial for learning more discriminative hash codes, is often neglected in cross-modal feature fusion. To meet these challenges, we propose a novel method, joint feature fusion hashing (JFFH), for cross-modal retrieval. Specifically, we use the fast language-image pre-training model as the feature encoding module for cross-modal data. To mitigate semantic disparities between modalities more effectively, we introduce a multimodal contrastive learning loss that strengthens the interaction between modalities and improves their semantic representations. In addition, we extract class attribute features as class embeddings and integrate them with cross-modal features to enhance the semantic relationships within the fused features. To better capture inter-modal and intra-modal dependencies as well as semantic relevance, we integrate the self-attention mechanism into the multi-modal fusion transformer encoder, enabling efficient feature fusion. Furthermore, we apply label-wise high-level semantic similarity and feature-wise low-level semantic similarity to enhance the discriminative power of the hash codes. Our JFFH method demonstrates better retrieval performance in large-scale cross-modal retrieval.
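To make the multimodal contrastive learning component concrete, the sketch below shows a symmetric InfoNCE-style loss between paired image and text features. The abstract does not give JFFH's exact formulation, so the loss form, the temperature value, and the feature dimensions here are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a multimodal (image-text) contrastive loss.
# Assumption: an InfoNCE-style symmetric loss; JFFH's actual loss may differ.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(image_feat, text_feat, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text features.

    image_feat, text_feat: (batch, dim) tensors; row i of each tensor is
    assumed to describe the same sample, so the diagonal pairs are positives.
    """
    # L2-normalise so that dot products become cosine similarities.
    image_feat = F.normalize(image_feat, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)

    # (batch, batch) similarity matrix; off-diagonal entries act as negatives.
    logits = image_feat @ text_feat.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions (image -> text and text -> image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Usage example with 8 paired samples and 512-dimensional features.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(multimodal_contrastive_loss(img, txt))
```

Pulling matched image-text pairs together while pushing apart mismatched pairs in the same batch is one standard way to reduce the inter-modal semantic gap that the abstract refers to.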
About the journal:
Cybernetics is concerned with describing the complex interactions and interrelationships between systems that are omnipresent in our daily life. Machine Learning discovers fundamental functional relationships between variables and ensembles of variables in systems. The merging of the disciplines of Machine Learning and Cybernetics is aimed at the discovery of various forms of interaction between systems through diverse mechanisms of learning from data.
The International Journal of Machine Learning and Cybernetics (IJMLC) focuses on the key research problems emerging at the junction of machine learning and cybernetics and serves as a broad forum for rapid dissemination of the latest advancements in the area. The emphasis of IJMLC is on the hybrid development of machine learning and cybernetics schemes inspired by different contributing disciplines such as engineering, mathematics, cognitive sciences, and applications. New ideas, design alternatives, implementations and case studies pertaining to all the aspects of machine learning and cybernetics fall within the scope of the IJMLC.
Key research areas to be covered by the journal include:
Machine Learning for modeling interactions between systems
Pattern Recognition technology to support discovery of system-environment interaction
Control of system-environment interactions
Biochemical interaction in biological and biologically-inspired systems
Learning for improvement of communication schemes between systems