Title: Video Saliency Prediction via Deep Eye Movement Learning
Authors: Jiazhong Chen, Jing Chen, Yuan Dong, Dakai Ren, Shiqi Zhang, Zongyi Li
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490597

Abstract: Existing methods often utilize temporal motion information and spatial layout information in video to predict video saliency. However, fixations are not always consistent with the moving object of interest, because human eye fixations are determined not only by spatio-temporal information but also by the velocity of eye movement. To address this issue, a new saliency prediction method via deep eye movement learning (EML) is proposed in this paper. In contrast to previous methods that use human fixations as the only ground truth, our method uses the optical flow of fixations between successive frames as an extra ground truth for eye movement learning. Experimental results on the DHF1K, Hollywood2, and UCF-sports datasets show that the proposed EML model achieves promising results across a wide range of metrics.
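The extra optical-flow ground truth described in the abstract amounts to a second supervision term alongside the usual saliency loss. A minimal sketch of such a combined objective is below; the function names, the KL-divergence saliency term, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kl_div(pred, gt, eps=1e-8):
    # KL divergence between two saliency maps (a common saliency loss)
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.sum(q * np.log(q / (p + eps) + eps)))

def eml_loss(pred_sal, gt_sal, pred_flow, gt_flow, lam=1.0):
    # Saliency term on fixation maps, plus an eye-movement term that
    # supervises the predicted flow of fixations between successive
    # frames with the optical-flow ground truth (hypothetical L2 form).
    sal_term = kl_div(pred_sal, gt_sal)
    flow_term = float(np.mean((pred_flow - gt_flow) ** 2))
    return sal_term + lam * flow_term
```

With matching predictions the loss is near zero; a flow mismatch increases it, which is the signal that drives the eye-movement learning.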
Title: Improving Hyperspectral Super-Resolution via Heterogeneous Knowledge Distillation
Authors: Ziqian Liu, Qing Ma, Junjun Jiang, Xianming Liu
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490610

Abstract: Hyperspectral images (HSI) contain rich spectral information, but their spatial resolution is often limited by the imaging system. Super-resolution (SR) reconstruction has become a hot topic, aiming to increase spatial resolution without extra hardware cost. Fusion-based hyperspectral image super-resolution (FHSR) methods use supplementary high-resolution multispectral images (HR-MSI) to recover spatial details, but well co-registered HR-MSI are hard to collect. Recently, single hyperspectral image super-resolution (SHSR) methods based on deep learning have made great progress. However, the lack of HR-MSI input makes it difficult for these SHSR methods to exploit spatial information. To combine the advantages of FHSR and SHSR methods, in this paper we propose a new pipeline that treats HR-MSI as privileged information and improves our SHSR model with knowledge distillation. That is, our model is trained on paired MSI-HSI data but needs only LR-HSI as input during inference. Specifically, we combine SHSR and spectral super-resolution (SSR) and design a novel architecture, the Distillation-Oriented Dual-branch Net (DODN), to make the SHSR model fully employ knowledge transferred from the SSR model. Since mainstream SSR models are 2D CNNs, and a fully 2D CNN causes spectral disorder in the SHSR task, a new mixed 2D/3D block, called the Distillation-Oriented Dual-branch Block (DODB), is proposed, in which the 3D branch extracts spectral-spatial correlation while the 2D branch accepts information from the SSR model through knowledge distillation. The main idea is to distill knowledge of spatial information from HR-MSI into the SHSR model without changing its network architecture. Extensive experiments on two benchmark datasets, CAVE and NTIRE2020, demonstrate that the proposed DODN outperforms state-of-the-art SHSR methods in terms of both quantitative and qualitative analysis.
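The distillation idea above, where the 2D branch learns from the SSR teacher while only LR-HSI is needed at inference, can be sketched as a feature-matching term added to the reconstruction loss. The L1/L2 loss forms, the layer choice, and the weight `alpha` are assumptions; the DODN paper's exact losses are not reproduced here.

```python
import numpy as np

def distill_loss(student_feat, teacher_feat):
    # L2 feature matching: the 2D branch (student) mimics intermediate
    # features of the pretrained SSR model (teacher). Which layers are
    # matched in DODN is not specified here; this is a generic sketch.
    return float(np.mean((student_feat - teacher_feat) ** 2))

def total_loss(sr_pred, sr_gt, student_feat, teacher_feat, alpha=0.1):
    # Reconstruction loss on the super-resolved HSI plus the
    # distillation term; alpha is a hypothetical balancing weight.
    rec = float(np.mean(np.abs(sr_pred - sr_gt)))  # L1 reconstruction
    return rec + alpha * distill_loss(student_feat, teacher_feat)
```

The teacher term is only evaluated during training; at inference the student branch runs alone, which is what lets the model drop the HR-MSI input.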
Title: Motion = Video - Content: Towards Unsupervised Learning of Motion Representation from Videos
Authors: Hehe Fan, Mohan S. Kankanhalli
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490582

Abstract: Motion, according to its definition in physics, is the change in position with respect to time, regardless of the specific moving object and background. In this paper, we aim to learn appearance-independent motion representation in an unsupervised manner. The main idea is to separate motion from videos while leaving objects and background as content. Specifically, we design an encoder-decoder model which consists of a content encoder, a motion encoder, and a video generator. To train the model, we leverage a one-step cycle-consistency in reconstruction within the same video and a two-step cycle-consistency in generation across different videos as self-supervised signals, and use adversarial training to remove the content representation from the motion representation. We demonstrate that the proposed framework can be used for conditional video generation and fine-grained action recognition.
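The one-step cycle described above (encode a video into content and motion, then regenerate it and penalize the reconstruction error) can be illustrated with deliberately simple stand-in encoders. The real model uses learned networks and adds a two-step cross-video cycle plus adversarial training; the toy encoders below are chosen to be exactly invertible so the cycle closes.

```python
import numpy as np

def content_encoder(video):
    # Appearance from the first frame stands in for the content code.
    return video[0]

def motion_encoder(video):
    # Frame-to-frame residuals stand in for an appearance-free motion code.
    return np.diff(video, axis=0)

def generator(content, motion):
    # Regenerate the video by rolling the motion steps forward from content.
    frames = [content]
    for step in motion:
        frames.append(frames[-1] + step)
    return np.stack(frames)

video = np.random.rand(5, 8, 8)            # 5 frames of 8x8 "pixels"
recon = generator(content_encoder(video), motion_encoder(video))
recon_loss = float(np.mean((recon - video) ** 2))  # one-step cycle loss
```

With learned encoders the reconstruction is only approximate, and it is the adversarial term, not the cycle alone, that forces the motion code to drop appearance.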
Title: Hierarchical Graph Representation Learning with Local Capsule Pooling
Authors: Zidong Su, Zehui Hu, Yangding Li
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3495645

Abstract: Hierarchical graph pooling has shown great potential for capturing high-quality graph representations through the node cluster selection mechanism. However, current node cluster selection methods suffer from inadequate clustering, and their scoring methods rely too heavily on the node representation, resulting in excessive loss of graph structure information during pooling. In this paper, a local capsule pooling network (LCPN) is proposed to alleviate these issues. Specifically, (i) a local capsule pooling (LCP) is proposed to alleviate the issue of insufficient clustering; (ii) a task-aware readout (TAR) mechanism is proposed to obtain a more expressive graph representation; and (iii) a pooling information loss (PIL) term is proposed to further alleviate the information loss caused by pooling during training. Experimental results on the graph classification, graph reconstruction, and pooled-graph adjacency visualization tasks show the superior performance of the proposed LCPN and demonstrate its effectiveness and efficiency.
Title: A Comparison Study: The Impact of Age and Gender Distribution on Age Estimation
Authors: Chang Kong, Qiuming Luo, Guoliang Chen
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490576

Abstract: Age estimation from a single facial image is a challenging and attractive research area in the computer vision community. Several facial datasets annotated with age and gender attributes are available in the literature. However, one major drawback is that these datasets do not consider the label distribution during data collection. Therefore, models trained on these datasets are inevitably biased against the ages with the fewest images. In this work, we analyze the age and gender distributions of previous datasets and publish a Uniform Age and Gender Dataset (UAGD), which has an almost equal number of female and male images for each age. In addition, we investigate the impact of age and gender distribution on age estimation by comparing DEX CNN models trained on several different datasets. Our experiments show that the UAGD dataset performs well on the age estimation task and is also suitable as an evaluation benchmark.
Title: Hierarchical Composition Learning for Composed Query Image Retrieval
Authors: Yahui Xu, Yi Bin, Guoqing Wang, Yang Yang
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490601

Abstract: Composed query image retrieval is a growing research topic. The objective is to retrieve images that not only generally resemble the reference image but also differ according to the desired modification text. Existing methods mainly explore composing the modification text with the global feature or local entity descriptors of the reference image. However, they ignore the fact that modification text is diverse and arbitrary: it relates not only to abstract global features or concrete local entity transformations, but also, often, to fine-grained structured visual adjustments. Thus, emphasizing only the global or local entity visual features is insufficient for query composition. In this work, we tackle this task with hierarchical composition learning. Specifically, the proposed method first encodes images into three representations: global-level, entity-level, and structure-level. The structure-level representation is richly explicable; it explicitly describes entities, as well as attributes and relationships in the image, with a directed graph. Based on these, we naturally perform hierarchical composition learning by fusing the modification text and reference image in a global-entity-structure manner. This transforms the visual feature conditioned on the modification text toward the target image in a coarse-to-fine manner, taking advantage of the complementary information among the three levels. Moreover, we introduce a hybrid-space matching scheme to exploit global, entity, and structure alignments, which achieves high performance and good interpretability.
Title: Joint Label Refinement and Contrastive Learning with Hybrid Memory for Unsupervised Marine Object Re-Identification
Authors: Xiaorui Han, Zhiqi Chen, Ruixue Wang, Pengfei Zhao
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3497695

Abstract: Unsupervised object re-identification is a challenging task due to the absence of labels for the dataset. Many unsupervised object re-identification approaches combine clustering-based pseudo-label prediction with feature fine-tuning, and these methods have achieved great success in unsupervised object Re-ID. However, the inevitable label noise caused by the clustering procedure has been ignored. Such noisy pseudo labels substantially hinder the model's capability to further improve feature representations. To this end, we propose a novel joint label refinement and contrastive learning framework with a hybrid memory to alleviate this problem. First, to reduce the noise of clustering pseudo labels, we propose a novel noise refinement strategy that refines pseudo labels at the clustering phase and promotes clustering quality by boosting label purity. In addition, we propose a hybrid memory bank that dynamically generates prototype-level and un-clustered instance-level supervisory signals for learning feature representations. With all prototype-level and un-clustered instance-level supervision, the re-identification model is trained progressively. Our unsupervised object Re-ID framework significantly reduces the influence of noisy labels and refines the learned features, and it consistently achieves state-of-the-art performance on benchmark datasets.
Title: Efficient Proposal Generation with U-shaped Network for Temporal Sentence Grounding
Authors: Ludan Ruan, Qin Jin
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490606

Abstract: Temporal sentence grounding aims to localize the temporal region in a given video that is relevant to a query sentence. It is a challenging task due to the semantic gap between the two modalities and the diversity of event durations. Proposal generation plays an important role in previous mainstream methods. However, previous proposal generation methods apply the same feature extraction regardless of event duration. In this paper, we propose a novel temporal sentence grounding model with a U-shaped network for efficient proposal generation (UN-TSG), which utilizes a U-shaped structure to encode proposals of different lengths hierarchically. Experiments on two benchmark datasets demonstrate that, with a more efficient proposal generation method, our model achieves state-of-the-art grounding performance at higher speed and lower computation cost.
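The hierarchical encoding idea above can be sketched with a temporal pooling pyramid: proposals whose length is 2^k frames read their feature from the k-th pyramid level, so short and long candidates are encoded at matching temporal scales. The factor-2 mean pooling, the power-of-two lengths, and all names are illustrative assumptions, not the exact UN-TSG design.

```python
import numpy as np

def build_pyramid(frame_feats, levels):
    # Halve the temporal length at each level by mean-pooling frame pairs.
    pyramid = [frame_feats]
    for _ in range(levels - 1):
        f = pyramid[-1]
        f = f[: len(f) // 2 * 2].reshape(-1, 2, f.shape[-1]).mean(axis=1)
        pyramid.append(f)
    return pyramid

def proposal_feature(pyramid, start, length):
    # Pick the pyramid level matching the proposal length (a power of 2).
    level = int(np.log2(length))
    return pyramid[level][start // (2 ** level)]

frame_feats = np.random.rand(16, 32)   # 16 frames, 32-dim features
pyramid = build_pyramid(frame_feats, levels=4)
short = proposal_feature(pyramid, start=4, length=1)
long_ = proposal_feature(pyramid, start=8, length=8)
```

The efficiency claim in the abstract follows from sharing the pooled features: each level is computed once and reused by every proposal of that length.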
Title: Visible-Infrared Cross-Modal Person Re-identification based on Positive Feedback
Authors: Lingyi Lu, Xin Xu
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3497693

Abstract: Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality person retrieval task that has attracted increasing attention. Compared with traditional person ReID, which focuses on person images in a single RGB modality, VI-ReID suffers from an additional cross-modality discrepancy due to the different imaging processes of spectrum cameras. Several effective attempts have been made in recent years to narrow the cross-modality gap and improve re-identification performance, but the key problem of optimizing the search results with relevance feedback has rarely been studied. In this paper, we present a cross-modality visible-infrared person re-identification method combined with human positive feedback, which allows the user to quickly optimize search performance by selecting strong positive samples during the re-identification process. We validated the effectiveness of our method on a public dataset, SYSU-MM01, and the results confirm that the proposed method achieves superior performance compared with current state-of-the-art methods.
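One round of the positive-feedback loop described above can be sketched as: the user marks strong positive results, the query representation moves toward them, and the gallery is re-ranked. The Rocchio-style update and the weight `beta` are assumptions for illustration; the paper's exact update rule is not reproduced here.

```python
import numpy as np

def rerank_with_feedback(query, gallery, positive_ids, beta=0.5):
    # Pull the query embedding toward the mean of user-selected positives,
    # then re-rank the gallery by distance to the updated query.
    pos_mean = gallery[positive_ids].mean(axis=0)
    new_query = (1 - beta) * query + beta * pos_mean
    dists = np.linalg.norm(gallery - new_query, axis=1)
    return np.argsort(dists)   # indices, best match first
```

Because the update only needs embeddings already computed for retrieval, each feedback round is cheap, which is what makes the interactive loop practical.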
Title: BRUSH: Label Reconstructing and Similarity Preserving Hashing for Cross-modal Retrieval
Authors: P. Zhang, Pengfei Zhao, Xin Luo, Xin-Shun Xu
Venue: ACM Multimedia Asia, December 2021. DOI: https://doi.org/10.1145/3469877.3490589

Abstract: Hashing has recently attracted much attention in the information retrieval community due to its high efficiency in terms of storage and query processing. For cross-modal retrieval tasks, existing supervised hashing models either treat the semantic labels as the ground truth and formalize the problem as a classification task, or further add a similarity matrix as supervisory signals to pursue high-quality hash codes for coupled data. However, these approaches cannot ensure that the learned binary codes preserve the semantics and similarity relationships contained in the supervised information. Moreover, the resulting sophisticated discrete optimization problems are usually addressed by continuous relaxation or bit-wise solvers, which lead to large quantization error and inefficient computation. To relieve these issues, in this paper we present a two-step supervised discrete hashing method, laBel ReconstrUcting and Similarity preserving Hashing (BRUSH). We formulate it as an asymmetric pairwise similarity-preserving problem using two latent semantic embeddings derived from decomposing and reconstructing the semantics, respectively. Meanwhile, the unified binary codes are jointly generated from both embeddings with an affinity guarantee, so that the discriminative power of the obtained hash codes is significantly enhanced while the semantics are well preserved. In addition, by adopting a two-step hash learning strategy, our method simplifies the learning of the hashing functions and binary codes, improving flexibility and efficiency. The resulting discrete optimization problem is solved by the proposed alternating algorithm without any relaxation. Extensive experiments on benchmarks demonstrate that BRUSH outperforms state-of-the-art methods in terms of both efficiency and effectiveness.
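The asymmetric pairwise similarity-preserving objective mentioned above can be sketched as follows: inner products between binary codes B and a real-valued embedding U should match a scaled label-similarity matrix S. This shows only the generic objective shape; BRUSH's label-reconstruction terms and its alternating discrete solver are not reproduced, and all names here are illustrative.

```python
import numpy as np

def similarity_matrix(labels):
    # S[i, j] = 1 if items i and j share a label, else -1
    shared = (labels @ labels.T) > 0
    return np.where(shared, 1.0, -1.0)

def asym_preserving_loss(B, U, S):
    # Asymmetric form: one side binary (B in {-1, +1}), one side real (U).
    r = B.shape[1]                       # code length
    return float(np.mean((B @ U.T / r - S) ** 2))

labels = np.array([[1, 0], [1, 0], [0, 1]])   # one-hot class labels
S = similarity_matrix(labels)
b0 = np.sign(np.random.randn(8))
b0[b0 == 0] = 1
B = np.stack([b0, b0, -b0])                   # same class -> same code
U = B.copy()                                  # perfectly aligned embedding
```

Keeping one side real is what makes each alternating subproblem tractable without relaxing the binary constraint on B.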