BRUSH: Label Reconstructing and Similarity Preserving Hashing for Cross-modal Retrieval

P. Zhang, Pengfei Zhao, Xin Luo, Xin-Shun Xu
{"title":"BRUSH: Label Reconstructing and Similarity Preserving Hashing for Cross-modal Retrieval","authors":"P. Zhang, Pengfei Zhao, Xin Luo, Xin-Shun Xu","doi":"10.1145/3469877.3490589","DOIUrl":null,"url":null,"abstract":"The hashing technique has recently sparked much attention in information retrieval community due to its high efficiency in terms of storage and query processing. For cross-modal retrieval tasks, existing supervised hashing models either treat the semantic labels as the ground truth and formalize the problem to a classification task, or further add a similarity matrix as supervisory signals to pursue hash codes of high quality to represent coupled data. However, these approaches are incapable of ensuring that the learnt binary codes preserve well the semantics and similarity relationships contained in the supervised information. Moreover, for sophisticated discrete optimization problems, it is always addressed by continuous relaxation or bit-wise solver, which leads to a large quantization error and inefficient computation. To relieve these issues, in this paper, we present a two-step supervised discrete hashing method, i.e., laBel ReconstrUcting and Similarity preserving Hashing (BRUSH). We formulate it as an asymmetric pairwise similarity-preserving problem by using two latent semantic embeddings deducted from decomposing semantics and reconstructing semantics, respectively. Meanwhile, the unified binary codes are jointly generated based on both embeddings with the affinity guarantee, such that the discriminative property of the obtained hash codes can be significantly enhanced alongside preserving semantics well. In addition, by adopting two-step hash learning strategy, our method simplifies the procedure of the hashing function and binary codes learning, thus improving the flexibility and efficiency. The resulting discrete optimization problem is also elegantly solved by the proposed alternating algorithm without any relaxation. Extensive experiments on benchmarks demonstrate that BRUSH outperforms the state-of-the-art methods, in terms of efficiency and effectiveness.","PeriodicalId":210974,"journal":{"name":"ACM Multimedia Asia","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Multimedia Asia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3469877.3490589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The hashing technique has recently attracted much attention in the information retrieval community due to its high efficiency in terms of storage and query processing. For cross-modal retrieval tasks, existing supervised hashing models either treat the semantic labels as the ground truth and formalize the problem as a classification task, or additionally adopt a similarity matrix as the supervisory signal to pursue high-quality hash codes for representing coupled data. However, these approaches cannot ensure that the learnt binary codes preserve well the semantics and similarity relationships contained in the supervised information. Moreover, the resulting discrete optimization problems are typically addressed by continuous relaxation or a bit-wise solver, either of which leads to large quantization error and inefficient computation. To alleviate these issues, we present a two-step supervised discrete hashing method, laBel ReconstrUcting and Similarity preserving Hashing (BRUSH). We formulate hash learning as an asymmetric pairwise similarity-preserving problem over two latent semantic embeddings, derived from decomposing and reconstructing the semantic labels, respectively. Unified binary codes are then generated jointly from both embeddings with an affinity guarantee, so that the discriminative power of the obtained hash codes is significantly enhanced while the semantics are well preserved. In addition, the two-step strategy decouples binary-code learning from hash-function learning, which simplifies both procedures and improves flexibility and efficiency. The resulting discrete optimization problem is solved exactly by the proposed alternating algorithm without any relaxation. Extensive experiments on benchmarks demonstrate that BRUSH outperforms state-of-the-art methods in terms of both efficiency and effectiveness.
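To make the two-step paradigm concrete, below is a minimal NumPy sketch of a label-factorization-based two-step hashing pipeline. It is an illustration under assumptions, not BRUSH itself: the alternating least-squares label factorization, the fused embedding used to binarize the codes, and the ridge-regression hash functions are generic stand-ins for the paper's decomposition/reconstruction embeddings, asymmetric similarity-preserving objective, and relaxation-free discrete solver. All names (X_img, X_txt, L, U, V, B, fit_hash) are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: n coupled image/text pairs, c classes, r-bit codes.
rng = np.random.default_rng(0)
n, d_img, d_txt, c, r = 500, 128, 64, 10, 32
X_img = rng.standard_normal((n, d_img))
X_txt = rng.standard_normal((n, d_txt))
L = (rng.random((n, c)) > 0.8).astype(float)        # multi-label matrix

# ---- Step 1: learn unified binary codes from the labels only ----
# Factorize L ~ U @ V (a "decomposition" embedding U and a projection V
# whose product "reconstructs" the labels), then binarize a fused
# embedding. This is a stand-in for BRUSH's actual objective and solver.
U = rng.standard_normal((n, r))
for _ in range(20):                                  # alternating updates
    V = np.linalg.lstsq(U, L, rcond=None)[0]         # reconstruct labels
    U = np.linalg.lstsq(V.T, L.T, rcond=None)[0].T   # decompose labels
B = np.sign(U + L @ V.T)                             # codes in {-1, +1}
B[B == 0] = 1

# ---- Step 2: fit modality-specific hash functions to the fixed codes ----
# Linear ridge regression per modality; h(x) = sign(x @ W).
def fit_hash(X, B, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ B)

W_img, W_txt = fit_hash(X_img, B), fit_hash(X_txt, B)

# Cross-modal query: rank database codes by Hamming distance.
q = np.sign(X_img[:1] @ W_img)                       # image query code
db = np.sign(X_txt @ W_txt)                          # text database codes
hamming = (r - q @ db.T) / 2                         # {-1,+1} inner product -> distance
print(np.argsort(hamming.ravel())[:5])               # top-5 text matches
```

The sketch mirrors the property the abstract emphasizes: step one fixes the binary codes using only the supervision, so step two reduces to independent per-modality regressions, which is what makes the two-step strategy flexible and efficient.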