Optimization of Rank Losses for Image Retrieval

Elias Ramzi, Nicolas Audebert, Clément Rambour, André Araujo, Xavier Bitot, Nicolas Thome
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 6, pp. 4317-4329
DOI: 10.1109/TPAMI.2025.3543846
Published: 2025-02-20
URL: https://ieeexplore.ieee.org/document/10896862/

Abstract

In image retrieval, standard evaluation metrics rely on score ranking, e.g., average precision (AP), recall at k (R@k), and normalized discounted cumulative gain (NDCG). In this work, we introduce a general framework for robust and decomposable rank loss optimization. It addresses two major challenges in end-to-end training of deep neural networks with rank losses: non-differentiability and non-decomposability. First, we propose SupRank, a general surrogate for the ranking operator that is amenable to stochastic gradient descent; it provides an upper bound on rank losses and ensures robust training. Second, we use a simple yet effective loss function to reduce the decomposability gap between the averaged batch approximation of ranking losses and their values on the whole training set. We apply our framework to two standard metrics for image retrieval: AP and R@k. Additionally, we apply our framework to hierarchical image retrieval. We introduce an extension of AP, the hierarchical average precision $\mathcal{H}\text{-AP}$, and optimize it as well as the NDCG. Finally, we create the first hierarchical landmark retrieval dataset. We use a semi-automatic pipeline to create hierarchical labels, extending the large-scale Google Landmarks v2 dataset.
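To illustrate the smooth-ranking idea that surrogates such as SupRank build on, the following sketch computes a differentiable AP surrogate: the Heaviside step inside the rank operator is replaced by a sigmoid with temperature `tau`. This is a generic sigmoid relaxation for illustration only, not the paper's exact upper-bound surrogate; the function name and `tau` parameter are our own.

```python
import numpy as np

def smooth_rank_ap(scores, labels, tau=0.01):
    """Differentiable AP surrogate (illustrative sketch).

    The rank of item i, 1 + sum_j H(s_j - s_i), is smoothed by replacing
    the step function H with a sigmoid of temperature tau, making the
    whole expression amenable to gradient descent.
    """
    scores = np.asarray(scores, dtype=float)
    pos = np.asarray(labels, dtype=bool)
    # Pairwise score differences d[i, j] = s_j - s_i.
    d = scores[None, :] - scores[:, None]
    sig = 1.0 / (1.0 + np.exp(-d / tau))        # smoothed step H(s_j - s_i)
    np.fill_diagonal(sig, 0.0)                  # an item does not outrank itself
    rank_all = 1.0 + sig.sum(axis=1)            # smooth rank among all items
    rank_pos = 1.0 + sig[:, pos].sum(axis=1)    # smooth rank among positives only
    # AP = mean over positives of (rank among positives / rank among all).
    return (rank_pos[pos] / rank_all[pos]).mean()
```

With a small `tau` the surrogate approaches the exact AP: a perfect ranking of two positives above one negative yields a value close to 1, while a positive ranked last among three items yields roughly 1/3.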