Sparse-Guided Partial Dense for Cross-Modal Remote Sensing Image–Text Retrieval

IEEE Transactions on Geoscience and Remote Sensing (IF 8.6, JCR Q1, Engineering, Electrical & Electronic) · Vol. 63, pp. 1–13 · Published 2025-03-28 · DOI: 10.1109/TGRS.2025.3555956

Zuopeng Zhao, Xiaoran Miao, Lei Liu, Xinzheng Xu, Ying Liu, Jianfeng Hu, Bingbing Min, Yumeng Gao, Kanyaphakphachsorn Pharksuwan
{"title":"Sparse-Guided Partial Dense for Cross-Modal Remote Sensing Image–Text Retrieval","authors":"Zuopeng Zhao;Xiaoran Miao;Lei Liu;Xinzheng Xu;Ying Liu;Jianfeng Hu;Bingbing Min;Yumeng Gao;Kanyaphakphachsorn Pharksuwan","doi":"10.1109/TGRS.2025.3555956","DOIUrl":null,"url":null,"abstract":"Cross-modal remote sensing image-text retrieval (CMRSITR) involves retrieving relevant samples in one modality based on a query from another modality. Previous dense retrieval methods utilizing multivector dense representations have significantly enhanced retrieval performance. Meanwhile, recent advances in sparse retrieval have demonstrated that sparse representations offer comparable performance with enhanced interpretability and faster retrieval speeds. However, effectively integrating the strengths of these two paradigms to enable efficient and accurate retrieval in large-scale remote sensing (RS) image-text datasets remains an open challenge. In this study, we propose sparse-guided partial dense (SGPD) cross-modal retrieval, a novel approach that efficiently transforms dense vectors from pretrained dense retrieval models into sparse representations and leverages the overlap between sparse retrieval results and dense vector clusters to achieve high-precision and fast retrieval. By probabilistically selecting a limited number of dense clusters containing top sparse results, SGPD ensures retrieval efficiency while minimizing both memory and time costs. Designed as a plug-and-play solution, SGPD can be seamlessly integrated into existing RS image-text retrieval (RSITR) models without requiring modifications to their architectures. 
Extensive experiments on RS image-text datasets of varying scales demonstrate that SGPD achieves retrieval accuracy comparable to dense retrieval methods while significantly reducing training time and memory consumption.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-13"},"PeriodicalIF":8.6000,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10945442/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Cross-modal remote sensing image-text retrieval (CMRSITR) involves retrieving relevant samples in one modality based on a query from another modality. Previous dense retrieval methods utilizing multivector dense representations have significantly enhanced retrieval performance. Meanwhile, recent advances in sparse retrieval have demonstrated that sparse representations offer comparable performance with enhanced interpretability and faster retrieval speeds. However, effectively integrating the strengths of these two paradigms to enable efficient and accurate retrieval in large-scale remote sensing (RS) image-text datasets remains an open challenge. In this study, we propose sparse-guided partial dense (SGPD) cross-modal retrieval, a novel approach that efficiently transforms dense vectors from pretrained dense retrieval models into sparse representations and leverages the overlap between sparse retrieval results and dense vector clusters to achieve high-precision and fast retrieval. By probabilistically selecting a limited number of dense clusters containing top sparse results, SGPD ensures retrieval efficiency while minimizing both memory and time costs. Designed as a plug-and-play solution, SGPD can be seamlessly integrated into existing RS image-text retrieval (RSITR) models without requiring modifications to their architectures. Extensive experiments on RS image-text datasets of varying scales demonstrate that SGPD achieves retrieval accuracy comparable to dense retrieval methods while significantly reducing training time and memory consumption.
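The abstract describes a two-stage pipeline: a fast sparse pass nominates candidates, then exact dense scoring runs only inside the dense-vector clusters that contain the top sparse hits. The paper's details are not given here, so the following NumPy sketch is purely illustrative: the function name `sgpd_retrieve`, the dot-product scoring, and the count-based cluster selection (the paper describes a probabilistic selection) are all assumptions, not the authors' implementation.

```python
import numpy as np

def sgpd_retrieve(query_dense, query_sparse, doc_dense, doc_sparse,
                  cluster_ids, n_clusters=4, top_sparse=20):
    """Toy sketch of sparse-guided partial dense retrieval:
    sparse scores pick candidate clusters; dense scoring is
    restricted to documents inside those clusters."""
    # Stage 1: cheap sparse retrieval (inner product over sparse vectors).
    sparse_scores = doc_sparse @ query_sparse
    top_ids = np.argsort(-sparse_scores)[:top_sparse]

    # Stage 2: keep only the dense clusters that contain top sparse hits,
    # ranking clusters by how many top results fall in each (a simple
    # stand-in for the paper's probabilistic selection).
    hit_clusters, counts = np.unique(cluster_ids[top_ids], return_counts=True)
    chosen = hit_clusters[np.argsort(-counts)[:n_clusters]]

    # Stage 3: exact dense scoring over the chosen clusters only.
    cand = np.flatnonzero(np.isin(cluster_ids, chosen))
    dense_scores = doc_dense[cand] @ query_dense
    order = np.argsort(-dense_scores)
    return cand[order], dense_scores[order]
```

Because stage 3 touches only a few clusters rather than the full corpus, the dense cost scales with the candidate pool instead of the dataset size, which is where the memory and time savings claimed in the abstract would come from.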
Source Journal

IEEE Transactions on Geoscience and Remote Sensing
CiteScore: 11.50
Self-citation rate: 28.00%
Articles published: 1912
Review time: 4.0 months

Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space, and the processing, interpretation, and dissemination of this information.