Cross-Attention Network for Cross-View Image Geo-Localization

Jingjing Wang, Xi Li
2023 6th International Symposium on Autonomous Systems (ISAS), published 2023-06-23. DOI: 10.1109/ISAS59543.2023.10164457

Abstract

The task of cross-view geo-localization is to retrieve the corresponding image from a dataset of Global Positioning System (GPS) labeled aerial-view images, given a ground-view query image with an unknown location. This task is challenging due to the significant differences in viewpoint and appearance between the two types of images. To overcome these challenges, we have developed a novel attention-based method that leverages a key localization cue. We propose the cross-attention-based Swap Encoder Module (SEM), which effectively aligns features by directing the network's focus towards relevant information. Additionally, we employ an Image Proposal Network (IPN) to ensure that corresponding aerial and ground-view images are supplied consistently during both the training and validation phases. Experimental results show that our proposed network significantly outperforms existing methods on the benchmark CVUSA dataset, improving top-1 recall from 61.4% to 71.45% and top-10 recall from 90.49% to 92.30%.
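The two computational ideas in the abstract — cross-attention that lets one view attend to features of the other, and top-k recall as the retrieval metric — can be illustrated with a minimal sketch. Everything below is a hedged illustration, not the paper's implementation: the single-head design, the projection dimensions, and the assumption that query index i matches aerial index i are all simplifying assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k=64, seed=0):
    """Single-head cross-attention sketch: queries come from one view's
    features (n_q, d) and keys/values from the other view's (n_c, d),
    so each branch attends to the complementary view."""
    rng = np.random.default_rng(seed)  # random projections stand in for learned weights
    d = query_feats.shape[1]
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    w_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    q = query_feats @ w_q
    k = context_feats @ w_k
    v = context_feats @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (n_q, n_c) attention over the other view
    return attn @ v                         # (n_q, d_k) context-aligned features

def recall_at_k(ground_emb, aerial_emb, k):
    """Top-k recall: fraction of ground queries whose true aerial match
    (assumed here to share the same row index) lies among the k most
    cosine-similar aerial embeddings."""
    g = ground_emb / np.linalg.norm(ground_emb, axis=1, keepdims=True)
    a = aerial_emb / np.linalg.norm(aerial_emb, axis=1, keepdims=True)
    sims = g @ a.T
    topk = np.argsort(-sims, axis=1)[:, :k]
    hits = (topk == np.arange(len(g))[:, None]).any(axis=1)
    return hits.mean()
```

In this reading, a "swap" encoder amounts to running `cross_attention` twice with the argument order exchanged — ground features attending to aerial features and vice versa — before comparing the resulting embeddings with `recall_at_k`.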