{"title":"CSPFormer: A cross-spatial pyramid transformer for visual place recognition","authors":"Zhenyu Li , Pengjie Xu","doi":"10.1016/j.neucom.2024.127472","DOIUrl":null,"url":null,"abstract":"<div><p>Recently, the Vision Transformer (ViT), which applied the Transformer structure to various visual detection tasks, has outperformed convolutional neural networks (CNNs). Nonetheless, due to the lack of scale representation ability of the Transformer, how to extract the local features of the scene to effectively form a global descriptor is still a challenging problem. In the paper, we propose a Cross-Spatial Pyramid Transformer (CSPFormer) to learn the discriminative global descriptors from multi-scale visual features for efficient visual place recognition. Specifically, we first develop a pyramid CNN module that can extract multi-scale visual feature representations. Then, the extracted feature representations of multi-scales are input to multiple connected spatial pyramid Transformer modules that adaptively learn the spatial relationship of the different scale descriptors, where the multiple self-attention is applied to learn a global descriptor from discriminative local descriptors. CNN pyramid features and Transformer multi-scale features are mutually weighted to perform cross-spatial feature representation. The multiple self-attention enhances the long-term dependencies of multi-scale visual descriptors and reduces the computational cost. To obtain the final place-matching result accurately, the cosine function is used to calculate the spatial similarity between the two scenes. Experimental results on public place datasets show that the proposed method achieves state-of-the-art on large-scale visual place recognition tasks. Our model has achieved 94.7%, 92.8%, 91.3%, and 95.7% average recall based on the top 1% candidate scenario on KITTI, Nordland, VPRICE, and EuRoc datasets, respectively.</p></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"580 ","pages":"Article 127472"},"PeriodicalIF":5.5000,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224002431","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Recently, the Vision Transformer (ViT), which applies the Transformer architecture to various visual detection tasks, has outperformed convolutional neural networks (CNNs). Nonetheless, because the Transformer lacks multi-scale representation ability, how to extract the local features of a scene and effectively aggregate them into a global descriptor remains a challenging problem. In this paper, we propose a Cross-Spatial Pyramid Transformer (CSPFormer) that learns discriminative global descriptors from multi-scale visual features for efficient visual place recognition. Specifically, we first develop a pyramid CNN module that extracts multi-scale visual feature representations. The extracted multi-scale representations are then fed into multiple connected spatial pyramid Transformer modules that adaptively learn the spatial relationships among descriptors at different scales, where multiple self-attention is applied to learn a global descriptor from discriminative local descriptors. The CNN pyramid features and the Transformer multi-scale features are mutually weighted to perform cross-spatial feature representation. The multiple self-attention strengthens the long-term dependencies among multi-scale visual descriptors while reducing the computational cost. To obtain the final place-matching result accurately, the cosine function is used to compute the spatial similarity between two scenes. Experimental results on public place-recognition datasets show that the proposed method achieves state-of-the-art performance on large-scale visual place recognition tasks. Our model achieves 94.7%, 92.8%, 91.3%, and 95.7% average recall at the top 1% of candidates on the KITTI, Nordland, VPRICE, and EuRoC datasets, respectively.
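To make the retrieval step concrete, the following minimal sketch (not the authors' code) shows how cosine similarity between global descriptors can rank database places for a query and select the top 1% of candidates, as described in the abstract; the descriptor dimension, the random descriptors, and the function names are assumptions made purely for illustration.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query descriptor and a bank of
    database descriptors (one descriptor per row). Returns one score
    per database entry."""
    q = query / (np.linalg.norm(query) + 1e-12)
    db = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-12)
    return db @ q

# Toy example: 1000 reference places and one query scene, each represented
# by a 512-D global descriptor (dimension chosen only for illustration;
# in practice these would come from the place-recognition model).
rng = np.random.default_rng(0)
database_desc = rng.standard_normal((1000, 512)).astype(np.float32)
query_desc = rng.standard_normal(512).astype(np.float32)

scores = cosine_similarity(query_desc, database_desc)
top_k = max(1, len(database_desc) // 100)  # top 1% of candidates
candidates = np.argsort(-scores)[:top_k]
print("Top candidate indices:", candidates)
```

A query is counted as correctly recognized if any of these top-ranked candidates corresponds to the same place, which is how a "recall at top 1% of candidates" metric is typically computed.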
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.