{"title":"SAG-Net: Spectrum Adaptive Gate Network for Learning Feature Representation From Multispectral Imagery","authors":"Yong Li;Bohan Li;Zhongqun Chen;Yixuan Li;Guohan Zhang","doi":"10.1109/LGRS.2025.3535635","DOIUrl":null,"url":null,"abstract":"Feature representation plays a key role in matching keypoints, especially for the multispectral images of large spectral difference. On such image pairs, existing methods typically use the two images only, but it is challenging to directly learn spectrum-invariant feature representation due to the complex nonlinear distortion between them. To address this issue, this letter proposes using intermediate-band images to facilitate learning spectrum-invariant feature representation. For this purpose, this work designs a spectrum adaptive gate network (SAG-Net) that consists of a SPectral gate (SPeG) module and a deep feature extractor. The SPeG module selectively activates the spectrum-invariant features according to input image content on-the-fly. It hence allows for training on the images of over two bands simultaneously with a single network without the need of an individual branch per band. To investigate the SPeG module, we also constructed a Landsat 9 Multi-Spectral Images (L9-MSI) dataset including 3167 scenes of aligned images across five spectral bands (visible, B5, B6, B7, and B10) from the Landsat 9 imagery. The experimental results demonstrate the SPeG module can learn common feature representation for varying-band images, and the intermediate B5, B6, and B7 images are useful for the SAG-Net to learn the common feature between visible and B10. On the L9-MSI dataset, the SAG-Net significantly improved the number of correct matches and the matching score (MS). 
Our dataset will be released at <uri>https://github.com/bohanlee/L9MSI-Dataset.git</uri>.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10856180/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Feature representation plays a key role in keypoint matching, especially for multispectral images with large spectral differences. On such image pairs, existing methods typically use only the two images themselves, but directly learning a spectrum-invariant feature representation is challenging because of the complex nonlinear distortion between them. To address this issue, this letter proposes using intermediate-band images to facilitate learning spectrum-invariant feature representations. For this purpose, this work designs a spectrum adaptive gate network (SAG-Net) that consists of a SPectral Gate (SPeG) module and a deep feature extractor. The SPeG module selectively activates spectrum-invariant features on the fly according to the input image content. It therefore allows training on images from more than two bands simultaneously with a single network, without needing an individual branch per band. To investigate the SPeG module, we also constructed a Landsat 9 Multi-Spectral Images (L9-MSI) dataset comprising 3167 scenes of aligned images across five spectral bands (visible, B5, B6, B7, and B10) from Landsat 9 imagery. The experimental results demonstrate that the SPeG module can learn a common feature representation for images of varying bands, and that the intermediate B5, B6, and B7 images help SAG-Net learn common features between the visible and B10 bands. On the L9-MSI dataset, SAG-Net significantly improved the number of correct matches and the matching score (MS). Our dataset will be released at https://github.com/bohanlee/L9MSI-Dataset.git.
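The abstract describes the SPeG module as a gate that selectively activates spectrum-invariant feature channels based on the input image's content, so one shared network can serve every band. The letter itself gives no implementation here, so the following is only a minimal NumPy sketch of that general content-adaptive channel-gating idea (the function name, gating MLP, and pooling choice are all assumptions for illustration, not the authors' SPeG design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spectral_gate(features, w1, b1, w2, b2):
    """Hypothetical content-adaptive channel gate (illustrative, not SPeG).

    features: (C, H, W) feature map from a shared extractor.
    w1, b1 / w2, b2: weights of a small two-layer gating network.
    Returns a gated feature map of the same shape.
    """
    # Global average pooling summarizes the input content per channel,
    # so the gate can depend on which spectral band is being processed.
    stats = features.mean(axis=(1, 2))            # (C,)
    hidden = np.maximum(w1 @ stats + b1, 0.0)     # ReLU bottleneck
    gate = sigmoid(w2 @ hidden + b2)              # (C,) values in (0, 1)
    # Channels irrelevant to the current input are softly switched off;
    # spectrum-invariant channels pass through largely unchanged.
    return features * gate[:, None, None]

# Toy usage: 8 channels on a 4x4 spatial grid, random gating weights.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1, b1 = rng.standard_normal((4, 8)), np.zeros(4)
w2, b2 = rng.standard_normal((8, 4)), np.zeros(8)
out = spectral_gate(feats, w1, b1, w2, b2)
print(out.shape)  # (8, 4, 4)
```

Because the gate is computed from the input itself rather than from a band label, the same weights can, in principle, be trained on images from all five bands at once, which is the property the letter attributes to the SPeG module.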