Attention Multiscale Network for Semantic Segmentation of Multimodal Remote Sensing Images
Zhen Ye;Yuan Li;Zhen Li;Huan Liu;Yuxiang Zhang;Wei Li
IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-15
Published: 2025-02-18
DOI: 10.1109/TGRS.2025.3540848
Citations: 0
Abstract
Due to recent advancements in deep learning, techniques for urban structure extraction and semantic segmentation of multimodal remote sensing images have seen significant improvements. However, challenges arise from the variable color intensity and complex texture of urban structures in optical images, particularly for buildings and roads. Fortunately, light detection and ranging (LiDAR) images facilitate the development of an optimal multimodal fusion network that effectively leverages information from different modalities. In this article, we propose an attention multiscale network (AMSNet) that integrates optical and LiDAR remote sensing images for both binary semantic segmentation tasks focused on building extraction and multiclass semantic segmentation tasks. AMSNet introduces two feature fusion modules: spatial scale adaptive fusion (S2AF) and semantic guided fusion (SGF). S2AF fuses features from optical and LiDAR images within the same layer. It combines a spatial scale selection strategy with an adaptive weight learning strategy, enabling the network to adaptively extract and intentionally select multiscale features from multimodal data. SGF bridges the semantic gap between block features from different layers through a semantic feature guidance strategy while performing feature fusion. Furthermore, we introduce robust feature learning (RFL) to ensure the network's robustness to rotation and variation of objects, making it resilient to images captured from different viewpoints and sensors. RFL incorporates a point-to-point similarity learning strategy and a multiscale feature reuse strategy. Experimental results on publicly available datasets demonstrate that AMSNet outperforms other state-of-the-art models. Extensive ablation studies further confirm the significance of all key components of the proposed approach. The source code is available at https://github.com/B-LG-J/AMSNet.git.
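The abstract describes S2AF only at a high level (multiscale feature extraction plus adaptive per-modality weighting of same-layer optical and LiDAR features). For intuition, the following is a minimal PyTorch sketch of what such a fusion block could look like; the class name, dilation rates, and gating design are illustrative assumptions rather than the authors' actual implementation, which is available in the linked repository.

```python
# Hypothetical sketch of adaptive-weight multiscale fusion of optical and
# LiDAR features (inspired by the S2AF idea described in the abstract;
# NOT the authors' code -- see https://github.com/B-LG-J/AMSNet.git).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMultiscaleFusion(nn.Module):
    """Fuses same-layer optical and LiDAR feature maps.

    Multiscale context is gathered with parallel dilated convolutions, and
    per-modality weights are learned from pooled statistics so the network
    can emphasize whichever modality is more informative.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One dilated 3x3 branch per scale, shared across both modalities.
        self.scales = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # Small gating head: pooled statistics -> two modality weights.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2),
        )

    def _multiscale(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the dilated branches to obtain a multiscale representation.
        return sum(F.relu(conv(x)) for conv in self.scales)

    def forward(self, optical: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        opt = self._multiscale(optical)
        lid = self._multiscale(lidar)
        # Global statistics of both modalities drive the adaptive weights.
        stats = torch.cat([opt.mean(dim=(2, 3)), lid.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.gate(stats), dim=1)  # shape (B, 2)
        w_opt = w[:, 0].view(-1, 1, 1, 1)
        w_lid = w[:, 1].view(-1, 1, 1, 1)
        return w_opt * opt + w_lid * lid


if __name__ == "__main__":
    fuse = AdaptiveMultiscaleFusion(channels=64)
    optical = torch.randn(2, 64, 128, 128)  # e.g., optical-branch features
    lidar = torch.randn(2, 64, 128, 128)    # e.g., LiDAR/DSM-branch features
    print(fuse(optical, lidar).shape)       # torch.Size([2, 64, 128, 128])
```

The softmax gate here is only one plausible way to realize adaptive weight learning; the paper's SGF and RFL components (cross-layer semantic guidance, point-to-point similarity learning, multiscale feature reuse) are not covered by this sketch.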
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.