Attention Multiscale Network for Semantic Segmentation of Multimodal Remote Sensing Images
Pub Date: 2025-02-18 | DOI: 10.1109/TGRS.2025.3540848
Zhen Ye; Yuan Li; Zhen Li; Huan Liu; Yuxiang Zhang; Wei Li
IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-15
Owing to recent advances in deep learning, techniques for urban structure extraction and semantic segmentation of multimodal remote sensing images have improved significantly. Challenges remain, however, because of the variable color intensity and complex textures of urban structures, particularly buildings and roads, in optical images. Light detection and ranging (LiDAR) images supply complementary structural information, motivating the development of a multimodal fusion network that effectively leverages information from both modalities. In this article, we propose an attention multiscale network (AMSNet) that integrates optical and LiDAR remote sensing images for binary semantic segmentation focused on building extraction as well as for multiclass semantic segmentation. AMSNet introduces two feature fusion modules: spatial scale adaptive fusion (S2AF) and semantic guided fusion (SGF). S2AF fuses optical and LiDAR features within the same layer; it combines a spatial scale selection strategy with an adaptive weight learning strategy, enabling the network to adaptively extract and selectively retain multiscale features from the multimodal data. SGF bridges the semantic gap between block features from different layers through a semantic feature guidance strategy while performing feature fusion. Furthermore, we introduce robust feature learning (RFL) to make the network robust to object rotation and variation, and thus resilient to images captured from different viewpoints and sensors. RFL incorporates a point-to-point similarity learning strategy and a multiscale feature reuse strategy. Experimental results on publicly available datasets demonstrate that AMSNet outperforms other state-of-the-art models, and extensive ablation studies confirm the contribution of each key component. The source code is available at https://github.com/B-LG-J/AMSNet.git.
{"title":"Attention Multiscale Network for Semantic Segmentation of Multimodal Remote Sensing Images","authors":"Zhen Ye;Yuan Li;Zhen Li;Huan Liu;Yuxiang Zhang;Wei Li","doi":"10.1109/TGRS.2025.3540848","DOIUrl":"10.1109/TGRS.2025.3540848","url":null,"abstract":"Due to recent advancements in deep learning, techniques for urban structure extraction and semantic segmentation of multimodal remote sensing images have significant improvements. However, the challenge arises from the variable color intensity and complex texture of urban structures in optical images, particularly in buildings and roads. Fortunately, the light detection and ranging (LiDAR) images promote the task of developing an optimal multimodal fusion network that effectively leverages information from different modalities. In this article, we propose an attention multiscale network (AMSNet) for binary semantic segmentation tasks focused on building extraction, as well as multiclass semantic segmentation tasks, by integrating optical and LiDAR remote sensing images. AMSNet introduces two feature fusion modules—spatial scale adaptive fusion (S2AF) and semantic guided fusion (SGF). S2AF facilitates feature fusion between optical and LiDAR images within the same layer. This module contains a spatial scale selection strategy and an adaptive weight learning strategy, which enables the network to adaptively extract and intentionally select multiscale features from multimodal data. SGF addresses the semantic gap between different layered block features through semantic feature guidance strategy while achieving feature fusion. Furthermore, we introduce robust feature learning (RFL) to ensure the network robustness in rotation and variation in objects, making it resilient to images captured from different viewpoints and sensors. RFL incorporates point-to-point similarity learning strategy and multiscale feature reuse strategy. Experimental results on publicly available datasets demonstrate that AMSNet outperforms other state-of-the-art models. Extensive ablation studies further confirm the significance of all key components in the proposed approach. The source code of this method is available at <uri>https://github.com/B-LG-J/AMSNet.git</uri>.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-15"},"PeriodicalIF":7.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RSBEV-Mamba: 3D BEV Sequence Modeling for Multi-View Remote Sensing Scene Segmentation
Pub Date: 2025-02-18 | DOI: 10.1109/tgrs.2025.3543200
Baihong Lin, Zhengxia Zou, Zhenwei Shi
IEEE Transactions on Geoscience and Remote Sensing
{"title":"RSBEV-Mamba: 3D BEV Sequence Modeling for Multi-View Remote Sensing Scene Segmentation","authors":"Baihong Lin, Zhengxia Zou, Zhenwei Shi","doi":"10.1109/tgrs.2025.3543200","DOIUrl":"https://doi.org/10.1109/tgrs.2025.3543200","url":null,"abstract":"","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"175 1","pages":""},"PeriodicalIF":8.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tropical cyclone-affected ocean surface winds: A comparison of atmospheric reanalyses with altimeter, scatterometer, radiometer, and buoy measurements
Pub Date: 2025-02-18 | DOI: 10.1109/tgrs.2025.3543359
Jincan Liu, Jie Ding, Jichao Wang
IEEE Transactions on Geoscience and Remote Sensing
{"title":"Tropical cyclone-affected ocean surface winds: A comparison of atmospheric reanalyses with altimeter, scatterometer, radiometer, and buoy measurements","authors":"Jincan Liu, Jie Ding, Jichao Wang","doi":"10.1109/tgrs.2025.3543359","DOIUrl":"https://doi.org/10.1109/tgrs.2025.3543359","url":null,"abstract":"","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"88 1","pages":""},"PeriodicalIF":8.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive channel segmentation from 2D seismic images using deep learning and conditional random fields
Pub Date: 2025-02-18 | DOI: 10.1109/tgrs.2025.3543198
Hao Zhang, Peimin Zhu, Xianhai Song, Muhammad Ali, Ziang Li, Zhiying Liao, Dianyong Ruan, Tao Li
IEEE Transactions on Geoscience and Remote Sensing
{"title":"Interactive channel segmentation from 2D seismic images using deep learning and conditional random fields","authors":"Hao Zhang, Peimin Zhu, Xianhai Song, Muhammad Ali, Ziang Li, Zhiying Liao, Dianyong Ruan, Tao Li","doi":"10.1109/tgrs.2025.3543198","DOIUrl":"https://doi.org/10.1109/tgrs.2025.3543198","url":null,"abstract":"","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"1 1","pages":""},"PeriodicalIF":8.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}