Attention Multiscale Network for Semantic Segmentation of Multimodal Remote Sensing Images

IEEE Transactions on Geoscience and Remote Sensing | IF 7.5 | CAS Tier 1 (Earth Science) | JCR Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-18 | DOI: 10.1109/TGRS.2025.3540848
Zhen Ye, Yuan Li, Zhen Li, Huan Liu, Yuxiang Zhang, Wei Li
{"title":"Attention Multiscale Network for Semantic Segmentation of Multimodal Remote Sensing Images","authors":"Zhen Ye;Yuan Li;Zhen Li;Huan Liu;Yuxiang Zhang;Wei Li","doi":"10.1109/TGRS.2025.3540848","DOIUrl":null,"url":null,"abstract":"Due to recent advancements in deep learning, techniques for urban structure extraction and semantic segmentation of multimodal remote sensing images have significant improvements. However, the challenge arises from the variable color intensity and complex texture of urban structures in optical images, particularly in buildings and roads. Fortunately, the light detection and ranging (LiDAR) images promote the task of developing an optimal multimodal fusion network that effectively leverages information from different modalities. In this article, we propose an attention multiscale network (AMSNet) for binary semantic segmentation tasks focused on building extraction, as well as multiclass semantic segmentation tasks, by integrating optical and LiDAR remote sensing images. AMSNet introduces two feature fusion modules—spatial scale adaptive fusion (S2AF) and semantic guided fusion (SGF). S2AF facilitates feature fusion between optical and LiDAR images within the same layer. This module contains a spatial scale selection strategy and an adaptive weight learning strategy, which enables the network to adaptively extract and intentionally select multiscale features from multimodal data. SGF addresses the semantic gap between different layered block features through semantic feature guidance strategy while achieving feature fusion. Furthermore, we introduce robust feature learning (RFL) to ensure the network robustness in rotation and variation in objects, making it resilient to images captured from different viewpoints and sensors. RFL incorporates point-to-point similarity learning strategy and multiscale feature reuse strategy. Experimental results on publicly available datasets demonstrate that AMSNet outperforms other state-of-the-art models. Extensive ablation studies further confirm the significance of all key components in the proposed approach. The source code of this method is available at <uri>https://github.com/B-LG-J/AMSNet.git</uri>.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-15"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10891514/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Due to recent advances in deep learning, techniques for urban structure extraction and semantic segmentation of multimodal remote sensing images have improved significantly. However, challenges arise from the variable color intensity and complex texture of urban structures in optical images, particularly buildings and roads. Fortunately, light detection and ranging (LiDAR) images motivate the development of an optimal multimodal fusion network that effectively leverages information from different modalities. In this article, we propose an attention multiscale network (AMSNet) for binary semantic segmentation tasks focused on building extraction, as well as multiclass semantic segmentation tasks, by integrating optical and LiDAR remote sensing images. AMSNet introduces two feature fusion modules: spatial scale adaptive fusion (S2AF) and semantic-guided fusion (SGF). S2AF performs feature fusion between optical and LiDAR images within the same layer. The module combines a spatial scale selection strategy with an adaptive weight learning strategy, enabling the network to adaptively extract and deliberately select multiscale features from multimodal data. SGF bridges the semantic gap between block features from different layers through a semantic feature guidance strategy while fusing them. Furthermore, we introduce robust feature learning (RFL) to make the network robust to rotation and object variation, so that it remains resilient to images captured from different viewpoints and sensors. RFL incorporates a point-to-point similarity learning strategy and a multiscale feature reuse strategy. Experimental results on publicly available datasets demonstrate that AMSNet outperforms other state-of-the-art models, and extensive ablation studies confirm the contribution of each key component. The source code is available at https://github.com/B-LG-J/AMSNet.git.
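The authors' implementation is in the linked repository. As a rough illustration of the same-layer fusion idea the abstract attributes to S2AF (multiscale feature extraction per modality, followed by adaptively learned fusion weights), the PyTorch sketch below fuses optical and LiDAR feature maps with softmax-normalized per-modality weights. This is not the authors' S2AF module; the class name `AdaptiveWeightFusion`, the kernel sizes in `scales`, and the `weight_mlp` head are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class AdaptiveWeightFusion(nn.Module):
    """Hypothetical sketch of same-layer optical/LiDAR fusion with
    multiscale branches and learned modality weights (not the actual
    S2AF module; see the AMSNet repository for that)."""

    def __init__(self, channels: int, scales=(1, 3, 5)):
        super().__init__()
        # One depthwise conv per kernel size and per modality: a stand-in
        # for the paper's spatial scale selection strategy.
        self.opt_branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
             for k in scales])
        self.lidar_branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
             for k in scales])
        # Small MLP mapping pooled descriptors of both modalities to two
        # softmax weights: a stand-in for adaptive weight learning.
        self.weight_mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 2))

    def forward(self, f_opt: torch.Tensor, f_lidar: torch.Tensor) -> torch.Tensor:
        # Average the multiscale responses within each modality.
        opt = sum(b(f_opt) for b in self.opt_branches) / len(self.opt_branches)
        lid = sum(b(f_lidar) for b in self.lidar_branches) / len(self.lidar_branches)
        # Global average pooling -> per-modality fusion weights.
        g = torch.cat([opt.mean(dim=(2, 3)), lid.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.weight_mlp(g), dim=1)  # shape (B, 2)
        w_opt = w[:, 0].view(-1, 1, 1, 1)
        w_lid = w[:, 1].view(-1, 1, 1, 1)
        return w_opt * opt + w_lid * lid

# Example: fuse 64-channel features from the two modalities at one layer.
fuse = AdaptiveWeightFusion(channels=64)
fused = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

The softmax over the two modality weights makes the streams compete, so such a network could down-weight the optical branch wherever color intensity and texture are ambiguous, which matches the motivation stated in the abstract.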
Source Journal

IEEE Transactions on Geoscience and Remote Sensing (Engineering & Technology: Geochemistry & Geophysics)
CiteScore: 11.50
Self-citation rate: 28.00%
Annual articles: 1912
Review time: 4.0 months
About the journal: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space, and on the processing, interpretation, and dissemination of this information.
Latest articles in this journal
- Scale-Aware Pruning Framework for Remote Sensing Object Detection Via Multi-Feature Representation
- TransWCD: Scene-Adaptive Joint Constrained Framework for Weakly-Supervised Change Detection
- HASNet: A foreground association-driven Siamese network with hard sample optimization for remote sensing image change detection
- Future Spaceborne Oceanographic Lidar: Exploring the Effects of Large Off-nadir Angles on Signal Dynamic Range and Depth Aliasing
- Combining SAM with Limited Data for Change Detection in Remote Sensing