CloudSeg: A multi-modal learning framework for robust land cover mapping under cloudy conditions

IF 10.6 · CAS Tier 1 (Earth Science) · JCR Q1 (Geography, Physical) · ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-06-10 · DOI: 10.1016/j.isprsjprs.2024.06.001
Fang Xu , Yilei Shi , Wen Yang , Gui-Song Xia , Xiao Xiang Zhu
{"title":"CloudSeg:用于在多云条件下绘制稳健土地覆被图的多模式学习框架","authors":"Fang Xu ,&nbsp;Yilei Shi ,&nbsp;Wen Yang ,&nbsp;Gui-Song Xia ,&nbsp;Xiao Xiang Zhu","doi":"10.1016/j.isprsjprs.2024.06.001","DOIUrl":null,"url":null,"abstract":"<div><p>Cloud coverage poses a significant challenge to optical image interpretation, degrading ground information on Earth’s surface. Synthetic aperture radar (SAR), with its ability to penetrate clouds, provides supplementary information to optical data. However, existing optical-SAR fusion methods predominantly focus on cloud-free scenarios, neglecting the practical challenge of semantic segmentation under cloudy conditions. To tackle this issue, we propose CloudSeg, a novel framework tailored for land cover mapping in the presence of clouds. It addresses the challenges posed by cloud cover from two aspects: reducing semantic ambiguity in areas of the cloudy image that are obscured by clouds and enhancing effective information in the unobstructed portions. Specifically, CloudSeg employs a multi-task learning strategy to simultaneously handle low-level visual task and high-level semantic understanding task, mitigating the semantic ambiguity caused by cloud cover by acquiring discriminative features through an auxiliary cloud removal task. Additionally, CloudSeg incorporates a knowledge distillation strategy, which utilizes the knowledge learned by the teacher network under cloud-free conditions to guide the student network to overcome the interference of cloud-covered areas, enhancing the valuable information from the unobstructed parts of cloud-covered images. Extensive experiments conducted on two datasets, <em>M3M-CR</em> and <em>WHU-OPT-SAR</em>, demonstrate the effectiveness and superiority of the proposed CloudSeg method for land cover mapping under cloudy conditions. Specifically, CloudSeg outperforms the state-of-the-art competitors by 3.16% in terms of mIoU on <em>M3M-CR</em> and by 5.56% on <em>WHU-OPT-SAR</em>, highlighting its substantial advantages for analyzing regions frequently obscured by clouds. Codes are available at <span>https://github.com/xufangchn/CloudSeg</span><svg><path></path></svg>.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":10.6000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CloudSeg: A multi-modal learning framework for robust land cover mapping under cloudy conditions\",\"authors\":\"Fang Xu ,&nbsp;Yilei Shi ,&nbsp;Wen Yang ,&nbsp;Gui-Song Xia ,&nbsp;Xiao Xiang Zhu\",\"doi\":\"10.1016/j.isprsjprs.2024.06.001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Cloud coverage poses a significant challenge to optical image interpretation, degrading ground information on Earth’s surface. Synthetic aperture radar (SAR), with its ability to penetrate clouds, provides supplementary information to optical data. However, existing optical-SAR fusion methods predominantly focus on cloud-free scenarios, neglecting the practical challenge of semantic segmentation under cloudy conditions. To tackle this issue, we propose CloudSeg, a novel framework tailored for land cover mapping in the presence of clouds. It addresses the challenges posed by cloud cover from two aspects: reducing semantic ambiguity in areas of the cloudy image that are obscured by clouds and enhancing effective information in the unobstructed portions. 
Specifically, CloudSeg employs a multi-task learning strategy to simultaneously handle low-level visual task and high-level semantic understanding task, mitigating the semantic ambiguity caused by cloud cover by acquiring discriminative features through an auxiliary cloud removal task. Additionally, CloudSeg incorporates a knowledge distillation strategy, which utilizes the knowledge learned by the teacher network under cloud-free conditions to guide the student network to overcome the interference of cloud-covered areas, enhancing the valuable information from the unobstructed parts of cloud-covered images. Extensive experiments conducted on two datasets, <em>M3M-CR</em> and <em>WHU-OPT-SAR</em>, demonstrate the effectiveness and superiority of the proposed CloudSeg method for land cover mapping under cloudy conditions. Specifically, CloudSeg outperforms the state-of-the-art competitors by 3.16% in terms of mIoU on <em>M3M-CR</em> and by 5.56% on <em>WHU-OPT-SAR</em>, highlighting its substantial advantages for analyzing regions frequently obscured by clouds. Codes are available at <span>https://github.com/xufangchn/CloudSeg</span><svg><path></path></svg>.</p></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2024-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271624002314\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271624002314","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract


Cloud coverage poses a significant challenge to optical image interpretation, degrading ground information on Earth’s surface. Synthetic aperture radar (SAR), with its ability to penetrate clouds, provides supplementary information to optical data. However, existing optical-SAR fusion methods predominantly focus on cloud-free scenarios, neglecting the practical challenge of semantic segmentation under cloudy conditions. To tackle this issue, we propose CloudSeg, a novel framework tailored for land cover mapping in the presence of clouds. It addresses the challenges posed by cloud cover from two aspects: reducing semantic ambiguity in the cloud-obscured areas of the image and enhancing the effective information in the unobstructed portions. Specifically, CloudSeg employs a multi-task learning strategy that simultaneously handles a low-level visual task and a high-level semantic understanding task, mitigating the semantic ambiguity caused by cloud cover by acquiring discriminative features through an auxiliary cloud removal task. Additionally, CloudSeg incorporates a knowledge distillation strategy that uses the knowledge learned by a teacher network under cloud-free conditions to guide the student network past the interference of cloud-covered areas, enhancing the valuable information in the unobstructed parts of cloud-covered images. Extensive experiments on two datasets, M3M-CR and WHU-OPT-SAR, demonstrate the effectiveness and superiority of CloudSeg for land cover mapping under cloudy conditions. Specifically, CloudSeg outperforms state-of-the-art competitors by 3.16% mIoU on M3M-CR and by 5.56% on WHU-OPT-SAR, highlighting its substantial advantages for analyzing regions frequently obscured by clouds. Code is available at https://github.com/xufangchn/CloudSeg.
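
To make the multi-task design described above concrete, here is a minimal PyTorch-style sketch of a shared optical-SAR encoder with a segmentation head and an auxiliary cloud-removal head trained under a joint loss. All module names and the auxiliary weight lambda_cr are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code):

```python
import torch
import torch.nn as nn

class MultiTaskCloudSeg(nn.Module):
    """Sketch: one shared encoder, two task heads (assumed structure)."""
    def __init__(self, encoder, seg_head, cloud_removal_head):
        super().__init__()
        self.encoder = encoder                        # shared optical-SAR feature extractor
        self.seg_head = seg_head                      # high-level task: land cover logits
        self.cloud_removal_head = cloud_removal_head  # low-level task: cloud-free image

    def forward(self, cloudy_optical, sar):
        # Fuse the cloudy optical image and SAR along the channel dimension.
        feats = self.encoder(torch.cat([cloudy_optical, sar], dim=1))
        return self.seg_head(feats), self.cloud_removal_head(feats)

seg_loss_fn = nn.CrossEntropyLoss()  # semantic segmentation term
rec_loss_fn = nn.L1Loss()            # cloud-removal reconstruction term
lambda_cr = 0.5                      # assumed weight of the auxiliary task

def multitask_loss(model, cloudy_optical, sar, labels, cloudfree_optical):
    seg_logits, restored = model(cloudy_optical, sar)
    return (seg_loss_fn(seg_logits, labels)
            + lambda_cr * rec_loss_fn(restored, cloudfree_optical))
```

The auxiliary reconstruction term forces the shared encoder to learn features that "see through" clouds, which is the mechanism the abstract credits for reducing semantic ambiguity.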

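The distillation component can be sketched in the same hedged way: a teacher trained on cloud-free optical + SAR pairs guides a student that receives the cloudy pair. The feature-matching and softened-prediction terms below are common knowledge-distillation choices, assumed here rather than taken from the paper:

```python
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats,
                      student_logits, teacher_logits,
                      temperature=2.0, alpha=1.0, beta=1.0):
    # Align intermediate features with the cloud-free teacher (assumed L2 penalty).
    feat_loss = F.mse_loss(student_feats, teacher_feats.detach())
    # Align softened class distributions (standard KD with temperature scaling).
    t = temperature
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits.detach() / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * feat_loss + beta * kd_loss
```

`detach()` stops gradients from flowing into the frozen teacher, so only the student is pushed toward the teacher's cloud-free behavior.
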
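For reference, the reported gains are in mean Intersection over Union (mIoU), which is computed per class from a confusion matrix and then averaged:

```python
import numpy as np

def mean_iou(conf_mat: np.ndarray) -> float:
    # conf_mat[i, j] = pixels of true class i predicted as class j
    intersection = np.diag(conf_mat)
    union = conf_mat.sum(axis=0) + conf_mat.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against empty classes
    return float(iou.mean())
```
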
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing
Category: Engineering & Technology - Imaging Science & Photographic Technology
CiteScore: 21.00
Self-citation rate: 6.30%
Annual publications: 273
Review time: 40 days
About the journal: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.
Latest articles in this journal
- Integrating synthetic datasets with CLIP semantic insights for single image localization advancements
- Selective weighted least square and piecewise bilinear transformation for accurate satellite DSM generation
- Word2Scene: Efficient remote sensing image scene generation with only one word via hybrid intelligence and low-rank representation
- A_OPTRAM-ET: An automatic optical trapezoid model for evapotranspiration estimation and its global-scale assessments
- Atmospheric correction of geostationary ocean color imager data over turbid coastal waters under high solar zenith angles