A Multi-scale features-based cloud detection method for Suomi-NPP VIIRS day and night imagery

Jun Li, Chengjie Hu, Qinghong Sheng, Jiawei Xu, Chongrui Zhu, Weili Zhang
{"title":"针对 Suomi-NPP VIIRS 昼夜图像的基于多尺度特征的云层探测方法","authors":"Jun Li, Chengjie Hu, Qinghong Sheng, Jiawei Xu, Chongrui Zhu, Weili Zhang","doi":"10.5194/isprs-annals-x-1-2024-115-2024","DOIUrl":null,"url":null,"abstract":"Abstract. Cloud detection is a necessary step before the application of remote sensing images. However, most methods focus on cloud detection in daytime remote sensing images. The ignored nighttime remote sensing images play more and more important role in many fields such as urban monitoring, population estimation and disaster assessment. The radiation intensity similarity between artificial lights and clouds is higher in nighttime remote sensing images than in daytime remote sensing images, which makes it difficult to distinguish artificial lights from clouds. Therefore, this paper proposes a deep learning-based method (MFFCD-Net) to detect clouds for day and nighttime remote sensing images. MFFCD-Net is designed based on the encoder-decoder structure. The encoder adopts Resnet-50 as the backbone network for better feature extraction, and a dilated residual up-sampling module (DR-UP) is designed in the decoder for up-sampling feature maps while enlarging the receptive field. A multi-scale feature extraction fusion module (MFEF) is designed to enhance the ability of the MFFCD-Net to distinguish regular textures of artificial lights and random textures of clouds. An Global Feature Recovery Fusion Module (GFRF Module) is designed to select and fuse the feature in the encoding stage and the feature in the decoding stage, thus to achieve better cloud detection accuracy. This is the first time that a deep learning-based method is designed for cloud detection both in day and nighttime remote sensing images. The experimental results on Suomi-NPP VIIRS DNB images show that MFFCD-Net achieves higher accuracy than baseline methods both in day and nighttime remote sensing images. Results on daytime remote sensing images indicate that MFFCD-Net can obtain better balance on commission and omission rates than baseline methods (92.3% versus 90.5% on F1-score). Although artificial lights introduced strong interference in cloud detection in nighttime remote sensing images, the accuracy values of MFFCD-Net on OA, Precision, Recall, and F1-score are still higher than 90%. This demonstrates that MFFCD-Net can better distinguish artificial lights from clouds than baseline methods in nighttime remote sensing images. The effectiveness of MFFCD-Net proves that it is very promising for cloud detection both in day and nighttime remote sensing images.\n","PeriodicalId":508124,"journal":{"name":"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":" 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Multi-scale features-based cloud detection method for Suomi-NPP VIIRS day and night imagery\",\"authors\":\"Jun Li, Chengjie Hu, Qinghong Sheng, Jiawei Xu, Chongrui Zhu, Weili Zhang\",\"doi\":\"10.5194/isprs-annals-x-1-2024-115-2024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract. Cloud detection is a necessary step before the application of remote sensing images. However, most methods focus on cloud detection in daytime remote sensing images. 
The ignored nighttime remote sensing images play more and more important role in many fields such as urban monitoring, population estimation and disaster assessment. The radiation intensity similarity between artificial lights and clouds is higher in nighttime remote sensing images than in daytime remote sensing images, which makes it difficult to distinguish artificial lights from clouds. Therefore, this paper proposes a deep learning-based method (MFFCD-Net) to detect clouds for day and nighttime remote sensing images. MFFCD-Net is designed based on the encoder-decoder structure. The encoder adopts Resnet-50 as the backbone network for better feature extraction, and a dilated residual up-sampling module (DR-UP) is designed in the decoder for up-sampling feature maps while enlarging the receptive field. A multi-scale feature extraction fusion module (MFEF) is designed to enhance the ability of the MFFCD-Net to distinguish regular textures of artificial lights and random textures of clouds. An Global Feature Recovery Fusion Module (GFRF Module) is designed to select and fuse the feature in the encoding stage and the feature in the decoding stage, thus to achieve better cloud detection accuracy. This is the first time that a deep learning-based method is designed for cloud detection both in day and nighttime remote sensing images. The experimental results on Suomi-NPP VIIRS DNB images show that MFFCD-Net achieves higher accuracy than baseline methods both in day and nighttime remote sensing images. Results on daytime remote sensing images indicate that MFFCD-Net can obtain better balance on commission and omission rates than baseline methods (92.3% versus 90.5% on F1-score). Although artificial lights introduced strong interference in cloud detection in nighttime remote sensing images, the accuracy values of MFFCD-Net on OA, Precision, Recall, and F1-score are still higher than 90%. This demonstrates that MFFCD-Net can better distinguish artificial lights from clouds than baseline methods in nighttime remote sensing images. The effectiveness of MFFCD-Net proves that it is very promising for cloud detection both in day and nighttime remote sensing images.\\n\",\"PeriodicalId\":508124,\"journal\":{\"name\":\"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences\",\"volume\":\" 5\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5194/isprs-annals-x-1-2024-115-2024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5194/isprs-annals-x-1-2024-115-2024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Cloud detection is a necessary step before remote sensing images can be applied. However, most methods focus on cloud detection in daytime remote sensing images. Nighttime remote sensing images, which have largely been ignored, play an increasingly important role in many fields such as urban monitoring, population estimation, and disaster assessment. The radiation-intensity similarity between artificial lights and clouds is higher in nighttime remote sensing images than in daytime images, which makes artificial lights difficult to distinguish from clouds. Therefore, this paper proposes a deep learning-based method (MFFCD-Net) to detect clouds in both day and nighttime remote sensing images. MFFCD-Net is built on an encoder-decoder structure. The encoder adopts ResNet-50 as the backbone network for better feature extraction, and a dilated residual up-sampling module (DR-UP) is designed in the decoder to up-sample feature maps while enlarging the receptive field. A multi-scale feature extraction fusion module (MFEF) is designed to enhance the ability of MFFCD-Net to distinguish the regular textures of artificial lights from the random textures of clouds. A global feature recovery fusion module (GFRF) is designed to select and fuse features from the encoding and decoding stages, thereby achieving better cloud detection accuracy. This is the first time a deep learning-based method has been designed for cloud detection in both day and nighttime remote sensing images. Experimental results on Suomi-NPP VIIRS DNB images show that MFFCD-Net achieves higher accuracy than baseline methods on both day and nighttime imagery. Results on daytime images indicate that MFFCD-Net achieves a better balance between commission and omission rates than baseline methods (an F1-score of 92.3% versus 90.5%). Although artificial lights introduce strong interference for cloud detection in nighttime images, MFFCD-Net still scores above 90% on OA, precision, recall, and F1-score, demonstrating that it distinguishes artificial lights from clouds better than baseline methods. These results show that MFFCD-Net is very promising for cloud detection in both day and nighttime remote sensing images.
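The abstract describes the architecture only at a high level: a ResNet-50 encoder, a decoder that up-samples with dilated residual blocks (DR-UP), a multi-scale fusion block (MFEF), and encoder-decoder feature fusion (GFRF). The following PyTorch sketch shows how a network of that general shape could be organized. It is a minimal illustration, not the authors' implementation: the class names, channel sizes, dilation rates, and the simple additive skip connections are all assumptions, since the paper's exact module designs are not given here.

```python
# Minimal structural sketch of an encoder-decoder cloud-detection network in the spirit
# of MFFCD-Net (assumed design details; not the published implementation).
import torch
import torch.nn as nn
from torchvision.models import resnet50


class MultiScaleFusion(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution (MFEF-like, assumed)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class DilatedUpBlock(nn.Module):
    """Up-sample by 2x, then refine with a dilated residual convolution (DR-UP-like, assumed)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, 1),
        )
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.up(x)
        return x + self.refine(x)  # residual refinement after up-sampling


class CloudNetSketch(nn.Module):
    """Encoder-decoder with a ResNet-50 backbone producing per-pixel cloud logits."""
    def __init__(self, in_channels=1):
        super().__init__()
        backbone = resnet50(weights=None)
        # VIIRS DNB imagery is single-band, so the first convolution is replaced (assumption).
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False),
            backbone.bn1, backbone.relu, backbone.maxpool,
        )
        self.enc1, self.enc2 = backbone.layer1, backbone.layer2   # 256, 512 channels
        self.enc3, self.enc4 = backbone.layer3, backbone.layer4   # 1024, 2048 channels
        self.context = MultiScaleFusion(2048)
        self.dec3 = DilatedUpBlock(2048, 1024)
        self.dec2 = DilatedUpBlock(1024, 512)
        self.dec1 = DilatedUpBlock(512, 256)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 1, 1),  # cloud-probability logits
        )

    def forward(self, x):
        x = self.stem(x)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.context(self.enc4(e3))
        # Additive encoder-decoder fusion: a simplified stand-in for the paper's GFRF module.
        d = self.dec1(self.dec2(self.dec3(e4) + e3) + e2) + e1
        return self.head(d)


if __name__ == "__main__":
    logits = CloudNetSketch()(torch.randn(1, 1, 256, 256))
    print(logits.shape)  # torch.Size([1, 1, 256, 256])
```

The reported scores (OA, precision, recall, F1) and the commission/omission balance all follow from a pixel-level confusion matrix: the commission rate corresponds to 1 − precision, the omission rate to 1 − recall, and F1 is the harmonic mean of the two, which is why a single F1 value summarizes the balance between both error types. The helper below applies these standard definitions; the function name and interface are illustrative, not taken from the paper.

```python
# Standard per-pixel accuracy metrics for binary cloud masks (1 = cloud).
import numpy as np


def cloud_mask_scores(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # cloud pixels correctly detected
    fp = np.sum(pred & ~truth)   # non-cloud pixels labelled cloud (commission errors)
    fn = np.sum(~pred & truth)   # cloud pixels missed (omission errors)
    tn = np.sum(~pred & ~truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "OA": (tp + tn) / pred.size,  # overall accuracy
        "Precision": precision,       # commission rate = 1 - precision
        "Recall": recall,             # omission rate = 1 - recall
        "F1": f1,                     # harmonic mean of precision and recall
    }
```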