Structural Representation-Guided GAN for Remote Sensing Image Cloud Removal

Jiajun Yang; Wenjing Wang; Keyan Chen; Liqin Liu; Zhengxia Zou; Zhenwei Shi

IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5, 2024. Published online 2024-12-12. DOI: 10.1109/LGRS.2024.3516078. https://ieeexplore.ieee.org/document/10794750/
Abstract
Optical remote sensing imagery is often compromised by cloud cover, making effective cloud-removal techniques essential for enhancing the usability of such data. We design a novel structural representation-guided generative adversarial network (GAN) framework for cloud removal, in which structure and gradient branches are integrated into the network, helping the model focus on the structural representations of ground objects during image reconstruction. Unlike previous methods, which concentrate on recovering pixel information, we emphasize learning the structural information of remote sensing images, and we use error feedback to fuse features from the structural auxiliary branch into the image reconstruction process. During training, synthetic cloud images supervise the optimization of the cloud-removal network, while real cloud images are used in an unsupervised, adversarial manner to improve the network's generalization ability. Additionally, multitemporal revisit images from remote sensing satellites serve as auxiliary inputs, helping the network remove thick clouds reliably. We evaluated our framework on a dataset derived from SEN12MS-CR; the proposed method outperformed classical cloud-removal methods in both objective performance and subjective visual quality. Furthermore, compared with other methods, our approach achieved superior cloud-removal results on real images.
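The abstract does not give implementation details, but the two architectural ideas it names, a structure/gradient auxiliary branch and error-feedback feature fusion, can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch: the names SobelGradient, ErrorFeedbackFusion, and StructureGuidedGenerator are illustrative, the Sobel-based gradient extraction and the residual-style error-feedback fusion reflect one plausible reading of the abstract, and the layer sizes are toy values, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SobelGradient(nn.Module):
    """Extract per-channel x/y image gradients with fixed Sobel kernels."""

    def __init__(self) -> None:
        super().__init__()
        gx = torch.tensor([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]])
        # Shape (2, 1, 3, 3): one fixed kernel for d/dx and one for d/dy.
        self.register_buffer("kernel", torch.stack([gx, gx.t()]).unsqueeze(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Apply both kernels to every channel independently.
        g = F.conv2d(x.reshape(b * c, 1, h, w), self.kernel, padding=1)
        return g.reshape(b, 2 * c, h, w)


class ErrorFeedbackFusion(nn.Module):
    """Fuse auxiliary (structure/gradient) features into the main branch.

    The auxiliary features are projected into the main feature space, the
    projection error is computed, and a learned correction derived from
    that error is added back to the main features.
    """

    def __init__(self, channels: int) -> None:
        super().__init__()
        self.project = nn.Conv2d(channels, channels, 3, padding=1)
        self.correct = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, main_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        error = main_feat - self.project(aux_feat)
        return main_feat + self.correct(error)


class StructureGuidedGenerator(nn.Module):
    """Toy generator: a main branch guided by a Sobel-gradient branch."""

    def __init__(self, in_ch: int = 3, feat: int = 32) -> None:
        super().__init__()
        self.grad = SobelGradient()
        self.main_enc = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.aux_enc = nn.Conv2d(2 * in_ch, feat, 3, padding=1)
        self.fuse = ErrorFeedbackFusion(feat)
        self.decode = nn.Conv2d(feat, in_ch, 3, padding=1)

    def forward(self, cloudy: torch.Tensor) -> torch.Tensor:
        main = F.relu(self.main_enc(cloudy))
        aux = F.relu(self.aux_enc(self.grad(cloudy)))
        return torch.sigmoid(self.decode(self.fuse(main, aux)))


if __name__ == "__main__":
    x = torch.rand(2, 3, 64, 64)                # batch of cloudy RGB patches
    print(StructureGuidedGenerator()(x).shape)  # torch.Size([2, 3, 64, 64])
```

Because the gradient branch has no learned parameters, it costs little at inference time while still steering the fused features toward edge structure, which is one reason such auxiliary branches are popular in restoration networks.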
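The hybrid training scheme, supervised reconstruction on synthetic cloud/clear pairs plus adversarial training on unpaired real cloudy images, can likewise be sketched. The loss below is a hedged illustration: generator_loss, TinyDiscriminator, and the L1 + non-saturating GAN formulation with adv_weight = 0.01 are assumptions for the sketch, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDiscriminator(nn.Module):
    """Minimal PatchGAN-style critic, included only to make the sketch runnable."""

    def __init__(self, in_ch: int = 3, feat: int = 32) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(feat, 1, 4, stride=2, padding=1),  # per-patch realness logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def generator_loss(gen, disc, syn_cloudy, syn_clear, real_cloudy, adv_weight=0.01):
    """Supervised pixel loss on synthetic pairs + adversarial loss on real clouds."""
    # Supervised branch: synthetic clouds come with pixel-wise ground truth.
    l_pix = F.l1_loss(gen(syn_cloudy), syn_clear)

    # Unsupervised branch: real clouds have no ground truth, so the
    # discriminator judges whether the de-clouded output looks realistic.
    logits = disc(gen(real_cloudy))
    l_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return l_pix + adv_weight * l_adv


if __name__ == "__main__":
    gen = StructureGuidedGenerator()   # generator from the previous sketch
    disc = TinyDiscriminator()
    syn_cloudy, syn_clear, real_cloudy = (torch.rand(2, 3, 64, 64) for _ in range(3))
    loss = generator_loss(gen, disc, syn_cloudy, syn_clear, real_cloudy)
    loss.backward()                    # gradients flow back to the generator
    print(float(loss))
```

In a full training loop the discriminator would also be updated with its own loss, contrasting real clear images against generated ones; the sketch shows only the generator's objective, which is where the supervised and unsupervised signals described in the abstract meet.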