In recent years, optical remote sensing imagery has played an increasingly vital role in Earth observation, yet cloud contamination remains an unavoidable source of degradation. Combining synthetic aperture radar (SAR) and optical data with machine learning offers a promising route to reconstructing clear-sky satellite imagery. Nevertheless, several challenges persist, including insufficient attention to large cloud cover, difficulty in restoring temporal changes, and the limited practicality of deep models. To address these issues, this paper introduces a novel deep learning-based cloud removal framework, termed Began+, which integrates bi-temporal SAR-optical data to handle cloudy images with high cloud-cover ratios. The Began+ framework comprises two primary components: a deep network and a flexible post-processing step, combining the strengths of data-driven models for restoring change information with those of traditional gap-filling algorithms for mitigating radiometric discrepancies. First, a bi-output enhanced generative adversarial network, abbreviated as Began, is designed for image synthesis, featuring an enhanced channel-wise fusion block (ECFB) and a multi-scale depth-wise convolution residual block (MDRB). Through a dual-task optimization and co-learning strategy, the Began model identifies potential change areas from bi-temporal SAR and pre-temporal optical inputs, guiding the synthesis of the target optical images. Second, a range of cloud masking and gap-filling techniques can optionally be employed to reduce radiometric discrepancies between the synthesized images and the cloudy data, ultimately yielding high-quality, clear-sky imagery. To meet the large-scale data requirements of deep learning, we constructed two globally distributed cloud removal datasets, named BiS1L8-CR and BiS1S2-CR.
Supported by these datasets, extensive experiments demonstrate that the Began+ framework effectively captures bi-temporal change features, reconstructing precise surface information in both Landsat-8 and Sentinel-2 imagery under large cloud cover. Compared with state-of-the-art methods, the proposed Began+ framework shows clear qualitative and quantitative advantages in both simulated and real-data experiments. Furthermore, without strict constraints on input timing, the Began+ framework enables accurate reconstruction of large-scale dual-sensor imagery under high cloud-cover ratios, effectively restoring changing surfaces and improving the quality of unsupervised vegetation extraction.
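The post-processing step described above composites the synthesized image with the observed clear-sky pixels while correcting radiometric discrepancies. The following is a minimal illustrative sketch of this idea, not the paper's actual algorithm: the function name and the simple global-offset correction are assumptions, standing in for the range of cloud masking and gap-filling techniques the framework supports.

```python
import numpy as np

def mosaic_with_offset_correction(cloudy, synthesized, cloud_mask):
    """Illustrative gap-filling sketch (hypothetical, not the Began+ algorithm):
    keep observed clear-sky pixels and fill cloud-masked pixels from the
    synthesized image, after a global radiometric offset correction
    estimated over the clear pixels shared by both images."""
    clear = ~cloud_mask
    # Estimate the mean radiometric offset between observed and
    # synthesized reflectance over cloud-free pixels.
    offset = cloudy[clear].mean() - synthesized[clear].mean()
    # Start from the observed image and replace only cloudy pixels.
    result = cloudy.copy()
    result[cloud_mask] = synthesized[cloud_mask] + offset
    return result
```

In practice, per-band or locally varying corrections (e.g., distance-weighted blending at cloud boundaries) are typically preferred over a single global offset, but the principle of reconciling synthesized and observed radiometry before mosaicking is the same.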