Spatiotemporal Heterogeneity in Greenland Firn From the Synthesis of Satellite Radar Altimetry and Passive Microwave Measurements
Pub Date : 2026-01-12 DOI: 10.1109/JSTARS.2026.3651847
Kirk M. Scanlan;Anja Rutishauser;Sebastian B. Simonsen
The spatiotemporal properties of the Greenland Ice Sheet firn layer are an important factor when assessing overall ice sheet mass balance and internal meltwater storage capacity. The firn layer is increasingly a target for the satellite remote sensing community, and this study investigates the recovery of vertical firn density heterogeneity over a ten-year period from the synthesis of passive microwave and active radar altimetry measurements. The mismatch between ESA SMOS observations and a passive microwave forward model, initialized with surface densities estimated from the backscatter strength of ISRO/CNES SARAL and ESA CryoSat-2, serves as a proxy for vertical density variability. Validated against in situ measurements, the results demonstrate clear long-term patterns in Greenland firn heterogeneity: spatially expansive, sharp increases in heterogeneity follow extreme melt seasons and require multiple quiescent years to recover. By the start of the 2023 melt season (i.e., the end of the timeframe considered), the Greenland firn layer had reached its most heterogeneous state of the preceding decade. Continued investigation into the synthesis of different remote sensing datasets represents a pathway toward novel insights into the spatiotemporal evolution of Greenland Ice Sheet surface conditions.
{"title":"Spatiotemporal Heterogeneity in Greenland Firn From the Synthesis of Satellite Radar Altimetry and Passive Microwave Measurements","authors":"Kirk M. Scanlan;Anja Rutishauser;Sebastian B. Simonsen","doi":"10.1109/JSTARS.2026.3651847","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651847","url":null,"abstract":"The spatiotemporal properties of the Greenland Ice Sheet firn layer are an important factor when assessing overall ice sheet mass balance and internal meltwater storage capacity. Increasingly a target for the satellite remote sensing community, this study investigates the recovery of vertical firn density heterogeneity over a ten-year period from the synthesis of passive microwave and active radar altimetry measurements. The mismatch between ESA SMOS observations and a passive microwave forward model, initialized with surface densities estimated from the backscatter strength of ISRO/CNES SARAL and ESA CryoSat-2, serves as a proxy for vertical density variability. Validated with in situ measurements, the results demonstrate clear long-term patterns in Greenland firn heterogeneity characterized by spatially expansive sharp increases in firn heterogeneity following extreme melt seasons that require multiple quiescent years to rehabilitate. The results demonstrate that by the start of the 2023 melt season (i.e., the end of the timeframe considered), the Greenland firn layer had reached its most heterogeneous state of the preceding decade. Continued investigation into the synthesis of different remote sensing datasets represents a pathway toward generating novel insights into the spatiotemporal evolution of Greenland Ice Sheet surface conditions.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4085-4098"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339888","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Boundary-Aware Semantic Context Network for Remote Sensing Change Detection
Pub Date : 2026-01-12 DOI: 10.1109/JSTARS.2026.3651696
Weiran Zhou;Guanting Guo;Huihui Song;Xu Zhang;Kaihua Zhang
Remote sensing change detection aims to identify changes on the Earth's surface from remote sensing images acquired at different times. However, the identification of changed areas is often hindered by pseudochanges in similar objects, leading to inaccurate identification of change boundaries. To address this issue, we propose a novel network named boundary-guided semantic context network (BSCNet), which decouples features to improve the feature representation ability for changing objects. Specifically, we design a selective context fusion module that selectively fuses semantically rich features by computing the similarity between features from adjacent stages of the backbone network, thereby preventing detailed features from being overwhelmed by contextual information. In addition, to enhance the ability to perceive changes, we design a context fast aggregation module that leverages a pyramid structure to help the model simultaneously extract and fuse detailed and semantic information at different scales, enabling more accurate change detection. Finally, we design a boundary-guided feature fusion module to aggregate edge-level, texture-level, and semantic-level information, which enables the network to represent change regions more comprehensively and precisely. Experimental results on the WHU-CD, LEVIR-CD, and SYSU-CD datasets show that BSCNet achieves F1 scores of 94.92%, 92.19%, and 82.55%, respectively.
{"title":"Learning Boundary-Aware Semantic Context Network for Remote Sensing Change Detection","authors":"Weiran Zhou;Guanting Guo;Huihui Song;Xu Zhang;Kaihua Zhang","doi":"10.1109/JSTARS.2026.3651696","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651696","url":null,"abstract":"Remote sensing change detection aims to identify changes on the Earth's surface from remote sensing images acquired at different times. However, the identification of changed areas is often hindered by pseudochanges in similar objects, leading to inaccurate identification of change boundaries. To address this issue, we propose a novel network named boundary-guided semantic context network (BSCNet), which decouples features to improve the feature representation ability for changing objects. Specifically, we design a selective context fusion module that selectively fuses semantically rich features by computing the similarity between features from adjacent stages of the backbone network, thereby preventing detailed features from being overwhelmed by contextual information. In addition, to enhance the ability to perceive changes, we design a context fast aggregation module that leverages a pyramid structure to help the model simultaneously extract and fuse detailed and semantic information at different scales, enabling more accurate change detection. Finally, we design a boundary-guided feature fusion module to aggregate edge-level, texture-level, and semantic-level information, which enables the network to represent change regions more comprehensively and precisely. Experimental results on the WHU-CD, LEVIR-CD, and SYSU-CD datasets show that BSCNet achieves F1 scores of 94.92%, 92.19%, and 82.55%, respectively.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4177-4187"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339892","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PyramidMamba: An Effective Hyperspectral Remote Sensing Image Target Detection Network
Pub Date : 2026-01-12 DOI: 10.1109/JSTARS.2026.3650961
Shixin Liu;Pingyu Liu;Xiaofei Wang
The lack of prior knowledge is a challenging issue in target detection tasks for hyperspectral remote sensing images. In this article, we propose an effective network for target detection in hyperspectral remote sensing images. First, through spectral data augmentation, all surrounding pixels within a data block are encoded as transformed spectral signatures of the central pixel, thereby constructing a sufficient number of training sample pairs. Subsequently, a backbone network (PyramidMamba) is designed to establish long-term dependencies across the frequency domain and multiscale dimensions using a Mamba residual module and a pyramid wavelet transform module. A residual self-attention module is further developed, integrating self-attention with convolutional operations to enhance feature extraction while improving the network's depth and stability. The backbone network extracts representative vectors from the augmented sample pairs, which are then optimized through a spectral contrast head to enhance the distinction between target and background features. Experimental results demonstrate that, compared to mainstream algorithms, the proposed algorithm achieves higher detection accuracy and computational efficiency. It learns deep nonlinear feature representations with stronger discriminative power, enabling effective separation of targets from background and delivering state-of-the-art performance.
{"title":"PyramidMamba: An Effective Hyperspectral Remote Sensing Image Target Detection Network","authors":"Shixin Liu;Pingyu Liu;Xiaofei Wang","doi":"10.1109/JSTARS.2026.3650961","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3650961","url":null,"abstract":"The lack of prior knowledge is a challenging issue in target detection tasks for hyperspectral remote sensing images. In this article, we propose an effective network for object detection in hyperspectral remote sensing images. First, through spectral data augmentation methods, all surrounding pixels within a data block are encoded as the transformed spectral signature of the central pixel, thereby constructing a sufficient number of training sample pairs. Subsequently, a backbone network (PyramidMamba) was designed to establish long-term dependencies across the frequency domain and multiscale dimensions using the Mamba residual module and pyramid wavelet transform module. A residual self-attention module is further developed, integrating self-attention with convolutional operations to enhance feature extraction while improving the network's depth and stability. A backbone network was employed to extract representative vectors from augmented sample pairs, which were then optimized through a spectral contrast head to enhance the distinction between target and background features. Experimental results demonstrate that compared to mainstream algorithms, the proposed algorithm achieves higher detection accuracy and computational efficiency. It successfully learns deep nonlinear feature representations with stronger discriminative power, enabling effective separation of targets from background and delivering state-of-the-art performance.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4163-4176"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11329180","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SAR Vehicle Data Generation With Scattering Features for Target Recognition
Pub Date : 2026-01-12 DOI: 10.1109/JSTARS.2026.3652520
Dongdong Guan;Rui Feng;Yuzhen Xie;Huaiyue Ding;Yang Cui;Deliang Xiang
As is well known, obtaining high-quality measured SAR vehicle data is difficult. As a result, deep learning-based data generation is frequently utilized for SAR target augmentation because of its affordability and simplicity of use. However, existing methods do not adequately consider target scattering information during data generation, so the generated SAR data do not conform to the physical scattering laws of SAR imaging. In this article, we propose a SAR target data generation method based on target scattering features and cycle-consistent generative adversarial networks (CycleGAN). First, a physical model-based method, orthogonal matching pursuit (OMP), is adopted to extract the attributed scattering centers (ASCs) of SAR vehicle targets. Then, a multidimensional SAR target feature representation is constructed. Based on the scattering difference between the generated and real SAR target images, we introduce a loss function and develop a generative model based on CycleGAN. The scattering mechanisms of SAR targets can thus be learned, making the generated SAR data conform to the target scattering features. We conduct SAR target generation experiments under standard operating conditions (SOCs) and extended operating conditions (EOCs) on our self-acquired dataset as well as the SAMPLE and MSTAR datasets. The SAR vehicle target data generated under SOC show a scattering feature distribution closer to the real target data than that of other state-of-the-art methods. In addition, we generate SAR target data under EOC that conform to SAR imaging patterns by modulating ASC feature parameters. Finally, the target recognition performance based on our generated SAR vehicle data under SOC is validated: the recognition rate increases by 4% after adding our generated target data.
{"title":"SAR Vehicle Data Generation With Scattering Features for Target Recognition","authors":"Dongdong Guan;Rui Feng;Yuzhen Xie;Huaiyue Ding;Yang Cui;Deliang Xiang","doi":"10.1109/JSTARS.2026.3652520","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3652520","url":null,"abstract":"As is well known, obtaining high-quality measured SAR vehicle data is difficult. As a result, deep learning-based data generation is frequently utilized for SAR target augmentation because of its affordability and simplicity of use. However, existing methods do not adequately consider the target scattering information during data generation, resulting in generated target SAR data that does not conform to the physical scattering laws of SAR imaging. In this article, we propose a SAR target data generation method based on target scattering features and cycle-consistent generative adversarial networks (CycleGAN). First, a physical model-based method called orthogonal matching pursuit (OMP) is adopted to extract the attribute scattering centers (ASCs) of SAR vehicle targets. Then, a multidimensional SAR target feature representation is constructed. Based on the scattering difference between the generated and real SAR target images, we introduce a loss function and further develop a generative model based on the CycleGAN. Therefore, the scattering mechanisms of SAR targets can be well learned, making the generated SAR data conform to the target scattering features. We conduct SAR target generation experiments under standard operating conditions (SOCs) and extended operating conditions (EOCs) on our self-acquired dataset as well as SAMPLE and MSTAR datasets. The SAR vehicle target data generated under SOC shows a more accurate scattering feature distribution to the real target data than other state-of-the-art methods. In addition, we generate SAR target data under EOC that conforms to SAR imaging patterns by modulating ASC feature parameters. Finally, the target recognition performance based on our proposed generated SAR vehicle data under SOC is validated, where the recognition rate increased by 4% after the addition of our generated target data.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"5520-5538"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11344756","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CD-Lamba: Boosting Remote Sensing Change Detection via a Cross-Temporal Locally Adaptive State Space Model
Pub Date : 2026-01-05 DOI: 10.1109/JSTARS.2025.3650075
Zhenkai Wu;Xiaowen Ma;Kai Zheng;Rongrong Lian;Yun Chen;Zhenhua Huang;Wei Zhang;Siyang Song
Mamba, with its advantages of global perception and linear complexity, has been widely applied to identify changes in target regions within remote sensing (RS) images captured under complex scenarios and varied conditions. However, existing remote sensing change detection (RSCD) approaches based on Mamba frequently struggle to perceive the inherent locality of change regions because they directly flatten and scan RS images (i.e., the features of the same change region are not distributed contiguously within the sequence but are mixed with features from other regions throughout the sequence). In this article, we propose a novel locally adaptive SSM-based approach, termed CD-Lamba, which effectively enhances the locality of change detection while maintaining global perception. Specifically, CD-Lamba includes a locally adaptive state-space scan (LASS) strategy for locality enhancement, a cross-temporal state-space scan strategy for bitemporal feature fusion, and a window shifting and perception mechanism to enhance interactions across segmented windows. These strategies are integrated into a multiscale cross-temporal LASS module to effectively highlight changes and refine their feature representations. CD-Lamba significantly enhances local-global spatiotemporal interactions in bitemporal images, offering improved performance in RSCD tasks. Extensive experimental results show that CD-Lamba achieves state-of-the-art performance on four benchmark datasets with a satisfactory efficiency-accuracy tradeoff.
{"title":"CD-Lamba: Boosting Remote Sensing Change Detection via a Cross-Temporal Locally Adaptive State Space Model","authors":"Zhenkai Wu;Xiaowen Ma;Kai Zheng;Rongrong Lian;Yun Chen;Zhenhua Huang;Wei Zhang;Siyang Song","doi":"10.1109/JSTARS.2025.3650075","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3650075","url":null,"abstract":"Mamba, with itsadvantages of global perception and linear complexity, has been widely applied to identify changes of the target regions within the remote sensing (RS) images captured under complex scenarios and varied conditions. However, existing remote sensing change detection (RSCD) approaches based on Mamba frequently struggle to effectively perceive the inherent locality of change regions as they direct flatten and scan RS images (i.e., the features of the same region of changes are not distributed continuously within the sequence but are mixed with features from other regions throughout the sequence). In this article, we propose a novel locally adaptive SSM-based approach, termed CD-Lamba, which effectively enhances the locality of change detection while maintaining global perception. Specifically, our CD-Lamba includes a locally adaptive state-space scan (LASS) strategy for locality enhancement, a cross-temporal state-space scan strategy for bitemporal feature fusion, and a window shifting and perception mechanism to enhance interactions across segmented windows. These strategies are integrated into a multiscale cross-temporal LASS module to effectively highlight changes and refine changes’ representations feature generation. CD-Lamba significantly enhances local–global spatio-temporal interactions in bitemporal images, offering improved performance in RSCD tasks. Extensive experimental results show that CD-Lamba achieves state-of-the-art performance on four benchmark datasets with a satisfactory efficiency-accuracy tradeoff.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4028-4044"},"PeriodicalIF":5.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11322867","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatiotemporal Evolution of Surface Subsidence in Large-Scale Mining Areas Under Rainfall Influence and Optimization Model Development
Pub Date : 2026-01-05 DOI: 10.1109/JSTARS.2025.3650498
Lei Chen;Haiping Xiao
Considering the challenges traditional monitoring methods face in achieving large-scale surface subsidence monitoring over mining areas, as well as the difficulties in modeling settlement prediction and acquiring model hyperparameters, this article integrates rainfall data from the mining area, analyzes the spatiotemporal evolution characteristics of surface subsidence using small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) technology, and proposes an APO-BiLSTM settlement prediction model. This model employs Arctic Puffin Optimization (APO) to optimize the hyperparameters of a bidirectional long short-term memory (BiLSTM) network. The results indicate that rainfall has caused the formation of nine distinct subsidence areas in the mining area, with Subsidence Area IX experiencing the most severe subsidence: it covers 9.31 km², with an average annual subsidence rate as high as -331 mm/a and a maximum cumulative subsidence of 427 mm. In the early stages of subsidence, a “subsidence-uplift-subsidence-uplift” pattern is observed, which gradually stabilizes in the later stages. In addition, compared to the LSTM and BiLSTM models, the proposed APO-BiLSTM model reduces the root mean square error of single-step predictions by 79.8% and 76.6%, respectively, and the mean absolute error by 79.1% and 75.9%, while increasing R² by 6.0% and 4.4%. The absolute error at 78.3% of the high-coherence points is less than 4 mm, indicating that the model has promising application prospects for large-scale surface subsidence prediction in mining areas.
{"title":"Spatiotemporal Evolution of Surface Subsidence in Large-Scale Mining Areas Under Rainfall Influence and Optimization Model Development","authors":"Lei Chen;Haiping Xiao","doi":"10.1109/JSTARS.2025.3650498","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3650498","url":null,"abstract":"Considering the challenges of traditional monitoring methods in achieving large-scale surface subsidence monitoring over mining areas, as well as the difficulties in modeling settlement prediction methods and acquiring model hyperparameters, this article integrates rainfall data from the mining area, analyzes the spatiotemporal evolution characteristics of surface subsidence using small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) technology, and proposes an APO-BiLSTM settlement prediction model. This model employs the Arctic Puffin Optimization (APO) to optimize the hyperparameters of a bidirectional long short-term memory (BiLSTM) network. The research results indicate that rainfall has caused the formation of nine distinct subsidence areas in the mining area, with Subsidence Area IX experiencing the most severe subsidence, covering an area of 9.31 km<sup>2</sup>, with an average annual subsidence rate as high as -331 mm/a and a maximum cumulative subsidence of 427 mm. In the early stages of subsidence, a “subsidence-lifting-subsidence-lifting” phenomenon is observed, which gradually stabilizes in the later stages. In addition, compared to the LSTM and BiLSTM models, the proposed APO-BiLSTM model reduces the root mean square error of single-step predictions by 79.8% and 76.6%, respectively, and the mean absolute error by 79.1% and 75.9%, while increasing the R<sup>2</sup> by 6.0% and 4.4% . The absolute error of 78.3% of the high coherence points is less than 4 mm, indicating that the model has promising application prospects in large-scale surface subsidence prediction in mining areas.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4045-4055"},"PeriodicalIF":5.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11328805","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ASENet: Thin Cloud Removal Network for Complex Scenes via Atmospheric Scattering Modeling and Feedback Enhancement
Pub Date : 2026-01-05 DOI: 10.1109/JSTARS.2025.3650563
Jiayi Liu;Zhe Guo;Rui Luo;Yi Liu;Shaohui Mei
In optical remote sensing, thin clouds pose a significant challenge for cloud removal due to their high brightness and spectral similarity to bright man-made objects, such as buildings. Existing thin cloud removal methods typically rely on a single feature extractor or a fixed physical model, which struggle to differentiate thin clouds from bright backgrounds in complex scenes, resulting in suboptimal image recovery. To address these issues, we propose the atmospheric scattering-driven recovery enhancement network (ASENet), a novel network that integrates atmospheric scattering modeling with a multilevel feedback enhancement mechanism to improve thin cloud removal in complex scenes. By learning the shape details of both thin clouds and ground features, ASENet dynamically adjusts weights in high-concentration cloud regions, ensuring clearer image recovery. Specifically, we design a feature fusion residual dehazing generator, which leverages deep residual blocks and high-resolution dehazing modules to capture environmental memory and enhance detail features, improving the model's adaptability and recovery accuracy in thin cloud regions. In addition, to better preserve the edges and textures of buildings and other ground objects, we introduce a spatial detail enhanced discriminator that incorporates cascaded feedback-based feature mapping. This enables ASENet to better capture image details, maintain structural consistency, and effectively distinguish thin clouds from high-reflectance background objects. Extensive experiments on three benchmark datasets, L8-ImgSet, RICE1, and WHUS2-CR, demonstrate that ASENet outperforms state-of-the-art methods across both subjective and objective evaluation metrics, proving its effectiveness in thin cloud removal under complex scenes.
{"title":"ASENet: Thin Cloud Removal Network for Complex Scenes via Atmospheric Scattering Modeling and Feedback Enhancement","authors":"Jiayi Liu;Zhe Guo;Rui Luo;Yi Liu;Shaohui Mei","doi":"10.1109/JSTARS.2025.3650563","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3650563","url":null,"abstract":"In optical remote sensing, thin clouds pose a significant challenge for cloud removal due to their high brightness and spectral similarity to bright man-made objects, such as buildings. Existing thin cloud removal methods typically rely on single feature extraction or fixed physical model, which struggle to differentiate thin clouds from bright backgrounds in complex scenes, resulting in suboptimal image recovery. To address these issues, we propose atmospheric scattering-driven recovery enhancement network (ASENet), a novel network that integrates atmospheric scattering modeling with multilevel feedback enhancement mechanism to improve thin cloud removal for complex scenes. By learning the shape details of both thin clouds and ground features, ASENet dynamically adjusts weights in high-concentration cloud regions, ensuring clearer image recovery. Specifically, we design a feature fusion residual dehazing generator, which leverages deep residual blocks and high-resolution dehazing modules to capture environmental memory and enhance detail features, improving the model's adaptability and recovery accuracy in thin cloud regions. In addition, to better preserve the edges and textures of buildings and other ground objects, we introduce a spatial detail enhanced discriminator that incorporates the cascaded feedback-based feature mapping. This enables ASENet to better capture image details, maintain structural consistency, and effectively distinguish thin clouds from high-reflectance background objects. Extensive experiments on three benchmark datasets L8-ImgSet, RICE1, and WHUS2-CR demonstrate that our proposed ASENet outperforms state-of-the-art methods across both subjective and objective evaluation metrics, proving its effectiveness in thin cloud removal tasks under complex scenes.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"3964-3982"},"PeriodicalIF":5.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11328777","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Learning-Based Model for Nowcasting of Convective Initiation Using Infrared Observations
Pub Date : 2026-01-05 DOI: 10.1109/JSTARS.2025.3650686
Huijie Zhao;Xiaohang Ma;Guorui Jia;Jialu Xu;Yihan Xie;Yujun Zhao
As severe convective weather exerts growing influence on public safety, enhancing forecast accuracy has become critically important. However, predictive capability remains limited due to insufficient observational coverage in certain regions or variables, as well as the inadequate representation of the fine-scale physical processes responsible for local convective development. In response to these challenges, this study proposes a physically embedded neural network based on heterogeneous meteorological data, which synergistically utilizes satellite multispectral images and atmospheric temperature and humidity profiles retrieved from space-based and ground-based infrared spectral observations to forecast local convective initiation (CI) within a 6-hour lead time. The core innovation of this study lies in the development of a physically consistent model that explicitly embeds the convective available potential energy (CAPE) equation into the network architecture. By embedding this physical information, the model enables the atmospheric thermodynamic feature extraction module to generate physically consistent feature tensors, thereby enhancing the representation of key convective processes. We trained the network using a pretraining and fine-tuning approach, then validated its effectiveness with reanalysis and actual observational data. The results demonstrate that incorporating the retrieved atmospheric profile data leads to a 40% improvement in the 6-hour average critical success index (CSI), increasing from 0.44 to 0.62 relative to forecasts without atmospheric profile input. Furthermore, in validation experiments using reanalysis data and radar observations, the proposed atmospheric profile feature extraction module consistently improves the model's average forecast CSI by more than 29% compared to models using purely data-driven profile extraction modules.
{"title":"A Deep Learning-Based Model for Nowcasting of Convective Initiation Using Infrared Observations","authors":"Huijie Zhao;Xiaohang Ma;Guorui Jia;Jialu Xu;Yihan Xie;Yujun Zhao","doi":"10.1109/JSTARS.2025.3650686","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3650686","url":null,"abstract":"As severe convective weather exerts growing influence on public safety, enhancing forecast accuracy has become critically important. However, the predictive capability remains limited due to insufficient observational coverageenlrg in certain regions or variables, as well as the inadequate representation of the fine-scale physical processes responsible for local convective development. In response to these challenges, this study proposes a physically embedded neural network based on heterogeneous meteorological data, which utilizes satellite multispectral images and atmospheric temperature and humidity profile synergistically retrieved from space-based and ground-based infrared spectral observations, to forecast local convective initiation (CI) within a 6-hour lead time. The core innovation of this study lies in the development of a physically consistent model that explicitly embeds the convective available potential energy equation into the network architecture. By embedding physical information, the model enables the atmospheric thermodynamic feature extraction module to generate physically consistent feature tensors, thereby enhancing the representation of key convective processes. We trained the network using the pretraining and fine-tuning approach, then validated its effectiveness with reanalysis and actual observational data. The results demonstrate that incorporating the retrieved atmospheric profile data leads to a 40% improvement in the 6-hour average critical success index (CSI), increasing from 0.44 to 0.62 relative to forecasts without atmospheric profile input. Furthermore, in validation experiments using reanalysis data and radar observations, the proposed atmospheric profile feature extraction module consistently improves the model’s average forecast CSI by more than 29% compared to models utilizing purely data-driven profile extraction modules.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4188-4202"},"PeriodicalIF":5.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11328812","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Few-Shot Object Detection on Remote Sensing Images Based on Decoupled Training, Contrastive Learning, and Self-Training
Pub Date : 2026-01-01 DOI: 10.1109/JSTARS.2025.3650394
Shun Zhang;Xuebin Zhang;Yaohui Xu;Ke Wang
Few-shot object detection (FSOD) in remote sensing imagery faces two critical challenges compared to general methods trained on large datasets. First, the few labeled instances available as a training set significantly limit the feature representation learning of deep neural networks. Second, remote sensing images contain complicated backgrounds and multiple objects with greatly different sizes in the same image, which leads the detector to produce large numbers of false alarms and missed detections. This article proposes an FSOD framework (called DeCL-Det) that applies self-training to generate high-quality pseudoannotations from unlabeled target-domain data. These refined pseudolabels are iteratively integrated into the training set to expand supervision for novel classes. An auxiliary network is introduced to mitigate label noise by rectifying misclassifications in pseudolabeled regions, ensuring robust learning. For multiscale feature learning, we propose a gradient-decoupled framework, GCFPN, combining a feature pyramid network (FPN) with a gradient decoupled layer (GDL). The FPN extracts multiscale feature representations, while the GDL decouples the modules between the region proposal network and the RCNN head into two stages, or tasks, through gradients. The two modules train Faster R-CNN in a decoupled way to facilitate multiscale feature learning of novel objects. To further enhance classification ability, we introduce a supervised contrastive learning head to strengthen feature discrimination, reinforcing robustness in FSOD. Experiments on the DIOR dataset indicate that our method performs better than several existing approaches and achieves competitive results.
{"title":"Few-Shot Object Detection on Remote Sensing Images Based on Decoupled Training, Contrastive Learning, and Self-Training","authors":"Shun Zhang;Xuebin Zhang;Yaohui Xu;Ke Wang","doi":"10.1109/JSTARS.2025.3650394","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3650394","url":null,"abstract":"Few-shot object detection (FSOD) in remote sensing imagery faces two critical challenges compared to general methods trained on large datasets, first, only a few labeled instances leveraged as the training set significantly limit the feature representation learning of deep neural networks; second, Remote sensing image data contain complicated background and multiple objects with greatly different sizes in the same image, which leads the detector to large numbers of false alarms and miss detections. This article proposes a FSOD framework (called DeCL-Det) that applies self-training to generate high-quality pseudoannotations from unlabeled target domain data. These refined pseudolabels are iteratively integrated into the training set to expand supervision for novel classes. An auxiliary network is introduced to mitigate label noise by rectifying misclassifications in pseudolabeled regions, ensuring robust learning. For multiscale feature learning, we propose a gradient-decoupled framework, GCFPN, combining feature pyramid networks (FPN) with a gradient decoupled layer (GDL). FPN is to extract multiscale feature representations, and GDL is to decouple the modules between the region proposal network and RCNN head into two stages or tasks through gradients. The two modules, FPN and GDL, train Faster R-CNN in a decoupled way to facilitate the multiscale feature learning of novel objects. To further enhance the classification ability, we introduce a supervised contrastive learning head to enhance feature discrimination, reinforcing robustness in FSOD. Experiments on the DIOR dataset indicate that our method performs better than several existing approaches and achieves competitive results.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"3983-3997"},"PeriodicalIF":5.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11321270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneous RFI Mitigation in Image-Domain via Subimage Segmentation and Local Frequency Feature Analysis
Pub Date : 2025-12-31 DOI: 10.1109/JSTARS.2025.3649816
Siqi Lai;Mingliang Tao;Yanyang Liu;Lei Cui;Jia Su;Ling Wang
Radio frequency interference (RFI) may degrade the quality of remote sensing images acquired by spaceborne synthetic aperture radar (SAR). In the interferometric wide-swath mode of the Sentinel-1 satellite, the SAR receiver may capture multiple types of RFI signals within a single observation period, referred to as heterogeneous RFI, which increases the complexity of interference detection and mitigation. This article proposes a heterogeneous interference mitigation method based on subimage segmentation and local spectral feature analysis. The proposed method divides the original single-look complex image into multiple subimages along the range direction, enhancing the representation of interference features in the range-frequency domain. Spectral analysis is then performed on each subimage to detect and mitigate interference. Finally, the image after RFI mitigation is reconstructed by stitching the subimages together. Experiments were conducted using simulated interference data generated from LuTan-1 and measured interference data from Sentinel-1. The results demonstrate that the proposed method can effectively mitigate RFI artifacts in various typical interference scenarios and restore the obscured ground-object information in the images.
{"title":"Heterogeneous RFI Mitigation in Image-Domain via Subimage Segmentation and Local Frequency Feature Analysis","authors":"Siqi Lai;Mingliang Tao;Yanyang Liu;Lei Cui;Jia Su;Ling Wang","doi":"10.1109/JSTARS.2025.3649816","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3649816","url":null,"abstract":"Radio frequency interference (RFI) may degrade the quality of remote sensing images acquired by spaceborne synthetic aperture radar (SAR). In the interferometric wide-swath mode of the Sentinel-1 satellite, the SAR receiver may capture multiple types of RFI signals within a single observation period, which is referred to as heterogeneous RFI, increasing the complexity of interference detection and mitigation. This article proposes a heterogeneous interference mitigation method based on subimage segmentation and local spectral features analysis. The proposed method divides the original single look complex image into multiple subimages along the range direction, enhancing the representation of interference features in the range frequency domain. Spectral analysis is then performed on each subimage to detect and mitigate interference. Finally, the image after RFI mitigation is reconstructed by stitching the subimages together. Experiments were conducted using simulated interference data generated from LuTan-1 and measured interference data from Sentinel-1. The results demonstrate that the proposed method can effectively mitigate RFI artifacts in various typical interference scenarios and restore the obscured ground object information in the images.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4069-4084"},"PeriodicalIF":5.3,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11320316","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}