
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing: Latest Publications

Spatiotemporal Heterogeneity in Greenland Firn From the Synthesis of Satellite Radar Altimetry and Passive Microwave Measurements
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-12 · DOI: 10.1109/JSTARS.2026.3651847
Kirk M. Scanlan;Anja Rutishauser;Sebastian B. Simonsen
The spatiotemporal properties of the Greenland Ice Sheet firn layer are an important factor when assessing overall ice sheet mass balance and internal meltwater storage capacity. The firn layer is an increasingly common target for the satellite remote sensing community, and this study investigates the recovery of vertical firn density heterogeneity over a ten-year period from the synthesis of passive microwave and active radar altimetry measurements. The mismatch between ESA SMOS observations and a passive microwave forward model, initialized with surface densities estimated from the backscatter strength of ISRO/CNES SARAL and ESA CryoSat-2, serves as a proxy for vertical density variability. Validated with in situ measurements, the results reveal clear long-term patterns in Greenland firn heterogeneity, characterized by spatially expansive, sharp increases in heterogeneity following extreme melt seasons that require multiple quiescent years to recover. By the start of the 2023 melt season (i.e., the end of the timeframe considered), the Greenland firn layer had reached its most heterogeneous state of the preceding decade. Continued investigation into the synthesis of different remote sensing datasets represents a pathway toward novel insights into the spatiotemporal evolution of Greenland Ice Sheet surface conditions.
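The measurement synthesis described above reduces to computing a residual between observed brightness temperatures and a forward model initialized with altimetry-derived surface densities. A minimal sketch of that residual-as-proxy idea (the linear `toy_model` is a placeholder invented here for illustration, not the paper's actual passive microwave forward model):

```python
import numpy as np

def heterogeneity_proxy(tb_observed, rho_surface, forward_model):
    """Residual between observed brightness temperature and a forward
    model initialized with surface density; larger residuals indicate
    stronger vertical density variability in the firn column."""
    return tb_observed - forward_model(rho_surface)

# Placeholder forward model (an assumption for illustration): brightness
# temperature taken as linear in surface density for a homogeneous column.
toy_model = lambda rho: 150.0 + 0.2 * rho

tb_obs = np.array([220.0, 231.0, 240.0])    # K, e.g., SMOS L-band
rho_sfc = np.array([330.0, 350.0, 400.0])   # kg/m^3, from backscatter
proxy = heterogeneity_proxy(tb_obs, rho_sfc, toy_model)
```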
Citations: 0
Learning Boundary-Aware Semantic Context Network for Remote Sensing Change Detection
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-12 · DOI: 10.1109/JSTARS.2026.3651696
Weiran Zhou;Guanting Guo;Huihui Song;Xu Zhang;Kaihua Zhang
Remote sensing change detection aims to identify changes on the Earth's surface from remote sensing images acquired at different times. However, the identification of changed areas is often hindered by pseudochanges in similar objects, leading to inaccurate identification of change boundaries. To address this issue, we propose a novel network named boundary-guided semantic context network (BSCNet), which decouples features to improve the feature representation ability for changing objects. Specifically, we design a selective context fusion module that selectively fuses semantically rich features by computing the similarity between features from adjacent stages of the backbone network, thereby preventing detailed features from being overwhelmed by contextual information. In addition, to enhance the ability to perceive changes, we design a context fast aggregation module that leverages a pyramid structure to help the model simultaneously extract and fuse detailed and semantic information at different scales, enabling more accurate change detection. Finally, we design a boundary-guided feature fusion module to aggregate edge-level, texture-level, and semantic-level information, which enables the network to represent change regions more comprehensively and precisely. Experimental results on the WHU-CD, LEVIR-CD, and SYSU-CD datasets show that BSCNet achieves F1 scores of 94.92%, 92.19%, and 82.55%, respectively.
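The selective context fusion idea, fusing adjacent-stage features weighted by their similarity, can be sketched as follows (an illustrative NumPy version using per-pixel channelwise cosine similarity as the gate; the paper's exact module design is not reproduced here):

```python
import numpy as np

def selective_context_fusion(shallow, deep):
    """Fuse adjacent-stage feature maps of shape (C, H, W), gated by
    per-pixel channelwise cosine similarity: context from the deeper
    stage dominates only where the two stages agree, so detail features
    are not overwhelmed elsewhere. `deep` is assumed already upsampled
    to the shallow stage's spatial resolution."""
    num = (shallow * deep).sum(axis=0)
    den = np.linalg.norm(shallow, axis=0) * np.linalg.norm(deep, axis=0) + 1e-8
    gate = (0.5 * (num / den + 1.0))[None]   # (1, H, W), rescaled to [0, 1]
    return gate * deep + (1.0 - gate) * shallow
```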
Citations: 0
PyramidMamba: An Effective Hyperspectral Remote Sensing Image Target Detection Network
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-12 · DOI: 10.1109/JSTARS.2026.3650961
Shixin Liu;Pingyu Liu;Xiaofei Wang
The lack of prior knowledge is a challenging issue in target detection tasks for hyperspectral remote sensing images. In this article, we propose an effective network for target detection in hyperspectral remote sensing images. First, through spectral data augmentation, all surrounding pixels within a data block are encoded as transformed spectral signatures of the central pixel, thereby constructing a sufficient number of training sample pairs. Subsequently, a backbone network (PyramidMamba) is designed to establish long-term dependencies across the frequency domain and multiscale dimensions using a Mamba residual module and a pyramid wavelet transform module. A residual self-attention module is further developed, integrating self-attention with convolutional operations to enhance feature extraction while improving the network's depth and stability. The backbone network extracts representative vectors from the augmented sample pairs, which are then optimized through a spectral contrast head to enhance the distinction between target and background features. Experimental results demonstrate that, compared with mainstream algorithms, the proposed algorithm achieves higher detection accuracy and computational efficiency. It learns deep nonlinear feature representations with stronger discriminative power, enabling effective separation of targets from background and delivering state-of-the-art performance.
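The augmentation step, pairing each central pixel's spectrum with the spectra of its surrounding block to build sample pairs, can be sketched as follows (a simplified illustration; the helper name `make_pairs` and the identity "transform" are assumptions, since the paper's exact encoding transform is not given here):

```python
import numpy as np

def make_pairs(cube, r=1):
    """Pair each interior central-pixel spectrum with every spectrum in
    its (2r+1) x (2r+1) block, yielding (anchor, view) training pairs
    for spectral contrastive learning. cube: (H, W, B) hyperspectral data."""
    H, W, B = cube.shape
    anchors, views = [], []
    for i in range(r, H - r):
        for j in range(r, W - r):
            center = cube[i, j]
            block = cube[i - r:i + r + 1, j - r:j + r + 1].reshape(-1, B)
            for spec in block:
                anchors.append(center)
                views.append(spec)
    return np.array(anchors), np.array(views)
```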
Citations: 0
SAR Vehicle Data Generation With Scattering Features for Target Recognition
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-12 · DOI: 10.1109/JSTARS.2026.3652520
Dongdong Guan;Rui Feng;Yuzhen Xie;Huaiyue Ding;Yang Cui;Deliang Xiang
As is well known, obtaining high-quality measured SAR vehicle data is difficult. As a result, deep learning-based data generation is frequently utilized for SAR target augmentation because of its affordability and simplicity of use. However, existing methods do not adequately consider the target scattering information during data generation, resulting in generated target SAR data that does not conform to the physical scattering laws of SAR imaging. In this article, we propose a SAR target data generation method based on target scattering features and cycle-consistent generative adversarial networks (CycleGAN). First, a physical model-based method called orthogonal matching pursuit (OMP) is adopted to extract the attribute scattering centers (ASCs) of SAR vehicle targets. Then, a multidimensional SAR target feature representation is constructed. Based on the scattering difference between the generated and real SAR target images, we introduce a loss function and further develop a generative model based on the CycleGAN. Therefore, the scattering mechanisms of SAR targets can be well learned, making the generated SAR data conform to the target scattering features. We conduct SAR target generation experiments under standard operating conditions (SOCs) and extended operating conditions (EOCs) on our self-acquired dataset as well as SAMPLE and MSTAR datasets. The SAR vehicle target data generated under SOC shows a more accurate scattering feature distribution to the real target data than other state-of-the-art methods. In addition, we generate SAR target data under EOC that conforms to SAR imaging patterns by modulating ASC feature parameters. Finally, the target recognition performance based on our proposed generated SAR vehicle data under SOC is validated, where the recognition rate increased by 4% after the addition of our generated target data.
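Orthogonal matching pursuit itself is a standard greedy sparse-recovery algorithm and can be sketched as follows (a generic implementation over an arbitrary atom dictionary; the ASC-specific dictionary construction used in the paper is not reproduced here):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms
    (columns of D, assumed unit-norm) that best explain y, re-fitting
    the coefficients by least squares at every step. In ASC extraction,
    atoms would be scattering-center responses and y the measured
    target signature; here both are generic vectors."""
    residual, support = y.astype(float), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x
```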
Citations: 0
CD-Lamba: Boosting Remote Sensing Change Detection via a Cross-Temporal Locally Adaptive State Space Model
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-05 · DOI: 10.1109/JSTARS.2025.3650075
Zhenkai Wu;Xiaowen Ma;Kai Zheng;Rongrong Lian;Yun Chen;Zhenhua Huang;Wei Zhang;Siyang Song
Mamba, with its advantages of global perception and linear complexity, has been widely applied to identify changes of target regions within remote sensing (RS) images captured under complex scenarios and varied conditions. However, existing remote sensing change detection (RSCD) approaches based on Mamba frequently struggle to effectively perceive the inherent locality of change regions because they directly flatten and scan RS images (i.e., the features of the same change region are not distributed continuously within the sequence but are mixed with features from other regions throughout the sequence). In this article, we propose a novel locally adaptive SSM-based approach, termed CD-Lamba, which effectively enhances the locality of change detection while maintaining global perception. Specifically, our CD-Lamba includes a locally adaptive state-space scan (LASS) strategy for locality enhancement, a cross-temporal state-space scan strategy for bitemporal feature fusion, and a window shifting and perception mechanism to enhance interactions across segmented windows. These strategies are integrated into a multiscale cross-temporal LASS module to effectively highlight changes and refine the feature representations of change regions. CD-Lamba significantly enhances local-global spatio-temporal interactions in bitemporal images, offering improved performance in RSCD tasks. Extensive experimental results show that CD-Lamba achieves state-of-the-art performance on four benchmark datasets with a satisfactory efficiency-accuracy tradeoff.
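The locality problem with plain row-major flattening, and the window-based remedy, can be illustrated with a simple reordering (an illustrative sketch only; the paper's LASS strategy is adaptive, which this fixed-window version does not capture):

```python
import numpy as np

def window_scan(feat, win):
    """Reorder an (H, W, C) feature map into a 1-D token sequence by
    scanning window-by-window, so tokens belonging to the same local
    region stay contiguous in the sequence instead of being interleaved
    with other regions as a plain row-major flatten would do."""
    H, W, C = feat.shape
    tokens = []
    for wi in range(0, H, win):
        for wj in range(0, W, win):
            tokens.append(feat[wi:wi + win, wj:wj + win].reshape(-1, C))
    return np.concatenate(tokens, axis=0)   # (H*W, C)
```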
Citations: 0
Spatiotemporal Evolution of Surface Subsidence in Large-Scale Mining Areas Under Rainfall Influence and Optimization Model Development
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-05 · DOI: 10.1109/JSTARS.2025.3650498
Lei Chen;Haiping Xiao
Considering the challenges of traditional monitoring methods in achieving large-scale surface subsidence monitoring over mining areas, as well as the difficulties in modeling settlement prediction methods and acquiring model hyperparameters, this article integrates rainfall data from the mining area, analyzes the spatiotemporal evolution characteristics of surface subsidence using small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) technology, and proposes an APO-BiLSTM settlement prediction model. This model employs Arctic Puffin Optimization (APO) to optimize the hyperparameters of a bidirectional long short-term memory (BiLSTM) network. The results indicate that rainfall has driven the formation of nine distinct subsidence areas in the mining area, with Subsidence Area IX experiencing the most severe subsidence: it covers 9.31 km², with an average annual subsidence rate as high as -331 mm/a and a maximum cumulative subsidence of 427 mm. In the early stages of subsidence, a "subsidence-lifting-subsidence-lifting" phenomenon is observed, which gradually stabilizes in the later stages. In addition, compared with the LSTM and BiLSTM models, the proposed APO-BiLSTM model reduces the root mean square error of single-step predictions by 79.8% and 76.6%, respectively, and the mean absolute error by 79.1% and 75.9%, while increasing R² by 6.0% and 4.4%. The absolute error at 78.3% of the high-coherence points is less than 4 mm, indicating that the model has promising application prospects for large-scale surface subsidence prediction in mining areas.
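The hyperparameter optimization loop described above has the generic structure below (plain random search is used here as a stand-in for Arctic Puffin Optimization, whose update rules are not reproduced; the objective is a toy surrogate for validation RMSE of the BiLSTM):

```python
import numpy as np

def metaheuristic_search(objective, bounds, iters=30, seed=0):
    """Minimize `objective` over box-bounded hyperparameters by sampling
    candidates and keeping the best. Population-based methods such as
    APO refine candidates between iterations; this sketch only shows
    the evaluate-and-keep-best skeleton. bounds: name -> (lo, hi)."""
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(iters):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Toy objective (an assumption): validation RMSE as a smooth function of
# two hypothetical BiLSTM hyperparameters, learning rate and hidden size.
obj = lambda h: (h["lr"] - 0.01) ** 2 + (h["hidden"] / 128 - 1.0) ** 2
best, best_rmse = metaheuristic_search(obj, {"lr": (1e-4, 0.1), "hidden": (16, 256)})
```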
Citations: 0
ASENet: Thin Cloud Removal Network for Complex Scenes via Atmospheric Scattering Modeling and Feedback Enhancement
IF 5.3 · CAS Zone 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2026-01-05 · DOI: 10.1109/JSTARS.2025.3650563
Jiayi Liu;Zhe Guo;Rui Luo;Yi Liu;Shaohui Mei
In optical remote sensing, thin clouds pose a significant challenge for cloud removal due to their high brightness and spectral similarity to bright man-made objects, such as buildings. Existing thin cloud removal methods typically rely on single-feature extraction or fixed physical models, which struggle to differentiate thin clouds from bright backgrounds in complex scenes, resulting in suboptimal image recovery. To address these issues, we propose the atmospheric scattering-driven recovery enhancement network (ASENet), a novel network that integrates atmospheric scattering modeling with a multilevel feedback enhancement mechanism to improve thin cloud removal in complex scenes. By learning the shape details of both thin clouds and ground features, ASENet dynamically adjusts weights in high-concentration cloud regions, ensuring clearer image recovery. Specifically, we design a feature fusion residual dehazing generator, which leverages deep residual blocks and high-resolution dehazing modules to capture environmental memory and enhance detail features, improving the model's adaptability and recovery accuracy in thin cloud regions. In addition, to better preserve the edges and textures of buildings and other ground objects, we introduce a spatial detail enhanced discriminator that incorporates cascaded feedback-based feature mapping. This enables ASENet to better capture image details, maintain structural consistency, and effectively distinguish thin clouds from high-reflectance background objects. Extensive experiments on three benchmark datasets, L8-ImgSet, RICE1, and WHUS2-CR, demonstrate that ASENet outperforms state-of-the-art methods across both subjective and objective evaluation metrics, proving its effectiveness in thin cloud removal tasks under complex scenes.
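Methods in this family are commonly grounded in the atmospheric scattering model I = J·t + A·(1 − t), where I is the observed radiance, J the cloud-free scene, A the airlight, and t the transmission. A minimal inversion assuming A and t are known (in practice a network like the one above would effectively estimate them per pixel):

```python
import numpy as np

def remove_thin_cloud(I, A, t):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) for
    the cloud-free radiance J, given airlight A and transmission t."""
    t = np.clip(t, 0.1, 1.0)   # avoid amplifying noise where t ~ 0
    return (I - A * (1.0 - t)) / t

I = np.array([0.8, 0.6])       # observed, cloud-brightened radiances
J = remove_thin_cloud(I, A=1.0, t=np.array([0.5, 0.8]))
```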
Citations: 0
A Deep Learning-Based Model for Nowcasting of Convective Initiation Using Infrared Observations
IF 5.3 CAS Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-01-05 DOI: 10.1109/JSTARS.2025.3650686
Huijie Zhao;Xiaohang Ma;Guorui Jia;Jialu Xu;Yihan Xie;Yujun Zhao
As severe convective weather exerts a growing influence on public safety, enhancing forecast accuracy has become critically important. However, predictive capability remains limited due to insufficient observational coverage of certain regions or variables, as well as the inadequate representation of the fine-scale physical processes responsible for local convective development. In response to these challenges, this study proposes a physically embedded neural network based on heterogeneous meteorological data, which synergistically uses satellite multispectral images together with atmospheric temperature and humidity profiles retrieved from space-based and ground-based infrared spectral observations to forecast local convective initiation (CI) within a 6-hour lead time. The core innovation of this study lies in the development of a physically consistent model that explicitly embeds the convective available potential energy equation into the network architecture. By embedding physical information, the model enables the atmospheric thermodynamic feature extraction module to generate physically consistent feature tensors, thereby enhancing the representation of key convective processes. We trained the network using a pretraining and fine-tuning approach, then validated its effectiveness with reanalysis and actual observational data. The results demonstrate that incorporating the retrieved atmospheric profile data leads to a 40% improvement in the 6-hour average critical success index (CSI), increasing from 0.44 to 0.62 relative to forecasts without atmospheric profile input. Furthermore, in validation experiments using reanalysis data and radar observations, the proposed atmospheric profile feature extraction module consistently improves the model's average forecast CSI by more than 29% compared to models using purely data-driven profile extraction modules.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 4188-4202.
Citations: 0
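The critical success index (CSI) quoted above (0.44 → 0.62) is a standard verification score for binary event forecasts: CSI = hits / (hits + misses + false alarms), so a perfect forecast scores 1.0. A minimal sketch of the computation, with toy forecast arrays invented for illustration:

```python
import numpy as np

def critical_success_index(pred, obs):
    """CSI (threat score) for binary event forecasts.

    hits: event forecast and observed; misses: observed but not forecast;
    false alarms: forecast but not observed. Correct negatives are ignored.
    """
    pred = np.asarray(pred, dtype=bool)
    obs = np.asarray(obs, dtype=bool)
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false_alarms = np.sum(pred & ~obs)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom > 0 else float("nan")

# Toy example: 4 hits, 1 false alarm, 1 miss -> CSI = 4 / 6
pred = [1, 1, 1, 1, 1, 0, 0]
obs  = [1, 1, 1, 1, 0, 1, 0]
print(critical_success_index(pred, obs))  # 0.666...
```

Because correct negatives are excluded from the denominator, CSI is well suited to rare events such as convective initiation, where "no event" pixels dominate.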
Few-Shot Object Detection on Remote Sensing Images Based on Decoupled Training, Contrastive Learning, and Self-Training
IF 5.3 CAS Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-01-01 DOI: 10.1109/JSTARS.2025.3650394
Shun Zhang;Xuebin Zhang;Yaohui Xu;Ke Wang
Few-shot object detection (FSOD) in remote sensing imagery faces two critical challenges compared to general methods trained on large datasets. First, the few labeled instances available as the training set significantly limit the feature representation learning of deep neural networks; second, remote sensing images contain complicated backgrounds and multiple objects of greatly different sizes in the same image, which leads the detector to large numbers of false alarms and missed detections. This article proposes an FSOD framework (called DeCL-Det) that applies self-training to generate high-quality pseudoannotations from unlabeled target-domain data. These refined pseudolabels are iteratively integrated into the training set to expand supervision for novel classes. An auxiliary network is introduced to mitigate label noise by rectifying misclassifications in pseudolabeled regions, ensuring robust learning. For multiscale feature learning, we propose a gradient-decoupled framework, GCFPN, combining a feature pyramid network (FPN) with a gradient decoupled layer (GDL). The FPN extracts multiscale feature representations, and the GDL decouples the modules between the region proposal network and the RCNN head into two stages or tasks through gradients. The two modules, FPN and GDL, train Faster R-CNN in a decoupled way to facilitate multiscale feature learning of novel objects. To further enhance classification ability, we introduce a supervised contrastive learning head to strengthen feature discrimination, reinforcing robustness in FSOD. Experiments on the DIOR dataset indicate that our method performs better than several existing approaches and achieves competitive results.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 3983-3997.
Citations: 0
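The self-training step described above hinges on keeping only high-confidence detections on unlabeled images as pseudo-annotations for the next training round. A minimal sketch of that confidence-threshold selection follows; the toy detections and the `score_thresh` value are assumptions for illustration, not the DeCL-Det pipeline's actual settings (which additionally refine labels with an auxiliary network).

```python
def select_pseudo_labels(detections, score_thresh=0.8):
    """Keep only detections whose confidence exceeds the threshold.

    Each detection is a dict with a bounding box, a class label, and a
    confidence score; the survivors serve as pseudo-annotations.
    """
    return [d for d in detections if d["score"] >= score_thresh]

# Invented toy detections on one unlabeled image.
detections = [
    {"box": [10, 10, 50, 50], "cls": "airplane", "score": 0.93},
    {"box": [60, 20, 90, 70], "cls": "ship", "score": 0.55},    # too uncertain
    {"box": [5, 80, 40, 120], "cls": "airplane", "score": 0.81},
]
print([d["cls"] for d in select_pseudo_labels(detections)])  # ['airplane', 'airplane']
```

The threshold trades label quantity against label noise: lowering it expands supervision for novel classes but admits more misclassifications, which is why a noise-correction mechanism matters.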
Heterogeneous RFI Mitigation in Image-Domain via Subimage Segmentation and Local Frequency Feature Analysis
IF 5.3 CAS Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-12-31 DOI: 10.1109/JSTARS.2025.3649816
Siqi Lai;Mingliang Tao;Yanyang Liu;Lei Cui;Jia Su;Ling Wang
Radio frequency interference (RFI) may degrade the quality of remote sensing images acquired by spaceborne synthetic aperture radar (SAR). In the interferometric wide-swath mode of the Sentinel-1 satellite, the SAR receiver may capture multiple types of RFI signals within a single observation period, referred to as heterogeneous RFI, which increases the complexity of interference detection and mitigation. This article proposes a heterogeneous interference mitigation method based on subimage segmentation and local spectral feature analysis. The proposed method divides the original single-look complex image into multiple subimages along the range direction, enhancing the representation of interference features in the range-frequency domain. Spectral analysis is then performed on each subimage to detect and mitigate interference. Finally, the image after RFI mitigation is reconstructed by stitching the subimages together. Experiments were conducted using simulated interference data generated from LuTan-1 and measured interference data from Sentinel-1. The results demonstrate that the proposed method can effectively mitigate RFI artifacts in various typical interference scenarios and restore the obscured ground object information in the images.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 4069-4084.
Citations: 0
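The subimage-plus-spectrum idea can be sketched as: split the image into subimages along range, flag anomalous spikes in each subimage's range spectrum, notch them out, and stitch the filtered subimages back together. The sketch below is a generic illustration on real-valued toy data, not the paper's algorithm; the median + k·MAD spike detector and every parameter value are assumptions.

```python
import numpy as np

def notch_rfi(image, n_sub=4, k=5.0):
    """Toy subimage spectral notch filter for narrowband RFI.

    Splits along range (columns), zeroes range-frequency bins whose
    magnitude exceeds a robust per-row threshold, then re-stitches.
    """
    subs = np.array_split(image, n_sub, axis=1)
    cleaned = []
    for sub in subs:
        spec = np.fft.fft(sub, axis=1)                    # range spectrum per row
        mag = np.abs(spec)
        med = np.median(mag, axis=1, keepdims=True)
        mad = np.median(np.abs(mag - med), axis=1, keepdims=True) + 1e-12
        spec[mag > med + k * mad] = 0.0                   # notch flagged bins
        cleaned.append(np.fft.ifft(spec, axis=1).real)
    return np.concatenate(cleaned, axis=1)

# Synthetic scene: Gaussian clutter plus a strong narrowband tone (the "RFI").
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 64))
img += 20 * np.cos(2 * np.pi * 10 * np.arange(64) / 64)
out = notch_rfi(img)
print(out.shape, np.std(out) < np.std(img))              # tone energy is suppressed
```

Working per subimage localizes the detection, so interference present in only part of the swath does not trigger notching elsewhere — the motivation for segmenting before the spectral analysis.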
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing