Distributed multiple-input–multiple-output synthetic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, utilizing multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate synchronization errors. By comparing echoes between different sensors, the synchronization errors can be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing the reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.
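The estimate-then-compensate idea can be illustrated with a toy time-domain back-projection, where a known per-pulse clock offset stands in for the estimated synchronization error. All parameters, the single point target, and the simulation itself are illustrative assumptions, not taken from the letter:

```python
import numpy as np

# Illustrative parameters (assumed, not from the letter): X-band carrier, one point target.
c, fc, fs = 3e8, 1e10, 2e8
n_pulses, n_range = 64, 512
rng = np.random.default_rng(0)

sensor_x = np.linspace(-50.0, 50.0, n_pulses)   # sensor track along x (m)
target = np.array([0.0, 300.0])                 # target position (m)
sync_err = rng.normal(0.0, 2e-9, n_pulses)      # per-pulse clock offsets (s)

# Simulate range-compressed echoes: a sinc at the (offset) two-way delay.
t = np.arange(n_range) / fs
echoes = np.zeros((n_pulses, n_range), complex)
for p in range(n_pulses):
    tau = 2 * np.hypot(sensor_x[p] - target[0], target[1]) / c + sync_err[p]
    echoes[p] = np.sinc(fs * (t - tau)) * np.exp(-2j * np.pi * fc * tau)

def backproject(sync_est):
    """Coherently accumulate pulses at the scene point after removing
    the estimated per-pulse synchronization offsets."""
    acc = 0j
    bins = np.arange(n_range)
    for p in range(n_pulses):
        tau = 2 * np.hypot(sensor_x[p] - target[0], target[1]) / c
        pos = (tau + sync_est[p]) * fs          # fractional range bin
        s = np.interp(pos, bins, echoes[p].real) \
            + 1j * np.interp(pos, bins, echoes[p].imag)
        acc += s * np.exp(2j * np.pi * fc * (tau + sync_est[p]))
    return abs(acc)

focused = backproject(sync_err)                 # offsets compensated
blurred = backproject(np.zeros(n_pulses))       # offsets ignored
print(focused > 2 * blurred)                    # compensation restores coherence
```

Because the carrier phase rotates by many cycles per nanosecond of clock error, uncompensated offsets randomize the pulse phases and the coherent sum collapses; removing the offsets before accumulation restores the focused response.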
{"title":"Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System","authors":"Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic","doi":"10.1109/LGRS.2025.3603396","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603396","url":null,"abstract":"Distributed multiple-input–multiple-output synthetic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, utilizing multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate synchronization errors. By comparing echoes between different sensors, the synchronization errors can be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing the reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-26; DOI: 10.1109/LGRS.2025.3602896
Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang
Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Due to the significant scale variations among targets, complex backgrounds with dense small object distributions, and strong intertarget scene correlations, existing target detection methods usually fail to effectively model target relationships and contextual information for remote sensing imagery. To address these limitations, we propose YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key components. First, a full-dimensional dynamic convolution reconstruction C2f module enhances target feature representation by overcoming local context extraction limitations and target co-occurrence prior deficiencies. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive field features through spatial attention, enabling adaptive background window selection and cross-scale feature alignment. Finally, a co-occurrence matrix-integrated classification auxiliary module mines target association rules through data-driven learning, correcting classification probabilities in low-confidence areas by combining high-confidence areas’ co-occurrence information with an optimal threshold, which significantly reduces missed detection rates. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method achieves state-of-the-art performance while addressing the unique challenges of remote sensing target detection.
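The co-occurrence correction is described only at a high level, but the general idea can be sketched: scores of low-confidence classes are blended with a co-occurrence prior derived from the classes detected with high confidence in the same scene. The class names, matrix values, threshold, and blending rule below are all illustrative assumptions:

```python
import numpy as np

# Hypothetical class set and learned co-occurrence matrix: entry (i, j) is the
# likelihood of class j appearing in a scene where class i is present.
classes = ["plane", "runway", "ship", "harbor"]
cooc = np.array([
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

def refine(scores, hi_thresh=0.7, alpha=0.5):
    """Blend low-confidence class scores with co-occurrence evidence
    from high-confidence detections in the same image."""
    scores = np.asarray(scores, float)
    hi = scores >= hi_thresh
    if not hi.any():
        return scores                       # nothing confident to lean on
    prior = cooc[hi].max(axis=0)            # prior given the confident classes
    out = scores.copy()
    lo = ~hi
    out[lo] = (1 - alpha) * scores[lo] + alpha * scores[lo] * prior[lo]
    return out

# A confident "runway" lifts a borderline "plane" relative to a borderline "ship".
s = refine([0.4, 0.9, 0.4, 0.1])
print(s[0] > s[2])   # True
```

Here two classes with identical raw scores end up ranked differently once scene context is taken into account, which is the effect the auxiliary module aims for.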
{"title":"YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection","authors":"Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang","doi":"10.1109/LGRS.2025.3602896","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602896","url":null,"abstract":"Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Due to the significant scale variations among targets, complex backgrounds with dense small object distributions, and strong intertarget scene correlations, existing target detection methods usually fail to effectively model target relationships and contextual information for remote sensing imagery. To address these limitations, we proposed YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key points. First, a full-dimensional dynamic convolution reconstruction C2f module enhances target feature representation by overcoming local context extraction limitations and target co-occurrence prior deficiencies. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive field features through spatial attention, enabling background window adaptive selection and cross-scale feature alignment. Finally, a co-occurrence matrix-integrated classification auxiliary module mines target association rules through data-driven learning, correcting classification probabilities in low-confidence areas by combining high-confidence areas’ co-occurrence information with an optimal threshold, which can significantly reduce missed detection rates. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. 
The proposed method has achieved state-of-the-art performance while addressing the unique challenges of remote sensing target detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-26; DOI: 10.1109/LGRS.2025.3602854
Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu
The accelerated nonagriculturalization of cropland has increasingly highlighted the importance of remote sensing (RS) change detection (CD) for monitoring land-use transitions. However, variations in RS imaging conditions and irregular cropland changes often result in noisy or inaccurate change maps. To address these challenges, we propose a novel deep learning framework named change-aware and Fourier feature exchange network (CAFENet). The method introduces a dedicated change-aware (CA) branch to extract discriminative change cues from pseudo-video sequences and integrates them into the backbone network. A Fourier feature exchange module (FFEM) is designed to reduce brightness, color, and style discrepancies between bitemporal images, thereby enhancing robustness under varying acquisition conditions. Fused features are further refined using an efficient multiscale attention mechanism (EMSA) to capture rich spatial details. In the decoding stage, a dynamic content-aware upsampling module (DCAU), together with skip connections, progressively recovers spatial resolution while preserving structural information. The experimental results on three datasets—CLCD, SW-CLCD, and LuojiaSET-CLCD—demonstrate that CAFENet achieves superior performance over state-of-the-art methods in terms of both accuracy and robustness, particularly in complex agricultural landscapes.
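The letter does not spell out the FFEM's internals, but exchanging low-frequency Fourier amplitude between two images while keeping one image's phase is a standard way to transfer brightness/style without disturbing content (as in Fourier Domain Adaptation). The sketch below is that general technique, not the paper's exact module; the function name and `beta` band size are assumptions:

```python
import numpy as np

def fourier_exchange(img_a, img_b, beta=0.1):
    """Replace the low-frequency amplitude of img_a with img_b's while
    keeping img_a's phase: content is preserved, but global brightness
    and style move toward img_b (FDA-style exchange)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    amp_a, pha_a = np.abs(Fa), np.angle(Fa)
    amp_b = np.abs(Fb)
    h, w = img_a.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    # swap the centred low-frequency band of the amplitude spectrum
    amp_a, amp_b = np.fft.fftshift(amp_a), np.fft.fftshift(amp_b)
    ch, cw = h // 2, w // 2
    amp_a[ch - bh:ch + bh, cw - bw:cw + bw] = amp_b[ch - bh:ch + bh, cw - bw:cw + bw]
    amp_a = np.fft.ifftshift(amp_a)
    return np.real(np.fft.ifft2(amp_a * np.exp(1j * pha_a)))

rng = np.random.default_rng(1)
base = rng.random((64, 64))
bright = base + 0.5                 # same scene, different brightness
out = fourier_exchange(base, bright)
# the exchanged image's mean moves to the brighter acquisition's mean
print(abs(out.mean() - bright.mean()) < 1e-6)
```

In this toy case the two acquisitions differ only by a brightness offset, so the exchange reproduces the second image's mean exactly while leaving all spatial detail untouched.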
{"title":"CAFENet: Change-Aware and Fourier Feature Exchange Network for Cropland Change Detection in Remote Sensing Images","authors":"Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu","doi":"10.1109/LGRS.2025.3602854","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602854","url":null,"abstract":"The accelerated nonagriculturalization of cropland has increasingly highlighted the importance of remote sensing (RS) change detection (CD) for monitoring land-use transitions. However, variations in RS imaging conditions and irregular cropland changes often result in noisy or inaccurate change maps. To address these challenges, we propose a novel deep learning framework named change-aware and Fourier feature exchange network (CAFENet). The method introduces a dedicated change-aware (CA) branch to extract discriminative change cues from pseudo-video sequences and integrates them into the backbone network. A Fourier feature exchange module (FFEM) is designed to reduce brightness, color, and style discrepancies between bitemporal images, thereby enhancing robustness under varying acquisition conditions. Fused features are further refined using an efficient multiscale attention mechanism (EMSA) to capture rich spatial details. In the decoding stage, a dynamic content-aware upsampling module (DCAU), together with skip connections, progressively recovers spatial resolution while preserving structural information. 
The experimental results on three datasets—CLCD, SW-CLCD, and LuojiaSET-CLCD—demonstrate that CAFENet achieves superior performance over state-of-the-art methods in terms of both accuracy and robustness, particularly in complex agricultural landscapes.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-26; DOI: 10.1109/LGRS.2025.3602769
Yuying Zhu;Qian Wang;Muyu Hou
Despite the impressive performance of deep learning in synthetic aperture radar (SAR) automatic target recognition (ATR), its generalization capability remains a critical concern, particularly when facing domain shifts between training and testing environments. Considering the inherent robustness and interpretability of electromagnetic scattering characteristics, we explore leveraging these properties to guide deep learning training, thereby improving generalization. To this end, we propose a dual-layer dynamic scattering filtering network (DL-DSFN) that leverages external physical priors to guide the learning process. The first layer adaptively generates convolutional kernels conditioned on scattering cues, enabling localized modeling of target-specific scattering phenomena. The second layer establishes a cross-domain mapping from SAR imagery to scattering features, facilitating automatic extraction of salient scattering characteristics. Furthermore, an adaptive mechanism for determining the number of scattering centers is also incorporated. Experiments conducted under significant variations between training and testing sets demonstrate that our method achieves competitive recognition accuracy while maintaining low computational cost, with only approximately 0.16 M parameters and 0.002 G FLOPs.
{"title":"DL-DSFN: Dual-Layer Dynamic Scattering Filtering for Robust SAR Target Recognition","authors":"Yuying Zhu;Qian Wang;Muyu Hou","doi":"10.1109/LGRS.2025.3602769","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602769","url":null,"abstract":"Despite the impressive performance of deep learning in synthetic aperture radar (SAR) automatic target recognition (ATR), its generalization capability remains a critical concern, particularly when facing domain shifts between training and testing environments. Considering the inherent robustness and interpretability of electromagnetic scattering characteristics, we explore leveraging these properties to guide deep learning training, thereby improving generalization. To this end, we propose a dual-layer dynamic scattering filtering network (DL-DSFN) that leverages external physical priors to guide the learning process. The first layer adaptively generates convolutional kernels conditioned on scattering cues, enabling localized modeling of target-specific scattering phenomena. The second layer establishes a cross-domain mapping from SAR imagery to scattering features, facilitating automatic extraction of salient scattering characteristics. Furthermore, an adaptive mechanism for determining the number of scattering centers is also incorporated. 
Experiments conducted under significant variations between training and testing sets demonstrate that our method achieves competitive recognition accuracy while maintaining low computational cost, with only approximately 0.16 M parameters and 0.002 G FLOPs.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-25; DOI: 10.1109/LGRS.2025.3602267
Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai
Multimodal aerial image semantic segmentation enables fine-grained land cover classification by integrating data from different sensors, yet it remains challenged by information redundancy, intermodal feature discrepancies, and class confusion in complex scenes. To address these issues, we propose a cross-modal hierarchical feature fusion network (CMHFNet) based on an encoder–decoder architecture. The encoder incorporates a pixelwise attention-guided fusion module (PAFM) and a multistage progressive fusion transformer (MPFT) to suppress redundancy and model long-range intermodal dependencies and scale variations. The decoder introduces a residual information-guided feature compensation mechanism to recover spatial details and mitigate class ambiguity. The experiments on DDOS, Vaihingen, and Potsdam datasets demonstrate that the CMHFNet surpasses state-of-the-art methods, validating its effectiveness and practical value.
{"title":"Aerial Image Semantic Segmentation Method Based on Cross-Modal Hierarchical Feature Fusion","authors":"Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai","doi":"10.1109/LGRS.2025.3602267","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602267","url":null,"abstract":"Multimodal aerial image semantic segmentation enables fine-grained land cover classification by integrating data from different sensors, yet it remains challenged by information redundancy, intermodal feature discrepancies, and class confusion in complex scenes. To address these issues, we propose a cross-modal hierarchical feature fusion network (CMHFNet) based on an encoder–decoder architecture. The encoder incorporates a pixelwise attention-guided fusion module (PAFM) and a multistage progressive fusion transformer (MPFT) to suppress redundancy and model long-range intermodal dependencies and scale variations. The decoder introduces a residual information-guided feature compensation mechanism to recover spatial details and mitigate class ambiguity. The experiments on DDOS, Vaihingen, and Potsdam datasets demonstrate that the CMHFNet surpasses state-of-the-art methods, validating its effectiveness and practical value.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-25; DOI: 10.1109/LGRS.2025.3602092
Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao
In ship detection using synthetic aperture radar (SAR), small targets and complex background noise remain key challenges that restrict detection performance. In this letter, we propose SOD-Net, a small object detection network for small-target ship detection in SAR images. First, we construct a U-shaped feature preextraction network and adopt a spatial pixel attention (SPA) mechanism to enhance the initial feature representation ability. Second, a pinwheel convolution (PC) convolutional neural network (CNN)-based cross-scale feature fusion (CCFF) module is designed. By expanding the receptive field through asymmetric convolution kernels and reducing the parameter scale, features of small targets are properly captured. Evaluation results show that the proposed SOD-Net achieves 98.4% and 91.0% mean average precision (mAP, at an intersection over union of 0.5) on the benchmark SSDD and HRSID datasets, respectively, with only 28 million parameters, thus outperforming state-of-the-art models (e.g., YOLOv8 and D-FINE). Visual analysis confirms that SOD-Net is robust in scenarios including complex sea conditions, dense port berthing, and noise interference, thereby providing an accurate and efficient solution for SAR maritime monitoring.
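The reported accuracies are mAP at an IoU threshold of 0.5, i.e. a predicted box only counts as a true positive when its intersection-over-union with a ground-truth box reaches 50%. A minimal IoU check (boxes and values below are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 30, 30)
det = (15, 15, 35, 35)           # overlaps the target but is shifted
print(round(iou(gt, det), 3))    # 0.391 — below 0.5, so a miss at mAP@0.5
```

Small ships occupy few pixels, so even a shift of a handful of pixels (as above) drops IoU under the 0.5 cutoff; this is why small-target localization accuracy dominates the reported metric.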
{"title":"SOD-Net: A Small Ship Object Detection Network for SAR Images","authors":"Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao","doi":"10.1109/LGRS.2025.3602092","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602092","url":null,"abstract":"In ship detection using synthetic aperture radar (SAR), small targets and complex background noise remain key challenges that restrict the detection performance. In this letter, we propose a small-target ship detection network based on a small object detection network (SOD-Net) using SAR images. First, we construct a U-shaped feature preextraction network and adopt a spatial pixel attention (SPA) mechanism to enhance the initial feature representation ability. Second, a pinwheel convolution (PC) convolutional neural network (CNN)-based cross-scale feature fusion (CCFF) module is designed. By expanding the receptive field through asymmetric convolution kernels and reducing the parameter scale, features of small targets are properly captured. Evaluation results show that the proposed SOD-Net achieves evaluation accuracies of 98.4% and 91.0% on the benchmark SSDD and HRSID datasets (mean average precision (mAP) at an intersection over union of 0.5), respectively, with only 28 million parameters, thus outperforming state-of-the-art models (e.g., YOLOv8 and D-FINE). 
Visual analysis confirmed that the SOD-Net is robust in scenarios, including complex sea conditions, dense port berthing, and noise interference, thereby providing an accurate and efficient solution for SAR maritime monitoring.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Airborne terahertz (THz) synthetic aperture radar (SAR) exhibits unique potential for ground-moving target imaging (GMTIm) due to its high frame rate and high-resolution capabilities. However, the short wavelength of THz waves significantly increases Doppler sensitivity. When a ground-moving target performs curvilinear motion, such as turns, velocity inconsistencies among scattering points induce variations in Doppler centroid frequencies and chirp rates, leading to defocusing and geometric deformation. To address these issues, an effective curvilinear moving target refocusing method is proposed in this letter. First, a localized phase gradient autofocus (LPGA) method is employed to compensate for Doppler chirp rate inconsistencies. Second, the additional spatial-domain information from a dual-channel system is utilized to correct geometric deformation. Finally, both simulated and measured data are analyzed to validate the effectiveness of the proposed method.
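The core step of phase gradient autofocus can be sketched in a few lines: estimate the pulse-to-pulse phase gradient on a dominant scatterer and integrate it to recover the slow phase error. This is the generic PGA kernel on a single ideal scatterer, not the letter's localized (LPGA) variant; all signal parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pulses, n_bins = 128, 64

# Ideal point target: constant return in one range bin across all pulses.
data = np.zeros((n_pulses, n_bins), complex)
data[:, 20] = 1.0
# Corrupt with an unknown slowly varying phase error (random-walk model).
phase_err = np.cumsum(rng.normal(0.0, 0.2, n_pulses))
data *= np.exp(1j * phase_err)[:, None]

# PGA-style estimate: take the strongest range bin, compute the wrapped
# pulse-to-pulse phase gradient, and integrate it.
strongest = np.argmax(np.abs(data).sum(axis=0))
sig = data[:, strongest]
grad = np.angle(sig[1:] * np.conj(sig[:-1]))
est = np.concatenate([[0.0], np.cumsum(grad)])

# The estimate matches the true error up to an irrelevant constant offset.
residual = np.angle(np.exp(1j * (phase_err - phase_err[0] - est)))
print(np.abs(residual).max() < 1e-6)
```

On real data the same gradient-integration step is applied after center-shifting and windowing the selected scatterers, and the letter's localized variant restricts it to sub-apertures so that each scatterer's chirp-rate inconsistency can be handled separately.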
{"title":"Subspectrum Division-Based Imaging Method for Curvilinear Moving Target in Terahertz SAR","authors":"Zhenjiang Li;Chenggao Luo;Hongqiang Wang;Qi Yang;Heng Zhang;Chuanying Liang","doi":"10.1109/LGRS.2025.3602279","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602279","url":null,"abstract":"Airborne terahertz (THz) synthetic aperture radar (SAR) exhibits unique potential for ground-moving target imaging (GMTIm), due to its high-frame rate and high-resolution capabilities. However, the short wavelength of THz waves significantly increases Doppler sensitivity. When a ground-moving target performs curvilinear motion, such as turns, velocity inconsistencies among scattering points induce variations in Doppler centroid frequencies, and chirp rates, leading to defocusing and geometric deformation. To address these issues, an effective curvilinear moving target refocusing method is proposed in this letter. First, a localized phase gradient autofocus (LPGA) method is employed to compensate for Doppler chirp rate inconsistencies. Second, the additional spatial-domain information from a dual-channel system is utilized to correct geometric deformation. Finally, both simulated and measured data are analyzed to validate the effectiveness of the proposed method.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-25; DOI: 10.1109/LGRS.2025.3602095
Yinan Ye;Nicholas C. Coops;Txomin Hermosilla;Michael A. Wulder;Sarah E. Gergel
Object-based image segmentation techniques are widely utilized in environmental disciplines to partition remotely sensed imagery into objects representing distinct conditions, such as vegetation structure or landform. However, most approaches are applied to a single temporal snapshot, limiting their ability to update polygons over time. To address this, we proposed a temporally consistent segmentation algorithm based on a two-phase region growing approach designed to be applied to time series of annual Landsat surface reflectance composites. We developed and demonstrated this new approach over six fire-disturbed forested study areas in British Columbia, Canada, to dynamically delineate polygons over time as they underwent land cover change. Our approach maintained the existing boundaries for forest polygons with no land cover change while updating those subject to change as forest regenerated and followed successional processes. Rapidly recovering areas, such as Cariboo and Fraser-Fort George, showed increases in mean segment area from 12 to 21 and 14 to 25 ha, respectively, approaching or exceeding predisturbance values. Additionally, segment shape complexity increased over time, reflecting the structural development of recovering stands. This work demonstrated the potential of utilizing Landsat surface reflectance data to update forest polygons over time with reference to forest development and increasing maturity.
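The basic region-growing primitive underlying such segmentation can be sketched simply: grow a segment from a seed pixel, absorbing neighbours whose value stays within a tolerance of the running segment mean, so growth stops at spectral boundaries. This is a generic single-band, single-date sketch with assumed tolerance, not the letter's two-phase multitemporal algorithm:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed`, absorbing 4-neighbours whose value is
    within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += img[nr, nc]
                    count += 1
                    queue.append((nr, nc))
    return mask

# Two flat patches of different reflectance: growth stops at the boundary.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
seg = region_grow(img, (0, 0))
print(seg.sum())   # 50 — exactly the left patch
```

The temporal consistency described in the letter comes from running such growth against the time series: unchanged polygons keep their seeds and boundaries from the previous year, while changed areas are re-grown.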
{"title":"Temporally Consistent Forest Stand Segmentation Using Landsat Imagery","authors":"Yinan Ye;Nicholas C. Coops;Txomin Hermosilla;Michael A. Wulder;Sarah E. Gergel","doi":"10.1109/LGRS.2025.3602095","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602095","url":null,"abstract":"The object-based image segmentation techniques are widely utilized in environmental disciplines to partition remotely sensed imagery into objects representing distinct conditions, such as vegetation structure or landform. However, most approaches are applied to a single temporal snapshot, limiting their ability to update polygons over time. To address this, we proposed a temporally consistent segmentation algorithm based on a two-phase region growing approach designed to be applied to time series of annual Landsat surface reflectance composites. We developed and demonstrated this new approach over six fire-disturbed forested study areas in British Columbia, Canada, to dynamically delineate polygons over time as they underwent land cover change. Our approach maintained the existing boundaries for forest polygons with no land cover change while updating those subject to change as forest regenerated and followed successional processes. Rapidly recovering areas, such as Cariboo and Fraser-Fort George, showed increases in mean segment area from 12 to 21 and 14 to 25 ha, respectively, approaching or exceeding predisturbance values. Additionally, segment shape complexity increased over time, reflecting the structural development of recovering stands. 
This work demonstrated the potential of utilizing Landsat surface reflectance data to update forest polygons over time with reference to forest development and increasing maturity.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11137370","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-25; DOI: 10.1109/LGRS.2025.3602123
Bingquan Han;Chen Yu;Zhenhong Li;Chuang Song;Xiaoning Hu;Jie Li
Accurately measuring surface deformation velocity using interferometric synthetic aperture radar (InSAR) is crucial for understanding geophysical processes. However, traditional methods often face challenges in capturing subtle deformations over long distances, as errors introduced during unwrapping can accumulate over extended spatial extents. This study introduces a multiarc adjustment (MAA) method aimed at mitigating these errors, especially in high-precision monitoring scenarios, where velocities are sensitive to the location of the reference point. Simulation results demonstrate that the MAA method significantly outperforms the traditional method, achieving substantial reductions in rms under noisy conditions and complex phase unwrapping scenarios. Furthermore, integrating the MAA method into fault slip inversion improves the accuracy of slip distribution estimations. Applications to real datasets from the southern Tibet region and the San Andreas Fault further validate the MAA method’s effectiveness. These findings underscore the MAA method’s potential to enhance deformation velocity measurements in challenging environments, establishing it as a valuable tool for geodetic and tectonic studies.
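The essence of a multiarc adjustment is a network least-squares problem: each arc observes a velocity difference between two points, one point is held fixed as reference, and redundant arcs average down the per-arc errors instead of letting them accumulate along a single chain. A minimal sketch (the network geometry, noise level, and solver details are illustrative assumptions, not the letter's formulation):

```python
import numpy as np

def adjust_arcs(n_pts, arcs, dv, ref=0):
    """Least-squares estimate of per-point velocities from arc observations
    dv[k] = v[j] - v[i] for arc (i, j), with point `ref` held at 0."""
    A = np.zeros((len(arcs), n_pts))
    for k, (i, j) in enumerate(arcs):
        A[k, i], A[k, j] = -1.0, 1.0
    A = np.delete(A, ref, axis=1)            # fix the reference point
    v, *_ = np.linalg.lstsq(A, np.asarray(dv, float), rcond=None)
    return np.insert(v, ref, 0.0)

# Four points, true velocities 0, 1, 2, 3 mm/yr; redundant noisy arcs.
true_v = np.array([0.0, 1.0, 2.0, 3.0])
arcs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3)]
rng = np.random.default_rng(2)
dv = [true_v[j] - true_v[i] + rng.normal(0, 0.05) for i, j in arcs]
est = adjust_arcs(4, arcs, dv)
print(np.abs(est - true_v).max() < 0.2)
```

Chaining only the three consecutive arcs (0→1→2→3) would sum their errors toward the far point; the extra long arcs constrain distant points directly, which is the error-mitigation behaviour the letter exploits.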
{"title":"A Multiarc Adjustment Method for Interferometric Synthetic Aperture Radar Time-Series Analysis","authors":"Bingquan Han;Chen Yu;Zhenhong Li;Chuang Song;Xiaoning Hu;Jie Li","doi":"10.1109/LGRS.2025.3602123","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602123","url":null,"abstract":"Accurately measuring surface deformation velocity using interferometric synthetic aperture radar (InSAR) is crucial for understanding geophysical processes. However, traditional methods often face challenges in capturing subtle deformations over long distances, as errors introduced during unwrapping can accumulate overextended spatial extents. This study introduces a multiarc adjustment (MAA) method aimed at mitigating these errors, especially in high-precision monitoring scenarios, where velocities are sensitive to the location of the reference point. Simulation results demonstrate that the MAA method significantly outperforms the traditional method, achieving substantial reductions in rms under noisy conditions and complex phase unwrapping scenarios. Furthermore, integrating the MAA method into fault slip inversion improves the accuracy of slip distribution estimations. Applications to real datasets from the southern Tibet region and the San Andreas Fault further validate the MAA method’s effectiveness. 
These findings underscore the MAA method’s potential to enhance deformation velocity measurements in challenging environments, establishing it as a valuable tool for geodetic and tectonic studies.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In seismic exploration, multiple suppression is crucial for accurate subsurface imaging and resource identification. Internal multiples, generated by multiple reflections at impedance interfaces, act as interference signals that can mislead resource exploration. Compared to traditional methods, the conventional Marchenko multiple elimination (C-MME) method allows for the direct extraction of primary waves from seismic records without requiring a macro velocity model or predictive subtraction, thereby preserving effective signals. However, challenges, such as low signal-to-noise ratios (SNRs) and high-density sampling requirements, have hindered its application to field land seismic data. To address these challenges of C-MME in field seismic data processing, we propose a compressive sensing-based Marchenko multiple elimination (CS-MME) method, which incorporates efficient denoising, reconstruction, and deconvolution capabilities. In this study, the CS-MME method has demonstrated exceptional performance in processing field land seismic data, successfully overcoming the aforementioned challenges, marking the first successful implementation of Marchenko multiple elimination (MME) on field land data.
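The compressive-sensing ingredient rests on sparse recovery: a signal with few significant coefficients can be reconstructed from far fewer measurements than unknowns by l1-regularized least squares. A minimal iterative soft-thresholding (ISTA) sketch of that general principle (the problem sizes, sparsity, and regularization weight are illustrative assumptions, not the CS-MME implementation):

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=2000):
    """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the data fit
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(3)
n, m = 200, 80                               # 200 unknowns, 80 measurements
x_true = np.zeros(n)
x_true[[10, 50, 120]] = [1.0, -0.8, 0.6]     # 3-sparse "reflectivity"
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print(set(np.argsort(np.abs(x_hat))[-3:].tolist()) == {10, 50, 120})
```

Here 80 measurements suffice to locate all three nonzero coefficients among 200 unknowns; the same sparsity leverage is what lets a CS stage relax the dense-sampling requirement that blocks C-MME on field data.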
{"title":"Compressive Sensing-Marchenko Multiple Elimination in Complex Field Land Seismic Data","authors":"Haoxin Zhu;Zhangqing Sun;Jianwei Nie;Bin Hu;Fei Jiang;Fuxing Han;Yang Zhang;Mingchen Liu;Zhenghui Gao","doi":"10.1109/LGRS.2025.3601629","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3601629","url":null,"abstract":"In seismic exploration, the multiple suppression is crucial for accurate subsurface imaging and resource identification. Internal multiples, generated by multiple reflections at impedance interfaces, act as interference signals that can mislead resource exploration. Compared to traditional methods, the conventional Marchenko multiple elimination (C-MME) method allows for the direct extraction of primary waves from seismic records without requiring a macro velocity model or predictive subtraction, thereby preserving effective signals. However, challenges, such as low signal-to-noise ratios (SNRs) and high-density sampling requirements, have hindered its application to field land seismic data. To address these challenges of C-MME in field seismic data processing, we propose a compressive sensing-based Marchenko multiple elimination (CS-MME) method, which incorporates efficient denoising, reconstruction, and deconvolution capabilities. In this study, the CS-MME method has demonstrated exceptional performance in processing field land seismic data, successfully overcoming the aforementioned challenges. 
This marks the first successful implementation of Marchenko multiple elimination (MME) on field land data.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}