Pub Date: 2025-11-18 DOI: 10.1109/LGRS.2025.3634350
Wenqiang Ding;Changying Ma;Xintong Dong;Xuan Li
The heterogeneity of subsurface media induces multipath scattering and dielectric loss in ground penetrating radar (GPR) signal propagation, which results in wavefront distortion and signal attenuation. These effects degrade B-scan profiles by blurring target signatures, hindering automated feature extraction, and reducing the clarity of regions of interest (ROI). To address these issues, we propose the adaptive region target enhancement algorithm (ARTEA), a multistage preprocessing framework. ARTEA integrates dynamic range compression, continuous-scale normalization guided by adaptive sigma maps, and a frequency-domain refinement step. By dynamically adjusting parameters according to local signal characteristics, ARTEA is designed to achieve an effective tradeoff between artifact suppression and target preservation. Experiments on both synthetic and field GPR data demonstrate that ARTEA can enhance target contrast and structural fidelity while suppressing artifacts and preserving essential target features.
Title: ARTEA: A Multistage Adaptive Preprocessing Algorithm for Subsurface Target Enhancement in Ground Penetrating Radar. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
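The abstract names ARTEA's stages but not their exact forms. A minimal sketch of the first two stages, under assumed forms (signed log compression for dynamic range, local standard deviation as the adaptive sigma map; all function names here are hypothetical, not the authors' implementation):

```python
import numpy as np

def adaptive_sigma_map(img, win=5):
    # Local standard deviation, one value per pixel (loop kept for clarity).
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    sigma = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            sigma[i, j] = p[i:i + win, j:j + win].std()
    return sigma

def enhance(bscan, win=5, eps=1e-6):
    # Dynamic range compression: signed log keeps polarity of reflections.
    comp = np.sign(bscan) * np.log1p(np.abs(bscan))
    # Continuous-scale normalization guided by a local sigma map.
    sigma = adaptive_sigma_map(comp, win)
    return comp / (sigma + eps)
```

A frequency-domain refinement step would follow in the full pipeline; it is omitted here because the abstract gives no detail on it.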
Remote sensing (RS) image scene classification has wide applications in the field of RS. Although existing methods have achieved remarkable performance, limitations remain in feature extraction and lightweight design. Current multibranch models, although performing well, have large parameter counts and high computational costs, making them difficult to deploy on resource-constrained edge devices, such as uncrewed aerial vehicles (UAVs). On the other hand, lightweight models such as StarNet have fewer parameters but rely on elementwise multiplication to generate features and lack explicit long-range spatial feature modeling, resulting in insufficient classification accuracy. To address these issues, this letter proposes a lightweight Mamba-based hybrid network, namely LMHMamba, whose core is an innovative lightweight multifeature hybrid Mamba (LMHM) module. This module combines the advantage of StarNet in implicitly generating high-dimensional nonlinear features, introduces a lightweight state-space module to enhance spatial feature learning, and then uses local and global attention modules to emphasize local and global features. This enables effective multidimensional feature fusion while maintaining a low parameter count. We validate the LMHMamba model on three RS scene classification datasets and compare it with mainstream lightweight models and the latest methods. Experimental results show that LMHMamba achieves advanced levels of both classification accuracy and computational efficiency, significantly outperforming existing lightweight models and providing an efficient solution for edge deployment. Code is available at https://github.com/yizhilanmaodhh/LMHMamba
Title: A Lightweight Multifeature Hybrid Mamba for Remote Sensing Image Scene Classification. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
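The elementwise multiplication that StarNet uses to generate features can be illustrated with a toy "star" operation. The simplified form below (two linear branches fused by multiplication, with no convolutions or normalization) is only a sketch of the idea, not the LMHM module itself:

```python
import numpy as np

def star_block(x, w1, w2):
    # StarNet-style "star": two linear branches fused by elementwise
    # multiplication, implicitly producing high-order feature interactions.
    return (x @ w1) * (x @ w2)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # 4 tokens, 8 channels
w1 = rng.standard_normal((8, 16))
w2 = rng.standard_normal((8, 16))
y = star_block(x, w1, w2)          # fused features, shape (4, 16)
```

Because the product of two linear projections contains pairwise terms of the input channels, the output behaves like a feature map in a much higher-dimensional nonlinear space at linear cost, which is the property LMHM builds on.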
Pub Date: 2025-11-18 DOI: 10.1109/LGRS.2025.3634345
Bo Zhang;Yaxiong Chen;Ruilin Yao;Shengwu Xiong
The core of hyperspectral change detection lies in accurately capturing spectral feature differences across different temporal phases to determine whether surface objects have changed. Since spectral variations of different ground objects often manifest more prominently in specific wavelength bands, we design a weighted cascaded encoder–decoder network (WCEDNet) based on spatial–spectral difference features for hyperspectral change detection. First, unlike conventional change detection frameworks based on siamese networks, our proposed single-branch approach focuses more intensively on extracting spatial–spectral difference features. Second, the weighted cascaded structure introduced in the encoder stage enables differential attention to different bands, enhancing focus on spectral bands with high responsiveness. Furthermore, we have developed a spatial–spectral cross-attention (SSCA) module to model intrafeature correlations within spatial and spectral domains. Our method was evaluated on three challenging hyperspectral change detection datasets, and experimental results demonstrate its superior performance compared to competitive models. The detailed code has been open-sourced at https://github.com/WUTCM-Lab/WCEDNet
Title: WCEDNet: A Weighted Cascaded Encoder–Decoder Network for Hyperspectral Change Detection Based on Spatial–Spectral Difference Features. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
Pub Date: 2025-11-17 DOI: 10.1109/LGRS.2025.3633285
Weihua Shen;Yalin Li;Xiaohua Chen;Chunzhi Li
There are multiple challenges in small object detection (SOD), including limited instances, insufficient features, diverse scales, uneven distribution, ambiguous boundaries, and complex backgrounds. These issues often lead to high false detection rates and hinder model generalization and convergence. This study proposes a multiscale object detection algorithm that enhances the detection of subtle features by improving the detection head (DH) throughout and incorporating a minimum point distance intersection-over-union loss. The enhanced DH improves target representation, enabling more precise localization and classification of small objects. Meanwhile, the new loss (NL) function stabilizes bounding box regression by adaptively adjusting auxiliary bounding box scales. Evaluations on two benchmark datasets demonstrate that our method achieves a 2.6% increase in mAP50 and a 1.8% improvement in mAP50:95 on the satellite imagery multivehicles dataset (SIMD) and a 1.9% increase in mAP50:95 on the DIOR dataset. Furthermore, the model reduces the number of parameters by 2.5% and the computational cost by 1.4%, demonstrating its potential for real-time detection applications.
Title: A Multiscale Feature Refinement Detector for Small Objects With Ambiguous Boundaries. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
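The "minimum point distance intersection-over-union" loss reads as the published MPDIoU formulation: standard IoU penalized by the normalized squared distances between matching box corners. The sketch below follows that published form and may differ from the authors' exact variant:

```python
def mpdiou(box_a, box_b, img_w, img_h):
    # Boxes are (x1, y1, x2, y2) in pixels; img_w/img_h normalize distances.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union for plain IoU.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # Squared distances between top-left and bottom-right corners,
    # normalized by the squared image diagonal.
    diag2 = img_w ** 2 + img_h ** 2
    d_tl = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2
    d_br = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2
    return iou - d_tl / diag2 - d_br / diag2
```

The loss is then 1 - mpdiou; the corner-distance terms keep gradients informative even when the boxes do not overlap, which is what stabilizes regression for small, ambiguous targets.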
Pub Date: 2025-11-17 DOI: 10.1109/LGRS.2025.3633718
Kwonyoung Kim;Jungin Park;Kwanghoon Sohn
Parameter-efficient fine-tuning (PEFT) adapts large pretrained foundation models to downstream tasks, such as remote sensing scene classification, by learning a small set of additional parameters while keeping the pretrained parameters frozen. While PEFT offers substantial training efficiency over full fine-tuning (FT), it still incurs high inference costs due to reliance on both pretrained and task-specific parameters. To address this limitation, we propose a novel PEFT approach with model truncation, termed truncated parameter-efficient fine-tuning (TruncPEFT), enabling efficiency gains to persist during inference. Observing that predictions from final and intermediate layers often exhibit high agreement, we truncate a set of final layers and replace them with a lightweight attention module. Additionally, we introduce a token dropping strategy to mitigate interclass interference, reducing the model’s sensitivity to visual similarities between different classes in remote sensing data. Extensive experiments on seven remote sensing scene classification datasets demonstrate the effectiveness of the proposed method, significantly improving training, inference, and GPU memory efficiencies while achieving comparable or even better performance than prior PEFT methods and full FT.
Title: Geospatial Domain Adaptation With Truncated Parameter-Efficient Fine-Tuning. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
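The truncation idea, running only the early layers of a frozen backbone and replacing the final ones with a lightweight head, can be sketched on a toy model. Layer forms and names here are hypothetical, purely to show the control flow:

```python
import numpy as np

def run_truncated(layers, head, x, keep):
    # Execute only the first `keep` frozen layers, then a lightweight
    # task head; the truncated final layers are never evaluated, which
    # is where the inference savings come from.
    for layer in layers[:keep]:
        x = layer(x)
    return head(x)

# Toy 12-layer "backbone" whose layers each add 1 to the features.
layers = [lambda t: t + 1 for _ in range(12)]
head = lambda t: float(np.mean(t))   # stand-in for the attention head

x = np.zeros(4)
full = run_truncated(layers, head, x, keep=12)   # all layers
trunc = run_truncated(layers, head, x, keep=8)   # last 4 layers skipped
```

In TruncPEFT the head is a small trained attention module rather than a mean, chosen because intermediate-layer predictions already agree closely with final-layer ones.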
Pub Date: 2025-11-17 DOI: 10.1109/LGRS.2025.3633588
Yanyan Zhang;Akira Hirose;Ryo Natsuaki
Advanced Land Observing Satellite-4 (ALOS-4) is a spaceborne high-resolution and wide-swath synthetic aperture radar (HRWS-SAR) that uses a variable pulse repetition interval (VPRI) technique to achieve continuous wide imaging. In some ALOS-4 images, azimuth fractional ambiguity caused by the VPRI is observed; it differs from the usual integer ambiguity resulting from interchannel errors in that it occurs at smaller intervals. In this letter, we propose a sequential Doppler offset (SDO) method for locating the original target (OT) that causes azimuth fractional ambiguity. First, the ratio of the interval of integer ambiguity to that of fractional ambiguity is obtained, which is used to generate SAR images with different Doppler center frequencies. Second, the coherence between the sum image of the generated images and the image with a zero Doppler center frequency is calculated. Third, points with coherence greater than a threshold are selected. Finally, the final OT is obtained by detecting the filtered selected points. Experiments conducted on ALOS-4 L1.2 data demonstrate that the method locates the OT accurately. In short, the proposed method provides a starting point for fractional ambiguity suppression in HRWS-SAR.
Title: A Sequential Doppler Offset (SDO) Method for Locating Targets Causing Azimuth Fractional Ambiguity in Spaceborne HRWS-SAR. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
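The coherence-and-threshold steps of the SDO pipeline can be sketched as follows. The estimator is the standard windowed complex coherence, assumed rather than taken from the letter, and the window size and threshold are illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_sum(a, win):
    # 'Same'-size sliding-window sum with edge padding.
    pad = win // 2
    p = np.pad(a, pad, mode="edge")
    return sliding_window_view(p, (win, win)).sum(axis=(-2, -1))

def local_coherence(s1, s2, win=3):
    # Windowed complex coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>).
    num = np.abs(box_sum(s1 * np.conj(s2), win))
    den = np.sqrt(box_sum(np.abs(s1) ** 2, win) * box_sum(np.abs(s2) ** 2, win))
    return num / np.maximum(den, 1e-12)

def select_points(coh, tau=0.9):
    # Keep pixel indices whose coherence exceeds the threshold.
    return np.argwhere(coh > tau)
```

In the method, `s1` would be the sum of the Doppler-offset images and `s2` the zero-Doppler-center image; high-coherence points survive the threshold and are filtered to detect the OT.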
Depth-resolved estimation of soil organic carbon (SOC) remains challenging because optical measurements originate at the surface while carbon dynamics vary vertically. We propose a physics-aware uncrewed aerial vehicle (UAV) framework that integrates multispectral imagery (MSI) and hyperspectral imagery (HSI) to estimate SOC concentration (%) across five depths. The experiment was conducted at Plantheaven Farms, Missouri, with ten sorghum genotypes across three replicates. Feature construction combined spectral derivatives from HSI with texture features from MSI, compressed via principal component analysis (PCA). Physics-based regularization was implemented through: 1) a second-difference penalty to enforce vertical smoothness and 2) a profile-integral consistency constraint to preserve whole-profile balance. Four model configurations evaluated on local data showed progressive improvements: MSI-only, MSI + HSI, MSI + HSI with smoothness, and MSI + HSI with full physics constraints. In addition, transfer learning from the open soil spectral library (OSSL) was tested to address data limitations. Model fitting on the available data achieved $R^{2} = 0.72$ at 0–30 cm, with physics-aware constraints notably improving vertical coherence. The physics-aware model reduced variance and improved plausibility. In-sample, transfer learning achieved $R^{2} = 0.60$ at 0–30 cm, with conservative interpretation below 90 cm due to reduced optical sensitivity. Exploratory genotype patterns suggested higher surface SOC percent for PI 656029 and PI 656057, and lower values for PI 276837 and PI 656044.
Title: Physics-Aware Neural Framework for Multidepth Soil Carbon Mapping. Authors: Bishal Roy; Vasit Sagan; Haireti Alifu; Jocelyn Saxton; Cagri Gul; Nadia Shakoor. Pub Date: 2025-11-14. DOI: 10.1109/LGRS.2025.3632815. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
This letter presents a low-complexity attention module for fast change detection. The proposed module computes the absolute difference between bitemporal features extracted by a Siamese backbone network and sequentially applies spatial and channel attention to generate key change representations. Spatial attention emphasizes important spatial locations using representative values from channelwise pooling, while channel attention highlights discriminative feature responses using values from spatialwise pooling. By leveraging low-dimensional representative features, the module significantly reduces computational cost. Additionally, its dual-attention structure, driven by feature differences, enhances both spatial localization and semantic relevance of changes. Compared to the change-guided network (CGNet), the proposed method reduces multiply-accumulate operations (MACs) by 53.81% with only a 0.15% drop in $F_{1}$-score, demonstrating high efficiency with minimal performance degradation. These results suggest that the proposed method is suitable for large-scale or real-time remote sensing (RS) applications where computational efficiency is essential.
Title: Lightweight Attention Mechanism With Feature Differences for Efficient Change Detection in Remote Sensing. Authors: Jangsoo Park; EunSeong Lee; Jongseok Lee; Seoung-Jun Oh; Donggyu Sim. Pub Date: 2025-11-14. DOI: 10.1109/LGRS.2025.3633179. IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1-5.
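The difference-driven dual attention can be sketched with plain pooling. The version below assumes mean+max pooling as the "representative values" and sigmoid gating; the abstract does not specify the exact pooling or gating, so treat this as an illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diff_attention(feat_t1, feat_t2):
    # Features are (C, H, W). Everything is driven by the bitemporal
    # absolute difference, as in the proposed module.
    diff = np.abs(feat_t1 - feat_t2)
    # Spatial attention: representative values from channelwise pooling.
    sp = sigmoid(diff.mean(axis=0) + diff.max(axis=0))        # (H, W)
    x = diff * sp[None, :, :]
    # Channel attention: representative values from spatialwise pooling.
    ch = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))    # (C,)
    return x * ch[:, None, None]
```

Because each attention map is computed from pooled (low-dimensional) statistics rather than full feature tensors, the extra cost is tiny relative to the backbone, which is where the MAC savings come from.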
Pub Date: 2025-11-14 DOI: 10.1109/LGRS.2025.3632860
Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu
Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often utilize post-processed reanalysis data as inputs, which cannot meet the real-time requirements of an operational system. This study proposes a lightweight cloud-sky SULR real-time estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. The daytime cloud-sky SULR is estimated by applying the established relationship between auxiliary variables and clear-sky SULR to cloudy conditions, while the nighttime cloud-sky SULR values are estimated by applying the determined relationship between input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatiotemporal location records; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, all of which are available in real time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m2 (RMSE) with 1.5 W/m2 (MBE) for daytime and 25.2 W/m2 with 4.7 W/m2 for nighttime conditions. Therefore, the proposed lightweight method could improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.
{"title":"A Lightweight Method of Cloud-Sky Surface Upward Longwave Radiation Real-Time Estimation for FY-4A Geostationary Satellite","authors":"Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu","doi":"10.1109/LGRS.2025.3632860","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632860","url":null,"abstract":"Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often rely on post-processed reanalysis data as inputs, which cannot meet the real-time requirements of operational systems. This study proposes a lightweight real-time cloud-sky SULR estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. Daytime cloud-sky SULR is estimated by applying the relationship established between auxiliary variables and clear-sky SULR to cloudy conditions, while nighttime cloud-sky SULR values are estimated by applying the relationship determined between the input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatiotemporal location records; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, all of which are available in real time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m² (1.5 W/m²) for daytime and 25.2 W/m² (4.7 W/m²) for nighttime conditions. Therefore, the proposed lightweight method could improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
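The two-stage gap-filling strategy described in this abstract (fit a model on clear-sky samples, then transfer the learned relationship to cloudy pixels) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper uses a LightGBM regressor on FY-4A inputs, whereas this sketch substitutes a plain least-squares fit on synthetic data so it stays dependency-free; the feature names and the noise model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the model inputs: spatiotemporal location,
# previous-year surface parameters, and radiation products (assumed features).
n = 500
X = np.column_stack([
    rng.uniform(30, 45, n),      # latitude (deg)
    rng.uniform(0, 24, n),       # local solar time (h)
    rng.uniform(0.9, 1.0, n),    # broadband emissivity (previous year)
    rng.uniform(100, 400, n),    # downward longwave radiation (W/m^2)
])
# Synthetic "truth": SULR as a linear function of the inputs plus noise.
w_true = np.array([2.0, 5.0, 80.0, 0.6])
y = X @ w_true + 30.0 + rng.normal(0, 5, n)

cloudy = rng.random(n) < 0.3   # TIR retrieval is missing under cloud
clear = ~cloudy

# Stage 1: learn the inputs -> clear-sky SULR relationship
# (least squares here; LightGBM in the paper).
A = np.column_stack([X[clear], np.ones(clear.sum())])
coef, *_ = np.linalg.lstsq(A, y[clear], rcond=None)

# Stage 2: transfer the relationship to cloudy pixels to fill the gaps.
A_cloud = np.column_stack([X[cloudy], np.ones(cloudy.sum())])
sulr_filled = A_cloud @ coef

rmse = np.sqrt(np.mean((sulr_filled - y[cloudy]) ** 2))
print(f"cloud-sky RMSE on synthetic data: {rmse:.1f} W/m^2")
```

The same skeleton applies to both branches of the method: the daytime branch trains against clear-sky SULR retrievals, and the nighttime branch trains against a gap-filled SULR product as the target instead.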
Infrared small target detection (IRSTD) remains a long-standing and challenging problem in infrared imaging. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field while achieving higher computational efficiency than the original Swin transformer. In addition, a multiscale local contrast enhancement module (MLCEM) is introduced, which uses hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. The code will be available at https://github.com/Lilingxiao-image/KECS-Net
{"title":"KECS-Net: Knowledge-Embedded CSwin-UNet With Slicing-Aided Hypersegmentation for Infrared Small Target Detection","authors":"Lingxiao Li;Linlin Liu;Dan Huang;Sen Wang;Xutao Wang;Yunan He;Zhuqiang Zhong","doi":"10.1109/LGRS.2025.3632827","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632827","url":null,"abstract":"Infrared small target detection (IRSTD) remains a long-standing and challenging problem in infrared imaging. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field while achieving higher computational efficiency than the original Swin transformer. In addition, a multiscale local contrast enhancement module (MLCEM) is introduced, which uses hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. The code will be available at <uri>https://github.com/Lilingxiao-image/KECS-Net</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
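The core idea behind MLCEM, hand-crafted dilated operators that amplify locally salient responses relative to a dilated neighborhood, can be illustrated with a small numpy sketch. This is not the authors' module (which operates on learned feature maps inside the network); the function name, dilation rates, and 8-neighbor differencing scheme here are illustrative assumptions in the spirit of classical local-contrast measures.

```python
import numpy as np

def multiscale_local_contrast(img, dilations=(1, 2, 4)):
    """Toy multiscale local-contrast map: at each dilation d, subtract the
    mean of the 8 neighbors at offset d from the center pixel, then take
    the pixelwise maximum over scales. Small bright targets score high;
    flat background and large structures score low."""
    maps = []
    for d in dilations:
        neigh = np.zeros_like(img, dtype=float)
        for dy in (-d, 0, d):
            for dx in (-d, 0, d):
                if dy == 0 and dx == 0:
                    continue
                # circular shift stands in for a dilated sampling kernel
                neigh += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        maps.append(img - neigh / 8.0)
    return np.maximum.reduce(maps).clip(min=0.0)

# Usage: a dim 2x2 target at (16, 16) embedded in low-amplitude clutter.
rng = np.random.default_rng(1)
frame = rng.normal(0.0, 0.05, (32, 32))
frame[16:18, 16:18] += 1.0
contrast = multiscale_local_contrast(frame)
peak = np.unravel_index(np.argmax(contrast), contrast.shape)
print("peak response at", peak)
```

Taking the maximum over dilation rates is what makes the measure multiscale: a target smaller than the largest dilation ring is contrasted against pure background at that scale, so its response survives even when nearby clutter pollutes the tighter rings.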