Pub Date: 2025-11-12, DOI: 10.1109/LGRS.2025.3631867
Shangshang Zhang;Yulong Fan;Lin Sun
Accurate retrieval of the spatiotemporal distribution of atmospheric aerosols is essential for studying aerosol–radiation–cloud interactions, air-quality forecasting, and climate-change assessment. Although data-driven methods have significantly advanced aerosol retrieval, existing models often neglect the influence of aerosol type on retrieval accuracy. To address this gap, this study presents an improved data-driven aerosol retrieval framework that explicitly incorporates aerosol-type information into model training. Aerosol classification is performed with the K-means unsupervised clustering algorithm to optimize the training samples, thereby enhancing model adaptability and retrieval accuracy. The refined samples are then used to train an extremely randomized trees (ERTs) model, achieving an optimal balance between accuracy and computational efficiency. Validation results demonstrate strong performance, with a correlation coefficient of 0.93, a root mean square error (RMSE) of 0.072, and over 89% of results falling within the expected error envelope [EE: ±(0.05 + 20% × in situ observations)], outperforming the traditional model. These findings demonstrate that integrating aerosol-type information into data-driven retrievals substantially improves the accuracy and applicability of aerosol remote sensing. Future research should focus on refining aerosol classification techniques and integrating multisource remote sensing data to further enhance model robustness and global applicability.
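A minimal sketch of the two-stage pipeline described above, assuming a scikit-learn environment; all data, feature choices, and hyperparameters here are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                          # stand-in predictors (e.g., reflectances, geometry)
y = 0.3 * X[:, 0] + rng.normal(scale=0.05, size=600)   # stand-in AOD target

# Step 1: unsupervised "aerosol-type" clustering of the training samples.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: one ERT regressor per cluster, trained on the refined subset.
models = {}
for c in range(3):
    sel = km.labels_ == c
    models[c] = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X[sel], y[sel])

def retrieve(x):
    """Route a sample to its cluster's model, then regress the target."""
    c = int(km.predict(x.reshape(1, -1))[0])
    return float(models[c].predict(x.reshape(1, -1))[0])
```

The routing step shows how type-specific models replace a single global regressor.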
"K-Means Clustering for Improved Data-Driven Satellite Aerosol Retrieval," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5.
Data gaps exist in the measured spectral reflectance and atmospheric data from the radiometric calibration network (RadCalNet) due to instrument malfunctions or weather-related interference, and these gaps severely impede the application of the data. Developing a method to fill the missing RadCalNet data is therefore a pressing need. This study focuses on four RadCalNet sites with distinct surface types and proposes a high-precision bottom-of-atmosphere (BOA) spectral reflectance model. With on-site atmospheric data from RadCalNet, the predicted results achieve a root mean square error (RMSE) of no more than 1.26%. In scenarios where in situ atmospheric conditions are completely missing, the ERA5 dataset is used as a substitute and validated against Landsat 8 surface reflectance products; the absolute errors for all sites did not exceed 4.58%, confirming the proposed method's effectiveness. Additionally, the importance of the input parameters and the impact of their uncertainties on prediction accuracy are discussed.
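The gap-filling logic can be illustrated with a deliberately simple surrogate: fit a model mapping atmospheric drivers to BOA reflectance where both are available, then predict reflectance where measurements are missing and check the RMSE. The linear form, variables, and noise level below are assumptions for illustration only; the paper's actual BOA model is unspecified here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
aod = rng.uniform(0.05, 0.5, n)        # aerosol optical depth (stand-in)
pwv = rng.uniform(0.5, 4.0, n)         # precipitable water vapor, cm (stand-in)
sza = rng.uniform(20.0, 60.0, n)       # solar zenith angle, deg (stand-in)
# Synthetic "measured" BOA reflectance at one band, with sensor noise.
refl = 0.30 - 0.05 * aod - 0.01 * pwv - 0.001 * sza + rng.normal(0, 0.002, n)

# Fit a linear surrogate model on the available samples.
A = np.column_stack([aod, pwv, sza, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, refl, rcond=None)

# "Reconstruct" reflectance from atmospheric drivers alone and score it.
pred = A @ coef
rmse_pct = 100.0 * np.sqrt(np.mean((pred - refl) ** 2))  # RMSE in reflectance %
```

With the illustrative noise level, the surrogate lands well inside the paper's reported 1.26% RMSE bound, which is the kind of check such a reconstruction must pass.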
Shutian Zhu;Qiyue Liu;Chuanzhao Tian;Hanlie Xu;Jie Han;Wenhao Zhang;Na Xu, "A Method for Reconstructing Surface Spectral Reflectance With Missing RadCalNet Data," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5. Pub Date: 2025-11-12, DOI: 10.1109/LGRS.2025.3631876.
Deep learning has emerged as the predominant approach for ship detection in synthetic aperture radar (SAR) imagery. Nevertheless, persistent challenges such as densely clustered vessels, complex backgrounds, and multiscale target variations often lead to incomplete feature extraction, resulting in false alarms and missed detections. To address these limitations, this study presents LD-YOLO, an enhanced model based on YOLOv8n that incorporates three critical innovations. Dynamic convolution layers are strategically embedded within key backbone stages to adaptively adjust kernel parameters, enhancing multiscale feature discriminability while maintaining computational efficiency. The proposed C2f-LSK module combines decomposed large-kernel convolution with attention mechanisms, enabling dynamic optimization of receptive-field contributions across detection stages and effective modeling of global contextual information. Considering the characteristics of small vessels in SAR imagery and the impact of downsampling rates on image quality, a dedicated 160 × 160 detection head is further integrated to preserve fine-grained details of small targets, complemented by bidirectional feature fusion to strengthen semantic context propagation. Extensive experiments validate the model's superiority: 98.2% AP50 and 73.1% AP50-95 on the SSDD benchmark, with consistent improvements on the HRSID dataset (94.6% AP50). These advancements position LD-YOLO as a robust solution for maritime surveillance applications requiring high-precision SAR image analysis under complex operational conditions.
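The AP50 criterion used above counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU helper (boxes as [x1, y1, x2, y2]; illustrative, not LD-YOLO code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

gt   = [10, 10, 50, 50]      # ground-truth box
pred = [12, 12, 52, 52]      # slightly shifted prediction
score = iou(gt, pred)        # ~0.82, so this prediction counts at the 0.5 threshold
```

AP50-95 averages the resulting AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05, which is why it is the stricter of the two figures.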
Jiqiang Niu;Mengyang Li;Hao Lin;Yichen Liu;Zijian Liu;Hongrui Li;Shaomian Niu, "LD-YOLO: A Lightweight Dynamic Convolution-Based YOLOv8n Framework for Robust Ship Detection in SAR Imagery," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5. Pub Date: 2025-11-07, DOI: 10.1109/LGRS.2025.3630098.
Coherent S-band radar has recently emerged as a promising technique for ocean surface wave and current detection. It measures ocean surface currents by estimating Doppler frequency shifts from sea surface signals. However, the conventional time-averaging (TA) method neglects spatial information and is unavailable under low wind speed conditions. Two algorithms for ocean current inversion are proposed in this letter: the spatial–temporal averaging (STA) method and the wavenumber–frequency (WF) method. The STA method extends the TA method to the spatial–temporal domain, fully exploiting the spatial continuity of radar signals. In the WF method, a 2-D fast Fourier transform (2-D FFT) transforms the spatial–temporal radial velocities into the WF domain; after dual filtering to eliminate nonlinear components, the radial current velocity is estimated by fitting a modified dispersion relation. The two methods rest on different physical mechanisms: STA measurements include wind drift components, while the WF method is unaffected by wind drift. Wind drift can therefore be estimated from the difference between the two methods' measurements. Validation using observational data collected at Beishuang Island during Typhoon Catfish shows that the estimated wind drifts achieve a correlation coefficient (COR) of 0.90 with empirical model predictions, confirming the effectiveness of the proposed algorithms.
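The WF method's core step can be sketched on a toy monochromatic wave: a wave riding on a radial current U appears in the wavenumber–frequency plane on the shifted dispersion curve ω = √(gk) + kU, so locating the spectral peak of a 2-D FFT recovers U. The grid sizes and wave parameters below are illustrative, and the dual filtering needed for real data is omitted:

```python
import numpy as np

g, U = 9.81, 0.8                        # gravity (m/s^2), "true" radial current (m/s)
nx, dx, nt, dt = 128, 5.0, 256, 0.5     # range cells / spacing, time samples / interval
k = 2 * np.pi * 16 / (nx * dx)          # wavenumber placed exactly on the FFT grid
omega = np.sqrt(g * k) + k * U          # current-shifted dispersion relation

x = np.arange(nx) * dx
t = np.arange(nt) * dt
eta = np.cos(k * x[None, :] - omega * t[:, None])   # space-time signal proxy

spec = np.abs(np.fft.fft2(eta))                     # wavenumber-frequency spectrum
omegas = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
ks = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

it, ix = np.unravel_index(np.argmax(spec), spec.shape)
k_hat, w_hat = abs(ks[ix]), abs(omegas[it])         # peak location (positive branch)
U_hat = (w_hat - np.sqrt(g * k_hat)) / k_hat        # invert the shifted dispersion
```

The residual error here comes only from the frequency-bin resolution; real retrievals fit the dispersion surface over many peaks rather than one.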
Xinyu Fu;Chen Zhao;Zezong Chen;Sitao Wu;Fan Ding;Rui Liu;Guoxing Zheng, "Spatial–Temporal and Wavenumber–Frequency Inversion Algorithms for Ocean Surface Current Using Coherent S-Band Radar," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5. Pub Date: 2025-11-06, DOI: 10.1109/LGRS.2025.3629684.
Ship detection in remote sensing images plays an important role in various maritime activities. However, existing deep learning methods face challenges such as variations in ship target size, complex backgrounds, and noise interference, which can lead to low detection accuracy and incomplete target detection. To address these issues, we propose a synthetic aperture radar (SAR) image target detection framework called SDWPNet, aimed at improving detection performance in complex scenes. First, we propose SDWavetpool (SDW), which optimizes feature downsampling through multiscale wavelet features, effectively reducing the dimensionality of the feature map while preserving the detailed information of small targets; it also identifies medium and large targets in complex backgrounds more accurately by fully utilizing multilevel features. Then, the network structure is optimized with a feature extraction module that incorporates the PPA mechanism, focusing more closely on the details of small targets. In addition, detection accuracy is further improved through an improved loss function (ICMPIoU). Experiments on the SAR ship detection dataset (SSDD) and the high-resolution SAR image dataset (HRSID) show that the framework performs well in both accuracy and response speed, achieving 74.5% and 67.6% mAP0.50:0.95, respectively, with only 2.97M parameters.
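Wavelet-based downsampling of the kind SDWavetpool builds on can be sketched with a single-level 2-D Haar transform: each 2 × 2 block becomes one low-pass (LL) and three detail (LH, HL, HH) channels, so spatial resolution halves while the transform remains lossless. This is a generic sketch, not the module's actual design:

```python
import numpy as np

def haar_downsample(img):
    """Single-level orthonormal 2-D Haar transform of a 2D array.

    Returns a (4, H/2, W/2) stack: low-pass LL plus details LH, HL, HH.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # the four pixels of each 2x2 block
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0                 # local average (low-pass)
    lh = (a - b + c - d) / 2.0                 # horizontal detail
    hl = (a + b - c - d) / 2.0                 # vertical detail
    hh = (a - b - c + d) / 2.0                 # diagonal detail
    return np.stack([ll, lh, hl, hh])

x = np.arange(16.0).reshape(4, 4)
sub = haar_downsample(x)
```

Because the transform is orthonormal, the total energy of the four subbands equals that of the input, which is exactly why wavelet pooling can shrink feature maps without discarding small-target detail.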
Xingyu Hu;Hongyu Chen;Yugang Chang;Xue Yang;Weiming Zeng, "SDWPNet: A Downsampling-Driven Network for SAR Ship Detection With Refined Features and Optimized Loss," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5. Pub Date: 2025-11-05, DOI: 10.1109/LGRS.2025.3629377.
Pub Date: 2025-11-05, DOI: 10.1109/LGRS.2025.3629303
Elman Ghazaei;Erchan Aptoula
ConvNets and vision transformers (ViTs) have been widely used for change detection (CD), though each has limitations: the former do not effectively capture long-range dependencies, while the latter carry high computational demands. Vision Mamba, based on state space models, has been proposed as an alternative, yet has primarily been used as a feature extraction backbone. In this work, the change state space model (CSSM) is introduced as a task-specific approach for CD, designed to focus exclusively on relevant changes between bitemporal images while filtering out irrelevant information. This design reduces the number of parameters, improves computational efficiency, and enhances robustness. CSSM is evaluated on three benchmark datasets, where it outperforms ConvNets, ViTs, and Mamba-based models at a significantly lower computational cost. The code will be made publicly available at https://github.com/Elman295/CSSM upon acceptance.
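The state space recurrence underlying Mamba-style models can be sketched as h_t = A h_{t-1} + B x_t with output y_t = C h_t. The toy scan below is linear time-invariant, whereas Mamba makes the parameters input-dependent ("selective"); all matrices here are illustrative:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a discrete linear state space model over a 1-D input sequence."""
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:                 # sequential scan, the operation Mamba parallelizes
        h = A @ h + B * xt       # state update
        ys.append(C @ h)         # readout
    return np.array(ys)

A = np.diag([0.9, 0.5])          # stable diagonal state matrix (illustrative)
B = np.array([1.0, 1.0])
C = np.array([0.5, 0.5])
y = ssm_scan(np.ones(8), A, B, C)   # step response of the toy system
```

The slowly decaying mode (0.9) carries long-range context while the fast mode (0.5) tracks recent input, which is the intuition behind using SSMs in place of attention for long sequences.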
"Efficient Remote Sensing Change Detection With Change State Space Models," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5.
Pub Date: 2025-10-30, DOI: 10.1109/LGRS.2025.3626855
Dat Minh-Tien Nguyen;Thien Huynh-The
Remote sensing object detection faces challenges such as small object sizes, complex backgrounds, and computational constraints. To overcome these challenges, we propose XSNet, an efficient deep learning (DL) model designed to enhance feature representation and multiscale detection. Concretely, XSNet introduces three key innovations: a swin-involution transformer (SIner) to improve local self-attention and spatial adaptability, positional weight bi-level routing attention (PosWeightRA) to refine spatial awareness and preserve positional encoding, and an X-shaped multiscale feature fusion strategy to optimize feature aggregation while reducing computational cost. These components collectively improve detection accuracy, particularly for small and overlapping objects. In extensive experiments, XSNet achieves mAP0.5 and mAP0.95 scores of 47.1% and 28.2% on VisDrone2019, and 92.9% and 66.0% on RSOD. It outperforms state-of-the-art models while maintaining a compact size of 7.11 million parameters and a fast inference time of 35.5 ms, making it well suited for real-time remote sensing in resource-constrained environments.
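Attention with an additive positional bias, the generic pattern that modules like PosWeightRA refine, can be sketched as follows; the distance-based bias matrix is an illustrative assumption, not the paper's learned weighting:

```python
import numpy as np

def attention(q, k, v, pos_bias):
    """Scaled dot-product attention with an additive positional bias matrix."""
    scores = q @ k.T / np.sqrt(q.shape[-1]) + pos_bias
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ v

n, d = 4, 8
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, n, d))
# Illustrative bias: penalize attending to distant positions.
bias = -np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
out = attention(q, k, v, bias)
```

A strongly diagonal bias drives the output toward the values themselves, showing how a positional term can steer where attention is routed.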
"XSNet: Lightweight Object Detection Model Using X-Shaped Architecture in Remote Sensing Images," IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1–5.
Pub Date: 2025-10-30, DOI: 10.1109/LGRS.2025.3626786
Binge Cui;Shengyun Liu;Jing Zhang;Yan Lu
Coastline extraction from remote sensing imagery is persistently challenged by intraclass heterogeneity (e.g., diverse coastline types) and boundary ambiguity. Existing methods often underperform in complex scenes mixing artificial and natural landforms, as they tend to ignore coastline morphological priors and struggle to recover details in low-contrast regions. To address these issues, this letter introduces TopoSegNet, a novel collaborative framework centered on a dual-decoder architecture. A segmentation decoder uses a morphology-aware attention (MAA) module to adaptively decouple and model diverse coastline morphologies and a structure-detail synergistic enhancement (SDSE) module to reconstruct weak boundaries with high fidelity. Meanwhile, a learnable topology decoder frames topology construction as a graph reasoning task, ensuring the geometric and topological integrity of the final vector output. TopoSegNet was evaluated on the public Landsat-8 dataset and a custom Lianyungang Gaofen-1 (GF-1) dataset. The proposed method reached 98.64% mIoU, 66.80% BIoU, and 0.795 average path length similarity (APLS), verifying its validity and superiority. Compared with state-of-the-art methods, TopoSegNet demonstrates significantly higher accuracy and topological fidelity.
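The mIoU and BIoU metrics quoted above follow standard definitions; a plain-NumPy sketch, where the boundary is taken as the foreground pixels with at least one background 4-neighbor:

```python
import numpy as np

def boundary(mask):
    """Foreground pixels of a boolean mask that touch the background (4-connectivity)."""
    m = np.pad(mask, 1)   # pad with background so edge pixels count as boundary
    interior = m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    return mask & ~interior

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    return (a & b).sum() / max((a | b).sum(), 1)

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # ground-truth coastline mask
b = np.zeros((8, 8), bool); b[2:6, 3:7] = True   # prediction shifted one pixel
miou, biou = iou(a, b), iou(boundary(a), boundary(b))
```

Note how a one-pixel shift costs far more BIoU (1/3 here) than mask IoU (0.6), which is why BIoU is reported alongside mIoU for thin coastal boundaries.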
"TopoSegNet: Enhancing Geometric Fidelity of Coastline Extraction via a Joint Segmentation and Topological Reasoning Framework," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5.
Deep learning approaches that jointly learn feature detection and description have achieved remarkable progress in image matching. However, current methods often treat central and neighboring pixels uniformly and use static feature selection strategies that fail to account for environmental variations, limiting the robustness of descriptors and keypoints and thereby affecting matching accuracy. To address these limitations, we propose a robust joint optimization network for feature detection and description in optical and SAR image matching. A center-weighted module (CWM) enhances local feature representation by emphasizing the hierarchical relationship between central and surrounding features. Furthermore, a multiscale gated aggregation (MSGA) module suppresses redundant responses and improves keypoint discriminability through a gating mechanism. To address the inconsistency of score maps across heterogeneous modalities, we design a position-constrained repeatability loss that guides the network toward stable and consistent keypoint correspondences. Experimental results across various scenarios demonstrate that the proposed method outperforms state-of-the-art techniques in both matching accuracy and the number of correct matches, highlighting its robustness and effectiveness.
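Once descriptors are computed for both modalities, correspondences are commonly formed by mutual nearest-neighbor matching; a generic sketch on synthetic descriptors (not the paper's network):

```python
import numpy as np

def mutual_nn(desc_a, desc_b):
    """Keep only pairs (i, j) that are each other's nearest descriptor."""
    d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=-1)  # pairwise L2
    ab = d.argmin(axis=1)   # best match in B for each A descriptor
    ba = d.argmin(axis=0)   # best match in A for each B descriptor
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 32))                              # "optical" descriptors
desc_b = desc_a[::-1] + rng.normal(scale=0.01, size=(5, 32))   # permuted, noisy "SAR" side
matches = mutual_nn(desc_a, desc_b)
```

The mutual check discards one-sided matches, which is the simplest filter against the cross-modal ambiguity the abstract describes.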
Xinshan Zhang;Zhitao Fu;Menghua Li;Shaochen Zhang;Han Nie;Bo-Hui Tang, "A Robust Joint Optimization Network for Feature Detection and Description in Optical and SAR Image Matching," IEEE Geoscience and Remote Sensing Letters, vol. 23, pp. 1–5. Pub Date: 2025-10-30, DOI: 10.1109/LGRS.2025.3626750.
Pub Date : 2025-10-28  DOI: 10.1109/LGRS.2025.3626369
Haowen Jin;Yuankang Ye;Chang Liu;Feng Gao
Precipitation nowcasting using radar echo data is critical for issuing timely extreme weather warnings, yet the existing models struggle to balance computational efficiency with prediction accuracy when modeling complex, nonlinear echo sequences. To address these challenges, we propose MambaCast, a novel dual-branch precipitation nowcasting model built upon the Mamba framework. Specifically, MambaCast incorporates three key components: a state-space model (SSM) branch, a convolutional neural network (CNN) branch, and a CastFusion module. The SSM branch captures global low-frequency evolution features in the radar echo field through a selective scanning mechanism, while the CNN branch extracts local high-frequency transient features using gated spatiotemporal attention (gSTA). The CastFusion module dynamically integrates features across different frequency scales, enabling adaptive fusion of spatiotemporal distributions. Experiments on two public radar datasets show that MambaCast consistently outperforms baseline models.
{"title":"MambaCast: An Efficient Precipitation Nowcasting Model With Dual-Branch Mamba","authors":"Haowen Jin;Yuankang Ye;Chang Liu;Feng Gao","doi":"10.1109/LGRS.2025.3626369","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3626369","url":null,"abstract":"Precipitation nowcasting using radar echo data is critical for issuing timely extreme weather warnings, yet the existing models struggle to balance computational efficiency with prediction accuracy when modeling complex, nonlinear echo sequences. To address these challenges, we propose MambaCast, a novel dual-branch precipitation nowcasting model built upon the Mamba framework. Specifically, MambaCast incorporates three key components: a state-space model (SSM) branch, a convolutional neural network (CNN) branch and a CastFusion module. The SSM branch captures global low-frequency evolution features in the radar echo field through a selective scanning mechanism, while the CNN branch extracts local high-frequency transient features using gated spatiotemporal attention (gSTA). The CastFusion module dynamically integrates features across different frequency scales, enabling adaptive fusion of spatiotemporal distribution. Experiments on two public radar datasets show that MambaCast consistently outperforms baseline models.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145537628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}