Pub Date: 2026-02-03 · DOI: 10.1109/JSTARS.2026.3660704
Cong Huang;Chun Liu;Ke Shi;Jian Yang
Accurate bridge detection in polarimetric SAR (PolSAR) imagery remains challenging due to the diversity of bridges, strong speckle noise, and complex backgrounds. Bridges spanning narrow river branches are particularly prone to missed detection. We propose a bridge detection method that combines cross-section probability modeling and graph topology analysis for accurate detection of bridges over narrow river branches in complex water networks. The core innovation of the proposed method lies in a novel approach to constructing water networks. Specifically, cross-sections are extracted at the termini of water branches, which are subsequently connected to construct a water network. Bridges are detected as land regions that connect adjacent branches of the water network. Water regions are first segmented from PolSAR images using a likelihood ratio test. Subsequently, cross-sections at the ends of water branches are estimated via a particle filtering algorithm, ensuring precise localization and shape representation of branch termini. Finally, matched cross-section pairs are used to construct the water network, enabling reliable detection of bridge regions. Experiments on Gaofen-3 (GF-3) and RADARSAT-2 datasets covering single-branch, multibranch, and complex waterway scenarios demonstrate the effectiveness of the proposed approach. Compared with existing methods, it achieves superior performance with an F1-score of 0.94 and an mIoU of 0.66, confirming its robustness and accuracy.
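The final detection rule, bridges as land regions that connect adjacent branches of the water network, can be illustrated on a binary water mask. The sketch below is a toy numpy/scipy stand-in for that last step only; the paper's likelihood-ratio segmentation and particle-filter cross-section matching are not reproduced, and `find_bridge_pixels` with its 4-neighborhood rule is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import label

def find_bridge_pixels(water_mask):
    """Flag land pixels whose 4-neighborhood touches two distinct
    connected water components (a bridge separates the river into
    two components in the mask)."""
    labels, _ = label(water_mask)  # 4-connected components by default
    h, w = water_mask.shape
    bridge = np.zeros_like(water_mask, dtype=bool)
    for i in range(h):
        for j in range(w):
            if water_mask[i, j]:
                continue  # only land pixels can be bridge candidates
            touched = set()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni, nj]:
                    touched.add(labels[ni, nj])
            if len(touched) >= 2:  # connects two water branches
                bridge[i, j] = True
    return bridge
```

On a 5x5 all-water mask with one land row across the middle, every pixel of that row touches both water components and is flagged.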
Title: "A PolSAR Bridge Detection Method Integrating Cross-Sectional Probability Modeling and Graph Topology Analysis" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6460–6476.
Pub Date: 2026-02-03 · DOI: 10.1109/JSTARS.2026.3659984
Jie Yu;Xin Chen;Yi Lin;Yu Rong;Junbo Lv;Yuxuan Yang;Daiqi Zhong;Yiyuan Tian;Yi Jing;Xiaonan Yang
Spectral variability and nonlinear mixing interactions critically degrade spectral unmixing accuracy, especially in heterogeneous environments. To address these challenges, this study proposes a robust nonlinear spectral variability-aware unmixing model, AD-HKFCM, which integrates fuzzy clustering, kernel-driven nonlinear mapping, and intraclass/interclass affinity cohesion. The model introduces a hybrid kernel function combining polynomial and radial basis kernels to enhance linear separability in high-dimensional space. By replacing conventional fuzzy c-means prototypes with support vector data description-derived hypersphere centers, the model reduces dependency on pure pixels and adaptively suppresses outliers through adaptive penalty weight optimization. A physics-informed affinity distance metric is designed to explicitly quantify spectral variability by penalizing intraclass dispersion and amplifying interclass separation, thereby enabling the precise inference of "virtual pure endmembers" from intimately mixed data. Experiments on simulated (including Orchard 2EM/3EM benchmarks and synthetic hyperspectral) and real satellite datasets demonstrate that AD-HKFCM achieves 5–26% lower abundance estimation errors compared with the best-performing comparative methods, particularly in densely mixed regions with seasonal vegetation variability. This work unifies spectral variability compensation and nonlinear unmixing into a cohesive architecture, offering a generalizable solution for robust unmixing in complex environments.
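A hybrid kernel of the kind described, a combination of a polynomial and a radial basis (RBF) kernel, can be sketched as below. The convex-combination form and the parameter names (`w`, `gamma`, `degree`, `coef0`) are assumptions for illustration; the paper's exact weighting scheme may differ.

```python
import numpy as np

def hybrid_kernel(x, y, w=0.5, gamma=1.0, degree=2, coef0=1.0):
    """Convex combination of a polynomial kernel and an RBF kernel.

    k(x, y) = w * (x.y + coef0)^degree + (1 - w) * exp(-gamma * ||x - y||^2)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    poly = (x @ y + coef0) ** degree
    rbf = np.exp(-gamma * np.sum((x - y) ** 2))
    return w * poly + (1.0 - w) * rbf
```

Because both components are valid positive-definite kernels, their convex combination is too, so it can drop into any kernelized clustering update such as kernel fuzzy c-means.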
Title: "AD-HKFCM: A Robust Nonlinear Spectral Variability-Aware Unmixing via Intra/Inter-Class Affinity Cohesion" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7280–7294.
Accurate monitoring and recognition of Arctic sea ice are essential for understanding global climate change and the evolution of polar ecosystems. With the rapid advancement of satellite remote sensing technologies, integrating data from multiple remote sensing sources has shown strong potential for improving sea ice recognition. However, existing studies have not sufficiently explored how to jointly capture high-frequency spatial details and low-frequency global structures from multisource observations, nor have they fully addressed effective information interaction across different data modalities. To overcome these limitations, this study proposes a novel multimodal fusion network, termed global–local feature fusion network (GLFFuse), designed for fine-grained Arctic sea ice recognition. The proposed framework integrates synthetic aperture radar (SAR) imagery, advanced microwave scanning radiometer 2 (AMSR2) data, ECMWF Reanalysis v5 (ERA5) data, and other auxiliary variables. It combines a long short-range attention mechanism with an invertible neural network (INN) to jointly model global contextual patterns and local structural details, thereby enhancing the complementarity among multimodal features. Extensive quantitative and qualitative evaluations on the AI4Arctic dataset demonstrate that the proposed feature-level fusion strategy consistently outperforms conventional convolutional neural network-based and attention-based models across different sea ice recognition tasks. In addition, seasonal analysis results indicate that multimodal data fusion significantly improves prediction accuracy and stability under varying seasonal conditions, effectively reducing systematic biases and predictive uncertainty across a wide range of sea ice concentrations.
Title: "GLFFuse: A Multimodal Feature-Level Fusion Network for Multitask Fine-Grained Recognition of Arctic Sea Ice" · Authors: Tianen Ma;Xinwei Chen;Haipeng Qin;Linlin Xu;Peilin Yu;Weimin Huang · Pub Date: 2026-02-03 · DOI: 10.1109/JSTARS.2026.3660828 · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7661–7679.
Pub Date: 2026-02-02 · DOI: 10.1109/JSTARS.2026.3660330
Jia Song;Luosheng Xia
Self-supervised learning (SSL) offers a promising solution to reduce reliance on labeled data. Among SSL approaches, Masked Image Modeling (MIM) has demonstrated significant potential in remote sensing applications such as scene classification and semantic segmentation, owing to its ability to capture pixel-level details. However, existing MIM frameworks, originally designed for natural images, struggle to adapt to the spectral-spatial characteristics of multispectral satellite imagery. While recent studies have introduced spectral-enhanced MIM SSL methods, most rely on band-group embedding, which imposes constraints on band utilization flexibility in downstream fine-tuning tasks and limits the granularity of spectral feature learning. To address these challenges, this study proposes Band-Independent Masked Image Modeling (BIMIM) with Transformer, a novel SSL framework specifically designed for multispectral satellite imagery. BIMIM not only enables finer band-specific spectral feature extraction, allowing for more effective capture of subtle spectral variations, but also introduces spatially random masking at the single-band level, facilitating more efficient interband feature learning. Extensive experiments on publicly available remote sensing datasets demonstrate that BIMIM achieves state-of-the-art performance in downstream tasks such as scene classification and semantic segmentation. This study provides a new perspective on SSL for multispectral remote sensing, paving the way for more effective spectral-spatial feature extraction and adaptation in SSL frameworks.
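The key mechanism, spatially random masking applied independently per spectral band, can be sketched in numpy. This is an illustrative stand-in only: the patch size, mask ratio, and function name are assumptions, not BIMIM's actual configuration.

```python
import numpy as np

def band_independent_mask(image, mask_ratio=0.75, patch=4, seed=0):
    """Independently hide a random set of spatial patches in every band.

    `image` has shape (bands, H, W); H and W must be divisible by `patch`.
    Returns (masked_image, mask) where mask is True at hidden pixels.
    """
    rng = np.random.default_rng(seed)
    b, h, w = image.shape
    gh, gw = h // patch, w // patch
    n_masked = int(round(mask_ratio * gh * gw))
    grid = np.zeros((b, gh, gw), dtype=bool)
    for k in range(b):
        # a fresh random draw per band: the "band-independent" part
        idx = rng.choice(gh * gw, size=n_masked, replace=False)
        grid[k].flat[idx] = True
    # expand the patch grid back to pixel resolution
    mask = np.kron(grid.astype(np.uint8),
                   np.ones((patch, patch), np.uint8)).astype(bool)
    return np.where(mask, 0, image), mask
```

In contrast, band-group embedding would reuse one spatial mask for a whole group of bands; here each band sees a different occlusion pattern, which is what forces interband reconstruction.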
Title: "BIMIM: Band-Independent Masked Image Modeling With Transformer for Multispectral Satellite Imagery" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6443–6459.
Pub Date: 2026-02-02 · DOI: 10.1109/JSTARS.2026.3660363
Agata M. Wijata;Lukasz Tulczyjew;Peter Naylor;Bertrand Le Saux;Nicolas Longépé;Jakub Nalepa
Technological advancements are expanding the potential of hyperspectral image (HSI) analysis for Earth observation. Extracting insights from high-dimensional images has led to various supervised artificial intelligence (AI) approaches. However, the world is not labeled, and acquiring ground truth (GT) is expensive. For certain tasks, such as estimating soil parameters, only coarse-grained and field-level measurements are available. We tackle the challenge of building (un)supervised AI models from weakly labeled sets with image-level labels. We propose a comprehensive framework for estimating soil parameters from HSIs, and a spectrally and spatially informed algorithm for generating pseudolabels based on the original GT. We analyze the spatial variations of spectral pixel characteristics within the parcels (images) using superpixels, which group neighboring pixels based on their spectral features; superpixels are determined using unsupervised clustering. Then, the superpixels are clustered based on the feature vectors calculated for them. The resulting clusters are either used to estimate soil parameters for the incoming unseen samples (pixels, superpixels, or images) in a fully unsupervised fashion, or are exploited to elaborate pseudolabels (at the superpixel level), which can be used to train supervised models.
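The cluster-then-pseudolabel idea can be sketched as follows: cluster superpixel feature vectors, then label every cluster with the mean image-level label of its members. This is a minimal stand-in, not the authors' pipeline; the tiny k-means, the function name, and the mean-label rule are assumptions for the sketch.

```python
import numpy as np

def cluster_pseudolabels(sp_features, sp_parcel_id, parcel_labels,
                         k=3, iters=20, seed=0):
    """Pseudolabel superpixels from image-level (parcel-level) labels.

    sp_features:  (n, d) feature vector per superpixel
    sp_parcel_id: parcel (image) each superpixel belongs to
    parcel_labels: dict parcel -> image-level soil-parameter value
    Returns one pseudolabel per superpixel.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(sp_features, float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):  # plain Lloyd iterations
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = X[assign == c].mean(0)
    y = np.asarray([parcel_labels[p] for p in sp_parcel_id], float)
    # each cluster inherits the mean image-level label of its members
    cluster_y = np.array([y[assign == c].mean() if (assign == c).any()
                          else np.nan for c in range(k)])
    return cluster_y[assign]
```

With two well-separated feature clusters coming from two parcels, each superpixel recovers its parcel's label.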
Title: "Getting the Most Out of the Image-Level Labels: (Un)Supervised Learning for Extracting Soil Parameters From Hyperspectral Images" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7401–7418.
Pub Date: 2026-02-02 · DOI: 10.1109/JSTARS.2026.3659848
Xinyu Cui;Bing Tu;Bo Liu;Yan He;Antonio Plaza
The ubiquitous presence of clouds in optical remote sensing images (RSIs) degrades image quality. Thin and fragmented clouds often exhibit low contrast and diverse morphology in complex scenes, which poses a significant challenge for accurate detection. To address the current challenges in thin cloud detection, this study proposes an adaptive cloud detection network in RSIs via local–global spatial context (LSCNet). It aims to process thin cloud features within global and local spatial contexts, thereby improving the accuracy and robustness of thin cloud detection in complex environments. Specifically, the network simulates a dual perspective by constructing a Mamba-based multiscale fusion block. This block utilizes learnable fusion weights to adaptively integrate differential and complementary information, thereby capturing thin cloud variations across both spatial and spectral dimensions in RSIs. In addition, we propose a local gated Mamba block for detailed feature enhancement. This module utilizes a spatial gating mechanism inspired by long short-term memory to capture key thin-cloud features and suppress residual background noise. By fully leveraging the spatial structure and morphology of thin clouds and building connections between thin cloud features across different spatial scales, the module achieves precise segmentation of cloud and ground features, thereby boosting the classification performance for thin and thick clouds in identical observational contexts. Extensive experiments conducted on the L8-Biome dataset and the WHUS2-CD+ dataset demonstrate that our method outperforms other existing cloud detection methods.
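The learnable fusion weights that blend differential and complementary branch features can be illustrated with a scalar softmax-weighted combination. This numpy forward-pass sketch is an assumption-level simplification of the Mamba-based fusion block (`adaptive_fuse` and scalar-per-branch weights are illustrative, not the paper's design).

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

def adaptive_fuse(global_feat, local_feat, logits):
    """Blend a global-context and a local-detail feature map with
    learnable weights (softmax over `logits`); during training the
    logits would be optimized by backprop."""
    w = softmax(np.asarray(logits, float))
    return w[0] * global_feat + w[1] * local_feat
```

Equal logits reduce to a plain average; a strongly positive first logit lets the network rely almost entirely on the global branch.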
Title: "LSCNet: An Adaptive Cloud Detection Network via Local–Global Spatial Context" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6637–6652.
Pub Date: 2026-02-02 · DOI: 10.1109/JSTARS.2026.3660339
Xin Sui;Guibin Liu;Changqiang Wang;Jiaxin Gao
To mitigate the impact of nondiffuse reflection noise in light detection and ranging (LiDAR) point clouds on the mapping accuracy of simultaneous localization and mapping technology, this paper proposes a contour-matching-based method for detecting nondiffuse reflection noise in point clouds. First, based on the clustering segmentation results of a single-frame point cloud, all independent point cloud clusters are projected onto the X–Z and Y–Z planes. Then, the alpha shape algorithm is applied to extract the two-dimensional contours of each projected point cloud cluster, and corresponding feature vectors are constructed based on the geometric properties of these contours. Finally, by leveraging the symmetry properties of nondiffuse reflection point cloud clusters, nondiffuse reflection noise points are identified through the computation of cosine similarity between feature vectors. Experimental results indicate that the proposed method achieves a correct removal rate of 91.44% for nondiffuse noise points and a false removal rate of 5.77% for non-noise points. Compared with similar approaches, the correct detection rate of nondiffuse noise points is improved by 2.71%, while the false detection rate of non-noise points is reduced by 1.49%. Each point cloud frame is processed in 0.0741 s. The proposed approach effectively eliminates nondiffuse reflection noise points in point cloud data, mitigating the adverse impact of nondiffuse objects on LiDAR data quality.
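The final matching step, flagging clusters whose contour feature vectors are highly similar under the cosine measure, can be sketched directly. The feature design, function names, and the 0.9 threshold below are assumptions for illustration; the paper derives its feature vectors from alpha-shape contour geometry.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two contour feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_mirror_pair(feat_a, feat_b, threshold=0.9):
    """Flag two candidate clusters as a real-object/nondiffuse-ghost pair
    when their contour feature vectors are near-identical, exploiting the
    symmetry of nondiffuse reflections."""
    return cosine_similarity(feat_a, feat_b) >= threshold
```

Identical feature vectors score 1.0 and are flagged; unrelated (near-orthogonal) vectors fall well below the threshold.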
Title: "Contour Matching-Based Nondiffuse Reflection Noise Point Cloud Detection Method" · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6497–6516.
Pub Date: 2026-01-30 · DOI: 10.1109/JSTARS.2026.3659506
Huiying Chen;Long Chen;Xingfang Pei;Yifei Guan;Zhenhua Zhou;Guanjun Liu;Senlin Zhu;Yi Luo
The decline in surface wind speed (SWS) caused by rapid urbanization has been confirmed in many regions. However, due to technological limitations, previous studies have often overlooked the multidimensional aspects of urban morphology and the coupled effects among these morphological factors, especially at large scales. Recent advances in remote sensing, particularly the availability of long-term 3D urban morphology datasets derived from global settlement products, have enabled such analyses. In this article, we collected and integrated SWS data from 183 stations across China for 1975–2020 with satellite-derived 2D–3D urban morphological indicators to quantitatively assess the impacts of urban morphology on wind speed variation. We found that during the entire study period, the mean SWS at urban stations (2.38 ± 0.13 m s−1) was significantly lower than that at rural stations (2.86 ± 0.28 m s−1); meanwhile, the declining trend at urban stations (–0.11 m s−1 decade−1, p < 0.05) was faster than that at rural stations (–0.07 m s−1 decade−1, p < 0.05). In addition, we observed that the wind reversal phenomenon (i.e., a transition from widespread decline to widespread increase over recent decades) occurred earlier at rural stations (2008) than at urban stations (2011). Moreover, the linear mixed-effects regression model, with urban–rural station pairing treated as a random effect to account for location-specific variability, indicated that building surface density, near-surface air temperature, and building-topographic height difference jointly dominated the variation of SWS at urban stations, and the coupled effects of multiple urban morphological factors exceeded those of any single factor. This article provides evidence for mitigating the adverse impacts of extreme heat on populations through urban morphological modifications.
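A decadal trend such as the reported -0.11 m/s per decade is a linear least-squares slope scaled by ten. The helper below sketches that computation only; it does not reproduce the study's station pairing or mixed-effects model.

```python
import numpy as np

def decadal_trend(years, wind_speed):
    """Ordinary least-squares trend of annual-mean wind speed,
    reported in m/s per decade."""
    slope = np.polyfit(np.asarray(years, float),
                       np.asarray(wind_speed, float), 1)[0]
    return 10.0 * slope  # per-year slope scaled to per-decade
```

For a series declining by 0.011 m/s each year over 1975-2020, the function recovers a trend of -0.11 m/s per decade.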
{"title":"Regulatory Effects of Urban Morphology on Near-Surface Wind Speed in China Revealed by Observed and Remote Sensing Data","authors":"Huiying Chen;Long Chen;Xingfang Pei;Yifei Guan;Zhenhua Zhou;Guanjun Liu;Senlin Zhu;Yi Luo","doi":"10.1109/JSTARS.2026.3659506","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3659506","url":null,"abstract":"The decline in surface wind speed (SWS) caused by rapid urbanization has been confirmed in many regions. However, due to technological limitations, previous studies have often overlooked the multidimensional aspects of urban morphology and the coupled effects among these morphological factors, especially at large scales. Recent advances in remote sensing, particularly the availability of long-term 3D urban morphology datasets derived from global settlement products, have enabled such analyses. In this article, we collected and integrated SWS data from 183 stations across China for 1975–2020 with satellite-derived 2D–3D urban morphological indicators to quantitatively assess the impacts of urban morphology on wind speed variation. We found that during the entire study period, the mean SWS at urban stations (2.38 ± 0.13 m s<sup>−1</sup>) was significantly lower than that at rural stations (2.86 ± 0.28 m s<sup>−1</sup>); meanwhile, the declining trend at urban stations (–0.11 m s<sup>−1</sup> decade<sup>−1</sup>, <i>p</i> < 0.05) was faster than that at rural stations (–0.07 m s<sup>−1</sup> decade<sup>−1</sup>, <i>p</i> < 0.05). In addition, we observed that the wind reversal phenomenon (i.e., a transition from widespread decline to widespread increase over recent decades) occurred earlier at rural stations (2008) than at urban stations (2011). 
Moreover, the linear mixed-effects regression model, with urban–rural station pairing treated as a random effect to account for location-specific variability, indicated that building surface density, near-surface air temperature, and building-topographic height difference jointly dominated the variation of SWS at urban stations, and the coupled effects of multiple urban morphological factors exceeded those of any single factor. This article provides evidence for mitigating the adverse impacts of extreme heat on populations through urban morphological modifications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"7084-7096"},"PeriodicalIF":5.3,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11368858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30 | DOI: 10.1109/JSTARS.2026.3659926
Jingxin Hu;Juan Du;Guoying Yin;Wei He;Xiong Xu
Soil moisture (SM) is a crucial variable for regulating global climate change. However, SM products derived from microwave remote sensing often suffer from low resolution and incomplete coverage, hindering their use in regional hydrology and precision agriculture. To address these challenges, this study developed a three-step downscaling framework, SM-RDC, which consists of processes of reconstruction, downscaling, and calibration. In the reconstruction step, a temporal-spatial 3-D (TS3D) convolutional network was proposed to fill gaps in the 9 km soil moisture active passive (SMAP) time series by leveraging high-resolution auxiliary data, producing continuous, high-quality SM labels. For downscaling step, an attention-CNN model incorporating the convolutional block attention module was designed to robustly extract spatial and channel features from eleven auxiliary variables, covering meteorology, climate, vegetation, energy, land cover, topography, and soil properties, to predict 1-km SMs. In the calibration step, a residual correction method was applied to refine the downscaled results. The framework was tested by downscaling the SMAP product from 9 km to 1 km resolution over the Tibetan Plateau. Results illustrated that the proposed SM-RDC framework is capable of producing daily 1-km SMs with high accuracy and seamless spatial coverage, with an average ubRMSE of 0.056 compared to in-situ observation data, and an average ubRMSE of 0.036 against the original SMAP data, demonstrating its potential for enhancing climate monitoring and hydrological applications.
{"title":"SM-RDC: A Three-Step Downscaling Framework for Daily 1-km Seamless SMAP Product Generation Over the Tibetan Plateau","authors":"Jingxin Hu;Juan Du;Guoying Yin;Wei He;Xiong Xu","doi":"10.1109/JSTARS.2026.3659926","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3659926","url":null,"abstract":"Soil moisture (SM) is a crucial variable for regulating global climate change. However, SM products derived from microwave remote sensing often suffer from low resolution and incomplete coverage, hindering their use in regional hydrology and precision agriculture. To address these challenges, this study developed a three-step downscaling framework, SM-RDC, which consists of reconstruction, downscaling, and calibration steps. In the reconstruction step, a temporal-spatial 3-D (TS3D) convolutional network was proposed to fill gaps in the 9 km soil moisture active passive (SMAP) time series by leveraging high-resolution auxiliary data, producing continuous, high-quality SM labels. In the downscaling step, an attention-CNN model incorporating the convolutional block attention module was designed to robustly extract spatial and channel features from eleven auxiliary variables, covering meteorology, climate, vegetation, energy, land cover, topography, and soil properties, to predict 1-km SMs. In the calibration step, a residual correction method was applied to refine the downscaled results. The framework was tested by downscaling the SMAP product from 9 km to 1 km resolution over the Tibetan Plateau. 
Results illustrated that the proposed SM-RDC framework is capable of producing daily 1-km SMs with high accuracy and seamless spatial coverage, with an average ubRMSE of 0.056 compared to in-situ observation data, and an average ubRMSE of 0.036 against the original SMAP data, demonstrating its potential for enhancing climate monitoring and hydrological applications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"6425-6442"},"PeriodicalIF":5.3,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11369458","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The time reversal (TR) imaging technique demonstrates favorable positioning accuracy and target discrimination capability, making it well-suited for unexploded ordnance (UXO) detection applications. However, its practical applicability and real-time performance in engineering contexts remain limited due to two primary factors: the requirement for large-scale antenna arrays in conventional TR-based imaging, and the significant computational complexity associated with decomposing the full time reversal operator (TRO). To address these limitations, this article proposes a common-offset-based space–frequency multiple signal classification (CSF-MUSIC) algorithm. The method reconstructs the conventional space–frequency multistatic data matrix (SF-MDM) into a novel common-offset SF-MDM (CSF-MDM). Crucially, CSF-MUSIC operates in a dual-antenna measurement mode, which overcomes the fundamental constraint in traditional MUSIC algorithms mandating more antennas than targets, facilitating more compact detection system design. Furthermore, we introduce an optimized iterative QR decomposition to replace conventional singular value decomposition for TRO processing derived from the CSF-MDM, which lowers the cost of decomposing the full TRO and substantially shortens computation time. Simulation and experimental results demonstrate that the proposed algorithm enables precise localization and imaging of UXO using a dual-antenna detection system.
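The subspace idea behind any MUSIC-type imager, including CSF-MUSIC, is the same: decompose the multistatic data matrix into signal and noise subspaces, then scan a pseudospectrum that blows up wherever the candidate steering (Green's-function) vector is orthogonal to the noise subspace. A toy single-target sketch of that generic step (the geometry, wavelength, and Born-type rank-1 data matrix are invented for illustration; this is not the paper's common-offset CSF-MDM construction, and it uses SVD where the paper substitutes iterative QR):

```python
import numpy as np

antennas = np.linspace(-1.0, 1.0, 8)   # antenna x-positions (m), assumed
depth = 1.5                             # assumed target burial depth (m)
k = 2 * np.pi / 0.5                     # wavenumber for a 0.5 m wavelength

def steering(x):
    """Monochromatic point-response vector for a target at (x, depth)."""
    r = np.sqrt((antennas - x) ** 2 + depth ** 2)
    return np.exp(1j * k * r) / r

x_true = 0.4
g = steering(x_true)
K = np.outer(g, g)                      # rank-1 multistatic matrix (Born, 1 target)

# Signal/noise subspace split; classic MUSIC needs more antennas than targets.
U, s, _ = np.linalg.svd(K)
noise = U[:, 1:]                        # one target -> one signal vector

xs = np.linspace(-2.0, 2.0, 401)
A = np.stack([steering(x) for x in xs], axis=1)
A /= np.linalg.norm(A, axis=0)          # normalized steering dictionary
spectrum = 1.0 / (np.linalg.norm(noise.conj().T @ A, axis=0) ** 2 + 1e-12)

x_hat = xs[np.argmax(spectrum)]         # pseudospectrum peaks at the target
```

The CSF-MDM rearrangement and the QR-based decomposition change how `K` is built and how the subspaces are extracted, but the pseudospectrum scan above is the shared final step.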
{"title":"An Efficient Time Reversal Technique Based on CSF-MUSIC for Unexploded Ordinances Localization","authors":"Jinhong Wang;Xiaoshuai Wang;Lele Zhang;Binfeng Yang;Yuanguo Zhou;Quan Xue","doi":"10.1109/JSTARS.2026.3659674","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3659674","url":null,"abstract":"The time reversal (TR) imaging technique demonstrates favorable positioning accuracy and target discrimination capability, making it well-suited for unexploded ordnance (UXO) detection applications. However, its practical applicability and real-time performance in engineering contexts remain limited due to two primary factors: the requirement for large-scale antenna arrays in conventional TR-based imaging, and the significant computational complexity associated with decomposing the full time reversal operator (TRO). To address these limitations, this article proposes a common-offset-based space–frequency multiple signal classification (CSF-MUSIC) algorithm. The method reconstructs the conventional space–frequency multistatic data matrix (SF-MDM) into a novel common-offset SF-MDM (CSF-MDM). Crucially, CSF-MUSIC operates in a dual-antenna measurement mode, which overcomes the fundamental constraint in traditional MUSIC algorithms mandating more antennas than targets, facilitating more compact detection system design. Furthermore, we introduce an optimized iterative QR decomposition to replace conventional singular value decomposition for TRO processing derived from CSF-MDM, which reduces the complexity of decomposing the full TRO, substantially reducing computational time. Simulation and experimental results demonstrate that the proposed algorithm enables precise localization and imaging of UXO using a dual-antenna detection system. 
Compared to conventional TR-based methods, CSF-MUSIC delivers enhanced spatial resolution while reducing computation time by 89%, markedly improving imaging efficiency.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"6546-6565"},"PeriodicalIF":5.3,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11369292","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}