
Latest publications: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Small-Vehicle Change Detection in UAV Imagery via Physics-Aware Spatiotemporal Cues and Reproducible Evaluation
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-23 DOI: 10.1109/JSTARS.2026.3665495
Qiong Ran;Pengfei Bian;Luyang Cai;Jinlin Chen;Huanqian Yan;He Sun
Vehicle detection holds significant research value and practical application potential. However, existing vehicle detection algorithms struggle to meet the demands for fast, accurate detection and tracking of weak and small targets in uncrewed aerial vehicle (UAV) remote sensing imagery. In this work, weak targets refer to vehicle targets with small spatial extent, low contrast with the background, and subtle temporal changes, which are difficult to detect reliably in UAV imagery. To address this issue, this article proposes a bitemporal change detection method based on spatiotemporal features. Specifically, the proposed method involves processing the data captured by UAVs, applying feature point matching for image alignment, and performing change detection on bitemporal data to achieve more precise recognition of vehicle targets. The results demonstrate that the proposed algorithm exhibits more competitive detection performance compared to traditional unsupervised methods, such as change vector analysis, differential component analysis, iteratively reweighted multivariate alteration detection, and multivariate alteration detection. Compared to traditional methods, our proposed method achieves superior performance in detecting small and weak targets, particularly excelling in identifying weak targets while reducing the occurrence of false positives.
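The align-then-difference pipeline the abstract sketches can be illustrated with a toy example. The sketch below is not the authors' method: it substitutes phase-correlation translation estimation for feature-point matching, uses simple absolute-difference thresholding as the change detector, and all data and parameters (`thresh`, the synthetic frames) are invented here.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    # Estimate the integer (dy, dx) translation between two frames via
    # phase correlation (a stand-in for the feature-point matching step).
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak positions into the signed range [-N/2, N/2).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def bitemporal_change_mask(img_t1, img_t2, thresh=0.2):
    # Align the second acquisition to the first, then flag pixels whose
    # absolute intensity change exceeds `thresh`.
    dy, dx = phase_correlation_shift(img_t1, img_t2)
    aligned = np.roll(img_t2, shift=(dy, dx), axis=(0, 1))
    return np.abs(aligned - img_t1) > thresh

# Synthetic demo: the platform shifts between passes and a small bright
# "vehicle" appears in the second frame.
rng = np.random.default_rng(0)
t1 = rng.normal(0.5, 0.05, (64, 64))
t2 = np.roll(t1, shift=(3, -2), axis=(0, 1)).copy()
t2[30:34, 40:44] += 0.6
mask = bitemporal_change_mask(t1, t2)
```

After alignment, only the 4 x 4 new-target patch survives the difference threshold.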
Citations: 0
Surging Dynamics of the ZhongFeng Glacier, Western Kunlun Mountains
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-20 DOI: 10.1109/JSTARS.2026.3665525
Yongpeng Gao;Miaomiao Qi;Jianxin Mu;Yang Liu;Chunhai Xu;Pengbin Liang
The Western Kunlun Mountains are a region known for a high concentration of surge-type glaciers in High Mountain Asia and have long been of interest to glaciologists. This article examines the 2021–2023 surge of the eastern branch of ZhongFeng Glacier (ZFG) and reviews the 2003–2004 surge of its western branch, utilising multisource digital elevation models, Landsat MSS/ETM+/OLI, Sentinel-2, and meteorological data. Our findings reveal that surges in both the eastern and western branches of the ZFG were initiated during the summer, with durations of 2 years and 1 year, respectively. Peak flow velocities exceeded 10 m/day, more than 50 times the velocities observed during quiescent periods. During surges, the glacier termini of the eastern and western branches thickened by 60.25 ± 3.07 m and 76.21 ± 8.05 m, respectively, corresponding to ice mass gains of 0.53 ± 0.03 km³ and 0.74 ± 0.08 km³. Based on the timing characteristics of these surges, we conclude that both branches of the ZFG are influenced by hydrological mechanisms. Furthermore, differences in surface and subglacial topography are determined to be the primary factors contributing to the asynchrony of surges between the two branches.
CASCADE-3D: A GUI-Driven Framework for Automated 3D Building Model Reconstruction
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-19 DOI: 10.1109/JSTARS.2026.3663677
Ruli Andaru;Bambang Kun Cahyono;Yulaikhah;Trias Aditya;Purnama Budi Santosa;Calvin Wijaya;Riyas Syamsul;Fairuz Akmal;Hyatma Adikara;Habib Muhammad;Fikri Kurniawan
The generation of rapid and accurate geospatial data and three-dimensional (3D) features is essential for supporting multipurpose land management services. This study presents CAdastre and Spatial map adjustment with spatial Computation for Automatic builDing dEtection and 3D generation (CASCADE-3D), a graphical user interface (GUI) developed for the automated reconstruction of 3D models at Levels of Detail (LOD) 1 and 2. CASCADE-3D integrates advanced deep-learning frameworks to perform building outline detection and point cloud classification. Building outlines are extracted using SAM (the Segment Anything Model), a promptable segmentation system capable of zero-shot generalization to unfamiliar objects and images without requiring additional training. The CASCADE-3D GUI enables interactive digitization, automatic regularization, and refinement of the segmentation mask based on its primary orientation. Each building height model (BHM) is generated by classifying raw point clouds with the DGCNN algorithm to extract ground and building classes. Accurate reconstruction of complex LOD2 models requires precise extraction of roof structures that captures the geometric configuration and orientation of roofs in intricate architectural forms. To achieve this, roof structure detection techniques were applied using each building’s aspect. The study utilized point clouds and orthophotos of 1,215 buildings, encompassing diverse architectural forms and land cover types, across several provinces in Indonesia. The CASCADE-3D GUI was evaluated for its accuracy in detecting building outlines and roof structures, and performing LOD1/2 reconstruction. The results indicate that the reconstructed 3D building geometries yielded an RMSE of 0.36 m. Subsequently, CASCADE-3D reconstructs LOD1 and LOD2 building models and exports them in CityJSON format.
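The "automatic regularization ... based on its primary orientation" step can be illustrated with a common PCA-based approach. This is an assumed stand-in, not CASCADE-3D's implementation; the function name and test footprint are invented here.

```python
import numpy as np

def regularize_footprint(poly):
    # Illustrative orientation-based regularization: estimate the footprint's
    # principal axis by PCA of its vertices, rotate it axis-aligned, replace
    # it by its bounding box, and rotate back, yielding a rectangle aligned
    # with the building's dominant direction.
    pts = np.asarray(poly, dtype=float)
    center = pts.mean(axis=0)
    centered = pts - center
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    main = eigvecs[:, np.argmax(eigvals)]        # principal direction
    theta = np.arctan2(main[1], main[0])
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])  # rotates rows by -theta
    aligned = centered @ rot.T
    lo, hi = aligned.min(axis=0), aligned.max(axis=0)
    box = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                    [hi[0], hi[1]], [lo[0], hi[1]]])
    return box @ rot + center                    # inverse rotation, recenter

# Demo: a 20 x 10 rectangle rotated by 30 degrees should be recovered exactly.
w, h, th = 20.0, 10.0, np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
corners = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
footprint = corners @ R.T + np.array([100.0, 50.0])
box = regularize_footprint(footprint)
```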
Fourier Decomposition-Based Phase Processing Technique: A Novel Approach for 1-D Phase Unwrapping
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-19 DOI: 10.1109/JSTARS.2026.3666269
Chenghao Lu;Donglin Li;Hengyi Jia;Taoli Yang
The performance of a coherence analysis system (CAS) is critically dependent on phase accuracy. Most advanced phase unwrapping (PU) algorithms are designed for 2-D problems, but data from various interferometric systems are typically acquired as 1-D time series. The 1-D PU problem is more challenging because it suffers more severely from noise and, compared to the 2-D case, the available adjacent points are severely restricted. To address the prevalent challenge of 1-D phase noise with a skewed nonzero-mean distribution, this article introduces a novel Fourier decomposition-based phase-processing technique (FDPT). The FDPT procedure begins with a fast Fourier transform (FFT) of the original noisy phase signal. Its frequency spectrum is then divided into subbands, which undergo a flatness evaluation to identify and extract the dominant frequency components. The inverse FFT is applied to each dominant component, converting it back to the time domain for individual processing with an adaptive nonlocal filtering algorithm. Finally, the subphase components are coherently summed, and PU methods are applied to reconstruct the phase. Simulation results demonstrate the superiority of the proposed FDPT over conventional methods, confirming improvements in waveform similarity and a reduction in root-mean-square error.
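The FDPT steps described above (FFT, subband division, dominant-component selection, inverse FFT, recombination, unwrapping) can be approximated in a short sketch. This is a loose analogue, not the article's algorithm: it filters the complex phasor rather than the raw phase, uses subband energy in place of the flatness evaluation, omits the adaptive nonlocal filter, and all parameters are invented.

```python
import numpy as np

def fdpt_like_unwrap(wrapped, n_bands=8, keep=2):
    # FDPT-inspired sketch: FFT the complex phasor, split the spectrum into
    # subbands, keep the most energetic ones (a simple energy criterion stands
    # in for the paper's flatness evaluation), reconstruct, then apply
    # standard 1-D unwrapping to the filtered phase.
    z = np.exp(1j * wrapped)
    spec = np.fft.fft(z)
    edges = np.linspace(0, z.size, n_bands + 1, dtype=int)
    energy = [np.sum(np.abs(spec[a:b]) ** 2)
              for a, b in zip(edges[:-1], edges[1:])]
    kept = np.zeros_like(spec)
    for i in np.argsort(energy)[-keep:]:
        kept[edges[i]:edges[i + 1]] = spec[edges[i]:edges[i + 1]]
    return np.unwrap(np.angle(np.fft.ifft(kept)))

# Demo: a linear phase ramp with ~6 wraps plus Gaussian phase noise.
n = 512
true_phase = np.linspace(0.0, 12 * np.pi, n, endpoint=False)
rng = np.random.default_rng(1)
wrapped = np.angle(np.exp(1j * (true_phase + 0.3 * rng.normal(size=n))))
est = fdpt_like_unwrap(wrapped)
est += true_phase[0] - est[0]  # remove the constant offset PU cannot recover
rmse = float(np.sqrt(np.mean((est - true_phase) ** 2)))
```

Keeping only the energetic subbands suppresses out-of-band noise before unwrapping, so the recovered ramp tracks the truth closely.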
Land Surface Temperature Trends Over Central and Southern Europe: Derivation and Analyses of Long-Term (1986–2018) Monthly Maxima
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-18 DOI: 10.1109/JSTARS.2026.3666131
Christina Eisfelder;Philipp Reiners;Claudia Kuenzer
Monitoring long-term land surface temperature (LST) time series and analyzing their anomalies and trends are essential for understanding spatial patterns of global warming, particularly in Europe, the fastest-warming continent. In this study, we derived and analyzed monthly maximum LST trends over central and southern Europe at 1 km² resolution from advanced very high resolution radiometer (AVHRR)-based TIMELINE LST data for the period 1986–2018. We found that almost 40% of the study area exhibited statistically significant (p < 0.1) LST trends; areas with trend magnitudes above 0.5 K/decade contribute significantly to the overall surface warming. In contrast, forested areas showed lower LST trend magnitudes (<0.5 K/decade) and a smaller share of areas with significant trends. With respect to elevation, our results revealed the lowest LST trends below 50 m and at mid-elevation ranges (750–1250 m). Both the magnitude of LST trends and the percentage of area with significant trends rise towards both lower and higher altitudes. These results help to understand current warming patterns and demonstrate that long-term, high-resolution LST datasets can be used to study land–climate interactions in depth.
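A per-pixel trend test of the kind the study applies (least-squares slope with a p < 0.1 significance screen) can be sketched as follows. The study's exact trend and significance estimator is not specified here; the hardcoded critical value and the synthetic pixels are assumptions of this illustration.

```python
import numpy as np

T_CRIT_P10 = 1.70  # approx. two-sided t critical value for p < 0.1, dof ~ 31

def lst_trend(years, lst):
    # Ordinary least-squares trend for one pixel's annual series, reported in
    # K/decade, with a two-sided t-test of the slope against zero.
    x = years - years.mean()
    slope = np.sum(x * (lst - lst.mean())) / np.sum(x ** 2)
    resid = lst - lst.mean() - slope * x
    dof = years.size - 2
    stderr = np.sqrt(np.sum(resid ** 2) / dof / np.sum(x ** 2))
    return slope * 10.0, abs(slope / stderr) > T_CRIT_P10

# Two synthetic pixels over the 1986-2018 study period:
# one warming at 0.6 K/decade, one with no trend.
years = np.arange(1986, 2019, dtype=float)
rng = np.random.default_rng(2)
warming = 300.0 + 0.06 * (years - 1986) + rng.normal(0, 0.3, years.size)
stable = 300.0 + rng.normal(0, 0.3, years.size)
trend_w, sig_w = lst_trend(years, warming)
trend_s, sig_s = lst_trend(years, stable)
```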
An Adaptive Regular Window Optimization-Based Radiometric Calibration Model for Airborne SAR
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-17 DOI: 10.1109/JSTARS.2026.3665843
Jiafeng Wang;Hao Li;Xuedong Yao;Jianhua Li
Radiometric calibration is critical for ensuring the quantitative reliability of synthetic aperture radar (SAR) sensors across multiple applications. However, traditional calibration models often struggle with adaptability when corner reflectors (CRs) deviate from ideal cross-shaped responses and instead appear as patch-like bright spots, thereby reducing calibration accuracy. This article proposes a core response energy extraction model for CRs based on adaptive regular window (ARW) optimization, leading to an improved SAR radiometric calibration model, referred to as ARW-RC. The ARW-RC significantly improves the completeness of core energy extraction, background clutter suppression, and adaptability. Core energy extraction of CRs from multiband SAR images of Hainan and Rizhao demonstrates that the proposed model effectively captures core region boundaries, proving its robustness and adaptability across diverse imaging scenarios. Specifically, compared with traditional calibration models, the ARW-RC achieved a standard deviation of 0.55 dB for the CR response energy in an X-band SAR image. After radiometric calibration, the relative accuracy improved to 0.70 dB, more than a twofold improvement in radiometric accuracy over traditional models. In addition, the absolute accuracy improved to 0.50 dB, an improvement of 0.69 dB. For the S-band SAR image, the ARW-RC achieved a standard deviation of 1.37 dB in CR response energy. The relative and absolute accuracies were 1.52 dB and 1.14 dB, respectively. These results confirm that the ARW-RC model offers high accuracy and broad applicability, providing an effective solution for SAR sensor calibration and multisource data fusion.
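The notion of adaptively sizing a window to capture a corner reflector's core response energy can be illustrated in a few lines. This shows only the generic idea, not the article's ARW model: the stopping rule, the ring-based clutter estimate, and the synthetic Gaussian response are all inventions of this sketch.

```python
import numpy as np

def cr_response_energy(power_img, peak, max_half=10, tol=0.01):
    # Grow a square window around the CR peak until the clutter-corrected
    # energy gain falls below `tol`, estimating background clutter from a
    # one-pixel ring just outside the current window.
    r, c = peak
    prev = None
    for h in range(1, max_half + 1):
        win = power_img[r - h:r + h + 1, c - h:c + h + 1]
        ring = power_img[r - h - 1:r + h + 2, c - h - 1:c + h + 2].copy()
        ring[1:-1, 1:-1] = np.nan              # keep only the surrounding ring
        energy = win.sum() - np.nanmean(ring) * win.size
        if prev is not None and energy - prev < tol * prev:
            return energy, h
        prev = energy
    return prev, max_half

# Synthetic power image: unit clutter plus a Gaussian point response whose
# true integrated energy is ~ 2*pi*sigma^2*A ~ 1414 for A=100, sigma=1.5.
yy, xx = np.mgrid[0:41, 0:41]
d2 = (yy - 20.0) ** 2 + (xx - 20.0) ** 2
img = 1.0 + 100.0 * np.exp(-d2 / (2 * 1.5 ** 2))
energy, half = cr_response_energy(img, (20, 20))
```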
Contrastive Prototype Clustering for Multimodal Remote Sensing Data Based on Spectral–Spatial Cross Mamba
IF 5.3 Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-02-17 DOI: 10.1109/JSTARS.2026.3665649
Li Lv;Zhenyang Xie;Hongmin Gao;Shufang Xu;Zhenzhen Li;Haihua Xie;Dongxiao Liu
With the increasing diversity of remote sensing (RS) data sources, joint clustering of multimodal RS data demonstrates tremendous potential in Earth observation applications by aggregating multisource information without relying on labeled data. Although significant progress has been made in multiview subspace clustering, existing algorithms still face two limitations: inadequate exploration of complex cross-modal interactions and long-range dependencies, as well as limited capability in handling large-scale multimodal RS datasets. To address these challenges, this article proposes contrastive prototype clustering for multimodal RS data based on spectral–spatial cross Mamba (CPCM). The proposed method encompasses two core innovations. First, we design a multimodal spectral–spatial cross Mamba (S2CM) that performs global contextual modeling with linear complexity in both spectral and spatial dimensions through dual-path Mamba blocks, while employing cross-attention mechanisms to achieve deep semantic fusion of multidimensional features. Second, an end-to-end joint optimization framework is developed, which integrates contrastive learning with clustering learning through a unified objective function. This framework achieves collaborative convergence of feature learning and cluster refinement through an online clustering mechanism that utilizes prototype learning, making it scalable for large-scale multimodal datasets. The effectiveness of the proposed CPCM method is evaluated on three real-world RS datasets: Trento, MUUFL, and Augsburg. Experimental results demonstrate that CPCM achieves overall clustering accuracies of 94.69%, 69.29%, and 83.21% on these datasets, respectively, indicating its superior performance and strong capability in handling large-scale datasets.
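The online prototype-based clustering mechanism the abstract describes can be reduced to a minimal numerical sketch. This is not CPCM's loss or architecture: with hard assignments and no contrastive term it degenerates to a damped spherical k-means, and the deterministic farthest-point initialization and synthetic features are inventions of this illustration.

```python
import numpy as np

def prototype_cluster(feats, k, iters=20, lr=0.5):
    # L2-normalized features are assigned to their most similar prototype by
    # cosine similarity; each prototype is then pulled toward the mean of its
    # assigned features and renormalized.
    x = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    # Deterministic farthest-point initialization: one seed per distinct mode.
    protos = [x[0]]
    for _ in range(k - 1):
        sims = x @ np.array(protos).T
        protos.append(x[sims.max(axis=1).argmin()])
    protos = np.array(protos)
    for _ in range(iters):
        assign = (x @ protos.T).argmax(axis=1)
        for j in range(k):
            members = x[assign == j]
            if len(members):
                p = (1 - lr) * protos[j] + lr * members.mean(axis=0)
                protos[j] = p / np.linalg.norm(p)
    return assign

# Synthetic demo: three angularly well-separated feature clusters.
rng = np.random.default_rng(3)
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
centers = 10.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
feats = np.concatenate([c + rng.normal(0, 0.5, (50, 2)) for c in centers])
labels = prototype_cluster(feats, k=3)
```

Because assignment is a single matrix product per pass, this style of update streams over large datasets without holding pairwise affinities in memory, which is the scalability argument the abstract makes.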
Citations: 0
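The CPCM abstract above centers on an online clustering mechanism that contrasts pixel features against learnable cluster prototypes. A minimal pure-Python sketch of the soft prototype-assignment step follows; the function names and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototype_assignment(feature, prototypes, temperature=0.1):
    """Soft assignment of one feature vector to cluster prototypes.

    Returns a probability distribution over prototypes: a softmax over
    cosine similarities, sharpened by the temperature. In prototype-based
    contrastive clustering, the assignment of one augmented view of a
    pixel serves as the target distribution for the other view.
    """
    sims = [cosine(feature, p) / temperature for p in prototypes]
    m = max(sims)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]
```

In a full contrastive-clustering pipeline this assignment would feed a cross-view contrastive loss; only the assignment itself is shown here.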
SBSNet: Spatial–Spectral Background–Target Separation Network for Hyperspectral Target Detection
IF 5.3 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-02-17 DOI: 10.1109/JSTARS.2026.3665707
Jianlin Xiang;Yanshan Li;Linhui Dai;Ruo Qi;Haojin Tang;Li Zhang;Kunhua Zhang;Weixin Xie
Hyperspectral target detection (HTD) aims to identify target locations in a hyperspectral image (HSI) using limited prior target spectra. Existing methods often use contrastive learning to construct target and background sample sets from unlabeled HSI and compare their similarity in feature space to enhance background–target separability. However, they often fail to ensure high-purity sample sets, limiting their ability to effectively separate target and background features. Therefore, we propose a spatial–spectral background–target separation network (SBSNet). The SBSNet leverages prior target spectra to construct high-purity target and background sets from the unlabeled HSI and integrates them into a multiscale spatial–spectral feature learning framework to optimize the feature space for more discriminative target detection. Specifically, the primary contributions of this article are threefold. First, we propose a local spatial–spectral feature fusion module to extract spatial–spectral features from the raw HSI and a spatial–spectral pseudolabel purification strategy to obtain pure target and background pixel sets from unlabeled HSI. In addition, we introduce the pseudolabel map as prior information to supervise the training process. Second, we design a highly robust multiscale spatial–spectral autoencoder specifically for HTD, used for sample generation during data preparation and for feature extraction during training. Third, we propose a clustered adaptive focus training strategy that synergistically optimizes the feature space through clustered sampling and an adaptive exponential weighted loss.
{"title":"SBSNet: Spatial–Spectral Background–Target Separation Network for Hyperspectral Target Detection","authors":"Jianlin Xiang;Yanshan Li;Linhui Dai;Ruo Qi;Haojin Tang;Li Zhang;Kunhua Zhang;Weixin Xie","doi":"10.1109/JSTARS.2026.3665707","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3665707","url":null,"abstract":"Hyperspectral target detection (HTD) aims to identify target locations in a hyperspectral image (HSI) using limited prior target spectra. Existing methods often use contrastive learning to construct target and background sample sets from unlabeled HSI and compare their similarity in feature space to enhance background–target separability. However, they often fail to ensure high-purity sample sets, limiting their ability to effectively separate target and background features. Therefore, we propose a spatial–spectral background–target separation network (SBSNet). The SBSNet leverages prior target spectra to construct high-purity target and background sets from the unlabeled HSI and integrates them into a multiscale spatial–spectral feature learning framework to optimize the feature space for more discriminative target detection. Specifically, the primary contributions of this article are threefold. First, we propose a local spatial–spectral feature fusion module to extract spatial–spectral feature from the raw HSI and proposes a spatial–spectral pseudolabel purification strategy to obtain pure target and background pixel sets from unlabeled HSI. In addition, we introduce the pseudolabel map as prior information to supervise the training process. Second, we design a highly robust multiscale spatial–spectral autoencoder specifically for HTD, which is used for sample generation during the data preparation and for feature extraction during the training. Third, we propose a clustered adaptive focus training strategy, which synergistically optimizes the feature space through clustered sampling and adaptive exponential weighted loss. Finally, experimental results demonstrate that the proposed SBSNet achieves superior detection performance on five public HSI datasets in various scenarios, compared with state-of-the-art HTD methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"8648-8663"},"PeriodicalIF":5.3,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397668","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
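The purification idea in the SBSNet abstract, carving high-purity target and background sets out of unlabeled pixels using prior target spectra, can be illustrated with a spectral-angle rule. The sketch below is a hedged illustration: the thresholds and function names are placeholders, and SBSNet's actual strategy additionally uses spatial cues.

```python
import math

def spectral_angle(x, y):
    """Spectral angle (radians) between two pixel spectra; 0 means identical shape."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # clamp to avoid acos domain errors from floating-point drift
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

def split_pseudo_labels(pixels, prior_target, t_low=0.1, t_high=0.8):
    """Partition unlabeled pixel spectra into high-purity target/background sets.

    Pixels whose angle to the prior target spectrum is below t_low are
    treated as confident targets; above t_high, confident background.
    Ambiguous pixels in between are left unlabeled. The thresholds are
    illustrative placeholders, not values from the paper.
    """
    target, background, ambiguous = [], [], []
    for i, px in enumerate(pixels):
        a = spectral_angle(px, prior_target)
        if a < t_low:
            target.append(i)
        elif a > t_high:
            background.append(i)
        else:
            ambiguous.append(i)
    return target, background, ambiguous
```

Leaving the ambiguous band unlabeled is what keeps the two sets "high purity": only confidently separated pixels supervise the contrastive feature space.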
STAR: Spatial and Temporal Context-Aware Network for Resident Space Object Detection
IF 5.3 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-02-17 DOI: 10.1109/JSTARS.2026.3665808
Bowen Gan;Wei Liang;Zhaodong Niu
Most resident space object (RSO) detection methods are designed around the telescope's observation mode, reducing the core of RSO detection to detecting point-like or streak-like objects. These RSOs are typically small in size and weak in energy, and noise and stars further complicate detection. Consequently, such methods rely on complex processing pipelines or purpose-built neural networks. However, for dim RSOs the point or streak features are not prominent, making it difficult for these methods to maintain stable detection performance. In this article, we propose STAR (Spatial and Temporal Context-Aware Network for RSO Detection), which uses spatial and temporal context information as supplementary cues to enhance detection performance. STAR introduces a Spatial Context Extraction module that fuses small- and large-kernel convolutions to capture fine morphological features and surrounding context, respectively, and a Temporal Context Extraction module that employs deformable attention to adaptively model motion patterns across frames. Experiments on a self-collected dataset composed entirely of real images show that STAR exhibits excellent detection capability.
{"title":"STAR: Spatial and Temporal Context-Aware Network for Resident Space Object Detection","authors":"Bowen Gan;Wei Liang;Zhaodong Niu","doi":"10.1109/JSTARS.2026.3665808","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3665808","url":null,"abstract":"Most resident space object (RSO) detection methods are designed based on the observation mode of the telescope, focusing the core of RSO detection on point-like or streak-like object detection. These RSOs are typically small in size and weak in energy, while noise and stars will affect the detection. Therefore, these methods adopt complex processing pipelines or design neural networks to achieve detection. However, for dim RSOs, the features about point or streak are not prominent, making it difficult for these methods to maintain stable detection performance. In this article, we propose a network called STAR (Spatial and Temporal Context-Aware Network for RSO Detection), which attempts to use spatial and temporal context information as supplementary cues to enhance detection performance. STAR introduces Spatial Context Extraction module that fuses small and large kernel convolutions to capture fine morphological feature and surrounding context information, respectively, and Temporal Context Extraction module that employs deformable attention to adaptively model motion patterns across frames. Experiments on a self-collected dataset composed entirely of real images show that STAR exhibits excellent detection capability. On the public dataset SpotGEO, STAR achieves 94.93% F1 score and 29646.49 mean squared error, surpassing the champion of the SpotGEO challenge and outperforming many current deep-learning-based methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"8428-8440"},"PeriodicalIF":5.3,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397566","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
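STAR's Temporal Context Extraction module models motion with deformable attention; as a far simpler illustration of why temporal context helps for dim RSOs, the toy sketch below accumulates absolute inter-frame differences so that a moving object leaves a nonzero track while static stars cancel. All names are illustrative, and this is a stand-in for, not a reproduction of, the learned module.

```python
def temporal_motion_cue(frames):
    """Per-pixel motion energy over a stack of equally sized 2-D frames.

    Static sources (stars) cancel in consecutive differences, while a
    moving object leaves nonzero residue along its track, turning a dim
    spatial signal into a stronger spatiotemporal one.
    """
    h, w = len(frames[0]), len(frames[0][0])
    cue = [[0.0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for r in range(h):
            for c in range(w):
                cue[r][c] += abs(curr[r][c] - prev[r][c])
    return cue
```

On a 2x3 stack with a constant "star" at (0, 0) and a target drifting along row 1, the cue map is zero on the star row and peaks along the target's track.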
OWT-DNet: A Timely and High-Accuracy End-to-End Offshore Wind Turbine Detection Network Based on Multimodal Remote Sensing Data
IF 5.3 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-02-17 DOI: 10.1109/JSTARS.2026.3665662
Shuai Zhang;Fangxiong Wang;Wubiao Huang;Fei Deng
With the rapid development of the offshore wind power industry in recent years, its effects on local social, economic, and ecological environments have attracted widespread attention. Therefore, a timely understanding of the development status of offshore wind power, specifically of offshore wind turbines (OWTs), is crucial for the healthy and sustainable development of the industry. However, existing OWT detection methods often struggle to achieve timely, high-precision end-to-end detection of OWTs. To address this, this study proposes an OWT detection network (OWT-DNet) based on multimodal remote sensing data. The network integrates Sentinel-1 synthetic aperture radar imagery and Sentinel-2 optical imagery, effectively addressing the insufficient semantic information inherent in single-modal data for OWT detection. Experiments across five global test regions demonstrate that OWT-DNet achieves detection accuracy, recall, and comprehensive evaluation metrics exceeding 99.9%. Furthermore, OWT-DNet demonstrates outstanding detection performance under complex weather conditions. Comparative and ablation experiments validate the network's superior capability in OWT detection tasks. Overall, timely, high-precision end-to-end OWT detection is achieved for the first time on the basis of multimodal remote sensing data.
{"title":"OWT-DNet: A Timely and High-Accuracy End-to-End Offshore Wind Turbine Detection Network Based on Multimodal Remote Sensing Data","authors":"Shuai Zhang;Fangxiong Wang;Wubiao Huang;Fei Deng","doi":"10.1109/JSTARS.2026.3665662","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3665662","url":null,"abstract":"With the rapid development of the offshore wind power industry in recent years, its effects on local social, economic, and ecological environments have attracted widespread attention. Therefore, a timely understanding of the development status of offshore wind power, specifically, offshore wind turbines (OWTs) is crucial for the healthy and sustainable development of the offshore wind power industry. However, existing OWT detection methods often struggle to achieve timely, high-precision end-to-end detection of OWTs. To address this, in this study, an OWT detection network (OWT-DNet) based on multimodal remote sensing data is proposed. This network integrates Sentinel-1 synthetic aperture radar imagery and Sentinel-2 optical imagery, effectively addressing the insufficient semantic information inherent in single-modal data for OWT detection. Experiments across five global test regions demonstrate that OWT-DNet achieves detection accuracy, recall, and comprehensive evaluation metrics that exceed 99.9%. Furthermore, OWT-DNet demonstrates outstanding detection performance under complex weather conditions. Comparative and ablation experiments validate the network's superior capability in OWT detection tasks. Overall, timely, high-precision end-to-end OWT detection is achieved for the first time on the basis of multimodal remote sensing data. Furthermore, an inaugural multimodal OWT sample dataset is established, laying a solid foundation for future OWT detection research.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"7991-8004"},"PeriodicalIF":5.3,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397679","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
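OWT-DNet fuses Sentinel-1 SAR with Sentinel-2 optical imagery to compensate for the limited semantics of either modality alone. The sketch below shows the generic early-fusion idea of normalizing each co-registered band before stacking channels; it is an assumption-laden illustration, not the paper's actual fusion design.

```python
def normalize(band):
    """Min-max normalize one 2-D band to [0, 1]; constant bands map to 0."""
    flat = [v for row in band for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0] * len(row) for row in band]
    return [[(v - lo) / (hi - lo) for v in row] for row in band]

def stack_modalities(sar_bands, optical_bands):
    """Early fusion by channel stacking of co-registered SAR and optical bands.

    Each band is normalized independently so that SAR backscatter and
    optical reflectance share a comparable value range before they enter
    a downstream detector.
    """
    return [normalize(b) for b in sar_bands + optical_bands]
```

Per-band normalization matters here because raw SAR backscatter and optical reflectance live on very different scales; without it, one modality would dominate the stacked input.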