
Latest Publications: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Dual-Perception Detector for Ship Detection in SAR Images
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-16 | DOI: 10.1109/JSTARS.2026.3654602
Ming Tong;Shenghua Fan;Jiu Jiang;Hezhi Sun;Jisan Yang;Chu He
Recently, deep learning-based detectors have advanced the state of the art in ship detection from synthetic aperture radar (SAR) images. However, constructing discriminative features against background scattering and precisely distinguishing ship contours remain challenging due to the inherent scattering mechanism of SAR. In this article, a dual-branch detection framework with perception of scattering characteristics and geometric contours is introduced to address this problem. First, a scattering characteristic perception branch is proposed to fit the scattering distribution of SAR ships through a conditional diffusion model, which introduces learnable scattering features. Second, a convex contour perception branch is designed as a two-stage coarse-to-fine pipeline that delimits the irregular boundary of a ship by learning scattering key points. Finally, a cross-token integration module following a Bayesian framework is introduced to adaptively couple scattering and texture features and learn to construct discriminative features. Comprehensive experiments on three authoritative SAR datasets for oriented ship detection demonstrate the effectiveness of the proposed method.
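The final fusion step, adaptively coupling scattering and texture features, can be pictured with a small gating module. The PyTorch sketch below is an illustrative assumption, not the paper's actual cross-token Bayesian design: a learned per-pixel gate blends the two feature streams.

```python
# A minimal PyTorch sketch of adaptively coupling two feature streams with a
# learned gate; module and tensor names are assumptions, not the paper's
# cross-token integration module.
import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel weight in [0, 1] from both streams.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, scatter_feat, texture_feat):
        w = self.gate(torch.cat([scatter_feat, texture_feat], dim=1))
        return w * scatter_feat + (1.0 - w) * texture_feat  # adaptive blend

fusion = GatedFusionSketch(channels=64)
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```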
{"title":"Dual-Perception Detector for Ship Detection in SAR Images","authors":"Ming Tong;Shenghua Fan;Jiu Jiang;Hezhi Sun;Jisan Yang;Chu He","doi":"10.1109/JSTARS.2026.3654602","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3654602","url":null,"abstract":"Recently, detectors based on deep learning have boosted the state-of-the-art of application on ship detection in synthetic aperture radar (SAR) images. However, constructing discriminative feature from scattering of background and distinguishing contour of ship precisely still present challenging subject to the inherent scattering mechanism of SAR. In this article, a dual-branch detection framework with perception of scattering characteristic and geometric contour is introduced to deal with the problem. First, a scattering characteristic perception branch is proposed to fit the scattering distribution of SAR ship through conditional diffusion model, which introduces learnable scattering feature. Second, a convex contour perception branch is designed as two-stage coarse-to-fine pipeline to delimit the irregular boundary of ship by learning scattering key points. Finally, a cross-token integration module following Bayesian framework is introduced to couple features of scattering and texture adaptively to learn construction of discriminative feature. Furthermore, comprehensive experiments on three authoritative SAR datasets for oriented ship detection demonstrate the effectiveness of proposed method.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4790-4808"},"PeriodicalIF":5.3,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11355870","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated Extraction of 3-D Windows From MVS Point Clouds by Comprehensive Fusion of Multitype Features
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-14 | DOI: 10.1109/JSTARS.2026.3654241
Yuan Li;Tianzhu Zhang;Ziyi Xiong;Junying Lv;Yinning Pang
Detecting three-dimensional (3-D) windows is vital for creating semantic building models with a high level of detail, supporting smart city and digital twin programs. Existing studies on window extraction using street imagery or laser scanning data often rely on limited types of features, resulting in compromised accuracy and completeness due to shadows and geometric decorations caused by curtains, balconies, plants, and other objects. To enhance the effectiveness and robustness of building window extraction in 3-D, this article proposes an automatic method that leverages synergistic information from multiview-stereo (MVS) point clouds through an adaptive divide-and-combine pipeline. Color information inherited from the imagery serves as the main clue for acquiring the point clouds of individual building façades that may be coplanar and connected. The geometric information associated with normal vectors is then combined with color to adaptively divide each building façade into an irregular grid that conforms to the window edges. Subsequently, HSV color and depth distances within each grid cell are computed, and the grid cells are encoded to quantify the global arrangement features of windows. Finally, the multitype features are fused in an integer programming model, by solving which the optimal combination of grid cells corresponding to windows is obtained. Benefitting from the informative MVS point clouds and the fusion of multitype features, our method is able to directly produce 3-D models with high regularity for buildings with different appearances. Experimental results demonstrate that the proposed method is effective in 3-D window extraction while overcoming variations in façade appearance caused by foreign objects and missing data, with a high point-wise precision of 92.7%, recall of 77.09%, IoU of 71.95%, and F1-score of 83.42%. The results also exhibit a high level of integrity, with the accuracy of correctly extracted windows reaching 89.81%. In the future, we will focus on the development of a more universal façade-dividing method to deal with even more complicated windows.
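The last step, selecting the best combination of grid cells with an integer program, can be illustrated with a much simpler binary selection problem. The sketch below is a hedged stand-in using SciPy's milp: the scores, the single area budget, and all numbers are invented for illustration and do not reproduce the paper's model.

```python
# A minimal sketch of selecting window grid cells with an integer program.
# The scoring and the single budget constraint are simplified assumptions;
# the paper's actual model fuses several feature types and constraints.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_cells = 24                                   # grid cells on one façade
score = rng.uniform(-1.0, 1.0, size=n_cells)   # fused "window-likeness" per cell
area = rng.uniform(0.5, 2.0, size=n_cells)     # cell area in m^2

# Select x in {0,1}^n maximizing total score, with a cap on total selected area.
c = -score                                     # milp minimizes, so negate
budget = LinearConstraint(area, lb=0.0, ub=15.0)
res = milp(c=c, constraints=[budget],
           integrality=np.ones(n_cells),       # all variables binary
           bounds=Bounds(0, 1))
selected = np.flatnonzero(res.x > 0.5)
print("selected cells:", selected)
```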
{"title":"Automated Extraction of 3-D Windows From MVS Point Clouds by Comprehensive Fusion of Multitype Features","authors":"Yuan Li;Tianzhu Zhang;Ziyi Xiong;Junying Lv;Yinning Pang","doi":"10.1109/JSTARS.2026.3654241","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3654241","url":null,"abstract":"Detecting three-dimensional (3-D) windows is vital for creating semantic building models with high level of detail, furnishing smart city and digital twin programs. Existing studies on window extraction using street imagery or laser scanning data often rely on limited types of features, resulting in compromised accuracy and completeness due to shadows and geometric decorations caused by curtains, balconies, plants, and other objects. To enhance the effectiveness and robustness of building window extraction in 3-D, this article proposes an automatic method that leverages synergistic information from multiview-stereo (MVS) point clouds, through an adaptive divide-and-combine pipeline. Color information inherited from the imagery serves as a main clue to acquire the point clouds of individual building façades that may be coplanar and connected. The geometric information associated with normal vectors is then combined with color, to adaptively divide individual building façade into an irregular grid that conforms to the window edges. Subsequently, HSV color and depth distances within each grid cell are computed, and the grid cells are encoded to quantify the global arrangement features of windows. Finally, the multitype features are fused in an integer programming model, by solving which the optimal combination of grid cells corresponding to windows is obtained. Benefitting from the informative MVS point clouds and the fusion of multitype features, our method is able to directly produce 3-D models with high regularity for buildings with different appearances. Experimental results demonstrate that the proposed method is effective in 3-D window extraction while overcoming variations in façade appearances caused by foreign objects and missing data, with a high point-wise precision of 92.7%, recall of 77.09%, IoU of 71.95%, and F1-score of 83.42%. The results also exhibit a high level of integrity, with the accuracy of correctly extracted windows reaching 89.81%. In the future, we will focus on the development of a more universal façade dividing method to deal with even more complicated windows.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4918-4934"},"PeriodicalIF":5.3,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11353237","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Insights on the Working Principles of a CNN for Forest Height Regression From Single-Pass InSAR Data
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-14 | DOI: 10.1109/JSTARS.2026.3654195
Daniel Carcereri;Luca Dell’Amore;Stefano Tebaldini;Paola Rizzoli
The increasing use of artificial intelligence (AI) models in Earth Observation (EO) applications, such as forest height estimation, has led to a growing need for explainable AI (XAI) methods. Despite their high accuracy, AI models are often criticized for their “black-box” nature, making it difficult to understand the inner decision-making process. In this study, we propose a multifaceted approach to XAI for a convolutional neural network (CNN)-based model that estimates forest height from TanDEM-X single-pass InSAR data. By combining domain knowledge, saliency maps, and feature importance analysis through exhaustive model permutations, we provide a comprehensive investigation of the network working principles. Our results suggest that the proposed model is implicitly capable of recognizing and compensating for the SAR acquisition geometry-related distortions. We find that the mean phase center height and its local variability represent the most informative predictor. We also find evidence that the interferometric coherence and the backscatter maps capture complementary but equally relevant views of the vegetation. This work contributes to advancing the understanding of the model’s inner workings, and targets the development of more transparent and trustworthy AI for EO applications, ultimately leading to improved accuracy and reliability in the estimation of forest parameters.
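Of the XAI tools mentioned, permutation-based feature importance is the easiest to sketch. The snippet below is a generic illustration with a random-forest stand-in and synthetic input channels (placeholders for phase center height, coherence, and backscatter); it is not the authors' TanDEM-X CNN setup.

```python
# A minimal sketch of permutation feature importance over input channels:
# the increase in RMSE when a channel is shuffled measures how much the
# model relies on it. Predictor and channel names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def permutation_importance(predict, X, y, rng, n_repeats=5):
    """Mean increase in RMSE when each feature/channel is shuffled."""
    rmse = lambda p: float(np.sqrt(np.mean((p - y) ** 2)))
    base = rmse(predict(X))
    gains = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, j] = Xp[perm, j]             # break the link to channel j
            gains[j] += rmse(predict(Xp)) - base
    return gains / n_repeats

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. [phase height, coherence, backscatter]
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(permutation_importance(model.predict, X, y, rng))
```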
{"title":"Insights on the Working Principles of a CNN for Forest Height Regression From Single-Pass InSAR Data","authors":"Daniel Carcereri;Luca Dell’Amore;Stefano Tebaldini;Paola Rizzoli","doi":"10.1109/JSTARS.2026.3654195","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3654195","url":null,"abstract":"The increasing use of artificial intelligence (AI) models in Earth Observation (EO) applications, such as forest height estimation, has led to a growing need for explainable AI (XAI) methods. Despite their high accuracy, AI models are often criticized for their “black-box” nature, making it difficult to understand the inner decision-making process. In this study, we propose a multifaceted approach to XAI for a convolutional neural network (CNN)-based model that estimates forest height from TanDEM-X single-pass InSAR data. By combining domain knowledge, saliency maps, and feature importance analysis through exhaustive model permutations, we provide a comprehensive investigation of the network working principles. Our results suggests that the proposed model is implicitly capable of recognizing and compensating for the SAR acquisition geometry-related distortions. We find that the mean phase center height and its local variability represents the most informative predictor. We also find evidence that the interferometric coherence and the backscatter maps capture complementary but equally relevant views of the vegetation. This work contributes to advance the understanding of the model’s inner workings, and targets the development of more transparent and trustworthy AI for EO applications, ultimately leading to improved accuracy and reliability in the estimation of forest parameters.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4809-4824"},"PeriodicalIF":5.3,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11352840","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Hybrid Machine Learning Framework for Water Quality Index Prediction Using Feature-Based Neural Network Initialization
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-14 | DOI: 10.1109/JSTARS.2026.3654017
Ali Al Bataineh;Bandi Vamsi;Scott Alan Smith
Accurate prediction of the water quality index is essential for protecting public health and managing freshwater resources. Existing models often rely on arbitrary weight initialization and make limited use of ensemble learning, which results in unstable performance and reduced interpretability. This study introduces a hybrid machine learning framework that combines feature-informed neural network initialization with gradient boosting (XGBoost) to address these limitations. Neural network weights are initialized using feature significance scores derived from SHapley Additive exPlanations (SHAP) and predictions are iteratively refined using XGBoost. The model was trained and evaluated using the public quality of freshwater dataset and compared against several baselines, including random forest, support vector regression, a conventional artificial neural network with Xavier initialization, and an XGBoost-only model. Our framework achieved an accuracy of 86.9%, an F1-score of 0.849, and a receiver operating characteristic–area under the curve of 0.894, outperforming all comparative methods. Ablation experiments showed that both the SHAP-based initialization and the boosting component each improved performance over simpler baselines.
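The feature-informed initialization can be pictured as rescaling the first-layer weight columns by per-feature importance scores. The PyTorch sketch below is one plausible, assumed way to do this; the importance vector is a placeholder for the mean |SHAP| values the study derives, and the layer sizes are arbitrary.

```python
# A minimal sketch of feature-informed initialization: first-layer weights are
# scaled by per-feature importance scores (e.g., mean |SHAP| values). The
# scores below are made-up placeholders, not values from the paper.
import torch
import torch.nn as nn

def init_first_layer(layer: nn.Linear, importance: torch.Tensor) -> None:
    """Rescale each input column of the weight matrix by its importance."""
    scale = importance / importance.sum()               # normalize to sum to 1
    with torch.no_grad():
        nn.init.xavier_uniform_(layer.weight)           # standard starting point
        layer.weight.mul_(scale.unsqueeze(0) * len(scale))  # emphasize informative inputs

n_features = 8
importance = torch.tensor([0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03])
net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
init_first_layer(net[0], importance)
print(net[0].weight.abs().mean(dim=0))                  # columns now reflect importance
```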
{"title":"A Hybrid Machine Learning Framework for Water Quality Index Prediction Using Feature-Based Neural Network Initialization","authors":"Ali Al Bataineh;Bandi Vamsi;Scott Alan Smith","doi":"10.1109/JSTARS.2026.3654017","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3654017","url":null,"abstract":"Accurate prediction of the water quality index is essential for protecting public health and managing freshwater resources. Existing models often rely on arbitrary weight initialization and make limited use of ensemble learning, which results in unstable performance and reduced interpretability. This study introduces a hybrid machine learning framework that combines feature-informed neural network initialization with gradient boosting (XGBoost) to address these limitations. Neural network weights are initialized using feature significance scores derived from SHapley Additive exPlanations (SHAP) and predictions are iteratively refined using XGBoost. The model was trained and evaluated using the public quality of freshwater dataset and compared against several baselines, including random forest, support vector regression, a conventional artificial neural network with Xavier initialization, and an XGBoost-only model. Our framework achieved an accuracy of 86.9%, an <italic>F</i>1-score of 0.849, and a receiver operating characteristic–area under the curve of 0.894, outperforming all comparative methods. Ablation experiments showed that both the SHAP-based initialization and the boosting component each improved performance over simpler baselines.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4887-4905"},"PeriodicalIF":5.3,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11353250","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AMFC-DEIM: Improved DEIM With Adaptive Matching and Focal Convolution for Remote Sensing Small Object Detection
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-13 | DOI: 10.1109/JSTARS.2026.3653626
Xiaole Lin;Guangping Li;Jiahua Xie;Zhuokun Zhi
While convolutional neural network (CNN)-based methods for small object detection in remote sensing imagery have advanced considerably, substantial challenges remain unresolved, primarily stemming from complex backgrounds and insufficient feature representation. To address these issues, we propose a novel architecture specifically designed to accommodate the unique demands of small objects, termed AMFC-DEIM. This framework introduces three key innovations: first, the adaptive one-to-one (O2O) matching mechanism, which enhances dense O2O matching by adaptively adjusting the matching grid configuration to the object distribution, thereby preserving the resolution of small objects throughout training; second, the focal convolution module, engineered to explicitly align with the spatial characteristics of small objects for extracting fine-grained features; and third, the enhanced normalized Wasserstein distance, which stabilizes the training process and bolsters performance on small targets. Comprehensive experiments conducted on three benchmark remote sensing small object detection datasets: RSOD, LEVIR-SHIP and NWPU VHR-10, demonstrate that AMFC-DEIM achieves remarkable performance, attaining AP$_{50}$ scores of 96.2%, 86.2%, and 95.1%, respectively, while maintaining only 5.27 M parameters. These results substantially outperform several established benchmark models and state-of-the-art methods.
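The third component builds on the normalized Gaussian Wasserstein distance for tiny boxes. The sketch below implements the widely used base formulation (each box modeled as a 2-D Gaussian, similarity = exp(-W2/C)); the paper's enhanced variant and the constant C are not given in the abstract, so both are assumptions here.

```python
# A minimal numpy sketch of the standard normalized Gaussian Wasserstein
# distance (NWD) between axis-aligned boxes; the "enhanced" variant used by
# the paper is not specified, and the constant C is an assumption.
import numpy as np

def nwd(box_a, box_b, C=12.8):
    """Boxes as (cx, cy, w, h). Each box is modeled as the Gaussian
    N([cx, cy], diag(w^2/4, h^2/4)); NWD = exp(-W2 / C)."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    w2_sq = (cxa - cxb) ** 2 + (cya - cyb) ** 2 \
            + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2
    return float(np.exp(-np.sqrt(w2_sq) / C))

# Two 8x8-pixel objects offset by 4 px: IoU drops sharply for such tiny boxes,
# while NWD still returns a smooth, nonzero similarity.
print(nwd((10, 10, 8, 8), (14, 10, 8, 8)))
```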
{"title":"AMFC-DEIM: Improved DEIM With Adaptive Matching and Focal Convolution for Remote Sensing Small Object Detection","authors":"Xiaole Lin;Guangping Li;Jiahua Xie;Zhuokun Zhi","doi":"10.1109/JSTARS.2026.3653626","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3653626","url":null,"abstract":"While convolutional neural network (CNN)-based methods for small object detection in remote sensing imagery have advanced considerably, substantial challenges remain unresolved, primarily stemming from complex backgrounds and insufficient feature representation. To address these issues, we propose a novel architecture specifically designed to accommodate the unique demands of small objects, termed AMFC-DEIM. This framework introduces three key innovations: first, the adaptive one-to-one (O2O) matching mechanism, which enhances dense O2O matching by adaptively adjusting the matching grid configuration to the object distribution, thereby preserving the resolution of small objects throughout training; second, the focal convolution module, engineered to explicitly align with the spatial characteristics of small objects for extracting fine-grained features; and third, the enhanced normalized Wasserstein distance, which stabilizes the training process and bolsters performance on small targets. Comprehensive experiments conducted on three benchmark remote sensing small object detection datasets: RSOD, LEVIR-SHIP and NWPU VHR-10, demonstrate that AMFC-DEIM achieves remarkable performance, attaining AP<inline-formula><tex-math>$_{50}$</tex-math></inline-formula> scores of 96.2%, 86.2%, and 95.1%, respectively, while maintaining only 5.27 M parameters. These results substantially outperform several established benchmark models and state-of-the-art methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"5021-5034"},"PeriodicalIF":5.3,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11347584","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Deep Learning-Based Model for Forest Canopy Height Mapping Using Multisource Remote Sensing Data
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-13 | DOI: 10.1109/JSTARS.2026.3653676
Jiapeng Huang;Yue Zhang;Xiaozhu Yang;Fan Mo
Forest canopy height is a critical structural parameter for accurately assessing forest carbon storage. This study integrates Global Ecosystem Dynamics Investigation (GEDI) LiDAR data with multisource remote sensing features to construct a multidimensional feature space comprising 13 parameters. By employing high-dimensional feature vectors of “spatial coordinates + environmental features,” the proposed deep learning-based neural network-guided interpolation (NNGI) model effectively harnesses the capacity of deep learning to model complex nonlinear relationships and adaptively extract local features. This method adopts a dual-network collaborative architecture to dynamically learn interpolation weights based on environmental similarity in the feature space, rather than relying on fixed parameters or merely considering spatial distance, thereby effectively fusing the complex nonlinear relationship modeling capability of deep learning with the concept of spatial interpolation. Experiments conducted across five representative regions in the United States demonstrate that the overall accuracy of the NNGI model significantly outperforms traditional machine learning methods, with Pearson correlation coefficient (r) = 0.79, root-mean-square error (RMSE) = 5.38 m, mean absolute error = 4.04 m, and bias = –0.15 m. In areas with low (0%–20%) and high (61%–80%) vegetation cover fractions, the RMSE decreased by 37.52% and 5.37%, respectively, while the r-value increased by 15.87% and 35.90%, respectively. Regarding different slope aspects, the RMSE for southeastern and western slopes decreased by 30.38% and 18.70%, respectively. This study provides a more reliable solution for the accurate estimation of forest structural parameters in complex environments.
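The core NNGI idea, interpolation weights learned from similarity in feature space rather than from fixed distance kernels, can be sketched as a small dual-network module. The PyTorch code below is illustrative only; the layer sizes, the dot-product similarity, and the 13-feature input are assumptions matched loosely to the abstract.

```python
# A minimal PyTorch sketch of neural-network-guided interpolation: weights
# over GEDI footprint neighbors come from learned feature similarity. The
# architecture is illustrative, not the paper's exact NNGI model.
import torch
import torch.nn as nn

class NNGISketch(nn.Module):
    def __init__(self, n_feat: int, d: int = 32):
        super().__init__()
        self.query_net = nn.Sequential(nn.Linear(n_feat, d), nn.ReLU(), nn.Linear(d, d))
        self.neigh_net = nn.Sequential(nn.Linear(n_feat, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, q_feat, n_feat, n_height):
        # q_feat: (B, F)  target-pixel features (coordinates + environment)
        # n_feat: (B, K, F) features of K GEDI footprints; n_height: (B, K)
        q = self.query_net(q_feat).unsqueeze(1)        # (B, 1, d)
        n = self.neigh_net(n_feat)                     # (B, K, d)
        logits = (q * n).sum(-1) / n.shape[-1] ** 0.5  # environmental similarity
        w = torch.softmax(logits, dim=-1)              # interpolation weights
        return (w * n_height).sum(-1)                  # (B,) predicted canopy height

model = NNGISketch(n_feat=13)
h = model(torch.randn(4, 13), torch.randn(4, 8, 13), torch.rand(4, 8) * 30)
print(h.shape)  # torch.Size([4])
```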
{"title":"A Deep Learning-Based Model for Forest Canopy Height Mapping Using Multisource Remote Sensing Data","authors":"Jiapeng Huang;Yue Zhang;Xiaozhu Yang;Fan Mo","doi":"10.1109/JSTARS.2026.3653676","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3653676","url":null,"abstract":"Forest canopy height is a critical structural parameter for accurately assessing forest carbon storage. This study integrates Global Ecosystem Dynamics Investigation (GEDI) LiDAR data with multisource remote sensing features to construct a multidimensional feature space comprising 13 parameters. By employing high-dimensional feature vectors of “spatial coordinates + environmental features,” the proposed deep learning-based neural network-guided interpolation (NNGI) model effectively harnesses the capacity of deep learning to model complex nonlinear relationships and adaptively extract local features. This method adopts a dual-network collaborative architecture to dynamically learn interpolation weights based on environmental similarity in the feature space, rather than relying on fixed parameters or merely considering spatial distance, thereby effectively fusing the complex nonlinear relationship modeling capability of deep learning with the concept of spatial interpolation. Experiments conducted across five representative regions in the United States demonstrate that the overall accuracy of the NNGI model significantly outperforms traditional machine learning methods, Pearson correlation coefffcient (<italic>r</i>) = 0.79, root-mean-square error (RMSE) = 5.38 m, mean absolute error = 4.04 m, bias = –0.15 m. In areas with low (0% –20% ) and high (61% –80% ) vegetation cover fractions, the RMSE decreased by 37.52% and 5.37%, respectively, while the <italic>r</i>-value increased by 15.87% and 35.90%, respectively. Regarding different slope aspects, the RMSE for southeastern and western slopes decreased by 30.38% and 18.70%, respectively. This study provides a more reliable solution for the accurate estimation of forest structural parameters in complex environments.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4842-4857"},"PeriodicalIF":5.3,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348094","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MEETNet: Morphology-Edge Enhanced Triple-Cascaded Network for Infrared Small Target Detection
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-12 | DOI: 10.1109/JSTARS.2026.3651900
Enyu Zhao;Yu Shi;Nianxin Qu;Yulei Wang;Hang Zhao
Infrared small target detection is focused on accurately identifying tiny targets with low signal-to-noise ratio against complex backgrounds, representing a critical challenge in the field of infrared image processing. Existing approaches frequently fail to retain small target information during global semantic extraction and struggle with preserving detailed features and achieving effective feature fusion. To address these limitations, this article proposes a morphology-edge enhanced triple-cascaded network (MEETNet) for infrared small target detection. The network employs a triple-cascaded architecture that maintains high resolution and enhances information interaction between different stages, facilitating effective multilevel feature fusion while safeguarding deep small-target characteristics. MEETNet integrates an edge-detail enhanced module (EDEM) and a detail-aware multi-scale fusion module (DMSFM). These modules introduce edge-detail enhanced features that amalgamate contrast and edge information, thereby amplifying target saliency and improving edge representation. Specifically, EDEM augments target contrast and edge structures by integrating edge-detail-enhanced features with shallow details. This integration improves the discriminability capacity of shallow features for detecting small targets. Moreover, DMSFM implements a multireceptive field mechanism to merge target details with deep semantic insights, enabling the capture of more distinctive global contextual features. Experimental evaluations conducted using two public datasets—NUAA-SIRST and NUDT-SIRST—demonstrate that the proposed MEETNet surpasses existing state-of-the-art methods for infrared small target detection in terms of detection accuracy.
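The edge-detail enhancement idea, injecting explicit edge responses into shallow features, can be illustrated with a fixed Sobel operator followed by a 1x1 fusion convolution. The PyTorch sketch below is a simplified assumption and not the actual EDEM design.

```python
# A minimal PyTorch sketch of augmenting shallow features with edge cues via
# fixed depthwise Sobel filters; a stand-in for the "edge-detail enhanced"
# idea, not MEETNet's EDEM as published.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeEnhanceSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        kernel = torch.stack([sobel_x, sobel_x.t()])             # x- and y-gradient kernels
        kernel = kernel.unsqueeze(1).repeat(channels, 1, 1, 1)   # two filters per channel
        self.register_buffer("kernel", kernel)
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=1)

    def forward(self, feat):
        edges = F.conv2d(feat, self.kernel, padding=1, groups=feat.shape[1])
        return self.fuse(torch.cat([feat, edges], dim=1))        # edge-augmented features

x = torch.randn(1, 16, 64, 64)
print(EdgeEnhanceSketch(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```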
{"title":"MEETNet: Morphology-Edge Enhanced Triple-Cascaded Network for Infrared Small Target Detection","authors":"Enyu Zhao;Yu Shi;Nianxin Qu;Yulei Wang;Hang Zhao","doi":"10.1109/JSTARS.2026.3651900","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651900","url":null,"abstract":"Infrared small target detection is focused on accurately identifying tiny targets with low signal-to-noise ratio against complex backgrounds, representing a critical challenge in the field of infrared image processing. Existing approaches frequently fail to retain small target information during global semantic extraction and struggle with preserving detailed features and achieving effective feature fusion. To address these limitations, this article proposes a morphology-edge enhanced triple-cascaded network (MEETNet) for infrared small target detection. The network employs a triple-cascaded architecture that maintains high resolution and enhances information interaction between different stages, facilitating effective multilevel feature fusion while safeguarding deep small-target characteristics. MEETNet integrates an edge-detail enhanced module (EDEM) and a detail-aware multi-scale fusion module (DMSFM). These modules introduce edge-detail enhanced features that amalgamate contrast and edge information, thereby amplifying target saliency and improving edge representation. Specifically, EDEM augments target contrast and edge structures by integrating edge-detail-enhanced features with shallow details. This integration improves the discriminability capacity of shallow features for detecting small targets. Moreover, DMSFM implements a multireceptive field mechanism to merge target details with deep semantic insights, enabling the capture of more distinctive global contextual features. Experimental evaluations conducted using two public datasets—NUAA-SIRST and NUDT-SIRST—demonstrate that the proposed MEETNet surpasses existing state-of-the-art methods for infrared small target detection in terms of detection accuracy.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4748-4765"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11340625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feature-Screened and Structure-Constrained Deep Forest for Unsupervised SAR Image Change Detection
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-12 | DOI: 10.1109/JSTARS.2026.3651534
Wanying Song;Ruijing Zhu;Jie Wang;Yinyin Jiang;Yan Wu
Deep forest-based models for synthetic aperture radar (SAR) image change detection are generally challenged by noise sensitivity and high feature redundancy, which significantly degrade the prediction performance. To address these issues, this article proposes a structure-constrained and feature-screened deep forest, abbreviated as SC-FS-DF, for SAR image change detection. In preclassification, a fuzzy multineighborhood information C-means clustering is proposed to generate high-quality pseudo-labels. It introduces the edge information, the nonlocal and intrasuperpixel neighborhoods into the objective function of fuzzy local information C-means, thus suppressing the speckle noise and constraining structures of targets. In the sample learning and label prediction module, a feature-screened deep forest (FS-DF) framework is constructed by combining feature importance and redundancy analysis with a dropout strategy, thus screening out the noninformative features and meanwhile retaining the informative ones for learning at each cascade layer. Finally, a novel energy function fusing the nonlocal and superpixel information is derived for refining the detection map generated by FS-DF, further preserving fine details and edge locations. Extensive comparison and ablation experiments on five real SAR datasets verify the effectiveness and robustness of the proposed SC-FS-DF, and demonstrate that the SC-FS-DF can well screen the high-dimensional features in change detection and constrain the structures of targets.
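The feature-screening step, keeping features that are important but not redundant before each cascade layer, can be sketched outside the deep-forest machinery with a random forest and a correlation filter. The thresholds and data below are illustrative assumptions, not the paper's exact FS-DF rules.

```python
# A minimal sketch of importance-plus-redundancy feature screening: drop
# features with low importance or high correlation to an already kept
# feature. A stand-in for the screening idea inside one cascade layer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def screen_features(X, y, imp_quantile=0.25, corr_max=0.95, seed=0):
    forest = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    importance = forest.feature_importances_
    keep_mask = importance >= np.quantile(importance, imp_quantile)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in np.argsort(-importance):              # most important first
        if not keep_mask[j]:
            continue
        if all(corr[j, k] < corr_max for k in kept):
            kept.append(j)                         # informative and non-redundant
    return sorted(kept)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
X[:, 9] = X[:, 0] + 1e-3 * rng.normal(size=300)    # redundant copy of feature 0
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(screen_features(X, y))                       # the redundant copy is screened out
```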
{"title":"Feature-Screened and Structure-Constrained Deep Forest for Unsupervised SAR Image Change Detection","authors":"Wanying Song;Ruijing Zhu;Jie Wang;Yinyin Jiang;Yan Wu","doi":"10.1109/JSTARS.2026.3651534","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651534","url":null,"abstract":"Deep forest-based models for synthetic aperture radar (SAR) image change detection are generally challenged by noise sensitivity and high feature redundancy, which significantly degrade the prediction performance. To address these issues, this article proposes a structure-constrained and feature-screened deep forest, abbreviated as SC-FS-DF, for SAR image change detection. In preclassification, a fuzzy multineighborhood information C-means clustering is proposed to generate high-quality pseudo-labels. It introduces the edge information, the nonlocal and intrasuperpixel neighborhoods into the objective function of fuzzy local information C-means, thus suppressing the speckle noise and constraining structures of targets. In the sample learning and label prediction module, a feature-screened deep forest (FS-DF) framework is constructed by combining feature importance and redundancy analysis with a dropout strategy, thus screening out the noninformative features and meanwhile retaining the informative ones for learning at each cascade layer. Finally, a novel energy function fusing the nonlocal and superpixel information is derived for refining the detection map generated by FS-DF, further preserving fine details and edge locations. Extensive comparison and ablation experiments on five real SAR datasets verify the effectiveness and robustness of the proposed SC-FS-DF, and demonstrate that the SC-FS-DF can well screen the high-dimensional features in change detection and constrain the structures of targets.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4056-4068"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339914","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DTWSTSR: Dual-Tree Complex Wavelet and Swin Transformer Based Remote Sensing Images Super-Resolution Network
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-12 | DOI: 10.1109/JSTARS.2026.3651075
Yu Yao;Hengbin Wang;Xiang Gao;Ziyao Xing;Xiaodong Zhang;Yuanyuan Zhao;Shaoming Li;Zhe Liu
High-resolution remote sensing images provide crucial data support for applications such as precision agriculture and water resource management. However, super-resolution reconstructions often suffer from over-smoothed textures and structural distortions, failing to accurately recover the intricate details of ground objects. To address this issue, this article proposes a remote sensing image super-resolution network (DTWSTSR) that combines the Dual-Tree Complex Wavelet Transform and Swin Transformer, which enhances the ability of texture detail reconstruction by fusing frequency-domain and spatial-domain features. This model includes a Dual-Tree Complex Wavelet Texture Feature Sensing Module (DWTFSM) for integrating frequency and spatial features, and a Multiscale Efficient Channel Attention mechanism to enhance attention to multiscale and global details. In addition, we design a Kolmogorov–Arnold Network based on a branch attention mechanism, which improves the model’s ability to represent complex nonlinear features. During the training process, we investigate the impact of hyperparameters and propose the two-stage SSIM&SL1 loss function to reduce structural differences between images. Experimental results show that DTWSTSR outperforms existing mainstream methods under different magnification factors (×2, ×3, ×4), ranking among the top two in multiple metrics. For example, at ×2 magnification, its PSNR value is 0.64–2.68 dB higher than that of other models. Visual comparisons demonstrate that the proposed model achieves clearer and more accurate detail reconstruction of target ground objects. Furthermore, the model exhibits excellent generalization ability in cross-sensor image (OLI2MSI dataset) reconstruction.
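The frequency-domain side of the model rests on the dual-tree complex wavelet transform. The sketch below extracts DTCWT lowpass and oriented highpass responses with the third-party dtcwt package (an assumption; the paper does not name its implementation). Responses like these are the kind of feature a DWTFSM-style module would fuse with spatial features.

```python
# A minimal sketch of frequency-domain feature extraction with the dual-tree
# complex wavelet transform, using the third-party "dtcwt" package as an
# assumed implementation. The random image is a stand-in low-resolution band.
import numpy as np
import dtcwt

image = np.random.rand(128, 128).astype(np.float64)
transform = dtcwt.Transform2d()
pyramid = transform.forward(image, nlevels=3)

print("lowpass shape:", pyramid.lowpass.shape)         # coarse structure
for level, hp in enumerate(pyramid.highpasses, start=1):
    energy = np.abs(hp).mean(axis=(0, 1))              # mean magnitude per orientation
    print(f"level {level}: mean |coeff| per orientation =", np.round(energy, 4))
```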
{"title":"DTWSTSR: Dual-Tree Complex Wavelet and Swin Transformer Based Remote Sensing Images Super-Resolution Network","authors":"Yu Yao;Hengbin Wang;Xiang Gao;Ziyao Xing;Xiaodong Zhang;Yuanyuan Zhao;Shaoming Li;Zhe Liu","doi":"10.1109/JSTARS.2026.3651075","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651075","url":null,"abstract":"High-resolution remote sensing images provide crucial data support for applications such as precision agriculture and water resource management. However, super-resolution reconstructions often suffer from over-smoothed textures and structural distortions, failing to accurately recover the intricate details of ground objects. To address this issue, this article proposes a remote sensing image super-resolution network (DTWSTSR) that combines the Dual-Tree Complex Wavelet Transform and Swin Transformer, which enhances the ability of texture detail reconstruction by fusing frequency-domain and spatial-domain features. This model includes a Dual-Tree Complex Wavelet Texture Feature Sensing Module (DWTFSM) for integrating frequency and spatial features, and a Multiscale Efficient Channel Attention mechanism to enhance attention to multiscale and global details. In addition, we design a Kolmogorov–Arnold Network based on a branch attention mechanism, which improves the model’s ability to represent complex nonlinear features. During the training process, we investigate the impact of hyperparameters and propose the two-stage SSIM&SL1 loss function to reduce structural differences between images. Experimental results show that DTWSTSR outperforms existing mainstream methods under different magnification factors (×2, ×3, ×4), ranking among the top two in multiple metrics. For example, at ×2 magnification, its PSNR value is 0.64–2.68 dB higher than that of other models. Visual comparisons demonstrate that the proposed model achieves clearer and more accurate detail reconstruction of target ground objects. Furthermore, the model exhibits excellent generalization ability in cross-sensor image (OLI2MSI dataset) reconstruction.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"4730-4747"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11329193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimation of Ships’ Complex High-Resolution Range Profiles Based on Sparse Optimization Method in Non-Gaussian Sea Clutter
IF 5.3 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-12 | DOI: 10.1109/JSTARS.2026.3651639
Yang Liu;Kun Zhang;Chun-Yi Song;Zhi-Wei Xu
In high-resolution maritime radar working in scanning mode, the classification and identification of ships require the recovery of the ship’s high-resolution range profiles (HRRPs) from radar returns. The return signal from the ship is a complex sparse signal interfered by non-Gaussian sea clutter. In this article, three sparse optimization methods matching the non-Gaussian characteristics of sea clutter, i.e., the sparse optimization matching K-distribution method, the sparse optimization matching generalized Pareto distribution method, and the sparse optimization matching CGIG distribution method, are proposed to estimate complex HRRPs of ships. The compound Gaussian model is used to describe the non-Gaussianity of sea clutter, and the sparsity of ships’ complex HRRPs is constrained by a one-parameter random distribution. In the three methods, the Anderson–Darling test is used to search the parameters of the sparse constraint model. Besides, the non-Gaussian characteristics of sea clutter depend on the marine environment parameters and radar operating parameters. For different scenarios, the minimal criterion of the Kolmogorov–Smirnov distance is used to select the best model from the three compound Gaussian models, and then select the corresponding proposed methods. Simulated and measured radar data are used to evaluate the performance of the proposed methods, and the results show that the proposed methods obtain better estimates of ship HRRPs compared to the recent SRIM method and the classical SLIM method.
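The model-selection step, choosing the clutter family with the smallest Kolmogorov–Smirnov distance, can be sketched with SciPy. Because SciPy ships neither a K-distribution nor a CGIG distribution, the candidate set below (Weibull, generalized Pareto, lognormal) is only a stand-in to illustrate the criterion, not the paper's models.

```python
# A minimal SciPy sketch of picking a clutter amplitude model by the smallest
# Kolmogorov-Smirnov distance. The candidate distributions are stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amplitude = stats.weibull_min.rvs(1.4, scale=1.0, size=2000, random_state=rng)

candidates = {
    "weibull": stats.weibull_min,
    "genpareto": stats.genpareto,
    "lognorm": stats.lognorm,
}
distances = {}
for name, dist in candidates.items():
    params = dist.fit(amplitude, floc=0)               # fit with location pinned at 0
    distances[name] = stats.kstest(amplitude, dist.cdf, args=params).statistic

best = min(distances, key=distances.get)               # smallest KS distance wins
print(distances, "->", best)
```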
{"title":"Estimation of Ships’ Complex High-Resolution Range Profiles Based on Sparse Optimization Method in Non-Gaussian Sea Clutter","authors":"Yang Liu;Kun Zhang;Chun-Yi Song;Zhi-Wei Xu","doi":"10.1109/JSTARS.2026.3651639","DOIUrl":"https://doi.org/10.1109/JSTARS.2026.3651639","url":null,"abstract":"In high-resolution maritime radar working in scanning mode, the classification and identification of ships require the recovery of the ship’s high-resolution range profiles (HRRPs) from radar returns. The return signal from the ship is a complex sparse signal interfered by non-Gaussian sea clutter. In this article, three sparse optimization methods matching the non-Gaussian characteristics of sea clutter, i.e., the sparse optimization matching K-distribution method, the sparse optimization matching generalized Pareto distribution method, the sparse optimization matching CGIG distribution method, are proposed to estimate complex HRRPs of ships. The compound Gaussian model is used to describe the non-Gaussianity of sea clutter, and the sparsity of ships’ complex HRRPs is constrained by the random distribution with one parameter. In the three methods, the Anderson–Darling test is used to search the parameters of the sparse constraint model. Besides, the non-Gaussian characteristics of sea clutter depend on the marine environment parameters and radar operating parameters. For different scenarios, the minimal criterion of the Kolmogorov–Smirnov distance is used to select the best model from the three compound Gaussian models, and then select the corresponding proposed methods. Simulated and measured radar data are used to evaluate the performance of the proposed methods and the results show that the proposed methods obtain better estimates of ship HRRPs compared to the recent SRIM method and the classical SLIM method.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"19 ","pages":"3998-4013"},"PeriodicalIF":5.3,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339885","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0