
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing: Latest Publications

Identification of Spectrally Similar Materials From Multispectral Imagery Based on Condition Number of Matrix
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532816
Maozhi Wang;Shu-Hua Chen;Jun Feng;Wenxi Xu;Daming Wang
Identification of spectrally similar materials from multispectral remote sensing (RS) imagery with only a few bands is an important issue that challenges comprehensive applications of RS to surface characterization. This study proposes a new method to identify spectrally similar materials from such imagery. The method is built on the theory of the matrix condition number, and a theorem is proven as the foundation of the designed identification algorithm. Mathematically, the motivation behind the new algorithm is to decrease the condition number of the matrix of a linear system and, by doing so, to turn an ill-conditioned system into a well-conditioned one. Technically, the method achieves this by adding supplementary features to all the original spectra, including those of similar materials; these features can further serve as indicative signatures to identify the materials. The proposed method is therefore named the condition-number-based method with supplementary features (SF-CNM). The threshold scheme and the supplementary features are the two main novel techniques that ensure the uniqueness and accuracy of SF-CNM for specified samples. The results of a case study identifying water, ice, snow, shadow, and other materials from Landsat 8 OLI data indicate that SF-CNM successfully and accurately identifies the materials specified by the given samples, that it significantly outperforms the spectral angle mapper algorithm, the Mahalanobis classifier, maximum likelihood, and an artificial neural network, and that it performs similarly to, and even slightly better than, a support vector machine.
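The core idea, lowering a matrix condition number by appending rows, can be sketched with NumPy. The band spectra, the supplementary feature values, and the number of bands below are illustrative placeholders, not data or features from the paper:

```python
import numpy as np

# Hypothetical 4-band spectra of three spectrally similar materials
# (one column per material). Values are made up for illustration.
A = np.array([
    [0.12, 0.13, 0.12],
    [0.45, 0.46, 0.44],
    [0.30, 0.31, 0.30],
    [0.22, 0.22, 0.23],
])

cond_before = np.linalg.cond(A)  # nearly collinear columns -> large condition number

# Append supplementary feature rows engineered to differ between the
# materials (e.g. band ratios or indices); values are illustrative.
supplementary = np.array([
    [0.9, 0.1, 0.5],
    [0.2, 0.5, 0.8],
])
A_aug = np.vstack([A, supplementary])

cond_after = np.linalg.cond(A_aug)
print(cond_before > cond_after)  # the augmented system is better conditioned
```

Appending rows can only increase each singular value of the matrix; choosing rows that separate the near-identical columns raises the smallest singular value far more than the largest, which is what drives the condition number down.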
Vol. 18, pp. 4751–4766 · Citations: 0
Context-Driven Automatic Target Detection With Cross-Modality Real-Synthetic Image Merging
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-21 · DOI: 10.1109/JSTARS.2025.3531788
Zhe Geng;Shiyu Zhang;Chongqi Xu;Haowen Zhou;Wei Li;Xiang Yu;Daiyin Zhu;Gong Zhang
This article presents pioneering research on joint scene-target analysis and proposes a novel cross-modality real-synthetic target feature fusion method. To begin, multisensor remote sensing images are jointly leveraged for geographical region classification. After that, a novel Context-Aware Region Masking and Situation AWareness (CARMSAW) strategy is employed for target classification based on the inherent target properties and capabilities reflected by SAR and infrared (IR) imagery, and the cross-modality Real-synthetic Image Merging (CRIM) strategy is employed for feature enhancement. Specifically, to tackle the random deviations of real SAR imagery from the ideal, the synthetic SAR signature generated from the target CAD model is treated as a “skeleton” with known structure for real-synthetic target feature alignment. To facilitate the recognition of aircraft, we leverage the IR images to construct an “exoskeleton” for the target SAR signature, so that the dimension, shape, and contour of the target are united with its electromagnetic features. Furthermore, we propose a novel color-guided component-level attention mechanism, in which the SAR image is partitioned into several subregions that are highlighted or blacked out adaptively based on their significance level. To demonstrate the effectiveness of the proposed CARMSAW strategy, a series of experiments is carried out on SAR-optical image pairs from the SEN1-2 dataset, the SpaceNet6 dataset, and a self-constructed ship detection dataset featuring the Port of Rotterdam. To verify the performance of the proposed CRIM method, experimental results on both the self-constructed SAR-IR dataset and the public MSTAR-SAMPLE dataset are provided.
Vol. 18, pp. 5600–5618 · Citations: 0
Characterizing Storm-Induced Coastal Flooding Using SAR Imagery and Deep Learning
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-21 · DOI: 10.1109/JSTARS.2025.3530255
Deanna Edwing;Lingsheng Meng;Suna Lv;Xiao-Hai Yan
Flooding is among the most common yet costly annual disasters worldwide. Previous studies have shown that synthetic aperture radar (SAR) is an effective tool for flood observation due to its high-resolution and timely acquisitions, and deep learning-based models can accurately extract water bodies from SAR imagery. However, many previous flood analyses do not account for the influences of tides and permanent water bodies, and the comprehensive characteristics of coastal storm flooding are still not fully understood. This study therefore presents a novel approach for isolating storm-induced flood waters in coastal regions from SAR imagery through the identification and removal of permanent water bodies and tidal inundation. The methodology is applied to the Delaware Bay region, with ancillary geospatial data used to determine the resulting landcover impacts. Results indicate that flooding primarily affects agricultural and marsh regions, as well as urban areas such as airports and road systems adjacent to rivers or large inland bays. A sensitivity analysis of tidal impacts on flood estimates reveals that the estimates increase significantly when tides are included, highlighting the importance of removing them prior to flood identification. Finally, an exploration of intense coastal storm events in the Delaware Bay region reveals the importance of storm characteristics such as high water levels, wind, and precipitation in generating extreme flooding conditions. The case study presented here has important implications for other coastal regions and provides an innovative and comprehensive approach to coastal storm flood identification and characterization that can benefit coastal managers, emergency responders, coastal communities, and researchers interested in coastal flood hazards.
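The isolation step amounts to boolean mask algebra on co-registered water maps. A minimal NumPy sketch with made-up 1-D "maps" standing in for rasters (none of the values come from the study):

```python
import numpy as np

# Synthetic binary maps; each entry stands for one pixel.
sar_water       = np.array([1, 1, 1, 1, 0, 1, 0, 0], dtype=bool)  # water detected during the storm
permanent_water = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # rivers, lakes (e.g. from a landcover map)
tidal_zone      = np.array([0, 0, 1, 0, 0, 0, 0, 0], dtype=bool)  # normally inundated at the observed tide stage

# Storm-induced flooding = detected water that is neither permanent nor tidal.
storm_flood = sar_water & ~permanent_water & ~tidal_zone

print(storm_flood.astype(int))  # -> [0 0 0 1 0 1 0 0]
```

The same expression applies unchanged to 2-D arrays, so a full scene can be processed once the permanent-water and tidal masks are rasterized onto the SAR grid.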
Vol. 18, pp. 5619–5632 · Citations: 0
A RFI Mitigation Approach for Spaceborne SAR Using Homologous Interference Knowledge at Coastal Regions
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-21 · DOI: 10.1109/JSTARS.2025.3530473
Xuezhi Chen;Yan Huang;Xutao Yu;Yuan Mao;Zaichen Zhang;Zhanye Chen;Wei Hong
As a form of direct signal, radio frequency interference (RFI) received by spaceborne synthetic aperture radar (SAR) systems significantly submerges the useful echoes scattered from the ground scene. One effective model is therefore to represent RFI as a low-rank component, so that methods within a low-rank and sparse framework can suppress the RFI and extract the useful signals. These methods extract the underlying RFI component directly from the polluted SAR signal matrix by minimizing its nuclear norm. In recent years, however, many studies have shown that it is difficult for the nuclear norm minimization (NNM) model to obtain an accurate singular value estimate of the RFI component from polluted observations. As observed from the Sentinel-1 RFI distribution published by the European Space Agency (ESA), a large amount of RFI occurs in coastal regions around the world. In this article, an interference self-cancellation (ISC) method using knowledge of homologous interference (HI) at coastal regions is proposed to mitigate the RFI in polluted SAR data. Herein, HI is defined as RFI with similar artifacts in the same SAR image. We extract the singular value estimate of a reference HI area and add it to the low-rank model as a template, relieving the overshrinking problem of classic NNM algorithms. In addition, the proposed method can be considered an extended framework of previous low-rank methods: when an ideal HI cannot be extracted, it degenerates into a previous NNM method, such as the robust principal component analysis (RPCA) method or the reweighted nuclear norm (RNN) method. We test the performance of the proposed algorithm on multiple scenes obtained from measured Sentinel-1 data. All experiments verify that the proposed ISC method removes RFI while keeping more details of the ground scene than other state-of-the-art RFI mitigation methods.
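The NNM step that these low-rank methods share reduces to singular value soft-thresholding, the proximal operator of the nuclear norm. A minimal NumPy sketch on synthetic data (matrix sizes, amplitudes, and the threshold are illustrative, not drawn from Sentinel-1, and this is the generic RPCA-style step rather than the paper's template-based variant):

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: shrink each singular value
    of X by tau and rebuild, yielding a low-rank estimate."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# Synthetic "polluted" matrix: a strong rank-1 interference component
# plus weak echoes modeled as small Gaussian entries.
rfi = 5.0 * np.outer(rng.standard_normal(64), rng.standard_normal(64))
echo = 0.1 * rng.standard_normal((64, 64))
observed = rfi + echo

low_rank = svt(observed, tau=3.0)  # keeps only the dominant RFI component
print(np.linalg.matrix_rank(low_rank))
```

With the threshold above the largest noise singular value, the recovered component is rank-1 and close to the planted interference; the paper's contribution is to steer this shrinkage with singular values measured on a reference HI area instead of a fixed tau.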
Vol. 18, pp. 5990–6006 · Citations: 0
STEPNet: A Spatial and Temporal Encoding Pipeline to Handle Temporal Heterogeneity in Climate Modeling Using AI: A Use Case of Sea Ice Forecasting
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3532219
Sizhe Wang;Wenwen Li;Chia-Yu Hsu
Sea ice forecasting remains challenging due to the complexity of understanding its driving forces and modeling its dynamics. This article contributes to the expanding literature by developing a data-driven, artificial intelligence (AI)-based solution for forecasting sea ice concentration in the Arctic. Specifically, we introduce STEPNet, a spatial and temporal encoding pipeline capable of handling the temporal heterogeneity of multivariate sea ice drivers, including various climate and environmental factors with varying impacts on sea ice concentration changes. STEPNet employs dedicated encoders designed to effectively mine prominent spatial, temporal, and spatiotemporal relationships within the data. It builds on and extends vision and temporal transformer architectures to leverage their power in extracting important hidden relationships over long data ranges. The learning pipeline is designed for flexibility and extensibility, enabling easy integration of different encoders to process diverse data characteristics and meet computational demands. A series of ablation studies and comparative experiments validates the effectiveness of our architecture design and the superior performance of the proposed STEPNet model compared to other AI solutions and numerical models.
Vol. 18, pp. 4921–4935 · Citations: 0
STFCropNet: A Spatiotemporal Fusion Network for Crop Classification in Multiresolution Remote Sensing Images
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3531886
Wei Wu;Yapeng Liu;Kun Li;Haiping Yang;Liao Yang;Zuohui Chen
Remote sensing-based crop classification is the foundation of food production monitoring and management. A range of remote sensing images, encompassing spatial, spectral, and temporal dimensions, has facilitated the classification of crops. However, prevailing methods for crop classification via remote sensing focus on either the temporal or the spatial features of images. These unimodal methods often encounter challenges posed by noise interference in real-world scenarios and may struggle to discriminate between crops with similar spectral signatures, leading to misclassification over extensive areas. To address this issue, we propose a novel approach termed the spatiotemporal fusion-based crop classification network (STFCropNet), which integrates high-resolution (HR) images with medium-resolution time-series (TS) images. STFCropNet consists of a temporal branch, which captures seasonal spectral variations and coarse-grained spatial information from TS data, and a spatial branch, which extracts geometric details and multiscale spatial features from HR images. By integrating features from both branches, STFCropNet achieves fine-grained crop classification while effectively reducing salt-and-pepper noise. We evaluate STFCropNet in two study areas of China with diverse topographic features. Experimental results demonstrate that STFCropNet outperforms state-of-the-art models in both study areas, achieving overall accuracies of 83.2% and 90.6%, improvements of 3.6% and 4.1%, respectively, over the second-best baseline model. We release our code at.
Vol. 18, pp. 4736–4750 · Citations: 0
Retrieving Multiaspect Point Clouds From a Multichannel K-Band SAR Drone
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3532126
Peter Brotzer;Emiliano Casalini;David Small;Alexander Damm;Elías Méndez Domínguez
Satellite and airborne synthetic aperture radar (SAR) systems are frequently used for topographic mapping. However, their limited scene aspects lead to reduced angular coverage, making them less effective in environments with complex surface structures and tall objects. This limitation can be overcome by drone-based SAR systems, which are becoming increasingly advanced but whose potential for three-dimensional (3-D) imaging remains largely unexplored. In this article, we utilize multiaspect SAR data acquired with a K-band drone system with 700 MHz bandwidth and investigate the potential of high-resolution 3-D point cloud retrieval. Through a series of experiments with increasingly complex 3-D structures, we evaluate the accuracy of the derived point clouds. Independent references, based on light detection and ranging (LiDAR) and 3-D construction models, are used to validate our results. Our findings demonstrate that the drone SAR system can produce accurate and complete point clouds, with average Chamfer distances on the order of 1 m compared to reference data, highlighting the significance of multiple-aspect acquisitions for 3-D mapping applications.
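The Chamfer distance used for validation is straightforward to compute. A minimal NumPy sketch using one common symmetric convention, the sum of mean nearest-neighbour distances in both directions (the paper may average or normalize differently); the toy coordinates are illustrative:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (N,3) and Q (M,3)."""
    # (N, M) matrix of pairwise Euclidean distances via broadcasting.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy clouds: two points each, offset vertically by 0.5 m.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.5], [1.0, 0.0, 0.5]])

print(chamfer_distance(P, Q))  # each point is 0.5 from its nearest neighbour -> 1.0
```

The full pairwise matrix is O(N·M) memory, so for clouds of millions of points a k-d tree nearest-neighbour query would replace the broadcasted distance matrix.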
{"title":"Retrieving Multiaspect Point Clouds From a Multichannel K-Band SAR Drone","authors":"Peter Brotzer;Emiliano Casalini;David Small;Alexander Damm;Elías Méndez Domínguez","doi":"10.1109/JSTARS.2025.3532126","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532126","url":null,"abstract":"Satellite and airborne synthetic aperture radar (SAR) systems are frequently used for topographic mapping. However, their limited scene aspects lead to reduced angular coverage, making them less effective in environments with complex surface structures and tall objects. This limitation can be overcome by drone-based SAR systems, which are becoming increasingly advanced, but their potential for three-dimensional (3-D) imaging remains largely unexplored. In this article, we utilize multiaspect SAR data acquired with a K-band drone system with 700 MHz bandwidth and investigate the potential 3-D point cloud retrievals in high resolution. Through a series of experiments with increasingly complex 3-D structures, we evaluate the accuracy of the derived point clouds. Independent references—based on light detection and ranging (LiDAR) and 3-D construction models—are used to validate our results. 
Our findings demonstrate that the drone SAR system can produce accurate and complete point clouds, with average Chamfer distances on the order of 1 m compared to reference data, highlighting the significance of multiple aspect acquisitions for 3-D mapping applications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5033-5045"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10848217","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LCCDMamba: Visual State Space Model for Land Cover Change Detection of VHR Remote Sensing Images
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3531499
Junqing Huang;Xiaochen Yuan;Chan-Tong Lam;Yapeng Wang;Min Xia
Land cover change detection (LCCD) is a crucial research topic for the rational planning of land use and the facilitation of sustainable land resource growth. However, because LCCD tasks are complex, integrating global and local features and fusing contextual information from remote sensing features are essential. The recent advent of Mamba, which maintains linear time complexity and high efficiency when processing long-range data, offers a new solution to the feature-fusion challenges in LCCD. Therefore, a novel visual state space model (SSM) for Land Cover Change Detection (LCCDMamba) is proposed, which uses Siam-VMamba as a backbone to extract multidimensional land cover features. To fuse change information across different temporal phases, a multiscale information spatio-temporal fusion (MISF) module is designed to aggregate difference information from bitemporal features. The proposed MISF comprises multiscale feature aggregation (MSFA), which utilizes strip convolution to aggregate multiscale local change information from bitemporal land cover features, and residual with SS2D (RSS), which employs a residual structure with SS2D to capture global feature differences between bitemporal land cover features. To enhance the correlation of change features across different dimensions, in the decoder we design a dual token modeling SSM (DTMS) built on two token modeling approaches. This preserves high-dimensional semantic features and thus ensures that multiscale change information across the various dimensions is not lost during feature restoration. Experiments have been conducted on the WHU-CD, LEVIR-CD, and GVLM datasets, and the results demonstrate that LCCDMamba achieves F1 scores of 94.18%, 91.68%, and 87.14%, respectively, outperforming all compared models.
{"title":"LCCDMamba: Visual State Space Model for Land Cover Change Detection of VHR Remote Sensing Images","authors":"Junqing Huang;Xiaochen Yuan;Chan-Tong Lam;Yapeng Wang;Min Xia","doi":"10.1109/JSTARS.2025.3531499","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531499","url":null,"abstract":"Land cover change detection (LCCD) is a crucial research topic for rational planning of land use and facilitation of sustainable land resource growth. However, due to the complexity of LCCD tasks, integrating global and local features and fusing contextual information from remote sensing features are essential. Recently, with the advent of Mamba, which maintains linear time complexity and high efficiency in processing long-range data, it offers a new solution to address feature-fusion challenges in LCCD. Therefore, a novel visual state space model (SSM) for Land Cover Change Detection (LCCDMamba) is proposed, which uses Siam-VMamba as a backbone to extract multidimensional land cover features. To fuse the change information across difference temporal, multiscale information spatio-temporal fusion (MISF) module is designed to aggregate difference information from bitemporal features. The proposed MISF comprises multi-scale feature aggregation (MSFA), which utilizes strip convolution to aggregate multiscale local change information of bitemporal land cover features, and residual with SS2D (RSS) which employs residual structure with SS2D to capture global feature differences of bitemporal land cover features. To enhance the correlation of change features across different dimensions, in the decoder, we design a dual token modeling SSM (DTMS) through two token modeling approaches. This preserves high-dimensional semantic features and thus ensures that the multiscale change information across various dimensions will not be lost during feature restoration. 
Experiments have been conducted on WHU-CD, LEVIR-CD, and GVLM datasets, and the results demonstrate that LCCDMamba achieves F1 scores of 94.18%, 91.68%, and 87.14%, respectively, outperforming all the models compared.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5765-5781"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
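The F1 scores reported for the change detection results above are the harmonic mean of precision and recall over binary change maps; a minimal stdlib sketch (with made-up pixel counts, not the paper's evaluation code):

```python
def f1_score(tp, fp, fn):
    """Pixel-level F1 for a binary change map: the harmonic mean of
    precision (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Hypothetical counts: 900 changed pixels detected, 50 false alarms, 100 misses.
print(round(f1_score(tp=900, fp=50, fn=100), 4))  # → 0.9231
```

Because it balances false alarms against misses, F1 is a common headline metric for change detection, where changed pixels are a small minority of the image.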
Citations: 0
EMSNet: Efficient Multimodal Symmetric Network for Semantic Segmentation of Urban Scene From Remote Sensing Imagery
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3531422
Yejian Zhou;Yachen Wang;Jie Su;Zhenyu Wen;Puzhao Zhang;Wenan Zhang
High-resolution remote sensing imagery (RSI) plays a pivotal role in the semantic segmentation (SS) of urban scenes, particularly in urban management tasks such as building planning and traffic flow analysis. However, the dense distribution of objects and the prevalent background noise in RSI make it challenging to achieve stable and accurate results from a single view. Integrating digital surface models (DSM) can achieve high-precision SS, but this often requires extensive computational resources, so it is essential to address the tradeoff between accuracy and computational cost and to optimize the method for deployment on edge devices. In this article, we introduce an efficient multimodal symmetric network (EMSNet) designed to perform SS by leveraging both optical and DSM images. Unlike other multimodal methods, EMSNet adopts a dual encoder-decoder structure that builds a direct connection between the DSM data and the final result, making full use of the DSM. Between the branches, we propose continuous feature interaction to guide the DSM branch with RGB features. Within each branch, multilevel feature fusion captures low-level spatial and high-level semantic information, improving the model's scene perception. Meanwhile, knowledge distillation (KD) further improves the performance and generalization of EMSNet. Experiments on the Potsdam and Vaihingen datasets demonstrate the superiority of our method over other baseline models, and ablation experiments validate the effectiveness of each component. In addition, the KD strategy is validated by comparison with the segment anything model (SAM): it enables the proposed multimodal SS network to match SAM's performance with only one-fifth of the parameters, computation, and latency.
{"title":"EMSNet: Efficient Multimodal Symmetric Network for Semantic Segmentation of Urban Scene From Remote Sensing Imagery","authors":"Yejian Zhou;Yachen Wang;Jie Su;Zhenyu Wen;Puzhao Zhang;Wenan Zhang","doi":"10.1109/JSTARS.2025.3531422","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531422","url":null,"abstract":"High-resolution remote sensing imagery (RSI) plays a pivotal role in the semantic segmentation (SS) of urban scenes, particularly in urban management tasks such as building planning and traffic flow analysis. However, the dense distribution of objects and the prevalent background noise in RSI make it challenging to achieve stable and accurate results from a single view. Integrating digital surface models (DSM) can achieve high-precision SS. But this often requires extensive computational resources. It is essential to address the tradeoff between accuracy and computational cost and optimize the method for deployment on edge devices. In this article, we introduce an efficient multimodal symmetric network (EMSNet) designed to perform SS by leveraging both optical and DSM images. Unlike other multimodal methods, EMSNet adopts a dual encoder–decoder structure to build a direct connection between DSM data and the final result, making full use of the advanced DSM. Between branches, we propose a continuous feature interaction to guide the DSM branch by RGB features. Within each branch, multilevel feature fusion captures low spatial and high semantic information, improving the model's scene perception. Meanwhile, knowledge distillation (KD) further improves the performance and generalization of EMSNet. Experiments on the Potsdam and Vaihingen datasets demonstrate the superiority of our method over other baseline models. Ablation experiments validate the effectiveness of each component. Besides, the KD strategy is confirmed by comparing it with the segment anything model (SAM). 
It enables the proposed multimodal SS network to match SAM's performance with only one-fifth of the parameters, computation, and latency.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5878-5892"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845133","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
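The knowledge distillation mentioned above typically softens teacher and student logits with a temperature and penalizes their divergence; a schematic stdlib illustration of the standard KD loss (temperature and logit values here are invented, and the abstract does not specify EMSNet's exact distillation objective):

```python
import math

def softmax(logits, temperature):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened class distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
    return temperature ** 2 * kl

# A student that matches the teacher exactly incurs zero distillation loss.
print(kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # → 0.0
```

Raising the temperature flattens both distributions, so the student is trained on the teacher's relative class preferences rather than only its top prediction.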
Citations: 0
TanDepth: Leveraging Global DEMs for Metric Monocular Depth Estimation in UAVs
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-20 · DOI: 10.1109/JSTARS.2025.3531984
Horatiu Florea;Sergiu Nedevschi
Aerial scene understanding systems face stringent payload restrictions and must often rely on monocular depth estimation for modeling scene geometry, which is an inherently ill-posed problem. Moreover, obtaining the accurate ground truth data required by learning-based methods raises significant additional challenges in the aerial domain. Self-supervised approaches can bypass this problem, at the cost of providing only up-to-scale results. Similarly, recent supervised solutions, which make good progress toward zero-shot generalization, also provide only relative depth values. This work presents TanDepth, a practical scale recovery method for obtaining metric depth results from relative estimations at inference time, irrespective of the type of model generating them. Tailored for uncrewed aerial vehicle (UAV) applications, our method leverages sparse measurements from Global Digital Elevation Models (GDEM) by projecting them to the camera view using extrinsic and intrinsic information. An adaptation of the cloth simulation filter is presented, which allows selecting ground points from the estimated depth map to then correlate with the projected reference points. We evaluate and compare our method against alternative scaling methods adapted for UAVs on a variety of real-world scenes. Given the limited availability of data for this domain, we construct and release a comprehensive, depth-focused extension to the popular UAVid dataset to support further research.
{"title":"TanDepth: Leveraging Global DEMs for Metric Monocular Depth Estimation in UAVs","authors":"Horatiu Florea;Sergiu Nedevschi","doi":"10.1109/JSTARS.2025.3531984","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531984","url":null,"abstract":"Aerial scene understanding systems face stringent payload restrictions and must often rely on monocular depth estimation for modeling scene geometry, which is an inherently ill-posed problem. Moreover, obtaining accurate ground truth data required by learning-based methods raises significant additional challenges in the aerial domain. Self-supervised approaches can bypass this problem, at the cost of providing only up-to-scale results. Similarly, recent supervised solutions which make good progress toward zero-shot generalization also provide only relative depth values. This work presents TanDepth, a practical scale recovery method for obtaining metric depth results from relative estimations at inference-time, irrespective of the type of model generating them. Tailored for uncrewed aerial vehicle (UAV) applications, our method leverages sparse measurements from Global Digital Elevation Models (GDEM) by projecting them to the camera view using extrinsic and intrinsic information. An adaptation to the cloth simulation filter is presented, which allows selecting ground points from the estimated depth map to then correlate with the projected reference points. We evaluate and compare our method against alternate scaling methods adapted for UAVs, on a variety of real-world scenes. 
Considering the limited availability of data for this domain, we construct and release a comprehensive, depth-focused extension to the popular UAVid dataset to further research.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5445-5459"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10848130","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
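The core scale-recovery idea, correlating a relative depth map with sparse metric reference points, can be caricatured by a robust median-ratio estimate. This simplified stand-in uses invented values and skips TanDepth's GDEM projection and cloth-simulation ground-point selection entirely; it only illustrates the final rescaling step:

```python
import statistics

def recover_scale(relative_depths, metric_depths):
    """Estimate a global metric scale from matched (relative, metric) depth
    samples, using the median of per-point ratios for outlier robustness."""
    ratios = [m / r for r, m in zip(relative_depths, metric_depths) if r > 0.0]
    return statistics.median(ratios)

# Illustrative matched samples at selected ground points (made-up values):
# relative depths from the monocular model, metric depths from the DEM.
relative = [0.5, 1.0, 2.0, 4.0]
metric = [10.0, 21.0, 39.0, 80.0]
scale = recover_scale(relative, metric)
metric_depth_map = [scale * r for r in relative]  # rescaled estimate
print(scale)  # → 20.0
```

Multiplying the whole relative depth map by this single factor converts an up-to-scale estimate into metric depth, which is exactly the gap the abstract says self-supervised and relative-depth models leave open.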
Citations: 0