
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

A deep data fusion-based reconstruction of water index time series for intermittent rivers and ephemeral streams monitoring
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-29 | DOI: 10.1016/j.isprsjprs.2024.12.015
Junyuan Fei, Xuan Zhang, Chong Li, Fanghua Hao, Yahui Guo, Yongshuo Fu
Intermittent Rivers and Ephemeral Streams (IRES) are the major sources of flowing water on Earth. Yet, their dynamics are challenging for optical and radar satellites to monitor due to heavy cloud cover and narrow water surfaces. Significant backscattering mechanism changes and image mismatch further hinder the joint use of optical and SAR images in IRES monitoring. Here, a Deep data fusion-based Reconstruction of the widely accepted Modified Normalized Difference Water Index (MNDWI) time series is conducted for IRES Monitoring (DRIM). The study utilizes three categories of explanatory variables: cross-orbit Sentinel-1 SAR for continuous IRES observation, anchor data for implicit co-registration, and auxiliary data reflecting the dynamics of IRES. A tightly coupled CNN-RNN architecture is designed to achieve pixel-level SAR-to-optical reconstruction under significant backscattering mechanism changes. The 10 m MNDWI time series at a 12-day interval is effectively regressed (R² > 0.80) on the experimental catchment. Comparison with the RF, RNN, and CNN methods affirms the advantage of the tightly coupled CNN-RNN system in SAR-to-optical regression, with R² increasing by at least 0.68. The ablation test highlights the contributions of Sentinel-1 to precise MNDWI time series reconstruction, and of the anchor and auxiliary data to effective multi-source data fusion. The reconstructions closely match observations of IRES with river widths ranging from 2 m to 300 m. Furthermore, the DRIM method shows excellent applicability (average R² of 0.77) to IRES under polar, temperate, tropical, and arid climates. In conclusion, the proposed method is powerful in reconstructing the MNDWI time series of sub-pixel to multi-pixel scale IRES under backscattering mechanism change and image mismatch. The reconstructed MNDWI time series are essential for exploring the hydrological processes of IRES dynamics and optimizing water resource management at the basin scale.
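For context, MNDWI is computed from optical bands as (Green − SWIR) / (Green + SWIR), so the regression target lies in [−1, 1]. The sketch below illustrates the tightly coupled CNN-RNN idea: a per-date CNN encodes SAR patches and a GRU couples the dates. Layer sizes are illustrative assumptions and the anchor/auxiliary inputs are omitted for brevity, so this is not the authors' exact DRIM design.

```python
import torch
import torch.nn as nn

class CnnRnnRegressor(nn.Module):
    """Per-date CNN spatial encoder tightly coupled to a GRU over time."""
    def __init__(self, in_ch=2, hidden=64):           # in_ch=2: VV + VH backscatter
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, hidden, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                  # one feature vector per patch
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # regress MNDWI of the centre pixel

    def forward(self, x):                             # x: (B, T, C, H, W) SAR patches
        b, t, c, h, w = x.shape
        f = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(f)                          # temporal coupling across acquisitions
        return self.head(out).squeeze(-1)             # (B, T) MNDWI time series

mndwi = CnnRnnRegressor()(torch.randn(4, 10, 2, 32, 32))  # 4 patches, 10 dates
```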
Citations: 0
FO-Net: An advanced deep learning network for individual tree identification using UAV high-resolution images
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-28 | DOI: 10.1016/j.isprsjprs.2024.12.020
Jian Zeng, Xin Shen, Kai Zhou, Lin Cao
The identification of individual trees can reveal the competitive and symbiotic relationships among trees within forest stands, which is fundamental to understanding biodiversity and forest ecosystems. Highly precise identification of individual trees can significantly improve the efficiency of forest resource inventory, and is valuable for biomass measurement and forest carbon storage assessment. In previous deep learning approaches to individual tree identification, feature extraction usually struggles to adapt to variation in tree crown architecture, and the loss of feature information in the multi-scale fusion process is another marked challenge for extracting trees from remote sensing images. Based on a one-stage deep learning network structure, this study improves and optimizes the three stages of feature extraction, feature fusion, and feature identification, and constructs a novel feature-oriented individual tree identification network (FO-Net) suitable for UAV high-resolution images. Firstly, an adaptive feature extraction algorithm based on variable position drift convolution was proposed, which improves feature extraction for individual trees with varying crown sizes and shapes in UAV images. Secondly, to enhance the network’s ability to fuse multiscale forest features, a feature fusion algorithm based on the “gather-and-distribute” mechanism is proposed for the feature pyramid network, realizing lossless cross-layer transmission of feature map information. Finally, in the individual tree identification stage, a unified self-attention identification head is introduced to enhance FO-Net’s ability to identify trees with small crown diameters. FO-Net achieved the best performance in quantitative experiments on self-constructed datasets, with mAP50, F1-score, Precision, and Recall of 90.7%, 0.85, 85.8%, and 82.8%, respectively, a relatively high accuracy for individual tree identification compared with traditional deep learning methods. The proposed feature extraction and fusion algorithms improved the accuracy of individual tree identification by 1.1% and 2.7%, respectively. Qualitative experiments based on Grad-CAM heat maps also demonstrate that FO-Net focuses more on the contours of an individual tree in high-resolution images and reduces the influence of background factors during feature extraction and identification. FO-Net improves the accuracy of individual tree identification in UAV high-resolution images without significantly increasing the network's parameters, providing a reliable method to support various tasks in fine-scale precision forestry.
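The paper's variable position drift convolution is not specified in this listing; a closely related, openly available mechanism is deformable convolution, in which a small branch predicts per-position sampling offsets so the kernel grid adapts to crown shape. The sketch below shows that analogue in PyTorch and should be read as an assumption-laden stand-in, not FO-Net's actual block.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DriftConvBlock(nn.Module):
    """Content-adaptive sampling: offsets predicted from the input shift the kernel grid."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)  # (dx, dy) per tap
        self.conv = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.conv(x, self.offset(x))  # sampling positions drift with content

feat = DriftConvBlock(3, 16)(torch.randn(1, 3, 64, 64))
print(feat.shape)  # torch.Size([1, 16, 64, 64])
```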
{"title":"FO-Net: An advanced deep learning network for individual tree identification using UAV high-resolution images","authors":"Jian Zeng, Xin Shen, Kai Zhou, Lin Cao","doi":"10.1016/j.isprsjprs.2024.12.020","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.020","url":null,"abstract":"The identification of individual trees can reveal the competitive and symbiotic relationships among trees within forest stands, which is fundamental understand biodiversity and forest ecosystems. Highly precise identification of individual trees can significantly improve the efficiency of forest resource inventory, and is valuable for biomass measurement and forest carbon storage assessment. In previous studies through deep learning approaches for identifying individual tree, feature extraction is usually difficult to adapt to the variation of tree crown architecture, and the loss of feature information in the multi-scale fusion process is also a marked challenge for extracting trees by remote sensing images. Based on the one-stage deep learning network structure, this study improves and optimizes the three stages of feature extraction, feature fusion and feature identification in deep learning methods, and constructs a novel feature-oriented individual tree identification network (FO-Net) suitable for UAV high-resolution images. Firstly, an adaptive feature extraction algorithm based on variable position drift convolution was proposed, which improved the feature extraction ability for the individual tree with various crown size and shape in UAV images. Secondly, to enhance the network’s ability to fuse multiscale forest features, a feature fusion algorithm based on the “gather-and-distribute” mechanism is proposed in the feature pyramid network, which realizes the lossless cross-layer transmission of feature map information. Finally, in the stage of individual tree identification, a unified self-attention identification head is introduced to enhanced FO-Net’s perception ability to identify the trees with small crown diameters. FO-Net achieved the best performance in quantitative analysis experiments on self-constructed datasets, with mAP50, F1-score, Precision, and Recall of 90.7%, 0.85, 85.8%, and 82.8%, respectively, realizing a relatively high accuracy for individual tree identification compared to the traditional deep learning methods. The proposed feature extraction and fusion algorithms have improved the accuracy of individual tree identification by 1.1% and 2.7% respectively. The qualitative experiments based on Grad-CAM heat maps also demonstrate that FO-Net can focus more on the contours of an individual tree in high-resolution images, and reduce the influence of background factors during feature extraction and individual tree identification. 
FO-Net deep learning network improves the accuracy of individual trees identification in UAV high-resolution images without significantly increasing the parameters of the network, which provides a reliable method to support various tasks in fine-scale precision forestry.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"83 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142889390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiscale adaptive PolSAR image superpixel generation based on local iterative clustering and polarimetric scattering features
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-25 | DOI: 10.1016/j.isprsjprs.2024.12.011
Nengcai Li, Deliang Xiang, Xiaokun Sun, Canbin Hu, Yi Su
Superpixel generation is an essential preprocessing step for intelligent interpretation of object-level Polarimetric Synthetic Aperture Radar (PolSAR) images. The Simple Linear Iterative Clustering (SLIC) algorithm has become one of the primary methods for superpixel generation in PolSAR images due to its minimal human intervention and ease of implementation. However, existing SLIC-based superpixel generation methods for PolSAR images often use distance measures based on the complex Wishart distribution as the similarity metric. These methods are not ideal for segmenting heterogeneous regions, and a single superpixel generation result cannot simultaneously extract coarse and fine levels of detail in the image. To address these issues, this paper proposes a multiscale adaptive superpixel generation method for PolSAR images based on SLIC. To tackle the inaccuracy of the complex Wishart distribution in modeling urban heterogeneous regions, this paper employs polarimetric target decomposition: it extracts the polarimetric scattering features of the land cover and then constructs a similarity measure for these features using a Riemannian metric. To achieve multiscale superpixel segmentation in a single segmentation pass, this paper introduces a new method for initializing cluster centers based on a polarimetric homogeneity measure. This initialization assigns denser cluster centers in heterogeneous areas and automatically adjusts the size of the search regions according to the polarimetric homogeneity measure. Finally, a novel clustering distance metric is defined, integrating multiple types of information, including polarimetric scattering feature similarity, power feature similarity, and spatial similarity. This metric uses the polarimetric homogeneity measure to adaptively balance the relative weights among the similarities. Comparative experiments were conducted on three real PolSAR datasets against state-of-the-art SLIC-based methods (Qin-RW and Yin-HLT). The results demonstrate that the proposed method provides richer multiscale detail and significantly improves segmentation outcomes. For example, with the AIRSAR dataset and a step size of 42, the proposed method achieves improvements of 16.56% in BR and 12.01% in ASA compared with the Qin-RW method. Source code of the proposed method is available at https://github.com/linengcai/PolSAR_MS_ASLIC.git.
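A minimal sketch of the adaptive clustering distance described above: a polarimetric homogeneity measure h in [0, 1] balances the scattering-feature, power, and spatial terms. The exact weighting law and the Riemannian feature distance are not given in this listing, so a Euclidean feature distance and a log power ratio are used as stand-in assumptions.

```python
import numpy as np

def clustering_distance(feat_p, feat_c, pow_p, pow_c, xy_p, xy_c, h, step):
    """Distance from pixel p to cluster centre c; h: homogeneity in [0, 1]."""
    d_feat = np.linalg.norm(feat_p - feat_c)       # scattering-feature term (stand-in)
    d_pow = abs(np.log(pow_p / pow_c))             # power (span) term as a log ratio
    d_xy = np.linalg.norm(xy_p - xy_c) / step      # spatial term, normalised by step size
    return h * d_pow + (1.0 - h) * d_feat + d_xy   # homogeneous areas lean on power

d = clustering_distance(np.ones(3), np.zeros(3), 2.0, 1.0,
                        np.array([10.0, 12.0]), np.array([8.0, 8.0]), h=0.7, step=42)
```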
{"title":"Multiscale adaptive PolSAR image superpixel generation based on local iterative clustering and polarimetric scattering features","authors":"Nengcai Li, Deliang Xiang, Xiaokun Sun, Canbin Hu, Yi Su","doi":"10.1016/j.isprsjprs.2024.12.011","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.011","url":null,"abstract":"Superpixel generation is an essential preprocessing step for intelligent interpretation of object-level Polarimetric Synthetic Aperture Radar (PolSAR) images. The Simple Linear Iterative Clustering (SLIC) algorithm has become one of the primary methods for superpixel generation in PolSAR images due to its advantages of minimal human intervention and ease of implementation. However, existing SLIC-based superpixel generation methods for PolSAR images often use distance measures based on the complex Wishart distribution as the similarity metric. These methods are not ideal for segmenting heterogeneous regions, and a single superpixel generation result cannot simultaneously extract coarse and fine levels of detail in the image. To address this, this paper proposes a multiscale adaptive superpixel generation method for PolSAR images based on SLIC. To tackle the issue of the complex Wishart distribution’s inaccuracy in modeling urban heterogeneous regions, this paper employs the polarimetric target decomposition method. It extracts the polarimetric scattering features of the land cover, then constructs a similarity measure for these features using Riemannian metric. To achieve multiscale superpixel segmentation in a single superpixel segmentation process, this paper introduces a new method for initializing cluster centers based on polarimetric homogeneity measure. This initialization method assigns denser cluster centers in heterogeneous areas and automatically adjusts the size of the search regions according to the polarimetric homogeneity measure. Finally, a novel clustering distance metric is defined, integrating multiple types of information, including polarimetric scattering feature similarity, power feature similarity, and spatial similarity. This metric uses the polarimetric homogeneity measure to adaptively balance the relative weights between the various similarities. Comparative experiments were conducted using three real PolSAR datasets with state-of-the-art SLIC-based methods (Qin-RW and Yin-HLT). The results demonstrate that the proposed method provides richer multiscale detail information and significantly improves segmentation outcomes. For example, with the AIRSAR dataset and the step size of 42, the proposed method achieves improvements of 16.56<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:mtext>%</mml:mtext></mml:math> in BR and 12.01<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:mtext>%</mml:mtext></mml:math> in ASA compared to the Qin-RW method. 
Source code of the proposed method is made available at <ce:inter-ref xlink:href=\"https://github.com/linengcai/PolSAR_MS_ASLIC.git\" xlink:type=\"simple\">https://github.com/linengcai/PolSAR_MS_ASLIC.git</ce:inter-ref>.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"5 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142889391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accurate and complete neural implicit surface reconstruction in street scenes using images and LiDAR point clouds
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-23 | DOI: 10.1016/j.isprsjprs.2024.12.012
Chenhui Shi, Fulin Tang, Yihong Wu, Hongtu Ji, Hongjie Duan
Surface reconstruction in street scenes is a critical task in computer vision and photogrammetry, with images and LiDAR point clouds being commonly used data sources. However, image-only reconstruction faces challenges such as lighting variations, weak textures, and sparse viewpoints, while LiDAR-only methods suffer from issues like sparse and noisy LiDAR point clouds. Effectively integrating these two modalities to leverage their complementary strengths remains an open problem. Inspired by recent advances in neural implicit representations, we propose a novel street-level neural implicit surface reconstruction approach that incorporates images and LiDAR point clouds into a unified framework for joint optimization. Three key components enable our approach to achieve state-of-the-art (SOTA) reconstruction performance with high accuracy and completeness in street scenes. First, we introduce an adaptive photometric constraint weighting method to mitigate the impacts of lighting variations and weak textures on reconstruction. Second, a new B-spline-based hierarchical hash encoder is proposed to ensure the continuity of gradient-derived normals and to further reduce the noise from images and LiDAR point clouds. Third, we implement effective signed distance field (SDF) constraints in a spatial hash grid allocated in near-surface space to fully exploit the geometric information provided by LiDAR point clouds. Additionally, we present two street-level datasets, one virtual and one real-world, offering a comprehensive set of resources that existing public datasets lack. Experimental results demonstrate the superior performance of our method. Compared to the SOTA image-LiDAR combined neural implicit method, namely StreetSurf, ours significantly improves the F-score by approximately 7 percentage points. Our code and data are available at https://github.com/SCH1001/StreetRecon.
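The LiDAR-side supervision can be pictured as below: a sketch, assuming an MLP sdf_net that maps 3D points to signed distances, of the two standard ingredients of SDF constraints, a zero-crossing penalty at LiDAR returns plus eikonal regularisation. The paper's near-surface hash-grid allocation and loss weighting are not reproduced; the weight below is an assumption.

```python
import torch

def sdf_losses(sdf_net, lidar_pts, sample_pts, w_eik=0.1):
    """lidar_pts: (N, 3) LiDAR returns; sample_pts: (M, 3) near-surface samples."""
    surf = sdf_net(lidar_pts).abs().mean()          # SDF should vanish on measured surface
    sample_pts.requires_grad_(True)
    grad = torch.autograd.grad(sdf_net(sample_pts).sum(),
                               sample_pts, create_graph=True)[0]
    eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()   # unit-gradient (eikonal) regulariser
    return surf + w_eik * eik
```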
{"title":"Accurate and complete neural implicit surface reconstruction in street scenes using images and LiDAR point clouds","authors":"Chenhui Shi, Fulin Tang, Yihong Wu, Hongtu Ji, Hongjie Duan","doi":"10.1016/j.isprsjprs.2024.12.012","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.012","url":null,"abstract":"Surface reconstruction in street scenes is a critical task in computer vision and photogrammetry, with images and LiDAR point clouds being commonly used data sources. However, image-only reconstruction faces challenges such as lighting variations, weak textures, and sparse viewpoints, while LiDAR-only methods suffer from issues like sparse and noisy LiDAR point clouds. Effectively integrating these two modalities to leverage their complementary strengths remains an open problem. Inspired by recent advances in neural implicit representations, we propose a novel street-level neural implicit surface reconstruction approach that incorporates images and LiDAR point clouds into a unified framework for joint optimization. Three key components make our approach achieve state-of-the-art (SOTA) reconstruction performance with high accuracy and completeness in street scenes. First, we introduce an adaptive photometric constraint weighting method to mitigate the impacts of lighting variations and weak textures on reconstruction. Second, a new B-spline-based hierarchical hash encoder is proposed to ensure the continuity of gradient-derived normals and further to reduce the noise from images and LiDAR point clouds. Third, we implement effective signed distance field (SDF) constraints in a spatial hash grid allocated in near-surface space to fully exploit the geometric information provided by LiDAR point clouds. Additionally, we present two street-level datasets—one virtual and one real-world—offering a comprehensive set of resources that existing public datasets lack. Experimental results demonstrate the superior performance of our method. Compared to the SOTA image-LiDAR combined neural implicit method, namely StreetSurf, ours significantly improves the F-score by approximately 7 percentage points. Our code and data are available at <ce:inter-ref xlink:href=\"https://github.com/SCH1001/StreetRecon\" xlink:type=\"simple\">https://github.com/SCH1001/StreetRecon</ce:inter-ref>.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"14 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142889392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel deep learning algorithm for broad scale seagrass extent mapping in shallow coastal environments
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-22 | DOI: 10.1016/j.isprsjprs.2024.12.008
Jianghai Peng, Jiwei Li, Thomas C. Ingalls, Steven R. Schill, Hannah R. Kerner, Gregory P. Asner
Recently, the importance of seagrasses in the functioning of coastal ecosystems and their ability to mitigate climate change has gained increased recognition. However, seagrass ecosystems have deteriorated rapidly worldwide due to climate change and human-mediated disturbances. Accurate broad-scale mapping of seagrass extent is necessary for seagrass conservation and management actions. Traditionally, mapping methods have relied primarily on spectral information, along with additional data such as manually designed spatial/texture features (e.g., from the Gray Level Co-Occurrence Matrix) and satellite-derived bathymetry. Despite the widely reported success of prior methods in mapping seagrass across small geographic areas, two challenges remain in broad-scale seagrass extent mapping: 1) spectral overlap between seagrass and other benthic habitats, which results in the misclassification of coral/macroalgae as seagrass; and 2) the spatial and temporal variability of seagrass ecosystems, which makes it difficult for models trained on data from specific locations or time periods to generalize to locations or periods with different seagrass characteristics, such as density and species. In this study, we developed a novel deep learning model (Seagrass DenseNet: SGDenseNet) based on the DenseNet architecture to overcome these difficulties. The model was trained and validated using surface reflectance from Sentinel-2 MSI and 9,369 field data samples from four diverse regional shallow coastal water areas. Our model achieves an overall accuracy of 90% for seagrass extent mapping. Furthermore, we evaluated the model using 1,067 seagrass field data samples worldwide, achieving a producer’s accuracy of 81%. Our new deep learning model could be applied to map seagrass extent at very broad scales with high accuracy.
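A minimal sketch of the DenseNet ingredient named above: each layer's output is concatenated to all preceding feature maps, so later layers see every earlier feature. The band count, depth, and growth rate below are illustrative assumptions, not the SGDenseNet configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer sees all previous feature maps via channel concatenation."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(),
                          nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity
        return x

bands = 10                                        # assumed Sentinel-2 band count
head = nn.Conv2d(bands + 3 * 16, 2, 1)            # seagrass vs. non-seagrass logits
logits = head(DenseBlock(bands)(torch.randn(1, bands, 32, 32)))
```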
{"title":"A novel deep learning algorithm for broad scale seagrass extent mapping in shallow coastal environments","authors":"Jianghai Peng, Jiwei Li, Thomas C. Ingalls, Steven R. Schill, Hannah R. Kerner, Gregory P. Asner","doi":"10.1016/j.isprsjprs.2024.12.008","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.008","url":null,"abstract":"Recently, the importance of seagrasses in the functioning of coastal ecosystems and their ability to mitigate climate change has gained increased recognition. However, there has been a rapid global deterioration of seagrass ecosystems due to climate change and human-mediated disturbances. Accurate broad-scale mapping of seagrass extent is necessary for seagrass conservation and management actions. Traditionally, these mapping methods have primarily relied on spectral information, along with additional data such as manually designed spatial/texture features (e.g., from the Gray Level Co-Occurrence Matrix) and satellite-derived bathymetry. Despite the widely reported success of prior methods in mapping seagrass across small geographic areas, two challenges remain in broad-scale seagrass extent mapping: 1) spectral overlap between seagrass and other benthic habitats that results in the misclassification of coral/macroalgae to seagrass; 2) seagrass ecosystems exhibit spatial and temporal variability, most current models trained on data from specific locations or time periods encounter difficulties in generalizing to diverse locations or time periods with varying seagrass characteristics, such as density and species. In this study, we developed a novel deep learning model (i.e., Seagrass DenseNet: SGDenseNet) based on the DenseNet architecture to overcome these difficulties. The model was trained and validated using surface reflectance from Sentinel-2 MSI and 9,369 field data samples from four diverse regional shallow coastal water areas. Our model achieves an overall accuracy of 90% for seagrass extent mapping. Furthermore, we evaluated our deep learning model using 1,067 seagrass field data samples worldwide, achieving a producer’s accuracy of 81%. Our new deep learning model could be applied to map seagrass extents at a very broad-scale with high accuracy.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"283 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection of non-stand replacing disturbances (NSR) using Harmonized Landsat-Sentinel-2 time series
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-20 | DOI: 10.1016/j.isprsjprs.2024.12.014
Madison S. Brown, Nicholas C. Coops, Christopher Mulverhill, Alexis Achim
Non-stand replacing disturbances (NSRs) are events that do not result in complete removal of trees and generally occur either at low intensity over an extended period of time (e.g., insect infestation) or at spatially variable intensities over short time intervals (e.g., windthrow). These disturbances alter the quality and quantity of forest biomass, impacting timber supply and ecosystem services, making them critical to monitor over space and time. The increased accessibility of high-frequency-revisit, moderate-spatial-resolution satellite imagery has led to an increase in algorithms designed to detect sub-annual change in forested landscapes across broad spatial scales. One such algorithm, the Bayesian Estimator of Abrupt change, Seasonal change, and Trend (BEAST), has shown promise for sub-annual change detection in temperate forested environments. Here, we evaluate the sensitivity of BEAST for detecting NSRs across a range of severity levels and disturbance agents in central British Columbia (BC), Canada. Moderate-resolution satellite time series data were used by BEAST to produce rasters of change probability, which were compared to the occurrence, severity, and timing of disturbances as mapped by the annual British Columbia Aerial Overview Survey (BC AOS). Differences in the distributions of BEAST probabilities between agents and severity levels were then compared to undisturbed pixels. To determine the applicability of the algorithm for updating forest inventories, BEAST probability distributions of major NSRs (> 5 % of total AOS disturbed area) were compared between consecutive years of disturbance. Cumulatively, all levels of disturbance had higher, statistically significant (p < 0.05) mean BEAST change probabilities compared with historically undisturbed areas. Additionally, 16 disturbance agents observed in the area had significantly higher (p < 0.05) probabilities. All major NSRs showed an upward, statistically significant (p < 0.05) progression of BEAST probabilities over time, corresponding to increases in BC AOS mapped area. The sensitivity of BEAST change probabilities to a wide range of NSR disturbance agents at varying intensities suggests promising opportunities for earlier detection of NSRs to inform continuously updated forest inventories and potentially inform adaptation and mitigation actions.
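BEAST has an open-source implementation (the Rbeast package); the sketch below shows how a per-pixel change probability, the quantity rasterized in this study, can be pulled from its output. Argument and field names follow Rbeast's documented interface but may differ across versions, so treat the call as an assumption rather than the authors' exact pipeline.

```python
import numpy as np
import Rbeast as rb                     # pip install Rbeast

series = np.random.rand(120)            # stand-in for a monthly HLS index series
out = rb.beast(series, start=2015, deltat=1 / 12, season='harmonic')
change_prob = out.trend.cpOccPr         # posterior changepoint-occurrence probability
print(float(np.nanmax(change_prob)))    # peak change probability for this pixel
```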
{"title":"Detection of non-stand replacing disturbances (NSR) using Harmonized Landsat-Sentinel-2 time series","authors":"Madison S. Brown, Nicholas C. Coops, Christopher Mulverhill, Alexis Achim","doi":"10.1016/j.isprsjprs.2024.12.014","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.014","url":null,"abstract":"Non-stand replacing disturbances (NSRs) are events that do not result in complete removal of trees and generally occur at a low intensity over an extended period of time (e.g., insect infestation), or at spatially variable intensities over short time intervals (e.g., windthrow). These disturbances alter the quality and quantity of forest biomass, impacting timber supply and ecosystem services, making them critical to monitor over space and time. The increased accessibility of high frequency revisit, moderate spatial resolution satellite imagery, has led to a subsequent increase in algorithms designed to detect sub-annual change in forested landscapes across broad spatial scales. One such algorithm, the Bayesian Estimator of Abrupt change, Seasonal change, and Trend (BEAST) has shown promise with sub-annual change detection in temperate forested environments. Here, we evaluate the sensitivity of BEAST to detect NSRs across a range of severity levels and disturbance agents in Central British Columbia (BC), Canada. Moderate resolution satellite time series data were utilized by BEAST to produce rasters of change probability, which were compared to the occurrence, severity, and timing of disturbances as mapped by the annual British Columbia Aerial Overview Survey (BC AOS). Differences in the distributions of BEAST probabilities between agents and levels of severity were then compared to undisturbed pixels. In order to determine the applicability of the algorithm for updating forest inventories, BEAST probability distributions of major NSRs (&gt; 5 % of total AOS disturbed area) were compared between consecutive years of disturbances. Cumulatively, all levels of disturbances had higher and statistically significant (p &lt; 0.05) mean BEAST change probabilities compared with historically undisturbed areas. Additionally, 16 disturbance agents observed in the area had higher statistically significant (p &lt; 0.05) probabilities. All major NSRs showed an upwards and statistically significant (p &lt; 0.05) progression of BEAST probabilities over time corresponding to increases in BC AOS mapped area. The sensitivity of BEAST change probabilities to a wide range of NSR disturbance agents at varying intensities suggests promising opportunities for earlier detection of NSRs to inform continuously updating forest inventories and potentially inform adaptation and mitigation actions.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"22 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accurate spaceborne waveform simulation in heterogeneous forests using small-footprint airborne LiDAR point clouds
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-19 | DOI: 10.1016/j.isprsjprs.2024.11.020
Yi Li, Guangjian Yan, Weihua Li, Donghui Xie, Hailan Jiang, Linyuan Li, Jianbo Qi, Ronghai Hu, Xihan Mu, Xiao Chen, Shanshan Wei, Hao Tang
Spaceborne light detection and ranging (LiDAR) waveform sensors require accurate signal simulations to facilitate prelaunch calibration, postlaunch validation, and the development of land surface data products. However, accurately simulating spaceborne LiDAR waveforms over heterogeneous forests remains challenging because data-driven methods do not account for complicated pulse transport within heterogeneous canopies, whereas analytical radiative transfer models overly rely on assumptions about canopy structure and distribution. Thus, a comprehensive simulation method is needed to account for both the complexity of pulse transport within canopies and the structural heterogeneity of forests. In this study, we propose a framework for spaceborne LiDAR waveform simulation by integrating a new radiative transfer model – the canopy voxel radiative transfer (CVRT) model – with reconstructed three-dimensional (3D) voxel forest scenes from small-footprint airborne LiDAR (ALS) point clouds. The CVRT model describes the radiative transfer process within canopy voxels and uses fractional crown cover to account for within-voxel heterogeneity, minimizing the need for assumptions about canopy shape and distribution and significantly reducing the number of input parameters. All the parameters for scene construction and model inputs can be obtained from the ALS point clouds. The performance of the proposed framework was assessed by comparing the results to the simulated LiDAR waveforms from DART, Global Ecosystem Dynamics Investigation (GEDI) data over heterogeneous forest stands, and Land, Vegetation, and Ice Sensor (LVIS) data from the National Ecological Observatory Network (NEON) site. The results suggest that compared with existing models, the new framework with the CVRT model achieved improved agreement with both simulated and measured data, with an average R2 improvement of approximately 2% to 5% and an average RMSE reduction of approximately 0.5% to 3%. The proposed framework was also highly adaptive and robust to variations in model configurations, input data quality, and environmental attributes. In summary, this work extends current research on accurate and robust large-footprint LiDAR waveform simulations over heterogeneous forest canopies and could help refine product development for emerging spaceborne LiDAR missions.
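At its core, a large-footprint waveform forward model convolves the emitted pulse with a vertical profile of returned energy. The sketch below shows only that generic step; the CVRT model's within-voxel radiative transfer and fractional crown cover are reduced to a pre-computed energy profile, and the pulse width and bin size are illustrative assumptions.

```python
import numpy as np

def simulate_waveform(energy_profile, pulse_fwhm_m=2.5, dz=0.15):
    """Convolve a Gaussian transmit pulse with a vertical energy-return profile."""
    sigma = pulse_fwhm_m / 2.355 / dz                  # FWHM (m) -> bins
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    pulse = np.exp(-0.5 * (t / sigma) ** 2)
    return np.convolve(energy_profile, pulse, mode='same')

z = np.arange(0, 30, 0.15)                             # height bins (m)
profile = 0.6 * np.exp(-0.5 * ((z - 20) / 2) ** 2)     # crown return, cover = 0.6
profile[0] += 0.4                                      # ground return at z = 0
waveform = simulate_waveform(profile)
```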
{"title":"Accurate spaceborne waveform simulation in heterogeneous forests using small-footprint airborne LiDAR point clouds","authors":"Yi Li, Guangjian Yan, Weihua Li, Donghui Xie, Hailan Jiang, Linyuan Li, Jianbo Qi, Ronghai Hu, Xihan Mu, Xiao Chen, Shanshan Wei, Hao Tang","doi":"10.1016/j.isprsjprs.2024.11.020","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.11.020","url":null,"abstract":"Spaceborne light detection and ranging (LiDAR) waveform sensors require accurate signal simulations to facilitate prelaunch calibration, postlaunch validation, and the development of land surface data products. However, accurately simulating spaceborne LiDAR waveforms over heterogeneous forests remains challenging because data-driven methods do not account for complicated pulse transport within heterogeneous canopies, whereas analytical radiative transfer models overly rely on assumptions about canopy structure and distribution. Thus, a comprehensive simulation method is needed to account for both the complexity of pulse transport within canopies and the structural heterogeneity of forests. In this study, we propose a framework for spaceborne LiDAR waveform simulation by integrating a new radiative transfer model – the canopy voxel radiative transfer (CVRT) model – with reconstructed three-dimensional (3D) voxel forest scenes from small-footprint airborne LiDAR (ALS) point clouds. The CVRT model describes the radiative transfer process within canopy voxels and uses fractional crown cover to account for within-voxel heterogeneity, minimizing the need for assumptions about canopy shape and distribution and significantly reducing the number of input parameters. All the parameters for scene construction and model inputs can be obtained from the ALS point clouds. The performance of the proposed framework was assessed by comparing the results to the simulated LiDAR waveforms from DART, Global Ecosystem Dynamics Investigation (GEDI) data over heterogeneous forest stands, and Land, Vegetation, and Ice Sensor (LVIS) data from the National Ecological Observatory Network (NEON) site. The results suggest that compared with existing models, the new framework with the CVRT model achieved improved agreement with both simulated and measured data, with an average R<ce:sup loc=\"post\">2</ce:sup> improvement of approximately 2% to 5% and an average RMSE reduction of approximately 0.5% to 3%. The proposed framework was also highly adaptive and robust to variations in model configurations, input data quality, and environmental attributes. In summary, this work extends current research on accurate and robust large-footprint LiDAR waveform simulations over heterogeneous forest canopies and could help refine product development for emerging spaceborne LiDAR missions.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"64 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D automatic detection and correction for phase unwrapping errors in time series SAR interferometry
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-19 | DOI: 10.1016/j.isprsjprs.2024.12.013
Ying Liu, Hong’an Wu, Yonghong Zhang, Zhong Lu, Yonghui Kang, Jujie Wei
Phase unwrapping (PhU) is one of the most critical steps in synthetic aperture radar interferometry (InSAR). However, current phase unwrapping methods cannot completely avoid PhU errors, particularly in complex environments with low coherence. Here, we show that PhU errors can be corrected well using time series interferograms. We propose a three-dimensional automatic detection and correction (3D-ADAC) method based on phase closure for time-series InSAR PhU errors to improve the quality of the interferograms, especially for regions where identical errors in different interferograms cancel each other out in the phase closure. The 3D-ADAC algorithm was evaluated on 26 Sentinel-1 SAR images and 72 phase closure loops over the Tianjin region, China, and compared with the popular MintPy and CorPhU methods. Our results demonstrate that the number of new arcs with a model coherence coefficient greater than 0.7 achieved by the proposed method is 2.36 times that of the method used in the MintPy software and 3.07 times that of the CorPhU method. The corrected and improved interferograms will be helpful for accurately mapping ground deformation or Earth topography via InSAR. Codes and data are available at https://github.com/Lylionaurora/code3d-ADCD.
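The phase-closure principle behind the detection step can be stated compactly: for interferograms ij, jk, and ik, the unwrapped phases should sum to an integer multiple of 2π, and a nonzero integer flags an unwrapping error of that many cycles within the triplet. A minimal sketch of that test follows; the 3D grouping and correction logic of 3D-ADAC is not reproduced.

```python
import numpy as np

def closure_cycles(unw_ij, unw_jk, unw_ik):
    """Integer-cycle PhU error map for the triplet (i, j), (j, k), (i, k)."""
    closure = unw_ij + unw_jk - unw_ik                 # per-pixel closure phase
    return np.rint(closure / (2 * np.pi)).astype(int)  # 0 = consistent, else error

# Example: a 2*pi jump planted in one interferogram is flagged as one cycle.
a = np.zeros((3, 3)); b = np.zeros((3, 3)); c = np.zeros((3, 3))
a[1, 1] = 2 * np.pi
print(closure_cycles(a, b, c))  # 1 at the planted error, 0 elsewhere
```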
{"title":"3D automatic detection and correction for phase unwrapping errors in time series SAR interferometry","authors":"Ying Liu, Hong’an Wu, Yonghong Zhang, Zhong Lu, Yonghui Kang, Jujie Wei","doi":"10.1016/j.isprsjprs.2024.12.013","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.013","url":null,"abstract":"Phase unwrapping (PhU) is one of the most critical steps in synthetic aperture radar interferometry (InSAR) technology. However, the current phase unwrapping methods cannot completely avoid the PhU errors, particularly in complex environments with low coherence. Here, we show that the PhU errors can be corrected well with the time series interferograms. We propose a three-dimensional automatic detection and correction (3D-ADAC) method based on phase closure for time-series InSAR PhU errors to improve the quality of the interferograms, especially for the regions with the same errors in different interferograms which cancel each other out in phase closure. The 3D-ADAC algorithm was evaluated with 26 Sentinel-1 SAR images and 72 phase closure loops over the Tianjin region, China, and compared with the popular MintPy and CorPhU methods. Our results demonstrate that the number of new arcs with model coherence coefficient greater than 0.7 achieved by the proposed method is 2.36 times that by the method used in the MintPy software and 3.07 times that by the CorPhU method. The corrected and improved interferograms will be helpful for accurately mapping the ground deformations or Earth topographies via InSAR technology. Codes and data are available at https://github.com/Lylionaurora/code3d-ADCD.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"1 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A coupled optical–radiometric modeling approach to removing reflection noise in TLS data of urban areas
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-18 | DOI: 10.1016/j.isprsjprs.2024.12.005
Li Fang, Tianyu Li, Yanghong Lin, Shudong Zhou, Wei Yao
Point clouds, a fundamental type of 3D data, play an essential role in applications such as 3D reconstruction, autonomous driving, and robotics. However, point clouds generated by TLS, which measures the time of flight of emitted and backscattered laser pulses, frequently include false points caused by mirror-like reflective surfaces, degrading data quality and fidelity. This study introduces an algorithm to eliminate reflection noise from TLS scan data. Our algorithm detects reflection planes by utilizing both geometric and physical characteristics, recognizing reflection points according to optical reflection theory. Radiometric correction is applied to the raw laser intensity, after which reflective planes are extracted using a threshold. In the virtual point identification phase, these points are detected along the light propagation path, based on the specular reflection principle. Moreover, an improved feature descriptor, RE-LFSH, is employed to assess the similarity between two points in terms of reflection symmetry. We have adapted the LFSH feature descriptor to retain reflection features, mitigating interference from symmetrical architectural structures. Incorporating the Hausdorff feature distance into the algorithm strengthens its resistance to ghosting and deformation, boosting the accuracy of virtual point detection. Additionally, to address the shortage of annotated datasets, we introduce a novel benchmark dataset named 3DRN, specifically designed for this task. Extensive experiments on the 3DRN benchmark, featuring diverse urban environments with virtual TLS reflection noise, show that our algorithm improves precision and recall for 3D points in reflective areas by 57.03% and 31.80%, respectively. Our approach improves outlier detection by 9.17% and enhances accuracy by 5.65% compared with leading methods. The 3DRN dataset is available at https://github.com/Tsuiky/3DRN.
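The specular geometry used in the virtual point identification phase reduces to mirroring across a detected plane: a false point recorded through a reflective surface is the mirror image of the real one. A minimal sketch, assuming the plane is already estimated as a unit normal n with offset d, is shown below.

```python
import numpy as np

def reflect_across_plane(p, n, d):
    """Mirror point p across the plane n . x = d (n is normalised internally)."""
    n = n / np.linalg.norm(n)
    return p - 2.0 * (p @ n - d) * n

virtual = np.array([2.0, 1.0, 3.0])                  # false point behind a glass facade
real_guess = reflect_across_plane(virtual, np.array([0.0, 0.0, 1.0]), 0.0)
print(real_guess)                                    # [ 2.  1. -3.]
```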
{"title":"A coupled optical–radiometric modeling approach to removing reflection noise in TLS data of urban areas","authors":"Li Fang, Tianyu Li, Yanghong Lin, Shudong Zhou, Wei Yao","doi":"10.1016/j.isprsjprs.2024.12.005","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.005","url":null,"abstract":"Point clouds, which are a fundamental type of 3D data, play an essential role in various applications like 3D reconstruction, autonomous driving, and robotics. However, point clouds generated via measuring the time-of-flight of emitted and backscattered laser pulses of TLS, frequently include false points caused by mirror-like reflective surfaces, resulting in degradation of data quality and fidelity. This study introduces an algorithm to eliminate reflection noise from TLS scan data. Our novel algorithm detects reflection planes by utilizing both geometric and physical characteristics to recognize reflection points according to optical reflection theory. Radiometric correction is applied to the raw laser intensity, after which reflective planes are extracted using a threshold. In the virtual points identification phase, these points are detected along the light propagation path, grounded on the specular reflection principle. Moreover, an improved feature descriptor, known as RE-LFSH, is employed to assess the similarity between two points in terms of reflection symmetry. We have adapted the LFSH feature descriptor to retain reflection features, mitigating interference from symmetrical architectural structures. Incorporating the Hausdorff feature distance into the algorithm fortifies its resistance to ghosting and deformations, thereby boosting the accuracy of virtual point detection. Additionally, to overcome the shortage of annotated datasets, a novel benchmark dataset named 3DRN, specifically designed for this task, is introduced. Extensive experiments on the 3DRN benchmark dataset, featuring diverse urban environments with virtual TLS reflection noise, show our algorithm improves precision and recall rates for 3D points in reflective areas by 57.03% and 31.80%, respectively. Our approach improves outlier detection by 9.17% and enhances accuracy by 5.65% compared to leading methods. You can access the 3DRN dataset at <ce:inter-ref xlink:href=\"https://github.com/Tsuiky/3DRN\" xlink:type=\"simple\">https://github.com/Tsuiky/3DRN</ce:inter-ref>.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"46 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Synthesis of complex-valued InSAR data with a multi-task convolutional neural network
IF 12.7 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-12-18 | DOI: 10.1016/j.isprsjprs.2024.12.007
Philipp Sibler, Francescopaolo Sica, Michael Schmitt
Simulated remote sensing images bear great potential for many applications in Earth observation. They can be used as a controlled testbed for the development of signal and image processing algorithms or provide a means to assess the potential of new sensor concepts. With the rise of deep learning, the synthesis of artificial remote sensing images by deep neural networks has become a hot research topic. While the generation of optical data is relatively straightforward, since it can rely on established models from the computer vision community, the generation of synthetic aperture radar (SAR) data has so far been largely restricted to intensity images, because the processing of complex-valued data by conventional neural networks poses significant challenges. In this work, we propose to circumvent these challenges by decomposing SAR interferograms into real-valued components. These components are simultaneously synthesized by different branches of a multi-branch encoder-decoder network architecture and can then be recombined into the final, complex-valued interferogram. Moreover, the effect of speckle and interferometric phase noise is replicated and applied to the synthesized interferometric data. Experimental results on both medium-resolution C-band repeat-pass SAR data and high-resolution X-band single-pass SAR data demonstrate the general feasibility of the approach.
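A minimal sketch of the decomposition and recombination step, assuming amplitude plus cosine/sine-of-phase channels, which is one common real-valued parameterization but not necessarily the exact one used by the authors:

```python
import numpy as np

def decompose(ifg):
    """Complex interferogram -> three real-valued channels."""
    return np.abs(ifg), np.cos(np.angle(ifg)), np.sin(np.angle(ifg))

def recombine(amp, c, s):
    """Inverse mapping back to a complex-valued interferogram."""
    return amp * np.exp(1j * np.arctan2(s, c))

ifg = np.exp(1j * np.random.uniform(-np.pi, np.pi, (4, 4)))
amp, c, s = decompose(ifg)
assert np.allclose(recombine(amp, c, s), ifg)        # lossless round trip
```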
{"title":"Synthesis of complex-valued InSAR data with a multi-task convolutional neural network","authors":"Philipp Sibler, Francescopaolo Sica, Michael Schmitt","doi":"10.1016/j.isprsjprs.2024.12.007","DOIUrl":"https://doi.org/10.1016/j.isprsjprs.2024.12.007","url":null,"abstract":"Simulated remote sensing images bear great potential for many applications in the field of Earth observation. They can be used as controlled testbed for the development of signal and image processing algorithms or can provide a means to get an impression of the potential of new sensor concepts. With the rise of deep learning, the synthesis of artificial remote sensing images by means of deep neural networks has become a hot research topic. While the generation of optical data is relatively straightforward, as it can rely on the use of established models from the computer vision community, the generation of synthetic aperture radar (SAR) data until now is still largely restricted to intensity images since the processing of complex-valued numbers by conventional neural networks poses significant challenges. With this work, we propose to circumvent these challenges by decomposing SAR interferograms into real-valued components. These components are then simultaneously synthesized by different branches of a multi-branch encoder–decoder network architecture. In the end, these real-valued components can be combined again into the final, complex-valued interferogram. Moreover, the effect of speckle and interferometric phase noise is replicated and applied to the synthesized interferometric data. Experimental results on both medium-resolution C-band repeat-pass SAR data and high-resolution X-band single-pass SAR data, demonstrate the general feasibility of the approach.","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"24 1","pages":""},"PeriodicalIF":12.7,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0