
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

In-orbit detection of the spectral smile for the Mars Mineral Spectrometer
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-30 | DOI: 10.1016/j.isprsjprs.2024.07.023

As a payload of Tianwen-1 (TW-1), the Mars Mineral Spectrometer (MMS) is tasked with acquiring hyperspectral data of the Martian surface to detect material composition. Microdeformations in its optical, mechanical, and thermal components distort the spectral response of the MMS in orbit, leading to systematic changes in pixel central wavelengths and full width at half maximum (FWHM). Known as the spectral smile, this distortion compromises the accuracy of reflectance inversion and material composition detection. This study introduces a method for detecting the spectral smile through the Martian atmospheric absorption channel, capitalizing on the distinct composition and absorption patterns of the Martian atmosphere. A suitable technical route for in-orbit spectral smile detection was established and tested using simulation experiments and MMS-acquired hyperspectral data. Results suggest that the proposed method can retrieve central wavelength shifts with a maximum error of 0.32 nm and FWHM variations with a maximum error of 1.95 nm. Employing in-orbit spectral smile detection markedly enhances the correction of Martian atmospheric absorption and provides technical support for Martian surface reflectance inversion. The code is available at https://github.com/wubingnote/MMS-Spectral-Smile.
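
As a purely illustrative sketch (not the authors' implementation), the snippet below shows one way such a detection step could be posed: a channel's central-wavelength shift and FWHM change are estimated by least-squares matching of observed band-averaged values against a reference atmospheric transmittance convolved with a Gaussian spectral response function. The reference transmittance, channel centers, and nominal FWHM are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_srf(wl, center, fwhm):
    """Gaussian spectral response function sampled on wavelength grid wl (nm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return g / g.sum()

def simulate_band(wl, transmittance, center, fwhm):
    """Band-averaged transmittance seen by a channel with the given SRF."""
    return np.sum(gaussian_srf(wl, center, fwhm) * transmittance)

def fit_smile(wl, transmittance, nominal_centers, nominal_fwhm, observed):
    """Estimate a common center shift and FWHM change by least squares."""
    def cost(params):
        d_center, d_fwhm = params
        model = np.array([
            simulate_band(wl, transmittance, c + d_center, nominal_fwhm + d_fwhm)
            for c in nominal_centers
        ])
        return np.sum((model - observed) ** 2)
    res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
    return res.x  # (center shift in nm, FWHM change in nm)

if __name__ == "__main__":
    # Toy reference: a single absorption line near 2000 nm (placeholder, not MMS data).
    wl = np.linspace(1950.0, 2050.0, 2001)
    transmittance = 1.0 - 0.5 * np.exp(-0.5 * ((wl - 2000.0) / 3.0) ** 2)
    nominal_centers = np.arange(1980.0, 2021.0, 5.0)
    true_shift, true_dfwhm = 1.2, 0.8
    observed = np.array([
        simulate_band(wl, transmittance, c + true_shift, 10.0 + true_dfwhm)
        for c in nominal_centers
    ])
    print(fit_smile(wl, transmittance, nominal_centers, 10.0, observed))
```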

Citations: 0
Adaptive variational decomposition for water-related optical image enhancement
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-30 | DOI: 10.1016/j.isprsjprs.2024.07.013

Underwater images suffer from blurred details and color distortion due to light attenuation caused by scattering and absorption. Current underwater image enhancement (UIE) methods overlook the effects of forward scattering, making it difficult to address low contrast and blurriness. To address the challenges caused by forward and backward scattering, we propose a novel variational, adaptive method for removing scattering components. Our method handles both forward and backward scattering and effectively removes interference from suspended particles, significantly enhancing image clarity and contrast for underwater applications. Specifically, a backward-scattering pre-processing step corrects erroneous pixel interference, and histogram equalization removes color bias and improves image contrast. The backward-scattering noise removal step in the variational model uses horizontal and vertical gradients as constraints; however, it removes only a small portion of the forward-scattering components caused by light deviation. We therefore develop an adaptive method based on the Manhattan distance to remove forward scattering completely. Our approach integrates prior knowledge to construct penalty terms and uses a fast solver to achieve strong decoupling of incident light and reflectance. By combining variational methods with histogram equalization, we effectively enhance image contrast and color correction. Our method outperforms state-of-the-art methods on the UIEB dataset, achieving UCIQE and URanker scores of 0.636 and 2.411, respectively.
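
As a rough, hypothetical illustration of two ingredients mentioned above, per-channel histogram equalization for color-cast removal and a gradient-penalized smoothing term, the sketch below minimizes a simple quadratic-gradient energy with explicit gradient descent; it is not the authors' variational model, penalty terms, or fast solver.

```python
import numpy as np

def equalize_channel(ch):
    """Per-channel histogram equalization for an 8-bit channel (reduces color cast)."""
    hist, _ = np.histogram(ch.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf[ch].astype(np.uint8)

def smooth_scatter_layer(f, lam=2.0, step=0.1, n_iter=200):
    """Estimate a smooth (backscatter-like) layer by minimizing
    ||u - f||^2 + lam * ||grad u||^2 with explicit gradient descent."""
    f = f.astype(np.float64)
    u = f.copy()
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += step * ((f - u) + lam * lap)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.linspace(60, 200, 64 * 64).reshape(64, 64)          # smooth haze-like gradient
    img = np.clip(base[..., None] + rng.normal(0, 20, (64, 64, 3)), 0, 255).astype(np.uint8)
    eq = np.stack([equalize_channel(img[..., c]) for c in range(3)], axis=-1)
    scatter = smooth_scatter_layer(eq[..., 0])
    detail = eq[..., 0] - scatter                                  # scatter-removed residual
    print(eq.shape, round(scatter.mean(), 1), round(detail.std(), 1))
```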

Citations: 0
Simulated SAR prior knowledge guided evidential deep learning for reliable few-shot SAR target recognition
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-30 | DOI: 10.1016/j.isprsjprs.2024.07.014

Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) plays a pivotal role in civilian and military applications. However, the limited number of labeled samples presents a significant challenge for deep learning-based SAR ATR. Few-shot learning (FSL) offers a potential solution, but models trained with limited samples may produce incorrect results with high confidence, which can mislead decision-makers. To address this, we introduce uncertainty estimation into SAR ATR and propose Prior knowledge-guided Evidential Deep Learning (Prior-EDL) to ensure reliable recognition in FSL. Inspired by Bayesian principles, Prior-EDL leverages prior knowledge for improved predictions and uncertainty estimation. We use a deep learning model pre-trained on simulated SAR data to discover category correlations and represent them as label distributions. This knowledge is then embedded into the target model via a Prior-EDL loss function, which selectively uses the prior knowledge of samples because of the distribution shift between simulated and real data. To unify the discovery and embedding of prior knowledge, we propose a framework based on the teacher-student network. Our approach improves the model's evidence assignment, its uncertainty estimation performance, and its target recognition accuracy. Extensive experiments on the MSTAR dataset demonstrate the effectiveness of Prior-EDL, achieving recognition accuracies of 70.19% and 92.97% in 4-way 1-shot and 4-way 20-shot scenarios, respectively. For out-of-distribution data, Prior-EDL outperforms other uncertainty estimation methods. The code is available at https://github.com/Xiaoyan-Zhou/Prior-EDL/.
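
A minimal sketch of the evidential output idea that Prior-EDL builds on, assuming a softplus evidence head, the standard Dirichlet parameterization, and the digamma-form EDL loss; the prior-knowledge embedding and the teacher-student framework are not reproduced here.

```python
import numpy as np
from scipy.special import digamma

def softplus(x):
    # Numerically stable softplus: max(x, 0) + log(1 + exp(-|x|)).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def evidential_head(logits):
    """Map raw network outputs to Dirichlet parameters, probabilities, and uncertainty."""
    evidence = softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(axis=-1, keepdims=True)
    prob = alpha / strength                     # expected class probabilities
    k = logits.shape[-1]
    uncertainty = k / strength.squeeze(-1)      # "I don't know" mass in (0, 1]
    return alpha, prob, uncertainty

def edl_loss(alpha, onehot):
    """Digamma-form evidential loss, averaged over the batch."""
    strength = alpha.sum(axis=-1, keepdims=True)
    return np.mean(np.sum(onehot * (digamma(strength) - digamma(alpha)), axis=-1))

if __name__ == "__main__":
    logits = np.array([[4.0, 0.1, -1.0, 0.0],    # confident sample
                       [0.1, 0.0, 0.1, 0.0]])    # ambiguous sample
    onehot = np.eye(4)[[0, 2]]
    alpha, prob, u = evidential_head(logits)
    print("probs:", np.round(prob, 3))
    print("uncertainty:", np.round(u, 3))        # higher for the ambiguous sample
    print("loss:", edl_loss(alpha, onehot))
```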

Citations: 0
Towards a gapless 1 km fractional snow cover via a data fusion framework
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-29 | DOI: 10.1016/j.isprsjprs.2024.07.018

Accurate quantification of snow cover facilitates the prediction of snowmelt runoff, the assessment of freshwater availability, and the analysis of Earth’s energy balance. Existing fractional snow cover (FSC) data, however, often suffer from limitations such as spatial and temporal gaps, compromised accuracy, and coarse spatial resolution. These limitations significantly hinder the ability to monitor snow cover dynamics effectively. To address these formidable challenges, this study introduces a novel data fusion framework specifically designed to generate high-resolution (1 km) daily FSC estimation across vast regions like North America, regardless of weather conditions. It achieves this by effectively integrating the complementary spatiotemporal characteristics of both coarse- and fine-resolution FSC data through a multi-stage processing pipeline. This pipeline incorporates innovative strategies for bias correction, gap filling, and consideration of dynamic characteristics of snow cover, ultimately leading to high accuracy and high spatiotemporal completeness in the fused FSC data. The accuracy of the fused FSC data was thoroughly evaluated over the study period (September 2015 to May 2016), demonstrating excellent consistency with independent datasets, including Landsat-derived FSC (24 scenes in total; RMSE = 6.8–18.9%) and ground-based snow observations (14,350 stations). Notably, the fused data outperforms the widely used Interactive Multi-sensor Snow and Ice Mapping System (IMS) daily snow cover extent data in overall accuracy (0.92 vs. 0.91), F1 score (0.86 vs. 0.83), and Kappa coefficient (0.80 vs. 0.77). Furthermore, the fused FSC data exhibits superior performance in accurately capturing the intricate daily snow cover dynamics compared to IMS data, as confirmed by superior agreement with ground-based observations in four snow-cover phenology metrics. In conclusion, the proposed data fusion framework offers a significant advancement in snow cover monitoring by generating high-accuracy, spatiotemporally complete daily FSC maps that effectively capture the spatial and temporal variability of snow cover. These FSC datasets hold substantial value for climate projections, hydrological studies, and water management at both global and regional scales.
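
Purely as an illustration of the fusion pattern (not the study's multi-stage pipeline), the sketch below assumes a gapless but biased coarse FSC field and an accurate but gappy fine-resolution FSC field: the coarse field is bias-corrected by linear regression over jointly valid pixels and then used to fill the fine-resolution gaps.

```python
import numpy as np

def fuse_fsc(coarse_fsc, fine_fsc):
    """Fill gaps (NaN) in fine-resolution FSC with bias-corrected coarse FSC.
    Both inputs are assumed to be co-registered 2-D arrays with values in [0, 1]."""
    valid = ~np.isnan(fine_fsc) & ~np.isnan(coarse_fsc)
    # Linear bias correction: fine ~ a * coarse + b over jointly valid pixels.
    a, b = np.polyfit(coarse_fsc[valid], fine_fsc[valid], 1)
    corrected = np.clip(a * coarse_fsc + b, 0.0, 1.0)
    return np.where(np.isnan(fine_fsc), corrected, fine_fsc)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.clip(rng.normal(0.5, 0.2, size=(100, 100)), 0, 1)
    coarse = np.clip(0.8 * truth + 0.1 + rng.normal(0, 0.05, truth.shape), 0, 1)
    fine = truth.copy()
    fine[rng.random(truth.shape) < 0.4] = np.nan   # simulated cloud gaps
    fused = fuse_fsc(coarse, fine)
    rmse = np.sqrt(np.mean((fused - truth) ** 2))
    print("gap fraction:", round(np.isnan(fine).mean(), 2), "fused RMSE:", round(rmse, 3))
```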

Citations: 0
PriNeRF: Prior constrained Neural Radiance Field for robust novel view synthesis of urban scenes with fewer views
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-29 | DOI: 10.1016/j.isprsjprs.2024.07.015

Novel view synthesis (NVS) of urban scenes enables cities to be explored virtually and interactively, which can further be used for urban planning, navigation, digital tourism, etc. However, many current NVS methods require a large number of images from known views as input and are sensitive to intrinsic and extrinsic camera parameters. In this paper, we propose a new unified framework for NVS of urban scenes that requires fewer views, via the integration of scene priors and the joint optimization of camera parameters under a geometric constraint along with the NeRF weights. The integration of scene priors makes full use of the priors from neighboring reference views to reduce the number of required known views. The joint optimization corrects errors in the camera parameters, which are usually derived from algorithms such as Structure-from-Motion (SfM), and thereby further improves the quality of the generated novel views. Experiments show that our method achieves about 25.375 dB and 25.512 dB on average in terms of peak signal-to-noise ratio (PSNR) on synthetic and real data, respectively. It outperforms popular state-of-the-art methods (i.e., BungeeNeRF and MegaNeRF) by about 2–4 dB in PSNR. Notably, our method achieves better or competitive results compared with the baseline method while requiring only one third of the known view images required by the baseline. The code and dataset are available at https://github.com/Dongber/PriNeRF.
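
For reference, the PSNR values quoted above follow the usual definition; a minimal sketch, assuming images normalized to [0, 1], is given below.

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a rendered and a reference image."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((256, 256, 3))
    noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
    print(round(psnr(noisy, ref), 2), "dB")   # roughly 26 dB for sigma = 0.05
```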

Citations: 0
A semi-supervised multi-temporal landslide and flash flood event detection methodology for unexplored regions using massive satellite image time series
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-29 | DOI: 10.1016/j.isprsjprs.2024.07.010

Landslides and flash floods are geomorphic hazards (GH) that often co-occur and interact and frequently lead to societal and environmental impacts. Compiling detailed multi-temporal inventories of GH events over a variety of contrasting natural and human-influenced landscapes is essential to understanding their behavior in both space and time and makes it possible to unravel the human drivers from the natural baselines. Yet, creating multi-temporal inventories of these GH events remains difficult and costly in terms of human labor, especially when relatively large regions are investigated. Methods to derive GH locations from satellite optical imagery have been continuously developed and have shown a clear shift in recent years from conventional methodologies such as thresholding and regression to machine learning (ML) methodologies, given their improved predictive performance. However, current-generation ML methodologies generally rely on accurate information on either the GH location (training samples) or the GH timing (pre- and post-event imagery), making them unsuitable for unexplored regions without a priori information on GH occurrences. A detection methodology for creating multi-temporal GH event inventories that is applicable to relatively large unexplored areas containing a variety of landscapes does not yet exist. We present a new semi-supervised methodology that detects both the location and the timing of GH event occurrence from optical time series, while minimizing manual user intervention. We use the peak of the cumulative difference to the mean for a multitude of spectral indices derived from open-access, high spatial resolution (10–20 m) Copernicus Sentinel-2 time series and generate a map per Sentinel-2 tile that identifies impacted pixels and their related timing. These maps are used to identify GH event impacted zones. We then use the generated maps, the identified GH event impacted zones, and the automatically derived timing as training samples in a Random Forest classifier to improve the spatial detection accuracy within the impacted zones. We showcase the methodology on six Sentinel-2 tiles in the tropical East African Rift, where we detect 29 GH events between 2016 and 2021. We use 12 of these GH events (totaling ∼3900 GH features) with varying times of occurrence, contrasting landscape conditions, and different landslide to flash flood ratios to validate the detection methodology. The average identified timing of the GH events lies within two to four weeks of their actual occurrence. The sensitivity of the methodology is mainly influenced by differences in landscapes, the amount of cloud cover, and the size of the GH events. Our methodology is applicable in various landscapes, can be run in a systematic mode, depends on only a few parameters, and is adapted for massive computation.
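
A hypothetical, minimal version of the core detector described above, taking the peak of the cumulative difference to the mean of a spectral-index time series to flag an abrupt, persistent change and its timing, could look like the following; the NDVI values, threshold, and dates are synthetic placeholders.

```python
import numpy as np

def detect_event(index_series, dates, min_drop=0.15):
    """Flag an abrupt, persistent drop in a spectral-index time series.

    The peak of the absolute cumulative difference to the mean marks the most
    likely change point; the pixel is considered impacted if the mean after the
    change point is at least `min_drop` below the mean before it."""
    x = np.asarray(index_series, dtype=np.float64)
    cusum = np.cumsum(x - x.mean())
    t = int(np.argmax(np.abs(cusum)))        # change point: peak of |cumulative difference|
    if t == 0 or t >= len(x) - 1:
        return False, None
    drop = x[:t + 1].mean() - x[t + 1:].mean()
    impacted = drop >= min_drop
    return bool(impacted), (dates[t + 1] if impacted else None)

if __name__ == "__main__":
    dates = np.arange("2020-01", "2021-01", dtype="datetime64[M]")
    ndvi = np.array([0.65, 0.68, 0.70, 0.66, 0.67, 0.69,    # stable vegetation
                     0.30, 0.28, 0.32, 0.31, 0.33, 0.30])   # post-event bare surface
    print(detect_event(ndvi, dates))         # (True, numpy.datetime64('2020-07'))
```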

Citations: 0
Coherence bias mitigation through regularized tapered coherence matrix for phase linking in decorrelated environments
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-27 | DOI: 10.1016/j.isprsjprs.2024.07.016

The phase linking technique has been shown to mitigate the decorrelation effect in time series interferometric synthetic aperture radar (InSAR) data. By imposing the temporal phase-closure constraint, this technique reconstructs a consistent phase series from the complex sample coherence matrix (SCM). However, the bias of coherence estimates degrades the performance of phase linking, especially in near-zero coherence environments with limited spatial sample support. In this study, we present a methodology to enhance phase linking, with an emphasis on SCM refinement. The idea is to shrink the tapered SCM towards a scaled identity matrix by exploiting the inner correlation and the coherence loss trend in the SCM. This allows the SCM magnitude to be debiased even with a small sample size. We demonstrate the performance of this method through simulations and real case studies using Sentinel-1 data over Hawaii Island. Results from comprehensive comparisons validate the effectiveness of the coherence matrix estimation and the enhancement to phase linking in different coherence scenarios. The source code and sample dataset are available at https://www.mathworks.com/matlabcentral/fileexchange/169553-insar-phase-linking-enhancement-by-scm-refinement.
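
A minimal sketch of the kind of estimator implied here, under assumed settings (an exponential taper over temporal baseline and a fixed shrinkage weight rather than the paper's regularization): the sample coherence matrix is computed from a stack of co-registered SLC pixels, element-wise tapered, and shrunk toward a scaled identity matrix.

```python
import numpy as np

def sample_coherence_matrix(slc):
    """Sample coherence matrix from a (n_images, n_samples) stack of complex SLC pixels."""
    cov = (slc @ slc.conj().T) / slc.shape[1]
    power = np.sqrt(np.real(np.diag(cov)))
    return cov / np.outer(power, power)

def regularized_tapered_scm(scm, decay=10.0, shrink=0.3):
    """Apply an exponential temporal-baseline taper, then shrink toward a scaled identity."""
    n = scm.shape[0]
    lags = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    taper = np.exp(-lags / decay)                       # down-weights long temporal baselines
    tapered = scm * taper
    target = (np.trace(tapered).real / n) * np.eye(n)   # scaled identity target
    return (1.0 - shrink) * tapered + shrink * target

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_img, n_samp = 20, 30                              # deliberately small sample support
    true_coh = np.exp(-np.abs(np.subtract.outer(np.arange(n_img), np.arange(n_img))) / 5.0)
    L = np.linalg.cholesky(true_coh + 1e-9 * np.eye(n_img))
    noise = (rng.standard_normal((n_img, n_samp)) +
             1j * rng.standard_normal((n_img, n_samp))) / np.sqrt(2)
    slc = L @ noise
    scm = sample_coherence_matrix(slc)
    reg = regularized_tapered_scm(scm)
    print(np.round(np.abs(scm[0, -1]), 3), "->", np.round(np.abs(reg[0, -1]), 3))
```

The taper decay and shrinkage weight here are arbitrary; stronger shrinkage suppresses more of the small-sample coherence bias at the cost of flattening genuine coherence structure.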

Citations: 0
Point2Building: Reconstructing buildings from airborne LiDAR point clouds
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-26 | DOI: 10.1016/j.isprsjprs.2024.07.012

We present a learning-based approach to reconstructing buildings as 3D polygonal meshes from airborne LiDAR point clouds. What makes 3D building reconstruction from airborne LiDAR difficult is the large diversity of building designs, especially roof shapes, the low and varying point density across the scene, and the often incomplete coverage of building facades due to occlusions by vegetation or the sensor’s viewing angle. To cope with the diversity of shapes and inhomogeneous and incomplete object coverage, we introduce a generative model that directly predicts 3D polygonal meshes from input point clouds. Our autoregressive model, called Point2Building, iteratively builds up the mesh by generating sequences of vertices and faces. This approach enables our model to adapt flexibly to diverse geometries and building structures. Unlike many existing methods that rely heavily on pre-processing steps like exhaustive plane detection, our model learns directly from the point cloud data, thereby reducing error propagation and increasing the fidelity of the reconstruction. We experimentally validate our method on a collection of airborne LiDAR data from Zurich, Berlin, and Tallinn. Our method shows good generalization to diverse urban styles.
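
As a purely illustrative sketch (not the paper's tokenization), the snippet below serializes a small polygonal mesh into the kind of flat vertex-then-face integer sequence an autoregressive generator could be trained to predict, with coordinates quantized to a fixed grid and a stop token separating vertices and faces.

```python
import numpy as np

STOP = -1  # assumed separator / end-of-list token

def mesh_to_sequence(vertices, faces, n_bins=256):
    """Serialize a mesh into a flat integer sequence: quantized vertex coordinates,
    a STOP token, then 1-indexed face vertex lists, each terminated by STOP."""
    v = np.asarray(vertices, dtype=np.float64)
    lo, hi = v.min(axis=0), v.max(axis=0)
    q = np.round((v - lo) / np.maximum(hi - lo, 1e-9) * (n_bins - 1)).astype(int)
    seq = list(q.flatten()) + [STOP]
    for face in faces:
        seq += [i + 1 for i in face] + [STOP]
    return seq, (lo, hi)

def sequence_to_mesh(seq, n_bins=256, bounds=((0, 0, 0), (1, 1, 1))):
    """Invert mesh_to_sequence (lossy up to the quantization grid)."""
    split = seq.index(STOP)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    q = np.asarray(seq[:split], dtype=np.float64).reshape(-1, 3)
    vertices = q / (n_bins - 1) * (hi - lo) + lo
    faces, face = [], []
    for tok in seq[split + 1:]:
        if tok == STOP:
            if face:
                faces.append([i - 1 for i in face])
            face = []
        else:
            face.append(tok)
    return vertices, faces

if __name__ == "__main__":
    verts = [[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0], [2, 1.5, 5]]   # toy hip-roof-like shape
    faces = [[0, 1, 2, 3], [0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]]
    seq, bounds = mesh_to_sequence(verts, faces)
    v2, f2 = sequence_to_mesh(seq, bounds=bounds)
    print(len(seq), f2 == faces)
```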

Citations: 0
Gap completion in point cloud scene occluded by vehicles using SGC-Net
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-23 | DOI: 10.1016/j.isprsjprs.2024.07.009

Recent advances in mobile mapping systems have greatly enhanced the efficiency and convenience of acquiring urban 3D data. These systems utilize LiDAR sensors mounted on vehicles to capture vast cityscapes. However, a significant challenge arises due to occlusions caused by vehicles parked at the roadside, leading to the loss of scene information, particularly on the roads, sidewalks, curbs, and the lower sections of buildings. In this study, we present a novel approach that leverages deep neural networks to learn a model capable of filling gaps in urban scenes that are obscured by vehicle occlusion. We have developed an innovative technique in which we place virtual vehicle models along road boundaries in the gap-free scene and utilize a ray-casting algorithm to create a new scene with occluded gaps. This allows us to generate diverse and realistic urban point cloud scenes with and without vehicle occlusion, surpassing the limitations of real-world training data collection and annotation. Furthermore, we introduce the Scene Gap Completion Network (SGC-Net), an end-to-end model that can generate well-defined shape boundaries and smooth surfaces within occluded gaps. The experimental results reveal that 97.66% of the filled points fall within 5 cm of the high-density ground truth point cloud scene. These findings underscore the efficacy of our proposed model in gap completion and in reconstructing urban scenes affected by vehicle occlusions.
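
The synthetic-occlusion idea can be sketched as follows (a hypothetical simplification, not the SGC-Net data pipeline): ground points are dropped whenever the ray from the sensor to the point passes through an axis-aligned box standing in for a parked vehicle.

```python
import numpy as np

def ray_hits_aabb(origin, targets, box_min, box_max):
    """Slab test: does the segment origin -> target intersect the axis-aligned box?"""
    d = targets - origin                       # (N, 3) segment directions
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (box_min - origin) / d
        t2 = (box_max - origin) / d
    t_near = np.nanmax(np.minimum(t1, t2), axis=1)
    t_far = np.nanmin(np.maximum(t1, t2), axis=1)
    return (t_near <= t_far) & (t_far >= 0.0) & (t_near <= 1.0)

def occlude(points, sensor, box_min, box_max):
    """Drop points occluded by (or inside) a virtual vehicle box as seen from the sensor."""
    bmin, bmax = np.asarray(box_min, float), np.asarray(box_max, float)
    hit = ray_hits_aabb(sensor, points, bmin, bmax)
    inside = np.all((points >= bmin) & (points <= bmax), axis=1)
    return points[~(hit | inside)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    ground = np.column_stack([rng.uniform(-20, 20, 5000),
                              rng.uniform(-20, 20, 5000),
                              np.zeros(5000)])            # flat road/sidewalk surface
    sensor = np.array([0.0, 0.0, 2.0])                    # mapping sensor about 2 m above ground
    kept = occlude(ground, sensor, box_min=[3.0, -1.0, 0.0], box_max=[7.5, 1.0, 1.6])
    print(len(ground), "->", len(kept), "points after simulated vehicle occlusion")
```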

Citations: 0
Incremental multi temporal InSAR analysis via recursive sequential estimator for long-term landslide deformation monitoring
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-07-19 | DOI: 10.1016/j.isprsjprs.2024.07.006

Distributed Scatterer Interferometry (DS-InSAR) has been widely applied to increase the number of measurement points (MP) in complex mountainous areas with dense vegetation and complicated topography. However, the DS-InSAR method adopts a batch processing mode: when new observation data are acquired, the entire archived dataset is reprocessed, completely ignoring the existing results, which makes it unsuitable for high-performance processing of operational observation data. Current research focuses on automating SAR data acquisition and optimizing processing, but the core time series analysis method remains unchanged. In this paper, building on the traditional Sequential Estimator proposed by Ansari in 2017, a Recursive Sequential Estimator with Flexible Batches (RSEFB) is developed that divides large datasets flexibly, without constraints on the number of images in each subset. This method updates and processes newly acquired SAR data in near real time and obtains long time-series results without reprocessing the entire archive, which is helpful for future landslide disaster early warning. 132 Sentinel-1 SAR images and 44 TerraSAR-X SAR images were utilized to invert the line-of-sight (LOS) surface deformation of the Xishancun and Huangnibazi landslides in Li County, Sichuan Province, China. The RSEFB method is applied to retrieve time-series displacements from the Sentinel-1 and TerraSAR-X datasets, respectively. Comparison with the traditional Sequential Estimator and validation against Global Positioning System (GPS) monitoring data prove the effectiveness and reliability of the RSEFB method. The research shows that the Xishancun landslide is in a state of slow and uneven deformation, and the non-sliding part of the Huangnibazi landslide shows an obvious deformation signal, so continuous monitoring is needed to prevent and mitigate possible catastrophic slope failure events.
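
A generic sketch of the incremental principle (not the RSEFB algorithm itself): sufficient statistics, here a running mean and covariance of observation vectors, are merged batch by batch with arbitrary batch sizes, so results stay current without reprocessing the full archive.

```python
import numpy as np

class RecursiveStats:
    """Incrementally maintain the mean and covariance of streaming observation
    vectors, merged batch by batch (batches may have any size)."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))      # sum of outer products of deviations

    def update(self, batch):
        batch = np.atleast_2d(np.asarray(batch, dtype=np.float64))
        nb = batch.shape[0]
        mb = batch.mean(axis=0)
        db = batch - mb
        m2b = db.T @ db
        delta = mb - self.mean
        n_new = self.n + nb
        # Pairwise merge of (count, mean, scatter matrix), Chan et al. style.
        self.m2 += m2b + np.outer(delta, delta) * self.n * nb / n_new
        self.mean += delta * nb / n_new
        self.n = n_new

    def covariance(self):
        return self.m2 / max(self.n - 1, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    data = rng.normal(size=(500, 4))        # stand-in for phase observation vectors
    est = RecursiveStats(dim=4)
    for batch in np.array_split(data, [120, 250, 260, 420]):   # flexible batch sizes
        est.update(batch)
    print(np.allclose(est.covariance(), np.cov(data, rowvar=False)))   # True
```

Carrying the same idea over to the interferometric quantities used in phase linking is, of course, substantially more involved than this running-statistics example.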

Citations: 0