
Latest publications from ISPRS Open Journal of Photogrammetry and Remote Sensing

FeatureGS: Eigenvalue-feature optimization in 3D Gaussian Splatting for geometrically accurate and artifact-reduced reconstruction
Pub Date: 2025-08-01 Epub Date: 2025-09-01 DOI: 10.1016/j.ophoto.2025.100100
Miriam Jäger, Markus Hillemann, Boris Jutzi
3D Gaussian Splatting (3DGS) has emerged as a powerful approach for 3D scene reconstruction using 3D Gaussians. However, neither the centers nor surfaces of the Gaussians are accurately aligned to the object surface, complicating their direct use in point cloud and mesh reconstruction. Additionally, 3DGS typically produces floater artifacts, increasing the number of Gaussians and storage requirements. To address these issues, we present FeatureGS, which incorporates an additional geometric loss term based on an eigenvalue-derived 3D shape feature into the optimization process of 3DGS. The goal is to improve geometric accuracy and enhance properties of planar surfaces with reduced structural entropy in local 3D neighborhoods, typically given in man-made environments. We present four alternative formulations for the geometric loss term based on ‘planarity’ of Gaussians, as well as ‘planarity’, ‘omnivariance’, and ‘eigenentropy’ of Gaussian neighborhoods. On the small-scale DTU benchmark with man-made scenes, FeatureGS achieves a 20% improvement in geometric accuracy, suppresses floater artifacts by 90%, and reduces the number of Gaussians by 95%. FeatureGS proves to be a strong method for geometrically accurate, artifact-reduced and memory-efficient 3D scene reconstruction, enabling the direct use of Gaussian centers for geometric representation.
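The eigenvalue features named in the abstract (planarity, omnivariance, eigenentropy) are standard local-covariance shape descriptors. A minimal sketch of computing them for a 3D point neighborhood follows; this illustrates the quantities the loss terms build on, not the authors' implementation:

```python
import numpy as np

def eigenvalue_features(points):
    """Eigenvalue-based 3D shape features of a local neighborhood (N x 3 array)."""
    cov = np.cov(points.T)                        # 3x3 covariance of the neighborhood
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # eigenvalues, descending: l1 >= l2 >= l3
    l1, l2, l3 = lam
    e = lam / lam.sum()                           # normalized eigenvalues
    return {
        "planarity":    (l2 - l3) / l1,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "eigenentropy": -np.sum(e * np.log(e + 1e-12)),
    }

# A perfectly planar neighborhood yields high planarity and zero omnivariance.
rng = np.random.default_rng(0)
plane = rng.uniform(-1, 1, (500, 3))
plane[:, 2] = 0.0                                 # flatten onto z = 0
feats = eigenvalue_features(plane)
```

Low eigenentropy in planar neighborhoods is what the paper's "reduced structural entropy" objective rewards.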
Citations: 0
Seeing beyond vegetation: A comparative occlusion analysis between Multi-View Stereo, Neural Radiance Fields and Gaussian Splatting for 3D reconstruction
Pub Date: 2025-04-01 Epub Date: 2025-04-16 DOI: 10.1016/j.ophoto.2025.100089
Ivana Petrovska, Boris Jutzi
Image-based 3D reconstruction offers realistic scene representation for applications that require accurate geometric information. Although the assumption that images are simultaneously captured, perfectly posed and noise-free simplifies the 3D reconstruction, this rarely holds in real-world settings. A real-world scene comprises multiple objects that obstruct each other, leaving certain object parts occluded, so it can be challenging to generate complete and accurate geometry. We are particularly interested in vegetation, a ubiquitous part of our environment that often obscures important structures, leading to incomplete reconstruction of the underlying features. In this contribution, we present a comparative analysis of the geometry behind vegetation occlusions reconstructed by traditional Multi-View Stereo (MVS) and radiance field methods, namely: Neural Radiance Fields (NeRFs), 3D Gaussian Splatting (3DGS) and 2D Gaussian Splatting (2DGS). Excluding certain image parts and investigating how different levels of vegetation occlusion affect the geometric reconstruction, we consider Synthetic masks with occlusion coverage of 10% (Very Sparse), 30% (Sparse), 50% (Medium), 70% (Dense) and 90% (Very Dense). To additionally demonstrate the impact of spatially consistent 3D occlusions, we use Natural masks (up to 35%) where the vegetation is stationary in the 3D scene but occludes relative to the viewpoint. Our investigations are based on real-world scenarios: one occlusion-free indoor scenario, to which we apply the Synthetic masks, and one outdoor scenario, from which we derive the Natural masks. The qualitative and quantitative 3D evaluation is based on point cloud comparison against a ground-truth mesh, addressing accuracy and completeness.
The experiments and results demonstrate that although MVS shows the lowest accuracy errors in both scenarios, its completeness declines sharply as the occlusion percentage increases, eventually failing under Very Dense masks. NeRFs prove robust, achieving the highest completeness under masking, although their accuracy decreases proportionally as occlusions increase. 2DGS achieves the second-best accuracy, outperforming NeRFs and 3DGS, and performs consistently across occlusion scenarios. Additionally, with MVS initialization, the completeness of 3DGS and 2DGS improves without significantly sacrificing accuracy, owing to more densely reconstructed homogeneous areas. We demonstrate that radiance field methods can compete with traditional MVS, showing robust performance for complete reconstruction under vegetation occlusions.
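The graded mask levels (10% to 90% coverage) can be illustrated with a toy generator. The blocky pattern and the `block` size below are illustrative assumptions chosen to mimic contiguous occlusions; they are not the paper's mask-generation procedure:

```python
import numpy as np

def synthetic_mask(height, width, coverage, block=16, seed=0):
    """Binary occlusion mask with roughly the requested coverage fraction.

    Coarse blocks (rather than independent pixels) mimic contiguous
    vegetation occlusions; the block pattern is an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    gh, gw = height // block, width // block
    coarse = rng.random((gh, gw)) < coverage       # occlude ~coverage of the blocks
    # Expand each coarse cell to a block x block patch of pixels.
    mask = np.kron(coarse.astype(np.uint8),
                   np.ones((block, block), dtype=np.uint8)).astype(bool)
    return mask

mask = synthetic_mask(512, 512, coverage=0.30)     # the paper's "Sparse" level
occluded = mask.mean()                             # realized fraction of occluded pixels
```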
Citations: 0
Direct integration of ALS and MLS for real-time localization and mapping
Pub Date: 2025-04-01 Epub Date: 2025-04-07 DOI: 10.1016/j.ophoto.2025.100088
Eugeniu Vezeteu , Aimad El Issaoui , Heikki Hyyti , Teemu Hakala , Jesse Muhojoki , Eric Hyyppä , Antero Kukko , Harri Kaartinen , Ville Kyrki , Juha Hyyppä
This paper presents a novel real-time fusion pipeline for integrating georeferenced airborne laser scanning (ALS) and online mobile laser scanning (MLS) data to enable accurate localization and mapping in complex natural environments. To address sensor drift caused by relative Light Detection and Ranging (lidar) and inertial measurements, occlusion affecting the Global Navigation Satellite System (GNSS) signal quality, and differences in the fields of view of the sensors, we propose a tightly coupled lidar-inertial registration system with an adaptive, robust Iterated Error-State Extended Kalman Filter (RIEKF). By leveraging ALS-derived prior maps as a global reference, our system effectively refines the MLS registration, even in challenging environments like forests. A novel coarse-to-fine initialization technique is introduced to estimate the initial transformation between the local MLS and global ALS frames using online GNSS measurements. Experimental results in forest environments demonstrate significant improvements in both absolute and relative trajectory accuracy, with relative mean localization errors as low as 0.17 m for a prior map based on dense ALS data and 0.22 m for a prior map based on sparse ALS data. We found that while GNSS does not significantly improve registration accuracy, it is essential for providing the initial transformation between the ALS and MLS frames, enabling their direct and online fusion. The proposed system predicts poses at an inertial measurement unit (IMU) rate of 400 Hz and updates the pose at the lidar frame rate of 10 Hz.
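The coarse alignment step, estimating an initial rigid transform between the local MLS frame and the global ALS frame from corresponding GNSS trajectory points, can be sketched with the classic Kabsch least-squares solution. This is an illustrative stand-in for the paper's coarse-to-fine initialization, not its actual algorithm:

```python
import numpy as np

def align_trajectories(local_pts, global_pts):
    """Least-squares rigid transform (R, t) mapping local -> global (Kabsch)."""
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (global_pts - mu_g)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_l
    return R, t

# Synthetic check: recover a known rotation and translation exactly.
rng = np.random.default_rng(1)
local = rng.normal(size=(100, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -4.0, 2.0])
global_pts = local @ R_true.T + t_true
R_est, t_est = align_trajectories(local, global_pts)
```

With real GNSS fixes the correspondences are noisy and occluded, which is why the paper then refines the registration with the tightly coupled filter.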
Citations: 0
Transfer learning and single-polarized SAR image preprocessing for oil spill detection
Pub Date: 2025-01-01 Epub Date: 2024-12-24 DOI: 10.1016/j.ophoto.2024.100081
Nataliia Kussul , Yevhenii Salii , Volodymyr Kuzin , Bohdan Yailymov , Andrii Shelestov
This study addresses the challenge of oil spill detection using Synthetic Aperture Radar (SAR) satellite imagery, employing deep learning techniques to improve accuracy and efficiency. We investigated the effectiveness of various neural network architectures and encoders for this task, focusing on scenarios with limited training data. The research problem centered on enhancing feature extraction from single-channel SAR data to improve oil spill detection performance.
Our methodology involved developing a novel preprocessing pipeline that converts single-channel SAR data into a three-channel RGB representation. The preprocessing technique normalizes SAR intensity values and encodes extracted features into RGB channels.
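One plausible reading of such a pipeline is sketched below. The abstract does not specify which features go into each channel, so the choices here (normalized dB backscatter, local mean, local standard deviation as texture) are assumptions, not the paper's exact encoding:

```python
import numpy as np

def box_filter(img, k=5):
    """Mean filter via an integral image (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))                 # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def sar_to_rgb(intensity, db_min=-30.0, db_max=0.0):
    """Illustrative single-channel SAR -> 3-channel encoding.

    Channel 1: normalized dB backscatter; channel 2: local mean
    (speckle-smoothed); channel 3: local standard deviation (texture).
    The dB range is a common Sentinel-1-style assumption.
    """
    db = 10.0 * np.log10(np.clip(intensity, 1e-6, None))
    norm = np.clip((db - db_min) / (db_max - db_min), 0.0, 1.0)
    mean = box_filter(norm)
    var = np.clip(box_filter(norm ** 2) - mean ** 2, 0.0, None)
    return np.stack([norm, mean, np.sqrt(var)], axis=-1)

img = np.random.default_rng(2).gamma(1.0, 0.1, (64, 64))  # speckle-like intensity
rgb = sar_to_rgb(img)
```

A three-channel representation like this lets ImageNet-pretrained encoders (the transfer-learning part) consume single-polarized SAR directly.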
Through an experiment, we have shown that the combination of a LinkNet architecture with an EfficientNet-B4 encoder outperforms other well-known combinations of architectures and encoders.
Quantitative evaluation revealed a significant improvement in F1-score of 0.064 compared to traditional dB-scale preprocessing methods. Qualitative assessment on independent SAR scenes from the Mediterranean Sea demonstrated better detection capabilities, albeit with increased sensitivity to look-alikes.
We conclude that our proposed preprocessing technique shows promise for enhancing automatic oil spill segmentation from SAR imagery. The study contributes to advancing oil spill detection methods, with potential implications for environmental monitoring and marine ecosystem protection.
Citations: 0
A new unified framework for supervised 3D crown segmentation (TreeisoNet) using deep neural networks across airborne, UAV-borne, and terrestrial laser scans
Pub Date: 2025-01-01 Epub Date: 2025-01-15 DOI: 10.1016/j.ophoto.2025.100083
Zhouxin Xi, Dani Degenhardt
Accurately defining and isolating 3D tree space is critical for extracting and analyzing tree inventory attributes, yet it remains a challenge due to the structural complexity and heterogeneity within natural forests. This study introduces TreeisoNet, a suite of supervised deep neural networks tailored for robust 3D tree segmentation across natural forest environments. These networks are specifically designed to identify tree locations, stem components (if available), and crown clusters, making them adaptable to varying scales of laser scanning from airborne laser scanner (ALS), terrestrial laser scanner (TLS), and unmanned aerial vehicle (UAV). Our evaluation used three benchmark datasets with manually isolated tree references, achieving mean intersection-over-union (mIoU) accuracies of 0.81 for UAV, 0.76 for TLS, and 0.59 for ALS, which are competitive with contemporary algorithms such as ForAINet, Treeiso, Mask R-CNN, and AMS3D. Noise from stem point delineation minimally impacted stem location detection but significantly affected crown clustering. Moderate manual refinement of stem points or tree centers significantly improved tree segmentation accuracies, achieving 0.85 for UAV, 0.86 for TLS, and 0.80 for ALS. The study confirms SegFormer as an effective 3D point-level classifier and an offset-based UNet as a superior segmenter, with the latter outperforming unsupervised solutions like watershed and shortest-path methods. TreeisoNet demonstrates strong adaptability in capturing invariant tree geometry features, ensuring transferability across different resolutions, sites, and sensors with minimal accuracy loss.
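Instance-level mIoU of the kind reported above can be computed by matching each reference tree to its best-overlapping predicted segment. The matching scheme below is a common convention and may differ from the paper's exact evaluation protocol:

```python
import numpy as np

def instance_miou(pred_labels, ref_labels):
    """Mean IoU over reference instances, each matched to its
    best-overlapping predicted instance (point-wise label arrays)."""
    ious = []
    for r in np.unique(ref_labels):
        ref_mask = ref_labels == r
        best = 0.0
        # Only predicted labels that actually overlap this reference tree.
        for p in np.unique(pred_labels[ref_mask]):
            pred_mask = pred_labels == p
            inter = np.sum(ref_mask & pred_mask)
            union = np.sum(ref_mask | pred_mask)
            best = max(best, inter / union)
        ious.append(best)
    return float(np.mean(ious))

# Toy check: two reference "trees"; predictions agree on all but one point.
ref  = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([5, 5, 5, 7, 7, 5])
miou = instance_miou(pred, ref)
```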
Citations: 0
Detecting and measuring fine-scale urban tree canopy loss with deep learning and remote sensing
Pub Date: 2025-01-01 Epub Date: 2025-01-14 DOI: 10.1016/j.ophoto.2025.100082
David Pedley, Justin Morgenroth
Urban trees provide a multitude of environmental and amenity benefits for city occupants yet face ongoing risk of removal due to urban pressures and the preferences of landowners. Understanding the extent and location of canopy loss is critical for the effective management of urban forests. Although city-scale assessments of urban forest canopy cover are common, the accurate identification of fine-scale canopy loss remains challenging. Evaluating change at the property scale is of particular importance given the localised benefits of urban trees and the scale at which tree removal decisions are made.
The objective of this study was to develop a method to accurately detect and quantify the city-wide loss of urban tree canopy (UTC) at the scale of individual properties using publicly available remote sensing data. The study area was the city of Christchurch, New Zealand, with the study focussed on UTC loss that occurred between 2016 and 2021. To accurately delineate the 2016 UTC, a semantic segmentation deep learning model (DeepLabv3+) was pretrained using existing UTC data and fine-tuned using high resolution aerial imagery. The output of this model was then segmented into polygons representing individual trees using the Segment Anything Model. To overcome poor alignment of aerial imagery, LiDAR point cloud data was utilised to identify changes in height between 2016 and 2021, which was overlaid across the 2016 UTC to map areas of UTC loss. The accuracy of UTC loss predictions was validated using a visual comparison of aerial imagery and LiDAR data, with UTC loss quantified for each property within the study area.
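The height-difference overlay can be sketched as a simple raster operation on two canopy height models (CHMs). The `drop` and `min_height` thresholds below are illustrative assumptions, not the study's values:

```python
import numpy as np

def canopy_loss_mask(chm_2016, chm_2021, utc_2016, drop=2.0, min_height=3.0):
    """Flag UTC pixels whose LiDAR-derived canopy height dropped markedly.

    chm_*: canopy height rasters (m); utc_2016: boolean 2016 canopy mask.
    Thresholds are illustrative: `drop` is the minimum height decrease (m)
    and `min_height` the minimum 2016 canopy height (m) to count as a tree.
    """
    height_loss = chm_2016 - chm_2021
    return utc_2016 & (chm_2016 >= min_height) & (height_loss >= drop)

# Toy rasters: one canopy pixel removed, one retained, one non-canopy.
chm16 = np.array([[8.0, 6.0, 0.5]])
chm21 = np.array([[0.5, 6.1, 0.4]])
utc16 = np.array([[True, True, False]])
loss = canopy_loss_mask(chm16, chm21, utc16)
```

Because the comparison is height-based rather than image-based, it sidesteps the imagery misalignment problem the study describes.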
The loss detection method achieved accurate results for the property-scale identification of UTC loss, including a mean F1 score of 0.934 and a mean IOU of 0.883. Precision values were higher than recall values (0.941 compared to 0.811), which reflected a deliberately conservative approach to avoid false positive detections. Approximately 14.5% of 2016 UTC was lost by 2021, with 74.9% of the UTC loss occurring on residential land. This research provides a novel geospatial method for evaluating fine-scale city-wide tree dynamics using remote sensing data of varying type and quality with imperfect alignment. This creates the opportunity for detailed evaluation of the drivers of UTC loss on individual properties to enable better management of existing urban forests.
Citations: 0
Performance analysis of ultra-wideband positioning for measuring tree positions in boreal forest plots
Pub Date : 2025-01-01 Epub Date: 2025-03-05 DOI: 10.1016/j.ophoto.2025.100087
Zuoya Liu , Harri Kaartinen , Teemu Hakala , Heikki Hyyti , Juha Hyyppä , Antero Kukko , Ruizhi Chen
Accurate individual tree locations enable efficient forest inventory management and automation, and support precise forest surveys, management decisions and future individual-tree harvesting plans. In this paper, we compared and analyzed in detail the performance of an ultra-wideband (UWB) data-driven method for mapping individual tree locations in boreal forest sample plots of varying complexity. Twelve forest sample plots, selected from varying forest-stand conditions representing different development stages, stem densities and abundance of sub-canopy growth in boreal forests, were tested. These plots were classified into three categories (“Easy”, “Medium” and “Difficult”) according to these varying stand conditions. The experimental results show that the UWB data-driven method is able to map individual tree locations accurately, with total root-mean-squared-errors (RMSEs) of 0.17 m, 0.2 m, and 0.26 m for “Easy”, “Medium” and “Difficult” forest plots, respectively, providing a strong reference for forest surveys.
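The total RMSE quoted for each plot category is the root of the mean squared Euclidean position error over all mapped trees in the plot. A minimal sketch of that metric — the function name and array layout are assumptions, not from the paper:

```python
import numpy as np

def position_rmse(est, ref) -> float:
    """Total RMSE of planar tree positions.

    est, ref: (n_trees, 2) arrays of estimated and reference x/y coordinates in metres.
    Returns sqrt(mean of squared Euclidean distances between paired positions).
    """
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    d2 = np.sum((est - ref) ** 2, axis=1)   # squared 2D error per tree
    return float(np.sqrt(d2.mean()))
```

Note that this pools the x- and y-error into a single planar distance per tree before averaging, which is why a plot with a few badly positioned stems can dominate the total.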
{"title":"Performance analysis of ultra-wideband positioning for measuring tree positions in boreal forest plots","authors":"Zuoya Liu ,&nbsp;Harri Kaartinen ,&nbsp;Teemu Hakala ,&nbsp;Heikki Hyyti ,&nbsp;Juha Hyyppä ,&nbsp;Antero Kukko ,&nbsp;Ruizhi Chen","doi":"10.1016/j.ophoto.2025.100087","DOIUrl":"10.1016/j.ophoto.2025.100087","url":null,"abstract":"<div><div>Accurate individual tree locations enable efficient forest inventory management and automation, support precise forest surveys, management decisions and future individual-tree harvesting plans. In this paper, we compared and analyzed in detail the performance of an ultra-wideband (UWB) data-driven method for mapping individual tree locations in boreal forest sample plots of varying complexity. Twelve forest sample plots selected from varying forest-stand conditions representing different developing stages, stem densities and abundance of sub canopy growth in boreal forests were tested. These plots were classified into three categories (“Easy”, “Medium” and “Difficult”) according to these varying stand conditions. 
The experimental results show that UWB data-driven method is able to map individual tree locations accurately with total root-mean-squared-errors (RMSEs) of 0.17 m, 0.2 m, and 0.26 m for “Easy”, “Medium” and “Difficult” forest plots, respectively, providing a strong reference for forest surveys.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hyperspectral unmixing with spatial context and endmember ensemble learning with attention mechanism
Pub Date : 2025-01-01 Epub Date: 2025-02-11 DOI: 10.1016/j.ophoto.2025.100086
R.M.K.L. Ratnayake, D.M.U.P. Sumanasekara, H.M.K.D. Wickramathilaka, G.M.R.I. Godaliyadda, H.M.V.R. Herath, M.P.B. Ekanayake
In recent years, transformer-based deep learning networks have gained popularity in hyperspectral (HS) unmixing applications due to their superior performance. Most of these networks use an Endmember Extraction Algorithm (EEA) to initialize the network. As EEA performance depends on the environment, a single initialization does not ensure optimum performance. Also, only a few networks utilize the spatial context in HS images to solve the unmixing problem. In this paper, we propose Hyperspectral Unmixing with Spatial Context and Endmember Ensemble Learning with Attention Mechanism (SCEELA) to address these issues. The proposed method has three main components: the Signature Predictor (SP), the Pixel Contextualizer (PC) and the Abundance Predictor (AP). The SP uses an ensemble of EEAs for each endmember as the initialization, and the attention mechanism within the transformer enables ensemble learning to predict accurate endmembers. The attention mechanism in the PC enables the network to capture contextual data and provide a more refined pixel to the AP to predict the abundance of that pixel. SCEELA was compared with eight state-of-the-art HS unmixing algorithms on three widely used real datasets and one synthetic dataset. The results show that the proposed method achieves impressive performance compared with other state-of-the-art algorithms.
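SCEELA itself is a transformer network, but the problem it solves is the classical linear mixing model: each pixel is a non-negative, sum-to-one combination of endmember spectra. A naive least-squares baseline for the abundance step — not SCEELA, and the function name and matrix shapes are illustrative assumptions:

```python
import numpy as np

def unmix_pixel(E: np.ndarray, pixel: np.ndarray) -> np.ndarray:
    """Abundance estimate for one pixel under the linear mixing model pixel = E @ a.

    E: (n_bands, n_endmembers) endmember signature matrix.
    pixel: (n_bands,) observed reflectance.
    Solves unconstrained least squares, then clips to non-negative values and
    renormalises so the abundances sum to one.
    """
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    a = np.clip(a, 0.0, None)           # enforce non-negativity
    s = a.sum()
    return a / s if s > 0 else a        # enforce sum-to-one
```

Clip-and-renormalise is only an approximation of the fully constrained solution; it illustrates why learned abundance predictors such as the AP described above can outperform closed-form baselines on noisy, spatially correlated data.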
{"title":"Hyperspectral unmixing with spatial context and endmember ensemble learning with attention mechanism","authors":"R.M.K.L. Ratnayake,&nbsp;D.M.U.P. Sumanasekara,&nbsp;H.M.K.D. Wickramathilaka,&nbsp;G.M.R.I. Godaliyadda,&nbsp;H.M.V.R. Herath,&nbsp;M.P.B. Ekanayake","doi":"10.1016/j.ophoto.2025.100086","DOIUrl":"10.1016/j.ophoto.2025.100086","url":null,"abstract":"<div><div>In recent years, transformer-based deep learning networks have gained popularity in Hyperspectral (HS) unmixing applications due to their superior performance. Most of these networks use an Endmember Extraction Algorithm(EEA) for the initialization of their network. As EEAs performance depends on the environment, single initialization does not ensure optimum performance. Also, only a few networks utilize the spatial context in HS Images to solve the unmixing problem. In this paper, we propose Hyperspectral Unmixing with Spatial Context and Endmember Ensemble Learning with Attention Mechanism (SCEELA) to address these issues. The proposed method has three main components, Signature Predictor (SP), Pixel Contextualizer (PC) and Abundance Predictor (AP). SP uses an ensemble of EEAs for each endmember as the initialization and the attention mechanism within the transformer enables ensemble learning to predict accurate endmembers. The attention mechanism in the PC enables the network to capture the contextual data and provide a more refined pixel to the AP to predict the abundance of that pixel. SCEELA was compared with eight state-of-the-art HS unmixing algorithms for three widely used real datasets and one synthetic dataset. 
The results show that the proposed method shows impressive performance when compared with other state-of-the-art algorithms.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100086"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Plant trait retrieval from hyperspectral data: Collective efforts in scientific data curation outperform simulated data derived from the PROSAIL model
Pub Date : 2025-01-01 Epub Date: 2024-12-09 DOI: 10.1016/j.ophoto.2024.100080
Daniel Mederer , Hannes Feilhauer , Eya Cherif , Katja Berger , Tobias B. Hank , Kyle R. Kovach , Phuong D. Dao , Bing Lu , Philip A. Townsend , Teja Kattenborn
Plant traits play a pivotal role in steering ecosystem dynamics. As plant canopies have evolved to interact with light, spectral data convey information on a variety of plant traits. Machine learning techniques have been used successfully to retrieve diverse traits from hyperspectral data. Nonetheless, the efficacy of machine learning is restricted by limited access to high-quality reference data for training. Previous studies showed that aggregating data across domains, sensors, or growth forms, provided by collaborative efforts of the scientific community, enables the creation of transferable models. However, even such curated databases are still sparse for several traits. To address these challenges, we investigated the potential of filling such data gaps with simulated hyperspectral data generated through the most widely used radiative transfer model (RTM), PROSAIL. We coupled trait information from the TRY plant trait database with information on plant communities from the sPlot database to build a realistic input trait dataset for the RTM-based simulation of canopy spectra. Our findings indicate that simulated data can alleviate the effects of data scarcity for highly underrepresented traits. In most other cases, however, the effects of including simulated data from RTMs are negligible or even negative. While more complex RTM models promise further improvements, their parameterization remains challenging. This highlights two key observations: firstly, RTM models such as PROSAIL exhibit limitations in producing realistic spectra across diverse ecosystems; secondly, real-world data repurposed from various sources exhibit superior retrieval success compared to simulated data. As a result, we advocate emphasizing the importance of active data sharing over secrecy and overreliance on modeling to address data limitations.
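The augmentation strategy tested here — pooling curated real spectra with RTM-simulated spectra before fitting a trait regressor — can be sketched in a few lines. PROSAIL itself is not reproduced; the ridge regressor, function name, and array shapes are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def fit_trait_model(X_real, y_real, X_sim, y_sim, alpha=1.0):
    """Ridge regression on real spectra augmented with simulated spectra.

    X_*: (n, n_bands) reflectance matrices; y_*: (n,) trait values.
    alpha: L2 penalty strength. Returns a callable predictor.
    """
    X = np.vstack([X_real, X_sim])                 # pool real + simulated samples
    y = np.concatenate([y_real, y_sim])
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    A = Xb.T @ Xb + alpha * np.eye(Xb.shape[1])    # regularised normal equations
    w = np.linalg.solve(A, Xb.T @ y)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ w
```

The paper's finding maps directly onto this setup: when `X_sim` comes from an RTM whose spectra are unrealistic for the target ecosystem, adding it shifts `w` away from the real-data optimum, which is why the simulated block helped only for severely underrepresented traits.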
{"title":"Plant trait retrieval from hyperspectral data: Collective efforts in scientific data curation outperform simulated data derived from the PROSAIL model","authors":"Daniel Mederer ,&nbsp;Hannes Feilhauer ,&nbsp;Eya Cherif ,&nbsp;Katja Berger ,&nbsp;Tobias B. Hank ,&nbsp;Kyle R. Kovach ,&nbsp;Phuong D. Dao ,&nbsp;Bing Lu ,&nbsp;Philip A. Townsend ,&nbsp;Teja Kattenborn","doi":"10.1016/j.ophoto.2024.100080","DOIUrl":"10.1016/j.ophoto.2024.100080","url":null,"abstract":"<div><div>Plant traits play a pivotal role in steering ecosystem dynamics. As plant canopies have evolved to interact with light, spectral data convey information on a variety of plant traits. Machine learning techniques have been used successfully to retrieve diverse traits from hyperspectral data. Nonetheless, the efficacy of machine learning is restricted by limited access to high-quality reference data for training. Previous studies showed that aggregating data across domains, sensors, or growth forms provided by collaborative efforts of the scientific community enables the creation of transferable models. However, even such curated databases are still sparse for several traits. To address these challenges, we investigated the potential of filling such data gaps with simulated hyperspectral data generated through the most widely-used radiative transfer model (RTM) PROSAIL. We coupled trait information from the TRY plant trait database with information on plant communities from the sPlot database, to build a realistic input trait dataset for the RTM-based simulation of canopy spectra. Our findings indicate that simulated data can alleviate the effects of data scarcity for highly underrepresented traits. In most other cases, however, the effects of including simulated data from RTMs are negligible or even negative. While more complex RTM models promise further improvements, their parameterization remains challenging. 
This highlights two key observations: firstly, RTM models, such as PROSAIL, exhibit limitations in producing realistic spectra across diverse ecosystems; secondly, real-world data repurposed from various sources exhibit superior retrieval success compared to simulated data. As a result, we advocate to emphasize the importance of active data sharing over secrecy and overreliance on modeling to address data limitations.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intensity-based stochastic model of terrestrial laser scanners: Methodological workflow, empirical derivation and practical benefit
Pub Date : 2025-01-01 Epub Date: 2024-12-11 DOI: 10.1016/j.ophoto.2024.100079
Florian Schill , Christoph Holst , Daniel Wujanz , Jens Hartmann , Jens-André Paffenholz
After more than twenty years of commercial use, laser scanners have reached technical maturity and have consequently become a standard tool for 3D data acquisition across various fields of application. Yet, meaningful stochastic information regarding the achieved metric quality of recorded points remains an open research question. Recent research demonstrated that raw intensity values can be deployed to derive stochastic models for reflectorless rangefinders. Yet, all existing studies focused on single instances of particular laser scanners, while the derivation of the stochastic models required significant effort.
Motivated by the aforementioned shortcomings, this study focuses on comparing stochastic models for a series of eight identical phase-based scanners that differ in age, operating hours and date of last calibration. To achieve this, a standardised methodological workflow is proposed to derive the unknown parameters of the individual stochastic models. Based on the generated outcome, a comparison is conducted that clarifies whether a universally applicable stochastic model (type calibration) can be used for a particular scanner model or whether individual parameter sets are still required for every scanner (instance calibration), to validate the practical benefit and usability of those models. The generated results demonstrate that the computed stochastic model is transferable to all individual scanners of the series.
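The paper's concrete model form is not reproduced here, but intensity-based stochastic models in the TLS literature are often fitted as a power law, range precision σ(I) = a·I^b, which becomes linear in log-log space. A minimal sketch under that assumption — function names and the power-law form are illustrative, not taken from this paper:

```python
import numpy as np

def fit_intensity_noise_model(intensity, sigma):
    """Fit a power law sigma(I) = a * I**b to empirical range precisions.

    intensity: raw intensity values (positive); sigma: observed range std devs.
    Fits a straight line in log-log space: log(sigma) = log(a) + b*log(I).
    """
    log_i = np.log(np.asarray(intensity, dtype=float))
    log_s = np.log(np.asarray(sigma, dtype=float))
    b, log_a = np.polyfit(log_i, log_s, 1)   # slope, intercept
    return np.exp(log_a), b

def predict_sigma(a, b, intensity):
    """Predicted range precision for given raw intensities."""
    return a * np.asarray(intensity, dtype=float) ** b
```

Under this setup, a "type calibration" corresponds to one (a, b) pair shared by all scanners of the series, while an "instance calibration" fits (a, b) per device; the study's transferability result favours the former.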
{"title":"Intensity-based stochastic model of terrestrial laser scanners: Methodological workflow, empirical derivation and practical benefit","authors":"Florian Schill ,&nbsp;Christoph Holst ,&nbsp;Daniel Wujanz ,&nbsp;Jens Hartmann ,&nbsp;Jens-André Paffenholz","doi":"10.1016/j.ophoto.2024.100079","DOIUrl":"10.1016/j.ophoto.2024.100079","url":null,"abstract":"<div><div>After more than twenty years of commercial use, laser scanners have reached technical maturity and consequently became a standard tool for 3D-data acquisition across various fields of application. Yet, meaningful stochastic information regarding the achieved metric quality of recorded points remains an open research question. Recent research demonstrated that raw intensity values can be deployed to derive stochastic models for reflectorless rangefinders. Yet, all existing studies focused on single instances of particular laser scanners while the derivation of the stochastic models required significant efforts.</div><div>Motivated by the aforementioned shortcomings, the focus of this study is set on the comparison of stochastic models for a series of eight identical phase-based scanners that differ in age, working hours and date of last calibration. In order to achieve this, a standardised methodological workflow is suggested to derive the unknown parameters of the individual stochastic models. Based on the generated outcome, a comparison is conducted which clarifies if a universally applicable stochastic model (type calibration) can be used for a particular scanner model or if individual parameter sets are still required for every scanner (instance calibration) to validate the practical benefit and usability of those models. 
The generated results successfully demonstrate that the computed stochastic model is transferable to all individual scanners of the series.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
ISPRS Open Journal of Photogrammetry and Remote Sensing