
Latest articles from ISPRS Open Journal of Photogrammetry and Remote Sensing

Branch information extraction from Norway spruce using handheld laser scanning point clouds in Nordic forests
Pub Date: 2023-08-01 DOI: 10.1016/j.ophoto.2023.100040
Olli Winberg, Jiri Pyörälä, Xiaowei Yu, Harri Kaartinen, Antero Kukko, Markus Holopainen, Johan Holmgren, Matti Lehtomäki, Juha Hyyppä

We showed that a mobile handheld laser scanner (HHLS) provides useful features describing the external tree structures that influence wood quality. When linked with wood properties measured at a sawmill using state-of-the-art X-ray scanners, these data enable the training of various wood quality models for use in targeting and planning future wood procurement. A total of 457 Norway spruce sample trees (Picea abies (L.) H. Karst.) from 13 spruce-dominated stands in southeastern Finland were used in the study. All test sites were recorded with a ZEB Horizon HHLS, and the sample trees were tracked to a sawmill and subjected to X-rays. Two branch extraction techniques were applied to the HHLS point clouds: 1) a method developed in this study based on density-based spatial clustering of applications with noise (DBSCAN), and 2) a segmentation-based quantitative structure model (treeQSM). On average, the treeQSM method detected 46% more branches per tree than the DBSCAN method did. However, compared with the X-rayed references, some of the branches detected by treeQSM may either be false positives or too small for the X-rays to detect as knots, as the method overestimated the whorl count by 19% relative to the X-rays. On the other hand, the DBSCAN method only detected larger branches and showed a −11% bias in whorl count. Overall, DBSCAN underestimated knot volumes within trees by 6%, while treeQSM overestimated them by 25%. When we input the HHLS features into a Random Forest model, the knottiness variables measured at the sawmill were predicted with R² values of 0.47–0.64. The results were comparable with previous results derived with static terrestrial laser scanners. The obtained stem branching data are relevant for predicting wood quality attributes but are not directly comparable with the X-ray features. Future work should combine terrestrial point clouds with dense above-canopy point clouds to overcome the limitations related to vertical coverage.
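As an illustration of the clustering idea behind the first extraction technique, the sketch below applies DBSCAN to synthetic 3D points. The point geometry and the eps/min_samples values are hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative sketch only: cluster residual (non-stem) points into branch
# candidates with DBSCAN. All parameters and geometry are assumptions.
rng = np.random.default_rng(42)
# two synthetic "branches": tight point clusters offset from a stem axis
branch_a = rng.normal(loc=(0.5, 0.0, 2.0), scale=0.02, size=(50, 3))
branch_b = rng.normal(loc=(-0.4, 0.3, 3.5), scale=0.02, size=(60, 3))
noise = rng.uniform(-1, 4, size=(10, 3))  # sparse off-branch returns
points = np.vstack([branch_a, branch_b, noise])

labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(points)
n_branches = len(set(labels) - {-1})  # -1 marks noise points
print(n_branches)
```

In a real pipeline the stem would first be removed (e.g. by cylinder fitting), and each cluster would then be summarized into branch attributes such as insertion height and diameter.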

Citations: 2
Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms
Pub Date: 2023-04-01 DOI: 10.1016/j.ophoto.2023.100032
Ningli Xu, Rongjun Qin Ph.D., Shuang Song

Three-dimensional (3D) point cloud registration is a fundamental step for many 3D modeling and mapping applications. Existing approaches are highly disparate in data source, scene complexity, and application; therefore, current practices in various point cloud registration tasks are still ad-hoc processes. Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformations between unregistered point clouds of complex objects and scenes. However, their performance is mostly evaluated using a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of state-of-the-art (SOTA) point cloud registration methods, which we analyze and evaluate using a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that treat point cloud registration as a holistic process, our experimental analysis is based on its inherent two-step process: feature/keypoint-based initial coarse registration, followed by dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods, were tested. We observed that the success rate of most of the algorithms is below 40% on the datasets we tested, and there is still a large margin for improvement over existing algorithms in 3D sparse correspondence search and in the ability to register point clouds with complex geometry and occlusions. With the statistics evaluated on three datasets, we identify the best-performing methods for each step, provide our recommendations, and outline future efforts.
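The fine-registration step described above ultimately reduces to repeatedly estimating a least-squares rigid transform between matched point sets. The following is a minimal sketch of that core update (the SVD/Kabsch solution used inside ICP-style C2C optimization) on synthetic data, not any specific method from the review.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst;
    the core update inside cloud-to-cloud (ICP-style) fine registration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true              # perfectly corresponded copy

R, t = kabsch(src, dst)
ok = bool(np.allclose(R, R_true) and np.allclose(t, t_true))
print(ok)
```

In full ICP, correspondences are unknown and are re-estimated (nearest neighbors) between successive applications of this update, which is why good coarse initialization matters.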

Citations: 6
Precision estimation of 3D objects using an observation distribution model in support of terrestrial laser scanner network design
Pub Date: 2023-04-01 Epub Date: 2023-04-03 DOI: 10.1016/j.ophoto.2023.100035
D.D. Lichti, T.O. Chan, Kate Pexman

First order geometric network design is an important quality assurance process for terrestrial laser scanning of complex built environments for the construction of digital as-built models. A key design task is the determination of a set of instrument locations or viewpoints that provide complete site coverage while meeting quality criteria. Although simplified point precision measures are often used in this regard, precision measures for common geometric objects found in the built environment—planes, cylinders and spheres—are arguably more relevant indicators of as-built model quality. The computation of such measures at the design stage—which is not currently done—requires generation of artificial observations by ray casting, which can be a dissuasive factor for their adoption. This paper presents models for the rigorous computation of geometric object precision without the need for ray casting. Instead, a model for the 2D distribution of angular observations is coupled with candidate viewpoint-object geometry to derive the covariance matrix of parameters. Three-dimensional models are developed and tested for vertical cylinders, spheres and vertical, horizontal and tilted planes. Precision estimates from real experimental data were used as the reference for assessing the accuracy of the predicted precision—specifically the standard deviation—of the parameters of these objects. Results show that the mean accuracy of the model-predicted precision was 4.3% (of the real data value) or better for the planes, regardless of plane orientation. The mean accuracy of the cylinders was up to 6.2%. Larger differences were found for some datasets due to incomplete object coverage in the reference data, not due to the model. Mean precision for the spheres was similar, up to 6.1%, following adoption of a new model for deriving the angular scanning limits. The computational advantage of the proposed method over precision estimates from simulated, high-resolution point clouds is also demonstrated: the CPU time required to estimate precision can be reduced by up to three orders of magnitude. These results demonstrate the utility of the derived models for efficiently determining object precision in 3D network design in support of scanning surveys for reality capture.
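The central idea, predicting parameter precision from the observation geometry alone, can be illustrated with ordinary least-squares covariance propagation. This is a generic sketch rather than the paper's angular observation-distribution model; the plane geometry and noise level below are assumptions.

```python
import numpy as np

# Design-stage precision prediction sketch: for a plane z = a*x + b*y + c
# observed over a candidate scan footprint, the parameter covariance follows
# from the normal matrix, Sigma = sigma0^2 * (A^T A)^{-1}, without needing
# the actual observations. Footprint and sigma0 are illustrative values.
x, y = np.meshgrid(np.linspace(0.0, 2.0, 10), np.linspace(0.0, 2.0, 10))
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])  # design matrix
sigma0 = 0.005  # assumed 5 mm observation noise

cov = sigma0**2 * np.linalg.inv(A.T @ A)
std = np.sqrt(np.diag(cov))  # predicted std. dev. of (a, b, c)
print(std.shape)
```

Comparing such predicted standard deviations across candidate viewpoints is what allows a network design to be evaluated before any scan is made.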

Citations: 1
Towards global scale segmentation with OpenStreetMap and remote sensing
Pub Date: 2023-04-01 Epub Date: 2023-02-16 DOI: 10.1016/j.ophoto.2023.100031
Munazza Usmani, Maurizio Napolitano, Francesca Bovolo

Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments. Up-to-date and complete data are of major importance in this field. Despite some success, pixel-based segmentation remains challenging because of class variability. Due to the increasing popularity of crowd-sourcing projects like OpenStreetMap, the need for user-generated content has also increased, providing a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery using semantic crowdsource information. Given the complexity of satellite imagery and crowdsource databases, deep learning frameworks play a significant role, and this integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied semantic segmentation (U-Net) and instance segmentation (Mask R-CNN) methods, and Mask R-CNN showed a significantly higher segmentation accuracy from both qualitative and quantitative viewpoints. The methods reach 91% and 96% overall accuracy in building segmentation and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.
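The overall-accuracy figures reported for such segmentation results are computed pixel-wise from predicted and reference label maps, often alongside per-class IoU. A minimal sketch with tiny synthetic label maps (not the study's data):

```python
import numpy as np

# Synthetic 3x3 label maps with three classes (e.g. background/building/road).
truth = np.array([[0, 0, 1],
                  [1, 1, 2],
                  [2, 2, 2]])
pred  = np.array([[0, 0, 1],
                  [1, 2, 2],
                  [2, 2, 2]])

# Overall accuracy: fraction of correctly labelled pixels.
overall_acc = (truth == pred).mean()

def iou(cls):
    """Per-class Intersection over Union."""
    inter = np.logical_and(truth == cls, pred == cls).sum()
    union = np.logical_or(truth == cls, pred == cls).sum()
    return inter / union

print(overall_acc, [iou(c) for c in (0, 1, 2)])
```

Overall accuracy can look high even when a rare class is poorly segmented, which is why per-class IoU is usually reported as well.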

Citations: 1
Towards complete tree crown delineation by instance segmentation with Mask R–CNN and DETR using UAV-based multispectral imagery and lidar data
Pub Date: 2023-04-01 Epub Date: 2023-05-05 DOI: 10.1016/j.ophoto.2023.100037
S. Dersch, A. Schöttl, P. Krzystek, M. Heurich

Precise single tree delineation allows for a more reliable determination of essential parameters such as tree species, height and vitality. Instance segmentation methods are powerful neural networks for detecting and segmenting single objects and have the potential to push the accuracy of tree segmentation methods to a new level. In this study, two instance segmentation methods, Mask R–CNN and DETR, were applied to precisely delineate single tree crowns using multispectral images and images generated from UAV lidar data. The study area was in Bavaria, 35 km north of Munich (Germany), comprising a mixed forest stand of around 7 ha characterised mainly by Norway spruce (Picea abies) and large groups of European beech (Fagus sylvatica), with 181–236 trees per ha. The data set, consisting of multispectral images and lidar data, was acquired using a Micasense RedEdge-MX dual camera system and a Riegl miniVUX-1UAV lidar scanner, both mounted on a hexacopter (DJI Matrice 600 Pro). At an altitude of approximately 85 m, two flight missions were conducted at an airspeed of 5 m/s, yielding a ground resolution of 5 cm and a lidar point density of 560 points/m². In total, 1408 trees were marked by visual interpretation of the remote sensing data for training and validating the classifiers. Additionally, 125 trees were surveyed by tacheometric means and used to test the optimized neural networks. The evaluations showed that segmentation using only multispectral imagery performed slightly better than with images generated from lidar data. In terms of F1 score, Mask R–CNN with color infrared (CIR) images achieved 92% in coniferous, 85% in deciduous and 83% in mixed stands. Compared to the images generated from lidar data, these scores are the same for coniferous plots and slightly worse for deciduous and mixed plots, by 4% and 2%, respectively. DETR with CIR images achieved 90% in coniferous, 81% in deciduous and 84% in mixed stands. These scores were 2%, 1%, and 2% worse, respectively, compared to the lidar data images in the same test areas. Interestingly, four conventional segmentation methods performed significantly worse than the CIR-based and lidar-based instance segmentations. Additionally, the results revealed that tree crowns were more accurately segmented by instance segmentation. All in all, the results highlight the practical potential of the two deep learning-based tree segmentation methods, especially in comparison to baseline methods.
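The F1 scores quoted above combine detection precision and recall. A minimal sketch with hypothetical true-positive/false-positive/false-negative counts (not the study's actual counts):

```python
# Hypothetical crown-detection outcome for one test plot:
# matched detections (TP), spurious crowns (FP), missed reference crowns (FN).
tp, fp, fn = 46, 4, 4

precision = tp / (tp + fp)                     # 46/50 = 0.92
recall = tp / (tp + fn)                        # 46/50 = 0.92
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.92
```

In crown-delineation studies, a detection typically counts as a true positive only if its segmented crown overlaps the reference crown beyond some threshold, so the F1 score also reflects segmentation quality.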

Citations: 2
Pixel-based mapping of open field and protected agriculture using constrained Sentinel-2 data
Pub Date: 2023-04-01 Epub Date: 2023-02-27 DOI: 10.1016/j.ophoto.2023.100033
Daniele la Cecilia, Manu Tom, Christian Stamm, Daniel Odermatt

Protected agriculture boosts the production of vegetables, berries and fruits, and it plays a pivotal role in guaranteeing food security globally in the face of climate change. Remote sensing is proven to be useful for identifying the presence of (low-tech) plastic greenhouses and plastic mulches. However, the classification accuracy notoriously decreases in the presence of small-scale farming, heterogeneous land cover and unaccounted seasonal management of protected agriculture. Here, we present the random forest-based pixel-level Open field and Protected Agriculture land cover Classifier (OPAC) developed using Sentinel-2 L2A data. OPAC is trained using tiles from Switzerland over 2 years and the Almeria region in Spain over 1 acquisition day. OPAC classifies eight land covers typical of open field and protected agriculture (plastic mulches, low-tech greenhouses and for the first time high-tech greenhouses). Finally, we assess (1) how the land covers in OPAC are labelled in the Sentinel-2 Scene Classification Layer (SCL) and (2) the correspondence between pixels classified as protected agriculture by OPAC and by the best performing Advanced Plastic Greenhouse Index (APGI). To reduce anthropogenic land covers, we constrain the classification task to agricultural areas retrieved from cadastral data or the Corine Land Cover map. The 5-fold cross-validation reveals an overall accuracy of 92% but other classification scores are moderate when keeping the separation among the three classes of protected agriculture. However, all scores substantially improve upon grouping the three classes into one (with an Intersection Over Union of 0.58 as an average among the scores of the three classes and of 0.98 for one single class). Given the recently acknowledged importance of Sentinel-2 Band 1 (central wavelength of 443 nm), the classification accuracy of OPAC for the Swiss small-scale farming is mostly limited by the band's reduced spatial accuracy (60 m). 
A careful visual assessment indicates that OPAC also achieves satisfactory generalization in a North European area (the Netherlands) and four Mediterranean areas (Spain, Italy, Crete and Turkey) without the need to add location- or time-specific information. There is good agreement between the natural land covers classified by OPAC and by the SCL. However, the SCL has no class for protected agriculture, which is often classified as clouds instead. APGI achieved classification accuracies similar to or lower than those of OPAC. Importantly, the APGI classification depends on a user-defined, space- and time-specific threshold, whereas OPAC does not. Therefore, OPAC paves the way for rapid mapping of protected agriculture at continental scale.
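The masking step described above, restricting the classification to agricultural areas taken from cadastral data or the Corine Land Cover map, can be sketched as follows; the class codes and function names are hypothetical illustrations, not OPAC's actual implementation.

```python
# Hedged sketch of OPAC's masking step: restrict a pixel-level
# classification to agricultural areas. Class codes are hypothetical.

NODATA = 0  # hypothetical code for pixels outside agricultural areas

def constrain_to_agriculture(class_map, agri_mask):
    """Keep predicted classes only where the mask flags agriculture."""
    return [
        [cls if is_agri else NODATA for cls, is_agri in zip(row, mask_row)]
        for row, mask_row in zip(class_map, agri_mask)
    ]

# 2x3 toy scene: 1 = plastic mulch, 2 = low-tech greenhouse, 3 = open field
classes = [[1, 2, 3],
           [3, 1, 2]]
mask = [[True, False, True],
        [False, True, True]]
constrained = constrain_to_agriculture(classes, mask)
print(constrained)  # [[1, 0, 3], [0, 1, 2]]
```

Constraining the task this way shrinks the label space the classifier must separate, which is one reason the approach remains usable in heterogeneous landscapes.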

Daniele la Cecilia, Manu Tom, Christian Stamm, Daniel Odermatt: "Pixel-based mapping of open field and protected agriculture using constrained Sentinel-2 data." ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 8, Article 100033, 2023. DOI: 10.1016/j.ophoto.2023.100033.
Citations: 1
UAV-based reference data for the prediction of fractional cover of standing deadwood from Sentinel time series
Pub Date : 2023-04-01 Epub Date: 2023-03-08 DOI: 10.1016/j.ophoto.2023.100034
Felix Schiefer, Sebastian Schmidtlein, Annett Frick, Julian Frey, Randolf Klinke, Katarzyna Zielewska-Büttner, Samuli Junttila, Andreas Uhl, Teja Kattenborn

Increasing tree mortality due to climate change has been observed globally. Remote sensing is a suitable means for detecting tree mortality and has proven effective for assessing abrupt, large-scale stand-replacing disturbances, such as those caused by windthrow, clear-cut harvesting, or wildfire. Non-stand-replacing tree mortality events (e.g., due to drought) are more difficult to detect with satellite data, especially across regions and forest types. A common limitation is the availability of spatially explicit reference data. To address this issue, we propose automated generation of reference data using uncrewed aerial vehicles (UAVs) and deep learning-based pattern recognition. In this study, we used convolutional neural networks (CNNs) to semantically segment crowns of standing dead trees from 176 UAV-based very high-resolution (<4 cm) RGB orthomosaics that we acquired over six regions in Germany and Finland between 2017 and 2021. The local-level CNN predictions were then extrapolated to landscape level using Sentinel-1 (i.e., backscatter and interferometric coherence), Sentinel-2 time series, and long short-term memory (LSTM) networks to predict the cover fraction of standing deadwood per Sentinel pixel. The CNN-based segmentation of standing deadwood from UAV imagery was accurate (F1-score = 0.85) and consistent across the different study sites and years. The best results for the LSTM-based extrapolation of fractional cover of standing deadwood from Sentinel-1 and -2 time series were achieved using all available Sentinel-1 and -2 bands, the kernel normalized difference vegetation index (kNDVI), and the normalized difference water index (NDWI) (Pearson's r = 0.66, total least squares regression slope = 1.58). The landscape-level predictions showed high spatial detail and were transferable across regions and years.
Our results highlight the effectiveness of deep learning-based algorithms for automated and rapid generation of reference data over large areas using UAV imagery. Potential for improving the presented upscaling approach lies particularly in ensuring the spatial and temporal consistency of the two data sources (e.g., co-registration of very high-resolution UAV data and medium-resolution satellite data). The increasing availability of public UAV imagery on sharing platforms, combined with automated and transferable deep learning-based mapping algorithms, will further increase the potential of such multi-scale approaches.
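Two of the spectral indices named above can be computed directly from Sentinel-2 band reflectances. A minimal sketch follows, assuming the tanh(NDVI²) form of kNDVI and the NIR/SWIR (Gao) variant of NDWI; the abstract does not state which NDWI formulation or bands were used, so the band choices here are assumptions.

```python
import math

# Hedged sketch of two spectral indices used as LSTM predictors. kNDVI
# follows the tanh(NDVI^2) form; NDWI here is the NIR/SWIR (Gao) variant.
# The band choices (B8 = NIR, B4 = red, B11 = SWIR) are assumptions.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def kndvi(nir, red):
    # kernel NDVI: tanh(NDVI^2), bounded in [0, 1)
    return math.tanh(ndvi(nir, red) ** 2)

def ndwi(nir, swir):
    return (nir - swir) / (nir + swir)

# Toy surface reflectances for one Sentinel-2 pixel
b8, b4, b11 = 0.45, 0.08, 0.20
print(round(kndvi(b8, b4), 3), round(ndwi(b8, b11), 3))
```

Because kNDVI saturates more slowly than NDVI in dense canopies, it is a plausible choice for tracking gradual canopy die-off in a time series.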

Felix Schiefer, Sebastian Schmidtlein, Annett Frick, Julian Frey, Randolf Klinke, Katarzyna Zielewska-Büttner, Samuli Junttila, Andreas Uhl, Teja Kattenborn: "UAV-based reference data for the prediction of fractional cover of standing deadwood from Sentinel time series." ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 8, Article 100034, 2023. DOI: 10.1016/j.ophoto.2023.100034.
Citations: 6
Spatial patterns of biomass change across Finland in 2009–2015
Pub Date : 2023-04-01 DOI: 10.1016/j.ophoto.2023.100036
Markus Haakana, Sakari Tuominen, Juha Heikkinen, Mikko Peltoniemi, Aleksi Lehtonen

Forest characteristics vary largely at the regional level and within smaller geographic areas in Finland. Greenhouse gas emissions are related to changes in biomass and to the soil type (e.g. upland soils vs. peatlands). In this paper, our main interest was estimating and explaining spatial patterns of tree biomass change across Finland. We analysed biomass changes on different soil and site types between 2009 and 2015 using the Finnish multi-source national forest inventory (MS-NFI) raster layers. The MS-NFI method combines information from satellite imagery, digital maps and national forest inventory (NFI) field data. Automatic segmentation was used to create silvicultural management and treatment units. The average biomass estimate of the segmented MS-NFI (MS–NFI–seg) map was 73.9 tons ha−1, compared to the national forest inventory estimate of 76.5 tons ha−1 in 2015. Forest soil type had a similar effect on average biomass in the MS–NFI–seg and NFI data. Despite good regional and country-level results, segmentation narrowed the biomass distributions. Hence, biomass changes on segments can be considered only approximate values; moreover, small differences in average biomass may accumulate when map layers from more than one time point are compared. A kappa of 0.44 was achieved when comparing undisturbed and disturbed forest stands in the segmented Global Forest Change data (GFC-seg) and the MS–NFI–seg map. Compared to the NFI, 69% and 62% of disturbed areas were detected by GFC-seg and MS–NFI–seg, respectively. Spatially accurate map data on biomass changes in forest land improve the ability to suggest optimal management alternatives for any patch of land, e.g. in terms of climate change mitigation.
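The reported agreement between the GFC-seg and MS–NFI–seg disturbance maps is measured with Cohen's kappa, which can be computed from a confusion matrix as below; the 2x2 counts are invented for illustration, not the paper's data.

```python
# Cohen's kappa from a square confusion matrix (rows = reference,
# columns = prediction); the counts below are invented for illustration.

def cohens_kappa(confusion):
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of the row and column marginals
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (observed - expected) / (1 - expected)

# Toy agreement table: undisturbed vs. disturbed forest stands
cm = [[40, 10],
      [10, 40]]
print(round(cohens_kappa(cm), 2))  # 0.6
```

Kappa discounts the agreement expected by chance, which matters here because undisturbed stands vastly outnumber disturbed ones, so raw overall accuracy would look deceptively high.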

Markus Haakana, Sakari Tuominen, Juha Heikkinen, Mikko Peltoniemi, Aleksi Lehtonen: "Spatial patterns of biomass change across Finland in 2009–2015." ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 8, Article 100036, 2023. DOI: 10.1016/j.ophoto.2023.100036.
Citations: 0
Estimation of lidar-based gridded DEM uncertainty with varying terrain roughness and point density
Pub Date : 2023-01-01 Epub Date: 2022-12-17 DOI: 10.1016/j.ophoto.2022.100028
Luyen K. Bui, Craig L. Glennie

Light detection and ranging (lidar) scanning systems can provide point clouds with high quality and high point density. Gridded digital elevation models (DEMs) interpolated from laser scanning point clouds are widely used due to their convenience; however, DEM uncertainty is rarely provided. This paper proposes an end-to-end workflow to quantify the uncertainty (i.e., standard deviation) of a gridded lidar-derived DEM. A benefit of the proposed approach is that it does not require independent validation data measured by alternative means. The input point cloud requires per-point uncertainty, which is derived from lidar system observational uncertainty. The uncertainty propagated through interpolation is then derived by the general law of propagation of variances (GLOPOV), with simultaneous consideration of both horizontal and vertical point cloud uncertainties. Finally, the interpolated uncertainty is scaled by point density and a measure of terrain roughness to arrive at the final gridded DEM uncertainty. The proposed approach is tested with two lidar datasets measured in Waikoloa, Hawaii, and Sitka, Alaska. Triangulated irregular network (TIN) interpolation is chosen as the representative gridding approach. The results indicate estimated terrain roughness/point density scale factors ranging between 1 (in flat areas) and 7.6 (in high-roughness areas), with a mean value of 2.3, for the Waikoloa dataset, and between 1 and 9.2, with a mean value of 1.2, for the Sitka dataset. As a result, the final gridded DEM uncertainties are estimated between 0.059 m and 0.677 m, with a mean value of 0.164 m, for the Waikoloa dataset and between 0.059 m and 1.723 m, with a mean value of 0.097 m, for the Sitka dataset.
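For a linear interpolator such as TIN barycentric interpolation, GLOPOV reduces to a weighted sum of variances. The sketch below propagates only vertical per-point uncertainties; the paper's full treatment also carries horizontal uncertainties into the propagation, which is omitted here for brevity.

```python
import math

# Vertical-only sketch of GLOPOV through TIN (barycentric) interpolation:
# for the linear estimate z = w1*z1 + w2*z2 + w3*z3, the propagated
# variance is sum(w_i^2 * sigma_i^2). Horizontal uncertainties omitted.

def barycentric_weights(p, tri):
    (x1, y1), (x2, y2), (x3, y3) = tri
    x, y = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate_with_sigma(p, tri_xy, z, sigma):
    w = barycentric_weights(p, tri_xy)
    z_hat = sum(wi * zi for wi, zi in zip(w, z))
    var = sum((wi * si) ** 2 for wi, si in zip(w, sigma))  # GLOPOV, linear case
    return z_hat, math.sqrt(var)

# Toy triangle with 5 cm per-point vertical standard deviations
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
z_hat, sigma_z = interpolate_with_sigma((0.25, 0.25), tri,
                                        [10.0, 12.0, 11.0], [0.05, 0.05, 0.05])
print(round(z_hat, 2), round(sigma_z, 3))  # 10.75 0.031
```

Note that the interpolated standard deviation is smaller than the 5 cm per-point value because the weights are all below one; this is why the workflow then rescales by point density and terrain roughness.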

Luyen K. Bui, Craig L. Glennie: "Estimation of lidar-based gridded DEM uncertainty with varying terrain roughness and point density." ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 7, Article 100028, 2023. DOI: 10.1016/j.ophoto.2022.100028.
Citations: 0
In-camera IMU angular data for orthophoto projection in underwater photogrammetry
Pub Date : 2023-01-01 Epub Date: 2022-12-07 DOI: 10.1016/j.ophoto.2022.100027
Erica Nocerino, Fabio Menna

Among photogrammetric products, orthophotos are probably the most versatile and the most widely used across many fields of application. In recent years, coupled with the spread of semi-automated survey and processing approaches based on photogrammetry, orthophotos have become almost a standard for monitoring the underwater environment. While on land the definition of the reference coordinate system and projection plane for orthophoto generation is trivial, underwater it may represent a challenge. In this paper, we address the issue of defining the vertical direction and the resulting horizontal plane (levelling) for differential orthorectification. We propose a non-invasive, contactless method based on roll and pitch angular data provided by in-camera IMU sensors and embedded in the Exif metadata of JPEG and raw image files. We show how our approach can be seamlessly integrated into automatic SfM/MVS pipelines, provide the mathematical background, and showcase real-world application results in an underwater monitoring project. The results illustrate the effectiveness of the proposed method and, for the first time, provide a metric evaluation of the definition of the vertical direction with low-cost sensors enclosed in digital cameras directly underwater.
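The core geometric step, recovering the vertical (gravity) direction in the camera frame from the Exif roll and pitch angles, can be sketched as below. The angle convention assumed here (x = roll axis, y = pitch axis, z = optical axis, rotations applied pitch-then-roll) is illustrative only; the paper's actual convention may differ.

```python
import math

# Hedged sketch: vertical direction in the camera frame from the IMU
# roll and pitch stored in Exif metadata. The angle convention assumed
# here (x = roll, y = pitch, z = optical axis) is illustrative only.

def vertical_in_camera_frame(roll_deg, pitch_deg):
    r, p = math.radians(roll_deg), math.radians(pitch_deg)
    # Unit vector of gravity as seen by a camera rolled by r and pitched by p
    return (-math.sin(p),
            math.sin(r) * math.cos(p),
            math.cos(r) * math.cos(p))

# A perfectly levelled camera sees the vertical along its z-axis
vx, vy, vz = vertical_in_camera_frame(0.0, 0.0)
print(vz)  # 1.0
```

Once this vector is known, the levelling rotation for the orthophoto projection plane is simply the rotation that maps it onto the frame's z-axis.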

Erica Nocerino, Fabio Menna: "In-camera IMU angular data for orthophoto projection in underwater photogrammetry." ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 7, Article 100027, 2023. DOI: 10.1016/j.ophoto.2022.100027.
Citations: 1