ForestAlign: Automatic forest structure-based alignment for multi-view TLS and ALS point clouds

Juan Castorena, L. Turin Dickman, Adam J. Killebrew, James R. Gattiker, Rod Linn, E. Louise Loudermilk

Science of Remote Sensing, Volume 11, Article 100194. Published 2025-01-06. DOI: 10.1016/j.srs.2024.100194
https://www.sciencedirect.com/science/article/pii/S2666017224000786
Citations: 0
Abstract
Access to highly detailed models of heterogeneous forests, spanning from the near surface to above the tree canopy at varying scales, is increasingly in demand. Such models enable advanced computational tools for analysis, planning, and ecosystem management. LiDAR sensors, available through terrestrial (TLS) and aerial (ALS) scanning platforms, have become established as primary technologies for forest monitoring due to their capability to rapidly and directly collect precise 3D structural information. Selection of these platforms typically depends on the scales (tree-level, plot, regional) required for observational or intervention studies. Forestry now recognizes the benefits of a multi-scale approach, leveraging the strengths of each platform while minimizing the uncertainties of each individual source. However, effective integration of these LiDAR sources relies heavily on efficient multi-scale, multi-view co-registration or point-cloud alignment methods. In GPS-denied areas, forestry has traditionally relied on target-based co-registration methods (e.g., reflective targets or marked trees), which are impractical at scale. Here, we propose ForestAlign: an effective, target-less, and fully automatic co-registration method for aligning forest point clouds collected from multi-view, multi-scale LiDAR sources. Our co-registration approach employs an incremental alignment strategy, grouping and aggregating 3D points based on increasing levels of structural complexity. This strategy aligns 3D points sequentially from less complex structures (e.g., the ground surface) to more complex ones (e.g., tree trunks/branches, foliage), refining the alignment iteratively. Empirical evidence demonstrates the method's effectiveness in aligning TLS-to-TLS and TLS-to-ALS scans locally, across various ecosystem conditions, including pre/post fire treatment effects. In TLS-to-TLS scenarios, RMSE was less than 0.75 degrees in rotation and 5.5 cm in translation; for TLS-to-ALS, the corresponding errors were less than 0.8 degrees and 8 cm. These results show that ForestAlign co-registers both TLS-to-TLS and TLS-to-ALS scans in such forest environments with high accuracy, without relying on targets.
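The abstract does not give implementation details, but the coarse-to-fine idea it describes (align simple structures first, then progressively add more complex ones while refining) can be illustrated with a staged point-to-point ICP. The sketch below is an assumption-laden toy, not the authors' code: the synthetic "ground/trunk/foliage" cloud, the `kabsch` and `staged_icp` helpers, and the stage ordering are all hypothetical stand-ins for whatever ForestAlign actually does.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def staged_icp(src_stages, dst, iters=20):
    """Coarse-to-fine ICP: stages are ordered from structurally simple (ground)
    to complex (trunks, foliage); each stage adds its points to the working set
    and the accumulated transform is refined on the enlarged set."""
    tree = cKDTree(dst)
    R_acc, t_acc = np.eye(3), np.zeros(3)
    work = np.empty((0, 3))
    for stage in src_stages:
        work = np.vstack([work, stage])
        for _ in range(iters):
            moved = work @ R_acc.T + t_acc
            _, idx = tree.query(moved)           # nearest-neighbor matches
            R, t = kabsch(moved, dst[idx])
            R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc

# Hypothetical demo cloud: a flat "ground", vertical "trunks", scattered "foliage".
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-5, 5, (300, 2)), rng.normal(0, 0.02, 300)])
trunks = np.column_stack([np.repeat(rng.uniform(-4, 4, (8, 2)), 40, axis=0),
                          np.tile(np.linspace(0, 6, 40), 8)])
foliage = rng.uniform(-5, 5, (200, 3)) * [1, 1, 0.5] + [0, 0, 7]
cloud = np.vstack([ground, trunks, foliage])

# Known perturbation: ~3 degrees about z plus a small shift (scan misalignment).
ang = np.deg2rad(3.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.10, -0.05, 0.02])
moved = cloud @ R_true.T + t_true

# Recover the alignment, staging ground -> trunks -> foliage.
n1, n2 = len(ground), len(ground) + len(trunks)
R_est, t_est = staged_icp([moved[:n1], moved[n1:n2], moved[n2:]], cloud)
```

With exact correspondences available, `R_est` should converge to `R_true.T` and `t_est` to `-R_true.T @ t_true`; real TLS/ALS scans differ in density, occlusion, and noise, which is where a method like ForestAlign earns its keep.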