Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-167-2024
V. Karjalainen, N. Koivumäki, T. Hakala, A. George, Jesse Muhojoki, Eric Hyyppa, J. Suomalainen, E. Honkavaara
Abstract. During the last decade, the use of drones in forest monitoring and remote sensing has become highly popular. While most monitoring tasks take place at high altitudes in the open air, in the last few years drones have also attracted interest for under-canopy data collection. However, flying under the forest canopy is a complex task, since the drone cannot use Global Navigation Satellite Systems (GNSS) for positioning and has to continually avoid obstacles, such as trees, branches, and rocks, on its path. For that reason, drone-based data collection under the forest canopy is still mainly based on manual control by human pilots. Autonomous flying in GNSS-denied, obstacle-rich environments has been an actively researched topic in robotics in recent years, and various open-source methods have been published in the literature. However, most of this research is done purely from the point of view of robotics, and only a few studies have been published at the boundary of forest science and robotics that take steps towards autonomous forest data collection. In this study, a prototype of an autonomous under-canopy drone is developed and implemented using state-of-the-art open-source methods. The prototype uses the EGO-Planner-v2 trajectory planner for autonomous obstacle avoidance and VINS-Fusion for GNSS-free, visual-inertial-odometry-based pose estimation. The flying performance of the prototype is evaluated in multiple test flights with real hardware in two boreal forest test plots of medium and difficult density. Furthermore, first results on forest data collection performance are obtained by post-processing data collected with a low-cost stereo camera during one test flight into a 3D point cloud and performing diameter at breast height (DBH) estimation. In the medium-density forest, all seven test flights were successful, but in the difficult test forest, one of eight test flights failed. The RMSE of the DBH estimation was 3.86 cm (12.98 %).
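The reported accuracy figure is a standard root-mean-square error. A minimal sketch of how the absolute (cm) and relative (%) RMSE would be computed from per-tree DBH estimates and field references — the numbers below are hypothetical, not the study's data:

```python
import math

def dbh_rmse(estimated_cm, reference_cm):
    """Root-mean-square error of DBH estimates, returned both in
    absolute terms (cm) and relative to the mean reference DBH (%)."""
    residuals = [e - r for e, r in zip(estimated_cm, reference_cm)]
    rmse = math.sqrt(sum(d * d for d in residuals) / len(residuals))
    mean_ref = sum(reference_cm) / len(reference_cm)
    return rmse, 100.0 * rmse / mean_ref

# Hypothetical per-tree values (cm), for illustration only:
est = [22.1, 30.4, 18.0, 41.2]
ref = [20.0, 32.0, 17.5, 40.0]
rmse_cm, rmse_pct = dbh_rmse(est, ref)
```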
Title: Autonomous robotic drone system for mapping forest interiors
Journal: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-211-2024
E. Lo, H. Lozano Bravo, Nathan Hui, E. Nocerino, F. Menna, D. Rissolo, F. Kuester
Abstract. Photogrammetry is a valuable tool for 3D documentation, mapping, and monitoring of underwater environments. However, the ground control surveys necessary for georeferencing and validating the reconstructed bathymetry are difficult and time-consuming to perform underwater, and thus impractical to scale to larger areas. Underwater direct georeferencing, using a differential GNSS receiver synchronized with an underwater camera system, offers an attractive alternative to surveying underwater ground control points when the seafloor is clearly visible from the surface. In this paper, the design of an underwater direct georeferencing system built mostly from commercial off-the-shelf components is presented. The accuracy of the system is evaluated against a geodetic survey based on trilateration and leveling, as well as against RTK (real-time kinematic) positioning using a tilt-compensated GNSS receiver mounted on an extended pole that allows measuring points at water depths of up to 7 m. Tests were conducted in a controlled outdoor pool setting at depths of 1–3 m, as well as in a 10 m × 10 m test plot established on the seafloor in a near-shore environment off Catalina Island, California, at depths of 4–10 m. Comparing the geometry of the photogrammetric reconstruction with the geodetic survey yielded sub-centimeter consistency, and 1 mm accuracy in length measurement was achieved when compared with calibrated 0.5 m scale bars. Through repeated surveys of the same area, the repeatability of georeferencing is demonstrated to be within expectations for differential GNSS positioning, with horizontal errors at the sub-centimeter level and vertical errors of up to 3 cm in the worst cases. These tests demonstrate the benefits of the underwater direct georeferencing approach in shallow waters: it scales up much more easily than measuring underwater ground control points with traditional approaches, making it an ideal option for collecting accurate bathymetry over large coastal areas with clear waters.
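The core of direct georeferencing is transferring the measured GNSS antenna position to the camera projection centre via a lever-arm reduction in the platform's body frame. A minimal sketch of that step, with hypothetical offset and attitude values (the paper's actual calibration is not reproduced here):

```python
import numpy as np

def camera_position(antenna_pos, R_body_to_world, lever_arm_body):
    """Lever-arm reduction for direct georeferencing: the camera
    projection centre is the antenna position plus the antenna->camera
    offset, rotated from the body frame into the world frame."""
    return antenna_pos + R_body_to_world @ lever_arm_body

# Hypothetical values: camera 1.5 m directly below the antenna,
# platform level (body frame aligned with the world frame).
R = np.eye(3)
antenna = np.array([100.0, 200.0, 5.0])      # world coordinates (m)
offset = np.array([0.0, 0.0, -1.5])          # body-frame offset (m)
cam = camera_position(antenna, R, offset)    # -> [100., 200., 3.5]
```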
Title: Evaluation of the Accuracy of Photogrammetric Reconstruction of Bathymetry Using Differential GNSS Synchronized with an Underwater Camera
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-219-2024
L. Markelin, Mikko Sippo, Petri Leiso, E. Honkavaara
Abstract. The production of orthophotos and orthomosaics from airborne imaging has been a standard procedure of national mapping agencies for decades. Various factors affect the colour, or tones, of the images: time of day and year, atmosphere, illumination conditions, view and illumination angles, BRDF effects (bi-directional reflectance distribution function), the object, the sensor, and the imaging system as a whole. A quantitative and automated solution for creating evenly coloured image mosaics is a method based on radiometric block adjustment, which has similarities to the better-known geometric block adjustment. We have applied a radiometric block adjustment method called RadBA, originally developed for drone image blocks collected with a hyperspectral sensor, to an image block collected with a large-format photogrammetric camera. The goals of our work are: 1) to speed up orthophoto deliveries to the Finnish Food Authority, used for monitoring EU farming subsidies, 2) to improve the quality of images delivered to the Finnish Forest Centre for forest inventory, and 3) to automate, accelerate, and improve the current colour balancing process of orthophotos at the National Land Survey of Finland. RadBA-based tonal processing with a 4-parameter BRDF correction clearly improved the tonal quality of an image mosaic collected on three different dates. We still need to automate various steps of the RadBA workflow and improve the final steps of converting 16-bit-per-band mosaics into visually pleasing 8-bit-per-band mosaics. Our long-term goal is to create a tonally high-quality, seamless orthomosaic of the whole of Finland.
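This is not the RadBA algorithm itself, but the basic idea of a radiometric block adjustment can be illustrated with its simplest building block: fitting a per-image linear radiometric model from tie points observed in overlapping images and correcting one image towards the other. The digital numbers below are hypothetical:

```python
import numpy as np

# Hypothetical tie-point digital numbers: the same ground points
# observed in two overlapping images A and B.
dn_a = np.array([52.0, 80.0, 120.0, 150.0, 200.0])
dn_b = np.array([60.0, 91.0, 135.0, 168.0, 221.0])

# Least-squares fit of a per-image linear radiometric model
# DN_B ~ gain * DN_A + offset (the simplest relative model; a full
# block adjustment solves such parameters for all images jointly,
# plus BRDF terms).
gain, offset = np.polyfit(dn_a, dn_b, 1)

# Correct image B's tones towards image A's radiometry.
dn_b_corrected = (dn_b - offset) / gain
```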
Title: From research to production: radiometric block adjustment to automate and improve tonal adjustment of orthomosaics created from airborne images
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-363-2024
H. Sardemann, C. Mulsow, Laure-Anne Gueguen, Gottfried Mandlburger, Hans-Gerd Maas
Abstract. When multimedia photogrammetry is used to generate point clouds of submerged objects or of the water bottom, Snell's law has to be considered. When the images are taken from the air, image rays are refracted at the air-water interface, so the collinearity equations are no longer valid. Bundle block adjustment can still be solved by adding terms that account for Snell's law. Existing approaches usually assume that the water surface is flat. Refractive indices and water height can either be measured separately or included as unknowns in the adjustment. However, when the water surface is not flat due to the presence of waves, assuming a planar water surface leads to large geometric errors. This work analyzes the significance of those errors and proposes a way of including water surface parameters as unknowns in the bundle block adjustment; both analyses are based on simulated data. The simulation reproduces multiple images taken simultaneously, e.g. from synchronized UAV cameras or from cameras on tripods.
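The refraction that invalidates the collinearity equations follows the vector form of Snell's law. A short sketch of a single ray crossing a flat air-water interface; the paper's adjustment embeds such terms per ray, and for a wavy surface the normal would vary per intersection point:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract a unit ray direction `d` at a surface with unit normal
    `n` (pointing into the incidence medium), going from refractive
    index n1 to n2, using the vector form of Snell's law."""
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Air-to-water example: a ray 30 deg from the vertical hits a flat
# horizontal surface (normal pointing up into the air).
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
n = np.array([0.0, 0.0, 1.0])
t = refract(d, n, 1.0, 1.33)   # refracted ray, bent towards the normal
```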
Title: Multimedia Photogrammetry with non-planar Water Surfaces – Accuracy Analysis on Simulation Basis
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-17-2024
S. Ban, Taejung Kim
Abstract. Rapid progress in satellite technology has led to a noticeable surge in the availability of Earth observation satellite images, which are collected daily from satellites deployed worldwide. However, even with advanced satellite positioning equipment, varying levels of residual positional error remain, which hinders satellite image utilization. Therefore, positional errors between satellite images must be corrected before use. Relative geometric correction of satellite images is a technique that adjusts geometric displacements based on the images' relative positional relationships in image or object space. In this study, we propose a homography-based bundle adjustment for relative geometric correction of multi-sensor satellite images. Our method aims to estimate the optimal ground plane onto which the images are projected and to quickly generate the result image. For the experiments, orthorectified satellite images with various resolutions and georeferencing information were used as input data. The experimental results showed that the average error, initially 4.96 pixels before relative geometric correction, decreased to 1.73 pixels after applying the proposed method.
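The idea of registering images on a common ground plane can be illustrated by mapping pixel coordinates through a 3×3 homography; the matrix and tie points below are hypothetical, not the study's estimates:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography
    (homogeneous multiply, then divide by the third coordinate)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical small scale-plus-translation homography aligning one
# orthorectified image to another on a common ground plane.
H = np.array([[1.001, 0.0,   3.2],
              [0.0,   1.001, -1.5],
              [0.0,   0.0,    1.0]])
tie_pts = np.array([[100.0, 100.0], [500.0, 120.0], [300.0, 400.0]])
aligned = apply_homography(H, tie_pts)
# The mean residual of `aligned` against matched points in the
# reference image would be the "average error" in pixels.
```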
Title: Precise Relative Geometric Correction for Multi-Sensor Satellite Images
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-281-2024
Luca Morelli, G. Mazzacca, P. Trybała, F. Gaspari, F. Ioli, Zhenyu Ma, F. Remondino, Keith Challis, Andrew Poad, Alex Turner, Jon P. Mills
Abstract. The orientation of crowdsourced and multi-temporal image datasets presents a challenging task for traditional photogrammetry. Indeed, traditional image matching approaches often struggle to find accurate and reliable tie points in images that appear significantly different from one another. In this paper, in order to preserve the memory of the Sycamore Gap tree, a symbol of Hadrian's Wall that was felled in an act of vandalism in September 2023, deep-learning-based features trained specifically on challenging image datasets were employed to overcome the limitations of traditional matching approaches. We demonstrate how unordered crowdsourced images and UAV videos can be oriented and used for 3D reconstruction, together with a recently acquired terrestrial laser scanner point cloud for scaling and referencing. This allows the memory of the Sycamore Gap tree to live on and exhibits the potential of photogrammetric AI (Artificial Intelligence) for reverse engineering lost heritage.
Title: The Legacy of Sycamore Gap: The Potential of Photogrammetric AI for Reverse Engineering Lost Heritage with Crowdsourced Data
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-273-2024
F. Menna, E. Nocerino, A. Calantropio
Abstract. Three-dimensional reference points (RPs) are fundamental for datum definition and metric validation in many photogrammetric applications, and are often used as ground control points (GCPs) to constrain the bundle adjustment solution. Nevertheless, underwater survey operations present challenges due to the physical characteristics of the water itself and the technological limitations of available instruments. Traditional methods to collect RPs underwater rely on direct geodetic measurements such as slope distances, height differences, and depths from a dive computer. These methods can be time-consuming and impractical to scale up to large areas, particularly in deeper waters. This paper reports on the use of a custom-developed, low-cost pressure sensor to measure depths and height differences of underwater RPs with survey-grade accuracy. Laboratory and open-water tests demonstrated the method's potential, achieving an RMSE in Z of less than 1 mm over a 1.5 m height range in static water in the laboratory, and a sub-centimetre RMSE of relative depth differences in shallow-water tests carried out at two different sea locations with a maximum significant wave height of 9 cm. The sensor also proved effective for constraining a corridor-like underwater photogrammetric survey, reducing the bending of the 3D model with respect to the free network solution (the RMSE in Z dropped from 10 cm to less than 1 cm). Preliminary tests of the presented approach demonstrated several advantages over other established methods, including cost reduction (compared to commercial survey instruments), rapidity, safety, and accuracy, especially at depths greater than 3–5 m, where other approaches (e.g., GNSS or topographic measurements) cannot be applied.
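The physical relation a pressure sensor exploits is the hydrostatic equation. A minimal sketch, assuming constant seawater density; note that resolving millimetre depth differences means resolving roughly 10 Pa of pressure:

```python
RHO_SEAWATER = 1025.0   # kg/m^3, assumed constant density
G = 9.80665             # m/s^2, standard gravity

def depth_from_pressure(p_pa, p_surface_pa):
    """Hydrostatic depth below the surface from absolute pressure,
    assuming constant water density: h = (p - p0) / (rho * g)."""
    return (p_pa - p_surface_pa) / (RHO_SEAWATER * G)

# One decibar (10 kPa) of excess pressure is roughly one metre of
# seawater depth:
h = depth_from_pressure(111325.0, 101325.0)
```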
Title: High-accuracy height differences using a pressure sensor for ground control points measurement in underwater photogrammetry
Pub Date: 2024-06-11 | DOI: 10.5194/isprs-archives-xlviii-2-2024-33-2024
A. Bienert, Katja Richter, Sophia Boehme, Hans-Gerd Maas
Abstract. Monitoring tree growth processes is relevant for ecological research and for understanding the intricate relationship between vegetation and the environment. Time series analyses have revealed a correlation between leaf emergence timing and climate change, with earlier leaf emergence attributed to global warming. While traditional forest inventory methods struggle to quantify growth processes on small scales, terrestrial laser scanning provides a powerful alternative that delivers high-resolution 3D information. This study explores the use of high-frequency, hyper-temporal terrestrial laser scanning data to quantitatively describe deciduous tree growth, tested on a pedunculate oak (Quercus robur). The research addresses key questions about detecting leaf growth in hyper-temporal terrestrial laser scanning data. Additionally, it explores how 3D tree parameters and point cloud comparisons capture leaf and tree growth throughout the year. Results from M3C2 point cloud analyses indicate that temporary branch movements correlate with precipitation. Over the year, branch movements were observed to increase with distance from the trunk.
{"title":"Investigating the Potential of Hyper-Temporal Terrestrial Laser Point Clouds for Monitoring Deciduous Tree Growth","authors":"A. Bienert, Katja Richter, Sophia Boehme, Hans-Gerd Maas","doi":"10.5194/isprs-archives-xlviii-2-2024-33-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-33-2024","url":null,"abstract":"Abstract. Monitoring tree growth processes is relevant for ecological research and understanding the intricate relationship between vegetation and the environment. Time series analyses have revealed a correlation between leaf emergence timing and climate change, with earlier leaf emergence attributed to global warming. While traditional forest inventory methods struggle to quantify growth processes on small scales, terrestrial laser scanning provides a powerful alternative for providing high-resolution 3D information. This study explores the use of high-frequency hyper-temporal terrestrial laser scanning data to quantitatively describe deciduous tree growth, tested on a pedunculate oak (Quercus robur). The research aims to address key questions about detecting leaf growth in hypertemporal terrestrial laser scanning data. Additionally, it explores how 3D tree parameters and point cloud comparisons capture leaf and tree growth throughout the year. Results from M3C2 point cloud analyses indicate that the temporary branch movements correlate with precipitation. 
Over the year, branch movements were detected to increase with growing distance from the trunk.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"51 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
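The M3C2 comparison used in the study measures change between two point-cloud epochs along the surface normal of each core point, within a cylindrical neighborhood. The sketch below is an illustrative NumPy version of that idea, not the authors' implementation; the function name, cylinder radius, and synthetic "branch movement" data are all assumptions for demonstration.

```python
import numpy as np

def m3c2_distance(core_pt, normal, cloud_a, cloud_b, radius=0.05):
    """M3C2-style distance at one core point: project each epoch's points
    that fall inside a cylinder (axis = normal through core_pt) onto the
    normal, and return the difference of the mean projections."""
    normal = normal / np.linalg.norm(normal)

    def mean_offset(cloud):
        rel = cloud - core_pt                                  # vectors from core point
        along = rel @ normal                                   # signed distance along normal
        radial = np.linalg.norm(rel - np.outer(along, normal), axis=1)
        inside = radial <= radius                              # points within the cylinder
        return along[inside].mean() if inside.any() else np.nan

    return mean_offset(cloud_b) - mean_offset(cloud_a)

# Two synthetic "epochs": the second shifted 2 cm along +z (a branch movement).
rng = np.random.default_rng(0)
epoch1 = rng.uniform(-0.1, 0.1, (500, 3))
epoch2 = epoch1 + np.array([0.0, 0.0, 0.02])
d = m3c2_distance(np.zeros(3), np.array([0.0, 0.0, 1.0]), epoch1, epoch2)
print(round(d, 3))  # ≈ 0.02, recovering the imposed displacement
```

In a real hyper-temporal analysis this distance would be evaluated per core point per scan epoch, yielding the movement time series that the study correlates with precipitation.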
Optimization of Texture Rendering of 3D Building Model Based on Vertex Importance
Pub Date : 2024-06-11  DOI: 10.5194/isprs-archives-xlviii-2-2024-371-2024
Wenfei Shen, Liang Huo, Tao Shen, Miao Zhang, Yucai Li
Abstract. In 3D building models, large numbers of differently sized texture maps increase the number of data-loading and draw batches, which greatly reduces rendering efficiency. This paper therefore proposes a texture-set mapping method based on vertex importance. First, texture maps are merged using a 2D bin-packing algorithm and a series of Mipmap textures is generated. Next, the curvature, texture variability, and location information of each vertex are computed, normalized, and weighted to obtain a per-vertex importance score. Finally, textures at different Mipmap levels are remapped according to vertex importance. Experiments demonstrate that the algorithm reduces the amount of texture data while avoiding the rendering pressure caused by the still-large merged textures, thereby improving the rendering efficiency of the model.
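The normalize-weight-combine step above can be sketched as follows. This is a minimal illustration assuming min-max normalization, example weights, and a simple linear mapping from importance to Mipmap level; the paper does not specify these choices, so all names and values here are hypothetical.

```python
import numpy as np

def vertex_importance(curvature, texture_var, position_score, weights=(0.4, 0.3, 0.3)):
    """Normalize each per-vertex cue to [0, 1] and combine with weights."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    w_curv, w_tex, w_pos = weights
    return w_curv * norm(curvature) + w_tex * norm(texture_var) + w_pos * norm(position_score)

def mip_level(importance, max_level=4):
    """Map importance to a Mipmap level: important vertices keep level 0 (full
    resolution); unimportant vertices get coarser (higher) levels."""
    return np.round((1.0 - importance) * max_level).astype(int)

# Three example vertices: prominent corner, flat hidden face, intermediate case.
curv = [0.9, 0.1, 0.5]; tex = [0.8, 0.2, 0.4]; pos = [1.0, 0.0, 0.5]
imp = vertex_importance(curv, tex, pos)
print(mip_level(imp))  # → [0 4 2]
```

The design point is that the weighting collapses three heterogeneous cues into one scalar per vertex, so the renderer only needs a single lookup to choose which pre-generated Mipmap level to sample.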
Extraction of block walls from point clouds measured by Mobile Mapping System
Pub Date : 2024-06-11  DOI: 10.5194/isprs-archives-xlviii-2-2024-309-2024
Taiga Odaka, Hiroki Harada, Kei Otomo, Kiichiro Ishikawa
Abstract. To address the collapse hazard posed by block walls, which are widely used in Japan, this study proposes a method for extracting block walls from 3D point cloud data measured by a Mobile Mapping System (MMS). Unlike conventional methods, it identifies block walls from geometric features alone, without relying on MMS trajectory data or deep learning inference. In addition, the computational load is low and manual correction can be minimized. In experiments on point cloud data collected in urban areas in Japan, the method achieved a precision of 0.750, a recall of 0.810, and an F-measure of 0.779. The results demonstrate the effectiveness of the method for automatic extraction of block walls and rapid assessment of collapse risk, and it is expected to contribute to safety measures in areas with high seismic risk.
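The reported F-measure is the harmonic mean of the reported precision and recall, which the snippet below verifies (standard F1 definition; the helper name is ours, not the authors'):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported precision and recall reproduce the reported F-measure.
print(round(f_measure(0.750, 0.810), 3))  # → 0.779
```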