Pub Date: 2024-04-01 | Epub Date: 2024-02-06 | DOI: 10.1016/j.ophoto.2024.100058
Lukas Lucks, Uwe Stilla, Ludwig Hoegner, Christoph Holst
This paper introduces methods for monitoring rock slope movements in Alpine environments based on terrestrial images. The first method is a photogrammetric point cloud-based deformation analysis relying on M3C2. Although effective in identifying large changes, the method tends to underestimate smaller-scale movements. A feature-based method is presented to address this limitation, using SIFT features to track keypoints in images from different epochs. The resulting automatically detected 3D vectors offer high spatial density and enable the detection of small-scale movements on the order of a few millimeters. The results are incorporated into a deformation analysis that allows statistically based conclusions about the ongoing movements. The workflow relies on georegistration using Ground Control Points. To investigate the possibility of avoiding these points, a registration method based on the ICP algorithm and M3C2 is tested. The study utilizes data from an active landslide site at Hochvogel Mountain in the Alps, analyzing changes and deformations from 2018 to 2021 and revealing an average motion of 75 mm.
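The core idea of tracking displacement between image epochs can be sketched in miniature. The snippet below estimates the pixel shift between two co-registered images using phase correlation, a deliberately simplified stand-in for the paper's SIFT keypoint matching; the function name and array shapes are illustrative, not from the paper.

```python
import numpy as np

def estimate_shift(epoch_a, epoch_b):
    """Estimate the integer pixel displacement from epoch_a to epoch_b
    via phase correlation (a simplified stand-in for SIFT tracking)."""
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift
    cross = np.fft.fft2(epoch_b) * np.conj(np.fft.fft2(epoch_a))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map wrap-around peaks back to signed offsets
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

Applied per image patch around tracked keypoints, such 2D shifts would then be triangulated into 3D displacement vectors.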
Title: Photogrammetric rockfall monitoring in Alpine environments using M3C2 and tracked motion vectors. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100058.
Pub Date: 2024-04-01 | Epub Date: 2024-03-01 | DOI: 10.1016/j.ophoto.2024.100061
Mikael Reichler, Josef Taher, Petri Manninen, Harri Kaartinen, Juha Hyyppä, Antero Kukko
Real-time semantic segmentation of point clouds is increasingly important in applications related to 3D city modelling and mapping, automated forest inventory, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected from 2D line scanners in urban environments using encoder-decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced profiles) 2D laser scanner measurement stream without odometry information. In addition, we investigate the impact of multispectral data on segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at wavelengths of 905 nm and 1550 nm, and comprises 228 million points (39,583 scans) in total. The data was divided into 13 classes representing various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments we achieved 71% and 28% relative increases in segmentation mIoU (43.5 mIoU) compared to the purely single-wavelength reference experiments, which achieved 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm).
Our findings show that it is possible to semantically segment 2D line scanner data with good results by combining consecutive scans without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.
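The multi-scan input idea can be illustrated in a few lines: consecutive 1D line-scan profiles are stacked into overlapping 2D rasters without any odometry. The shapes and channel layout below are assumptions for illustration, not the paper's exact format.

```python
import numpy as np

def rasterize_scans(scans, window=24):
    """Stack consecutive line-scan profiles into overlapping 2D rasters.

    scans: array of shape (n_scans, n_points_per_scan, n_channels),
    e.g. channels = (range, intensity_905nm, intensity_1550nm).
    Returns windows of shape (n_windows, window, n_points, n_channels),
    built purely from the scan order, with no pose information.
    """
    n = scans.shape[0]
    return np.stack([scans[i:i + window] for i in range(n - window + 1)])
```

Each window can then be fed to a 2D encoder-decoder network, which is how stacking 24 scans (versus 1) adds the spatial context the abstract credits for the mIoU gain.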
Title: Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100061.
Predicting crop yield using deep learning (DL) and remote sensing is a promising technique in agriculture. In smallholder agriculture (<2 ha), where 84% of the world's farms operate, it is crucial to build a model that can be useful across several fields (high spatial transferability). However, enhancing spatial model transferability in a small-scale setting faces significant challenges, including spatial autocorrelation, heterogeneity and scale dependence of spatial dynamics, as well as the need to address limited data points. This study aimed to test the hypothesis that spatial cross validation (SCV) is a more suitable model validation practice than random cross validation (RCV) for enhancing model transferability in spatial prediction in a small-scale farming setting. We compared the performances of DL models that predict crop yield for several settings, including three crop types and two DL architectures, based on RCV with and without overlapping samples and on SCV. Notably, we conducted model performance tests on external, equally sized fields instead of the field used for training. We used high-resolution RGB imagery acquired with a drone as input. Our results show that the models using SCV outperformed those using RCV when the models were tested on external fields (on average r = 0.37 for SCV, r = 0.18 for RCV with overlap and r = 0.07 without), even though the models using SCV showed a substantially lower performance for cross validation (CV) than those using RCV (r with SCV and RCV w/o overlap = 0.73 and 0.98/0.73, respectively). The results suggest that RCV leads to over-optimism by overfitting the spatial structure and remembering image-specific information (so-called memorization). Our study offers the first empirical evidence in agriculture that SCV is preferable to RCV in small-field settings for making DL models more transferable.
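The contrast between RCV and SCV comes down to how folds are formed: random folds scatter spatially autocorrelated neighbours across train and test, while spatial folds keep them together. A minimal sketch, assuming simple 1D quantile blocking on one coordinate (the study's actual blocking scheme is not described here):

```python
import numpy as np

def spatial_folds(coord, n_blocks=3):
    """Assign samples to spatially contiguous folds by binning one
    spatial coordinate into quantile blocks (minimal spatial CV).

    Unlike a random split, neighbouring samples land in the same fold,
    so the test fold is spatially disjoint from the training folds.
    """
    edges = np.quantile(coord, np.linspace(0, 1, n_blocks + 1))
    fold = np.searchsorted(edges, coord, side="right") - 1
    return np.clip(fold, 0, n_blocks - 1)
```

Holding out one block at a time then gives a spatial cross-validation loop; a 2D variant would block on both coordinates.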
Title: Improving spatial transferability of deep learning models for small-field crop yield prediction. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100064.
Pub Date: 2024-02-01 | DOI: 10.1016/j.ophoto.2024.100059
Miguel Vallejo, K. Anders, O. Ajayi, Olaf Bubenzer, B. Höfle
Title: Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning. ISPRS Open Journal of Photogrammetry and Remote Sensing, Article 100059.
Pub Date: 2024-01-01 | Epub Date: 2023-12-25 | DOI: 10.1016/j.ophoto.2023.100056
Felix Dahle, Roderik Lindenbergh, Bert Wouters
The TriMetrogon Aerial (TMA) archive is an archive of historical images of Antarctica taken by the US Navy between 1940 and 2000 with analogue cameras. The analysis of such historic data can give a view of Antarctica's glaciers predating modern satellite imagery and provide unique insights into the long-term impact of changing climate conditions with essential validation data for climate modelling. However, the lack of semantic information for these images presents a challenge for large-scale computer-driven analysis.
Such information can be added to the data using semantic segmentation, but traditional algorithms fail on these scanned historical grayscale images due to varying image quality, missing colour information and artefacts. To address this, we present a deep-learning-based U-net workflow. Our approach includes creating training data by pre-processing and labelling the raw images. Furthermore, different versions of the U-net are trained to optimize its hyperparameters and augmentation methods. With the optimal hyperparameters and augmentation methods, a final model was trained for a use case: segmenting 118 images covering Adelaide Island.
We tested our approach by segmenting challenging historical images using a U-net model with just 80 training images, achieving an accuracy of 73% for 20 validation images. While no test data is available for our use case, a visual examination of the segmented images shows that our method performs effectively.
The comparison of the hyperparameters and augmentation methods provides directions for training other U-net-based models, so that the presented workflow can be used to segment other archives of historical imagery. Additionally, the labelled training data and the segmented images are publicly available at https://github.com/fdahle/antarctic_segmentation.
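One of the tuned ingredients is data augmentation. A minimal sketch of paired geometric augmentation for a grayscale image and its label mask; the paper's exact augmentation set is not reproduced here, and the function is purely illustrative.

```python
import numpy as np

def augment(image, mask, rng):
    """Apply the same random flip/90-degree rotation to an image and
    its label mask, so pixels and class labels stay aligned."""
    k = int(rng.integers(4))            # number of 90-degree rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:              # horizontal flip half the time
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask
```

Applying the identical transform to image and mask is the key constraint for segmentation: augmenting only the image would silently corrupt the labels.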
Title: Revisiting the Past: A comparative study for semantic segmentation of historical images of Adelaide Island using U-nets. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 11, Article 100056.
Pub Date: 2024-01-01 | Epub Date: 2023-11-24 | DOI: 10.1016/j.ophoto.2023.100051
Eleonora Maset, Luca Magri, Andrea Fusiello
This paper examines the effects of imposing relative orientation constraints on bundle adjustment, and provides a full derivation of the Jacobian matrix for such an adjustment, which can facilitate other implementations of bundle adjustment with constrained cameras. We present empirical evidence demonstrating improved accuracy and reduced computational load when these constraints are imposed.
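The practical payoff of such constraints is easy to see by counting unknowns: for a rigidly mounted multi-head rig, each exposure needs only one 6-DoF rig pose plus one fixed 6-DoF relative pose per additional head, instead of an independent pose per camera per exposure. The parametrization below is an illustration of that idea, not the paper's exact formulation.

```python
import numpy as np

def head_pose(rig_pose, rel_pose):
    """World pose of one camera head: the per-exposure rig pose
    composed with its fixed relative pose (4x4 homogeneous transforms)."""
    return rig_pose @ rel_pose

def pose_unknowns(n_exposures, n_heads, constrained=True):
    """Count 6-DoF pose unknowns in the bundle adjustment."""
    if constrained:
        # one rig pose per exposure + one fixed relative pose per
        # additional head (the reference head defines the rig frame)
        return 6 * n_exposures + 6 * (n_heads - 1)
    return 6 * n_exposures * n_heads  # fully independent cameras
```

For 100 exposures of a 5-head rig this shrinks the pose unknowns from 3000 to 624, which is consistent with the reduced computational load the abstract reports.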
Title: Principled bundle block adjustment with multi-head cameras. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 11, Article 100051.
Pub Date: 2024-01-01 | Epub Date: 2023-12-06 | DOI: 10.1016/j.ophoto.2023.100053
Mariya Velikova, Juan Fernandez-Diaz, Craig Glennie
The ATLAS sensor onboard the ICESat-2 satellite is a photon-counting lidar (PCL) with a primary mission to map Earth's ice sheets. A secondary goal of the mission is to provide vegetation and terrain elevations, which are essential for calculating the planet's biomass carbon reserves. A drawback of ATLAS is that the sensor does not provide reliable terrain height estimates in dense, high-closure forests because only a few photons reach the ground through the canopy and return to the detector. This low penetration translates into lower accuracy for the resultant terrain model. Tropical forest measurements with ATLAS have an additional problem estimating top of canopy because of frequent atmospheric phenomena such as fog and low clouds that can be misinterpreted as top of the canopy. To alleviate these issues, we propose using a ConvPoint neural network for 3D point clouds and high-density airborne lidar as training data to classify vegetation and terrain returns from ATLAS. The semantic segmentation network provides excellent results and could be used in parallel with the current ATL08 noise filtering algorithms, especially in areas with dense vegetation. We use high-density airborne lidar data acquired along ICESat-2 transects in Central American forests as a ground reference for training the neural network to distinguish between noise photons and photons lying between the terrain and the top of the canopy. Each photon event receives a label (noise or signal) in the test phase, providing automated noise-filtering of the ATL03 data. The terrain and top of canopy elevations are subsequently aggregated in 100 m segments using a series of iterative smoothing filters. We demonstrate improved estimates for both terrain and top of canopy elevations compared to the ATL08 100 m segment estimates. 
The neural network (NN) noise filtering reliably eliminated outlier top of canopy estimates caused by low clouds, and aggregated root mean square error (RMSE) decreased from 7.7 m for ATL08 to 3.7 m for NN prediction (18 test profiles aggregated). For terrain elevations, RMSE decreased from 5.2 m for ATL08 to 3.3 m for the NN prediction, compared to airborne lidar reference profiles.
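The reported RMSE values are aggregated over 100 m segments. A minimal sketch of that kind of segment-wise comparison (function and variable names are illustrative, not from the ATL08 product definition):

```python
import numpy as np

def segment_rmse(along_track, pred, ref, seg_len=100.0):
    """RMSE of segment-mean elevation differences between a predicted
    and a reference profile, aggregated in fixed along-track segments."""
    seg = np.floor(np.asarray(along_track) / seg_len).astype(int)
    diffs = [pred[seg == s].mean() - ref[seg == s].mean()
             for s in np.unique(seg)]
    return float(np.sqrt(np.mean(np.square(diffs))))
```

Comparing such segment-level RMSE for the ATL08 filter versus the NN-filtered photons is the style of evaluation behind the 7.7 m to 3.7 m improvement quoted above.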
Title: ICESat-2 noise filtering using a point cloud neural network. ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 11, Article 100053.
Pub Date: 2024-01-01 | Epub Date: 2023-12-27 | DOI: 10.1016/j.ophoto.2023.100055
Mieke Kuschnerus, Roderik Lindenbergh, Sander Vos, Ramon Hanssen
In view of climate change, understanding and managing its effects on coastal areas and adjacent cities is essential. Permanent Laser Scanning (PLS) is a successful technique for observing notably sandy coasts not just incidentally or once a year, but (nearly) continuously over extended periods of time. The collected observations form a 4D point cloud data set representing the evolution of the coast and provide the opportunity to assess change processes at a high level of detail. For an exemplary location in Noordwijk, The Netherlands, three years of hourly point clouds were acquired on a 1 km long section of a typical Dutch urban sandy beach. Often, the so-called level of detection is used to assess point cloud differences between two epochs. To explicitly incorporate the temporal dimension of the height estimates from the point cloud data set, we revisit statistical testing theory. We apply multiple hypothesis testing on elevation time series in order to identify different coastal processes, such as aeolian sand transport or bulldozer works. We then estimate the minimal detectable bias for different alternative hypotheses, to quantify the minimal elevation change that can be estimated from the PLS observations over a certain period of time. Additionally, we analyse potential error sources and influences on the elevation estimates, provide orders of magnitude, and suggest possible ways to deal with them. Finally, we conclude that elevation time series from a long-term PLS data set are a suitable input for identifying aeolian sand transport with the help of multiple hypothesis testing. In our example case, slopes of 0.032 m/day and sudden changes of 0.031 m can be identified with a statistical power of 80% and 95% significance in 24-h time series on the upper beach.
In the intertidal area, the presented method makes it possible to classify daily elevation time series over one month, according to the dominating model (sudden change or linear trend), as either eroding or accreting behaviour.
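The competing models can be compared directly by least squares. The sketch below picks whichever of a linear-trend model or a single-step (sudden change) model fits an elevation series better; it is a simplified stand-in for the paper's formal multiple hypothesis testing and omits significance levels and power.

```python
import numpy as np

def _ssr(A, z):
    """Sum of squared residuals of a least-squares fit z ~ A @ coef."""
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return float(np.sum((z - A @ coef) ** 2))

def best_model(t, z):
    """Compare a linear-trend model against single-step models at every
    candidate change point; return the better-fitting hypothesis."""
    ones = np.ones_like(t)
    ssr_trend = _ssr(np.column_stack([ones, t]), z)
    ssr_step = min(
        _ssr(np.column_stack([ones, (t >= t[k]).astype(float)]), z)
        for k in range(1, len(t)))
    return "sudden change" if ssr_step < ssr_trend else "linear trend"
```

A formal version would replace the raw SSR comparison with hypothesis tests at a chosen significance level, which is where the minimal detectable bias enters.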
{"title":"Statistically assessing vertical change on a sandy beach from permanent laser scanning time series","authors":"Mieke Kuschnerus , Roderik Lindenbergh , Sander Vos , Ramon Hanssen","doi":"10.1016/j.ophoto.2023.100055","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100055","url":null,"abstract":"<div><p>In the view of climate change, understanding and managing effects on coastal areas and adjacent cities is essential. Permanent Laser Scanning (PLS) is a successful technique to not only observe notably sandy coasts incidentally or once every year, but (nearly) continuously over extended periods of time. The collected point cloud observations form a 4D point cloud data set representing the evolution of the coast provide the opportunity to assess change processes at high level of detail. For an exemplary location in Noordwijk, The Netherlands, three years of hourly point clouds were acquired on a 1 km long section of a typical Dutch urban sandy beach. Often, the so-called level of detection is used to assess point cloud differences from two epochs. To explicitly incorporate the temporal dimension of the height estimates from the point cloud data set, we revisit statistical testing theory. We apply multiple hypothesis testing on elevation time series in order to identify different coastal processes, like aeolian sand transport or bulldozer works. We then estimate the minimal detectable bias for different alternative hypotheses, to quantify the minimal elevation change that can be estimated from the PLS observations over a certain period of time. Additionally, we analyse potential error sources and influences on the elevation estimations and provide orders of magnitudes and possible ways to deal with them. Finally we conclude that elevation time series from a long term PLS data set are a suitable input to identify aeolian sand transport with the help of multiple hypothesis testing. 
In our example case, slopes of 0.032 m/day and sudden changes of 0.031 m can be identified with statistical power of 80% and with 95% significance in 24-h time series on the upper beach. In the intertidal area, the presented method makes it possible to classify daily elevation time series over one month, according to the dominating model (sudden change or linear trend), as showing either eroding or accreting behaviour.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"11 ","pages":"Article 100055"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000261/pdfft?md5=2b715eedb9e8c262b3b531332998a270&pid=1-s2.0-S2667393223000261-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139107208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
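The core idea of classifying each elevation time series by its dominating model (linear trend vs. sudden change) can be illustrated with a minimal sketch. This is not the authors' implementation, which uses formal multiple hypothesis testing with significance levels and statistical power; here the two alternative models are simply compared by their least-squares residuals, and all function names are hypothetical:

```python
import numpy as np

def classify_series(t, y):
    """Label an elevation time series as 'trend' (linear change) or
    'jump' (sudden change) by comparing least-squares fits of the two
    alternative models. Illustrative only."""
    # Model A: linear trend y = a + b*t
    A = np.column_stack([np.ones_like(t), t])
    res = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    ss_trend = float(res @ res)
    # Model B: a single step change at the best-fitting epoch k
    ss_jump = min(
        float(((y[:k] - y[:k].mean()) ** 2).sum()
              + ((y[k:] - y[k:].mean()) ** 2).sum())
        for k in range(2, len(y) - 2)
    )
    return "trend" if ss_trend <= ss_jump else "jump"

t = np.arange(30.0)                                 # 30 daily epochs
print(classify_series(t, 0.002 * t))                # gradual accretion
print(classify_series(t, np.where(t < 15, 0.0, 0.05)))  # sudden change
```

In the paper's setting, the model comparison would additionally be gated by a level of detection so that series explained by neither model (pure noise) are left unclassified.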
Pub Date : 2024-01-01Epub Date: 2023-11-30DOI: 10.1016/j.ophoto.2023.100052
Philippe Vigneault , Joël Lafond-Lapalme , Arianne Deshaies , Kosal Khun , Samuel de la Sablonnière , Martin Filion , Louis Longchamps , Benjamin Mimee
UAV-mounted sensors can be used to estimate crop biophysical traits, offering an alternative to traditional field scouting. However, the high temporal resolution offered by UAV platforms, critical for identifying small differences in crop conditions, is rarely exploited throughout the entire growing season. This limits growers' ability to obtain timely information for real-time interventions. Recent findings indicate that it is possible to parametrize an entire crop growth cycle under different conditions by accumulating sufficient data over time and using logistic growth models to highlight growth patterns. A step forward would be to model the crop growth cycle at the plant level in order to anticipate the optimal harvest dates in each plot or quickly identify growth problems. Individual plant monitoring can be achieved by combining high spatial resolution images with accurate segmentation algorithms. The main objective of the study was therefore to develop and validate an integrated pipeline based on multidimensional data to extract predictive growth metrics for crop monitoring at the plant level under various field conditions. The plant growth monitoring workflow was based on a three-step design ultimately leading to decision-making and reporting. Lettuce (Lactuca sativa L.) was chosen as a model plant due to its simple geometry, rapid growth and simple cultivation method. Treatments were composed of contrasting cover crops. Overall, correlation analysis showed that UAV-derived morphological metrics are reliable proxies for harvested biomass throughout the growing season, especially in later stages (Spearman's ρ > 0.9), and can be used as growth indicators. Therefore, Logistic Growth Curves (LGCs) were fitted to Crop Object Area (COA) values for each individual lettuce, using data up to 26 (generating G26 LGCs), 30 (G30) and 37 (G37) Days After Transplant (DAT). To assess the quality of their projections, G26 and G30 were compared to the reference LGC G37. 
The results indicated that the Mean Absolute Percentage Error (MAPE) of the projected COA was 9.6% and 6.8% for G26 and G30, respectively. Overall, the LGC parameters were close to the reference and highly correlated with the harvested biomass. The study also demonstrated the potential of gaining good insight into plant maturity by modeling the LGC 13 days before harvest. Furthermore, a dashboard was proposed to monitor current and projected maturity levels, highlighting areas for further investigation. This novel integrated pipeline has the potential to become a valuable tool for research, on-farm decision-making, and field interventions by providing data on plant biomass, maturity, and growth stages under different conditions that can be used as crop growth indicators.
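The projection check described above, fitting an LGC on truncated data and scoring its forward projection with MAPE against a reference fit, can be sketched as follows. The synthetic COA values, growth constants, and function names are illustrative assumptions, not data or code from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic growth curve: asymptote K,
    growth rate r, inflection point t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def mape(pred, ref):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((pred - ref) / ref))

# Synthetic Crop Object Area (COA, cm^2) for one plant over 37 DAT
dat = np.arange(1.0, 38.0)
coa = logistic(dat, K=400.0, r=0.3, t0=20.0)

# Reference LGC fitted on all 37 days (G37), and a truncated fit
# using only the first 26 days (G26)
p37, _ = curve_fit(logistic, dat, coa, p0=(300.0, 0.2, 15.0))
p26, _ = curve_fit(logistic, dat[:26], coa[:26], p0=(300.0, 0.2, 15.0))

# MAPE of the truncated fit's projection against the reference curve
err = mape(logistic(dat, *p26), logistic(dat, *p37))
print(f"G26 projection MAPE: {err:.2f}%")
```

With noiseless synthetic data the truncated fit recovers the reference curve almost exactly; the 9.6% and 6.8% MAPE values reported in the study reflect real measurement noise and plant-to-plant variability.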
{"title":"An integrated data-driven approach to monitor and estimate plant-scale growth using UAV","authors":"Philippe Vigneault , Joël Lafond-Lapalme , Arianne Deshaies , Kosal Khun , Samuel de la Sablonnière , Martin Filion , Louis Longchamps , Benjamin Mimee","doi":"10.1016/j.ophoto.2023.100052","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100052","url":null,"abstract":"<div><p>UAV-mounted sensors can be used to estimate crop biophysical traits, offering an alternative to traditional field scouting. However, the high temporal resolution offered by UAV platforms, critical for identifying small differences in crop conditions, is rarely exploited throughout the entire growing season. This limits growers' ability to obtain timely information for real-time interventions. Recent findings indicate that it is possible to parametrize an entire crop growth cycle under different conditions by accumulating sufficient data over time and using logistic growth models to highlight growth patterns. A step forward would be to model the crop growth cycle at the plant level in order to anticipate the optimal harvest dates in each plot or quickly identify growth problems. Individual plant monitoring can be achieved by combining high spatial resolution images with accurate segmentation algorithms. The main objective of the study was therefore to develop and validate an integrated pipeline based on multidimensional data to extract predictive growth metrics for crop monitoring at the plant level under various field conditions. The plant growth monitoring workflow was based on a three-step design ultimately leading to decision-making and reporting. Lettuce (<em>Lactuca sativa</em> L.) was chosen as a model plant due to its simple geometry, rapid growth and simple cultivation method. Treatments were composed of contrasting cover crops. 
Overall, correlation analysis showed that UAV-derived morphological metrics are reliable proxies for harvested biomass throughout the growing season, especially in later stages (Spearman's ρ > 0.9), and can be used as growth indicators. Therefore, Logistic Growth Curves (LGCs) were fitted to Crop Object Area (COA) values for each individual lettuce, using data up to 26 (generating G<sub>26</sub> LGCs), 30 (G<sub>30</sub>) and 37 (G<sub>37</sub>) Days After Transplant (DAT). To assess the quality of their projections, G<sub>26</sub> and G<sub>30</sub> were compared to the reference LGC G<sub>37</sub>. The results indicated that the Mean Absolute Percentage Error (MAPE) of the projected COA was 9.6% and 6.8% for G<sub>26</sub> and G<sub>30</sub>, respectively. Overall, the LGC parameters were close to the reference and highly correlated with the harvested biomass. The study also demonstrated the potential of gaining good insight into plant maturity by modeling the LGC 13 days before harvest. Furthermore, a dashboard was proposed to monitor current and projected maturity levels, highlighting areas for further investigation. 
This novel integrated pipeline has the potential to become a valuable tool for research, on-farm decision-making, and field interventions by providing data on plant biomass, maturity, and growth stages under different conditions that can be used as crop growth indicators.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"11 ","pages":"Article 100052"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000236/pdfft?md5=d5a0738ff9505d3deb1b9b7a25a6d55e&pid=1-s2.0-S2667393223000236-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138549946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}