Evaluation of Direct RTK-georeferenced UAV Images for Crop and Pasture Monitoring Using Polygon Grids
Georg Bareth, Christoph Hütt
Pub Date: 2023-11-30 | DOI: 10.1007/s41064-023-00259-7

Remote sensing approaches using Unmanned Aerial Vehicles (UAVs) have become an established method to monitor agricultural systems. They enable data acquisition with multi- or hyperspectral, RGB, or LiDAR sensors. For non-destructive estimation of crop or sward traits, photogrammetric analysis using Structure from Motion and Multiview Stereopsis (SfM/MVS) has opened a new research field. SfM/MVS analysis enables the monitoring of plant height and plant growth to determine, e.g., biomass. A drawback of the SfM/MVS workflow is that it requires ground control points (GCPs), which makes it impractical for monitoring managed fields, which are typically larger than 1 ha. Consequently, accurately georeferenced image acquisition would be beneficial, as it would enable data analysis without GCPs. In the last decade, substantial progress has been made in integrating real-time kinematic (RTK) positioning into UAVs, which can potentially provide the desired accuracy in the cm range. To evaluate the accuracy of crop and sward height analysis, we therefore investigated two SfM/MVS workflows for RTK-tagged UAV data: (I) without and (II) with GCPs. The results clearly indicate that direct RTK-georeferenced UAV data perform well in workflow (I) without any GCPs (RMSE for Z: 2.8 cm) compared with workflow (II), which included the GCPs in the SfM/MVS analysis (RMSE for Z: 1.7 cm). Both data sets have the same Ground Sampling Distance (GSD) of 2.46 cm. We conclude that RTK-equipped UAVs enable the monitoring of crop and sward growth differences greater than 3 cm; at greater plant height differences, the monitoring is significantly more accurate.
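The Z-accuracy comparison above boils down to a vertical RMSE over independent checkpoints. A minimal sketch, assuming hypothetical residuals chosen for illustration (the paper's checkpoint measurements are not reproduced here):

```python
import math

def rmse(errors):
    """Root-mean-square error of a list of residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical Z residuals (m) at independent checkpoints:
# UAV-derived surface height minus GNSS-surveyed reference height.
dz_no_gcp   = [0.031, -0.024, 0.029, -0.033, 0.022]  # workflow (I), RTK only
dz_with_gcp = [0.018, -0.015, 0.019, -0.016, 0.017]  # workflow (II), RTK + GCPs

print(f"RMSE Z, no GCPs:   {rmse(dz_no_gcp):.4f} m")
print(f"RMSE Z, with GCPs: {rmse(dz_with_gcp):.4f} m")
```

With residuals of roughly the magnitude the paper reports, the two printed values mirror the 2.8 cm vs. 1.7 cm contrast between the workflows.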
Report
Pub Date: 2023-11-24 | DOI: 10.1007/s41064-023-00267-7
Uncovering the Hidden Carbon Treasures of the Philippines' Towering Mountains: A Synergistic Exploration Using Satellite Imagery and Machine Learning
Richard Dein D. Altarez, Armando Apan, Tek Maraseni
Pub Date: 2023-11-22 | DOI: 10.1007/s41064-023-00264-w

Tropical montane forests (TMFs) are highly valuable for their above-ground biomass (AGB) and their potential to sequester carbon, but they remain understudied. Sentinel-1 and Sentinel-2 imagery, biophysical data and machine learning were used to estimate and map the AGB and above-ground carbon (AGC) stocks in Benguet, Philippines. Non-destructive field AGB measurements were collected from 184 plots, revealing that pine forests had 33.57% less AGB than mossy forests (380.33 Mg ha−1), whilst the grassland summit had 39.93 Mg ha−1. In contrast to the majority of the literature, AGB did not decrease linearly with elevation. NDVI, LAI, fAPAR, fCover and elevation were the most effective predictors of field-derived AGB, as determined by Random Forest (RF) feature selection in R. WEKA was used to evaluate and validate 26 machine-learning algorithms. The results show that K star (K*) (r = 0.213–0.832; RMSE = 106.682–224.713 Mg ha−1) and RF (r = 0.391–0.822; RMSE = 108.226–175.642 Mg ha−1) exhibited high modelling capability for estimating AGB across all predictor categories. Spatially explicit models were then run in the Whitebox Runner software to map the study site's AGB, with RF showing the highest predictive performance (r = 0.982; RMSE = 53.980 Mg ha−1). The study area's carbon stock map ranged from 0 to 434.94 Mg ha−1, highlighting the significance of forests at higher elevations for forest conservation and carbon sequestration. Carbon sequestration in the country's carbon-rich mountainous regions can be encouraged through REDD+ interventions. Longer-wavelength radar imagery, species-specific allometric equations and soil fertility should be tested in future carbon studies. The produced carbon maps can help policy makers in decision-planning and thus contribute to conserving the natural resources of the Benguet Mountains.
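The authors ran the RF feature ranking in R and the model comparison in WEKA; as a stand-in, a minimal scikit-learn sketch on synthetic plot data (all values hypothetical) illustrates the shape of that workflow — fit an RF regressor on the shortlisted predictors, then inspect feature importances and out-of-bag skill:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical plot table: the paper's shortlisted predictors
# (NDVI, LAI, fAPAR, fCover, elevation) and field AGB in Mg/ha.
n = 184  # number of field plots, as in the paper
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),    # NDVI
    rng.uniform(0.5, 6.0, n),    # LAI
    rng.uniform(0.1, 0.95, n),   # fAPAR
    rng.uniform(0.1, 0.95, n),   # fCover
    rng.uniform(1200, 2900, n),  # elevation (m)
])
# Synthetic AGB response driven mainly by NDVI and elevation, plus noise.
y = 50 + 300 * X[:, 0] + 0.05 * X[:, 4] + rng.normal(0, 20, n)

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

for name, imp in zip(["NDVI", "LAI", "fAPAR", "fCover", "elev"],
                     rf.feature_importances_):
    print(f"{name:6s} {imp:.3f}")
print(f"OOB R^2: {rf.oob_score_:.2f}")
```

The importance ranking plays the role of the RF feature selection step; the out-of-bag score is a quick internal validation before mapping spatially.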
Reports
Pub Date: 2023-11-14 | DOI: 10.1007/s41064-023-00261-z
Subpixel Accuracy of Shoreline Monitoring Using Developed Landsat Series and Google Earth Engine Technique
Tamer ElGharbawi, Mosbeh R. Kaloop, Jong Wan Hu, Fawzi Zarzoura
Pub Date: 2023-11-13 | DOI: 10.1007/s41064-023-00265-9
Design, Implementation, and Evaluation of an External Pose-Tracking System for Underwater Cameras
Birger Winkel, David Nakath, Felix Woelk, Kevin Köser
Pub Date: 2023-10-16 | DOI: 10.1007/s41064-023-00263-x

To advance underwater computer vision and robotics from lab environments and clear water scenarios to the deep dark ocean or murky coastal waters, representative benchmarks and realistic datasets with ground truth information are required. In particular, determining the camera pose is essential for many underwater robotic or photogrammetric applications, and known ground truth is mandatory to evaluate the performance of, e.g., simultaneous localization and mapping approaches in such extreme environments. This paper presents the conception, calibration, and implementation of an external reference system for determining the underwater camera pose in real time. The approach, based on an HTC Vive tracking system in air, calculates the underwater camera pose by fusing the poses of two controllers tracked above the water surface of a tank. It is shown that the mean deviation of this approach from an optical marker-based reference in air is less than 3 mm and 0.3°. Finally, the usability of the system for underwater applications is demonstrated.
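The reported deviations (< 3 mm, 0.3°) compare the tracking-based pose estimate against an optical marker reference. A minimal sketch of how such a pose deviation is computed, using hypothetical poses (not the authors' evaluation code): translation deviation is the norm of the position difference, and rotation deviation is the angle of the relative rotation, recovered from its trace.

```python
import numpy as np

def pose_deviation(R_est, t_est, R_ref, t_ref):
    """Translation deviation (same units as t) and rotation deviation (deg)
    between an estimated and a reference camera pose."""
    dt = np.linalg.norm(t_est - t_ref)
    R_rel = R_est @ R_ref.T                       # relative rotation
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_a))

def rot_z(deg):
    """Rotation about the Z axis by `deg` degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical check: estimate off by 2 mm and 0.25 deg about Z.
dt, da = pose_deviation(rot_z(0.25), np.array([0.002, 0.0, 0.0]),
                        np.eye(3), np.zeros(3))
print(f"translation deviation: {dt*1000:.1f} mm, rotation: {da:.2f} deg")
```

Averaging such per-frame deviations over a trajectory yields the mean figures quoted in the abstract.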
Editorial for PFG Issue 5/2023
Markus Gerke, Michael Cramer
Pub Date: 2023-10-13 | DOI: 10.1007/s41064-023-00262-y
MIN3D Dataset: MultI-seNsor 3D Mapping with an Unmanned Ground Vehicle
Paweł Trybała, Jarosław Szrek, Fabio Remondino, Paulina Kujawa, Jacek Wodecki, Jan Blachowski, Radosław Zimroz
Pub Date: 2023-10-06 | DOI: 10.1007/s41064-023-00260-0

The research potential in the field of mobile mapping technologies is often hindered by several constraints. These include the need for costly hardware to collect data, limited access to target sites with specific environmental conditions, or the collection of ground truth data for a quantitative evaluation of the developed solutions. To address these challenges, the research community has often prepared open datasets suitable for development and testing. However, the availability of datasets that encompass truly demanding mixed indoor–outdoor and subterranean conditions, acquired with diverse but synchronized sensors, is currently limited. To alleviate this issue, we propose the MIN3D dataset (MultI-seNsor 3D mapping with an unmanned ground vehicle for mining applications), which includes data gathered using a wheeled mobile robot in two distinct locations: (i) textureless dark corridors and outside parts of a university campus and (ii) tunnels of an underground WW2 site in Walim (Poland). MIN3D comprises around 150 GB of raw data, including images captured by multiple co-calibrated monocular, stereo and thermal cameras, two LiDAR sensors and three inertial measurement units. Reliable ground truth (GT) point clouds were collected using a survey-grade terrestrial laser scanner. By openly sharing this dataset, we aim to support the efforts of the scientific community in developing robust methods for navigation and mapping in challenging underground conditions. In the paper, we describe the collected data and provide an initial accuracy assessment of some visual- and LiDAR-based simultaneous localization and mapping (SLAM) algorithms for selected sequences. Encountered problems, open research questions and areas that could benefit from utilizing our dataset are discussed. Data are available at https://3dom.fbk.eu/benchmarks.
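A typical first accuracy check for such a dataset is a cloud-to-cloud comparison of a SLAM map against the TLS ground truth. A minimal sketch with synthetic clouds, using SciPy's `cKDTree` (an illustrative stand-in, not the authors' evaluation code):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical clouds: a SLAM map evaluated against a TLS ground-truth scan.
gt_cloud = rng.uniform(0, 10, size=(5000, 3))                  # reference scan
slam_cloud = gt_cloud[:2000] + rng.normal(0, 0.02, (2000, 3))  # noisy map

# Cloud-to-cloud (C2C) distance: nearest GT neighbour for every SLAM point.
tree = cKDTree(gt_cloud)
d, _ = tree.query(slam_cloud)
print(f"mean C2C: {d.mean()*100:.1f} cm, RMS: {np.sqrt((d**2).mean())*100:.1f} cm")
```

On real data, the SLAM cloud would first be aligned to the GT scan (e.g. by ICP) before computing these statistics.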
Urban Change Forecasting from Satellite Images
Nando Metzger, Mehmet Özgür Türkoglu, Rodrigo Caye Daudt, Jan Dirk Wegner, Konrad Schindler
Pub Date: 2023-10-05 | DOI: 10.1007/s41064-023-00258-8

Forecasting where and when new buildings will emerge is a rather unexplored topic, but one that is very useful in many disciplines such as urban planning, agriculture, resource management, and even autonomous flying. In the present work, we present a method that accomplishes this task with a deep neural network and a custom pretraining procedure. In Stage 1, a U-Net backbone is pretrained within a Siamese network architecture that aims to solve a (building) change detection task. In Stage 2, the backbone is repurposed to forecast the emergence of new buildings based solely on one image acquired before its construction. Furthermore, we also present a model that forecasts the time range within which the change will occur. We validate our approach using the SpaceNet7 dataset, which covers an area of 960 km² at 24 points in time across 2 years. In our experiments, we found that our proposed pretraining method consistently outperforms the traditional pretraining using the ImageNet dataset. We also show that it is to some degree possible to predict in advance when building changes will occur.
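The essence of the Stage 1 setup is a single encoder whose weights are shared across both image dates, with change read from the feature difference. A toy NumPy sketch of that weight-sharing idea (a single mean-filter "encoder" and hypothetical images; the paper uses a U-Net backbone, not this):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, w):
    """Shared encoder: one 3x3 'valid' convolution with weights w.
    Applying the SAME w to both inputs is what makes the setup Siamese."""
    h, wd = img.shape
    out = np.zeros((h - 2, wd - 2))
    for i in range(h - 2):
        for j in range(wd - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * w)
    return out

# Hypothetical image pair: t2 adds a bright 'building' patch to t1.
t1 = rng.normal(0, 0.05, (16, 16))
t2 = t1.copy()
t2[4:9, 4:9] += 1.0

w = np.full((3, 3), 1.0 / 9.0)     # shared weights (here: a mean filter)
change = np.abs(encode(t2, w) - encode(t1, w)) > 0.5  # per-pixel change map
print(f"changed pixels: {int(change.sum())}")
```

In the real architecture the encoder is learned, the difference is taken on deep features, and a decoder upsamples the change map back to full resolution.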
Remote Sensing of Turbidity in Optically Shallow Waters Using Sentinel-2 MSI and PRISMA Satellite Data
Rim Katlane, David Doxaran, Boubaker ElKilani, Chaïma Trabelsi
Pub Date: 2023-10-04 | DOI: 10.1007/s41064-023-00257-9