Pub Date: 2025-01-01 | Epub Date: 2025-09-03 | DOI: 10.1007/s41064-025-00357-8
Felix Dahle, Yushan Liu, Roderik Lindenbergh, Bert Wouters
Historical aerial imagery provides valuable data from regions and periods with limited geospatial information. A common method to utilize this data is through the generation of ortho-photos and 3D models using Structure-from-Motion (SfM) techniques. However, many of these images were scanned decades after their acquisition and require geometric calibration, along with internal and external camera parameter estimation, for accurate reconstruction. Manual identification of key features, such as fiducial marks and text annotations, is labour-intensive, while existing automated methods struggle with poor-quality datasets. This paper presents an automated workflow that combines computer vision and machine learning techniques to detect and extract these key features from historical aerial images. To address challenges related to image quality, we also introduce estimation protocols that compensate for missing or unreliable detections by leveraging redundancy across multiple flight paths. The methodology was evaluated on the TMA (Trimetrogon Aerial) archive, a collection of historical images from the Antarctic Peninsula. Our test dataset comprised over 7000 images from 20 different flight paths. The workflow demonstrated high success rates in detecting and extracting fiducial marks, image subsets, and textual annotations. Approximately 70% of the images provided usable focal length data, while fiducial mark detection exhibited high accuracy except in cases of severe scanning artifacts. Altitude data extraction proved to be the most challenging, with successful results in only 15% of images due to degraded altimeter readings. Despite these limitations, the automated workflow effectively estimated missing parameters, ensuring robust image reconstruction across flight paths. The code for this workflow is open-source and publicly available on GitHub at https://github.com/fdahle/hist_meta_extraction.
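The flight-path redundancy described above can be sketched as follows. The data layout and function name are hypothetical, not taken from the authors' repository; the only assumption is the one the abstract states, namely that images on one flight path share the same camera, so a per-path median can fill gaps in a parameter such as the focal length:

```python
import statistics

def fill_missing_focal_lengths(images):
    """For each flight path, replace missing focal-length readings with the
    median of the usable readings from the same path (all images on a path
    were taken with the same camera, so the value should be constant)."""
    by_path = {}
    for img in images:
        by_path.setdefault(img["flight_path"], []).append(img)
    for path_images in by_path.values():
        usable = [img["focal_length"] for img in path_images
                  if img["focal_length"] is not None]
        if not usable:
            continue  # no usable reading on this path: leave the gap open
        estimate = statistics.median(usable)
        for img in path_images:
            if img["focal_length"] is None:
                img["focal_length"] = estimate
    return images
```

The median (rather than the mean) keeps a single badly misread annotation from corrupting the estimate for the whole path.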
From Film to Data: Automating Meta-Feature Extraction in Historical Aerial Imagery. Journal of Photogrammetry, Remote Sensing and Geoinformation Science 93(6): 521-534. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779747/pdf/
Pub Date: 2025-01-01 | Epub Date: 2025-10-13 | DOI: 10.1007/s41064-025-00359-6
Sebastian Mikolka-Flöry, Camillo Ressl, Norbert Pfeifer
With monoplotting, object points can be reconstructed from a single oriented image if a reference surface of the captured scene is available. While used extensively in the environmental sciences, prior approaches fall short of describing the uncertainty of the reconstructed points. In this paper, we estimate this monoplotting uncertainty using three different methods: i) Monte Carlo simulation, ii) the unscented transform and iii) classical variance propagation with a tangential approximation of the terrain. Our investigations are guided by two use cases: i) For manually selected image points, the estimated uncertainty determines whether the monoplotted points are accurate enough for a subsequent research question (e.g. deriving glacier changes from historical terrestrial images). ii) Estimating the monoplotting uncertainty for every pixel of the whole image gives an overview of the expectable uncertainty, which is already beneficial during the image orientation step. While for the first use case the precision of the estimated uncertainty is crucial, the second use case requires a fast method. Furthermore, in both use cases silhouettes must be considered, because estimates in their vicinity are not valid. We therefore further investigate the derivation of silhouette masks, optimally exploiting the information available from the three methods. For evaluation, we use a historical terrestrial image showing a glacier in the Alps around 1900, for which, for the first use case, we manually digitised individual vertices of a glacier outline. Using the Monte Carlo estimates based on 1000 samples as reference, the results from the unscented transform are closer to the reference (14.1% RMS) than those from variance propagation (24.7% RMS). Despite this good result for the unscented transform, our recommendation for this use case is nevertheless the Monte Carlo simulation, thanks to the speed of existing ray-casting routines. However, for the second use case, where the monoplotting uncertainty is predicted for each pixel of the entire image to get a quick overview, the millions of ray-castings required prohibit both Monte Carlo simulation and the unscented transform. Here, we propose variance propagation because of its speed and still reasonable precision, yielding uncertainty estimates with an RMS of 7.8% in areas away from silhouettes.
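As a toy illustration of the Monte Carlo approach, the sketch below intersects perturbed image rays with a flat horizontal plane standing in for the reference surface (the paper uses a real terrain model and ray-casting routines) and reports the spread of the resulting ground points. The camera model, a level camera at height `cam_h` looking along +X, is an assumption for illustration only:

```python
import random
import statistics

def monoplot(x_img, y_img, f, cam_h):
    """Intersect the ray of image point (x_img, y_img) of a level camera at
    height cam_h, looking along +X with focal length f (consistent units,
    image y measured downward), with the horizontal plane Z = 0."""
    dx, dy, dz = f, x_img, -y_img  # ray direction in object space
    if dz >= 0:
        return None                # ray does not hit the ground plane
    t = cam_h / -dz                # scale so Z drops from cam_h to 0
    return (t * dx, t * dy)        # ground coordinates (X, Y)

def monte_carlo_uncertainty(x_img, y_img, f, cam_h, sigma_px, n=1000, seed=0):
    """Std. dev. of the monoplotted ground point under Gaussian image-point
    noise with standard deviation sigma_px, from n Monte Carlo samples."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        p = monoplot(x_img + rng.gauss(0.0, sigma_px),
                     y_img + rng.gauss(0.0, sigma_px), f, cam_h)
        if p is not None:
            xs.append(p[0])
            ys.append(p[1])
    return statistics.stdev(xs), statistics.stdev(ys)
```

Even this toy setup reproduces the qualitative behaviour the paper exploits: rays that graze the terrain (small downward component, i.e. near a silhouette) yield much larger ground-point spreads than steep rays, which is why estimates near silhouettes are unreliable.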
Uncertainty of Object Points Monoplotted from Terrestrial Images. Journal of Photogrammetry, Remote Sensing and Geoinformation Science 93(6): 645-661. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779715/pdf/
Pub Date: 2022-01-01 | Epub Date: 2022-09-19 | DOI: 10.1007/s41064-022-00217-9
Ralf Bill, Jörg Blankenbach, Martin Breunig, Jan-Henrik Haunert, Christian Heipke, Stefan Herle, Hans-Gerd Maas, Helmut Mayer, Liqiu Meng, Franz Rottensteiner, Jochen Schiewe, Monika Sester, Uwe Sörgel, Martin Werner
Geospatial information science (GI science) is concerned with the development and application of geodetic and information science methods for modeling, acquiring, sharing, managing, exploring, analyzing, synthesizing, visualizing, and evaluating data on spatio-temporal phenomena related to the Earth. As an interdisciplinary scientific discipline, it focuses on developing and adapting information technologies to understand processes on the Earth and human-place interactions, to detect and predict trends and patterns in the observed data, and to support decision making. The authors, members of the Geoinformatics division of DGK (the Committee on Geodesy of the Bavarian Academy of Sciences and Humanities), which represents geodetic research and university teaching in Germany, have prepared this paper to point out future research questions and directions in geospatial information science. For the different facets of geospatial information science, the state of the art is presented and illustrated, mostly with the authors' own case studies. The paper thus shows which contributions the German GI community makes and which research perspectives arise in geospatial information science.
The paper further demonstrates that GI science, with its expertise in data acquisition and interpretation, information modeling and management, integration, decision support, visualization, and dissemination, can help solve many of the grand challenges facing society today and in the future.
Geospatial Information Research: State of the Art, Case Studies and Future Perspectives. Journal of Photogrammetry, Remote Sensing and Geoinformation Science 90(4): 349-389. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9484357/pdf/
Pub Date: 2019-10-01 | DOI: 10.1007/s41064-019-00076-x
Nima Ahmadian, Tobias Ullmann, Jochem Verrelst, Erik Borg, Reinhard Zölitz, Christopher Conrad
The biomass of three agricultural crops, winter wheat (Triticum aestivum L.), barley (Hordeum vulgare L.), and canola (Brassica napus L.), was studied using multi-temporal dual-polarimetric TerraSAR-X data. The radar backscattering coefficient sigma nought (σ⁰) of the two polarization channels HH and VV was extracted from the satellite images. Subsequently, combinations of the HH and VV polarizations were calculated (e.g. HH/VV, HH + VV, HH × VV) to establish relationships between the SAR data and the fresh and dry biomass of each crop type using multiple stepwise regression. Additionally, the semi-empirical water cloud model (WCM) was used to account for the effect of crop biomass on the radar backscatter. The potential of the Random Forest (RF) machine learning approach was also explored. A split-sampling approach (70% training, 30% testing) was used to validate the stepwise models, the WCM and the RF. The multiple stepwise regression method using dual-polarimetric data was capable of retrieving the biomass of the three crops, particularly the dry biomass, with R² > 0.7, without any external input variable such as information on the (actual) soil moisture. A comparison of the Random Forest technique with the WCM reveals that RF markedly outperformed the WCM in biomass estimation, especially for the fresh biomass: RF achieved R² > 0.68 for the fresh biomass of the different crop types, whereas the WCM reached only R² < 0.35. For the dry biomass, however, the results of the two approaches resembled each other.
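The abstract does not give the exact WCM parameterisation fitted by the authors; a minimal sketch of the common semi-empirical form of the model (after Attema and Ulaby), with the bare-soil term modeled linearly in soil moisture as a frequently used simplification, is:

```python
import math

def water_cloud_sigma0(V, mv, theta_deg, A, B, C, D):
    """Semi-empirical water cloud model: total backscatter is the canopy
    volume-scattering term plus the soil term attenuated twice by the
    canopy. V: canopy descriptor (e.g. biomass), mv: soil moisture,
    theta_deg: incidence angle; A, B, C, D are fitted coefficients.
    All backscatter values here are in linear units (not dB)."""
    cos_t = math.cos(math.radians(theta_deg))
    tau2 = math.exp(-2.0 * B * V / cos_t)      # two-way canopy attenuation
    sigma_veg = A * V * cos_t * (1.0 - tau2)   # canopy volume scattering
    sigma_soil = C + D * mv                    # simplified bare-soil term
    return sigma_veg + tau2 * sigma_soil
```

With V = 0 the model reduces to the bare-soil term, and as V grows the soil contribution is progressively masked by the canopy, which is precisely the biomass effect the study inverts for.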
Biomass Assessment of Agricultural Crops Using Multi-temporal Dual-Polarimetric TerraSAR-X Data. Journal of Photogrammetry, Remote Sensing and Geoinformation Science 87: 159-175. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7613484/pdf/EMS152642.pdf