Pub Date: 2024-08-01 | Epub Date: 2024-06-18 | DOI: 10.1016/j.ophoto.2024.100070
F. Nex, E.K. Stathopoulou, F. Remondino, M.Y. Yang, L. Madhuanand, Y. Yogender, B. Alsadik, M. Weinmann, B. Jutzi, R. Qin
3D reconstruction is a long-standing research topic in the photogrammetric and computer vision communities; although a plethora of open-source and commercial solutions for 3D reconstruction have been released in recent years, several open challenges and limitations remain. Deep learning algorithms have demonstrated great potential in several remote sensing tasks, including image-based 3D reconstruction. State-of-the-art monocular and stereo algorithms leverage deep learning techniques and achieve increased performance in depth estimation and 3D reconstruction. However, such methods rely heavily on large training sets that are often tedious to obtain; even when available, these typically cover indoor, close-range scenarios and low-resolution images. For UAV (Unmanned Aerial Vehicle) scenarios in particular, such data are not available, and domain adaptation is not a trivial challenge. To fill this gap, this paper introduces the UAV-based multi-sensor dataset for geospatial research (UseGeo - https://usegeo.fbk.eu/home). It contains both image and LiDAR data and aims to support relevant research in photogrammetry and computer vision with a useful training set for both stereo and monocular 3D reconstruction algorithms. To this end, the dataset provides ground truth data for both point clouds and depth maps. In addition, UseGeo can also serve as a valuable dataset for other tasks such as feature extraction and matching, aerial triangulation, or image and LiDAR co-registration. The paper introduces the UseGeo dataset and validates several state-of-the-art algorithms to assess their usability for both monocular and multi-view 3D reconstruction.
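Ground-truth depth maps like those in this dataset can be used to score a learned depth estimator. A minimal sketch of such a comparison (the masking convention, metric choice, and function name are illustrative, not part of UseGeo's tooling):

```python
import numpy as np

def depth_error_metrics(pred, gt, invalid_value=0.0):
    """Compare a predicted depth map against a ground-truth depth map.

    Pixels where the ground truth equals `invalid_value` (e.g. no LiDAR
    return) are excluded. Returns RMSE over valid pixels and the fraction
    of pixels that carried a ground-truth value.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mask = gt != invalid_value                # keep only pixels with ground truth
    diff = pred[mask] - gt[mask]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    coverage = float(mask.mean())             # share of pixels with a GT value
    return rmse, coverage
```

The same masking step matters in practice: LiDAR-derived depth maps are rarely dense, so unmasked comparisons would mix real errors with missing data.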
Title: UseGeo - A UAV-based multi-sensor dataset for geospatial research (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 13, Article 100070)
Pub Date: 2024-08-01 | Epub Date: 2024-05-31 | DOI: 10.1016/j.ophoto.2024.100068
Eleonora Maset, Luca Magri, Andrea Fusiello
Erratum to “Principled bundle block adjustment with multi-head cameras” [ISPRS Open J. Photogram. Rem. Sens. 11 (2023) 100051] (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 13, Article 100068)
Pub Date: 2024-08-01 | Epub Date: 2024-05-31 | DOI: 10.1016/j.ophoto.2024.100066
Andras Balazs, Eero Liski, Sakari Tuominen, Annika Kangas
Erratum to “Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data” [ISPRS Open J. Photogram. Rem. Sens. 4 (2022) 100012] (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 13, Article 100066)
Pub Date: 2024-08-01 | Epub Date: 2024-05-31 | DOI: 10.1016/j.ophoto.2024.100067
Aada Hakula, Lassi Ruoppa, Matti Lehtomäki, Xiaowei Yu, Antero Kukko, Harri Kaartinen, Josef Taher, Leena Matikainen, Eric Hyyppä, Ville Luoma, Markus Holopainen, Ville Kankare, Juha Hyyppä
Erratum to “Individual tree segmentation and species classification using high-density close-range multispectral laser scanning data” [ISPRS Open J. Photogram. Rem. Sens. 9 (2023) 100039] (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 13, Article 100067)
Pub Date: 2024-08-01 | Epub Date: 2024-05-04 | DOI: 10.1016/j.ophoto.2024.100065
M. Hermann, M. Weinmann, F. Nex, E.K. Stathopoulou, F. Remondino, B. Jutzi, B. Ruf
Depth estimation and 3D model reconstruction from aerial imagery are important tasks in photogrammetry, remote sensing, and computer vision. To compare the performance of different image-based approaches, this study presents a benchmark for UAV-based aerial imagery using the UseGeo dataset. The contributions include the release of various evaluation routines on GitHub, as well as a comprehensive comparison of baseline approaches: methods for offline multi-view 3D reconstruction producing point clouds and triangle meshes, online multi-view depth estimation, and single-image depth estimation using self-supervised deep learning. With the release of our evaluation routines, we aim to provide a universal protocol for evaluating depth estimation and 3D reconstruction methods on the UseGeo dataset. The conducted experiments and analyses show that each method excels in a different category: the depth estimation from COLMAP outperforms that of the other approaches, ACMMP achieves the lowest error and highest completeness for point clouds, while OpenMVS produces triangle meshes with the lowest error. Among the online methods for depth estimation, the approach from the Plane-Sweep Library outperforms the FaSS-MVS approach, while the latter achieves the lowest processing time. Even though the particularly challenging nature of the dataset and the small amount of training data lead to a significantly higher error for the self-supervised single-image depth estimation approach, it outperforms all other approaches in terms of processing time and frame rate. Our evaluation also considers modern learning-based approaches for image-based 3D reconstruction, such as NeRFs. However, due to the significantly lower quality of the resulting 3D models, we only include a qualitative comparison between NeRF-based and conventional approaches in the scope of this work.
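Point-cloud benchmarks of this kind typically report an accuracy (distance from the reconstruction to the reference) and a completeness (share of the reference recovered within a tolerance). A brute-force sketch of these two metrics, with an assumed tolerance parameter `tau` (a generic formulation, not this paper's exact protocol):

```python
import numpy as np

def cloud_accuracy_completeness(recon, reference, tau):
    """Accuracy: mean distance from each reconstructed point to its nearest
    reference point. Completeness: fraction of reference points that have a
    reconstructed point within distance `tau`.

    Brute-force pairwise distances; fine for small clouds, a KD-tree would
    be used for real datasets.
    """
    recon = np.asarray(recon, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # pairwise distance matrix of shape (n_recon, n_reference)
    d = np.linalg.norm(recon[:, None, :] - reference[None, :, :], axis=2)
    accuracy = float(d.min(axis=1).mean())
    completeness = float((d.min(axis=0) <= tau).mean())
    return accuracy, completeness
```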
Title: Depth estimation and 3D reconstruction from UAV-borne imagery: Evaluation on the UseGeo dataset (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 13, Article 100065)
Pub Date: 2024-04-01 | Epub Date: 2024-04-16 | DOI: 10.1016/j.ophoto.2024.100063
Parzival Borlinghaus, Frederic Tausch, Richard Odemer
Various methods have been developed to assign pollen to its botanical origin. They range from technically complex approaches to the less precise but simpler chromatic assessment, in which pollen colors are used for identification. However, a common challenge lies in the similarity of colors of pollen from different plant species. The advent of camera-based bee monitoring systems has sparked renewed interest in classifying pollen based on color and offers potential advances for honey bee biomonitoring. Despite the promise of improved sensor accuracy, a critical examination of whether color diversity within a single species may be the primary limiting factor has been lacking. Our comprehensive analysis, covering over 85,000 corbicular pollen loads from 30 major pollen species, shows that the average color variation within each species is distinguishable to a human observer, comparable to the difference between two dissimilar colors. From today's perspective, this considerable color variation within a single pollen source makes the use of color alone to classify pollen impractical. When picking a single pollen color from the entire dataset, we report a correct pollen type classification rate of 67 %. The accuracy was highly dependent on the type and ranged from 0 % for rare types with common colors to 99 % for distinct colors. The large color dispersion within species highlights the need for complementary methods to improve the accuracy and reliability of color-based pollen identification in biomonitoring applications.
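The single-color classification described above amounts to a nearest-centroid rule in color space: a sample is assigned to the species whose characteristic color is closest. A hedged sketch of that rule (the species names and mean colors below are invented for illustration, not from the study):

```python
import numpy as np

def classify_by_color(sample_rgb, species_mean_colors):
    """Assign a pollen sample to the species whose mean color is nearest
    in RGB space (Euclidean distance).

    species_mean_colors: dict mapping species name -> (r, g, b) tuple.
    """
    names = list(species_mean_colors)
    centers = np.array([species_mean_colors[n] for n in names], dtype=float)
    d = np.linalg.norm(centers - np.asarray(sample_rgb, dtype=float), axis=1)
    return names[int(d.argmin())]
```

The study's point is precisely that this rule breaks down when within-species color variance is as large as the distance between species centroids.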
Title: Natural color dispersion of corbicular pollen limits color-based classification (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 12, Article 100063)
Pub Date: 2024-04-01 | Epub Date: 2024-03-06 | DOI: 10.1016/j.ophoto.2024.100060
X. Briottet, K. Adeline, T. Bajjouk, V. Carrère, M. Chami, Y. Constans, Y. Derimian, A. Dupiau, M. Dumont, S. Doz, S. Fabre, P.Y. Foucher, H. Herbin, S. Jacquemoud, M. Lang, A. Le Bris, P. Litvinov, S. Loyer, R. Marion, A. Minghelli, B. Cheul
CNES is currently carrying out a Phase A study to assess the feasibility of a future hyperspectral imaging sensor (10 m spatial resolution) combined with a panchromatic camera (2.5 m spatial resolution). This mission focuses on both high spatial and spectral resolution requirements, as inherited from previous French studies such as HYPEX, HYPXIM, and BIODIVERSITY. To meet user requirements, cost, and instrument compactness constraints, CNES asked the French hyperspectral Mission Advisory Group (MAG), representing a broad French scientific community, to provide recommendations on spectral sampling, particularly in the Short Wave InfraRed (SWIR) for various applications.
This paper presents the tests carried out to define the optimal spectral sampling and spectral resolution in the SWIR domain for quantitative estimation of physical variables and for classification purposes. The targeted applications are geosciences (mineralogy, soil moisture content), forestry (tree species classification, leaf functional traits), coastal and inland waters (bathymetry, water column, bottom classification in shallow water, coastal habitat classification), urban areas (land cover), industrial plumes (aerosols, methane and carbon dioxide), cryosphere (specific surface area, equivalent black carbon concentration), and atmosphere (water vapor, carbon dioxide and aerosols). All the products simulated in this exercise used the same CNES end-to-end processing chain, with realistic instrument parameters, enabling easy comparison between applications. In total, 648 simulations were carried out with different spectral strategies, radiometric calibration performances and signal-to-noise ratios (SNR): 24 instrument configurations × 25 datasets (22 images + 3 spectral libraries).
The results show that spectral sampling up to 20 nm in the SWIR range is sufficient for most applications. However, 10 nm spectral sampling is recommended for applications based on specific absorption bands, such as mineralogy, industrial plumes or atmospheric gases. In addition, a slight performance loss is generally observed when radiometric calibration accuracy decreases, with a few exceptions in bathymetry and in the cryosphere, for which the observed performance is severely degraded. Finally, most applications can be achieved with a realistic SNR, with the exception of bathymetry, shallow water classification, and carbon dioxide and methane estimation, which require the optimistic SNR level tested. On the basis of these results, CNES is currently evaluating the best compromise for designing the future hyperspectral sensor to meet the objectives of priority applications.
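Testing a coarser spectral sampling essentially means box-averaging a finely sampled spectrum onto a wider grid (e.g. 10 nm vs. 20 nm bands) before running the retrieval. A simplified sketch of such resampling (a generic illustration, not the CNES end-to-end chain):

```python
import numpy as np

def resample_spectrum(wavelengths, reflectance, step_nm):
    """Box-average a finely sampled spectrum onto a coarser spectral grid.

    Returns band-center wavelengths and the mean reflectance per band.
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    start = wavelengths.min()
    # index of the coarse band each fine sample falls into
    bins = np.floor((wavelengths - start) / step_nm).astype(int)
    centers, means = [], []
    for b in np.unique(bins):
        sel = bins == b
        centers.append(start + (b + 0.5) * step_nm)
        means.append(reflectance[sel].mean())
    return np.array(centers), np.array(means)
```

Narrow absorption features (the reason the study recommends 10 nm sampling for mineralogy and gas plumes) are exactly what this averaging smears out at 20 nm.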
Title: End-to-end simulations to optimize imaging spectroscopy mission requirements for seven scientific applications (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 12, Article 100060)
Pub Date: 2024-04-01 | Epub Date: 2024-02-10 | DOI: 10.1016/j.ophoto.2024.100059
Miguel Vallejo Orti , Katharina Anders , Oluibukun Ajayi , Olaf Bubenzer , Bernhard Höfle
Scalable and transferable methods for generating reliable reference data for automated remote sensing approaches are crucial, especially for mapping complex Earth surface processes such as gully erosion in sparsely populated and inaccessible areas. As an alternative to labour-intensive in-situ authoritative mapping, collaborative approaches enable volunteers to generate redundant, independent geoinformation by digitising Earth observation imagery. We address the challenge of mapping complex gully outlines by integrating multi-user contributions covering the same gully network. Comparing Sentinel-2, Bing Aerial, and unoccupied aerial vehicle (UAV) orthophoto base maps, we examine the volunteered geographic information process and multi-contribution integration, using Kalman filtering and machine learning to segment a gully border in a remote area in northwestern Namibia. The Kalman filter integrates the different lines into a smoothed solution, and a Random Forest model identifies mapping conditions and terrain features as key predictors for evaluating contributors' digitising quality. Assessing the results against expert-based reference data, we identify ten contributions as optimal, yielding root mean square distance (RMSD) values of 19.1 m, 15.9 m and 16.6 m, and variability (RMSD standard deviation) of 2.0 m, 4.2 m and 3.8 m, for Sentinel-2, Bing Aerial, and UAV orthophoto, respectively. Eliminating the lowest-performing contributions for Sentinel-2 using a Random Forest regression-based quality indicator improves the RMSD by up to 35% compared to a random selection, and by up to 54% compared to a supervised remote sensing classification. Results for Sentinel-2 show that low slope, low terrain ruggedness index, and high normalised difference vegetation index values correlate with high spatial mapping deviations, with Pearson correlation coefficients of −0.61, −0.5, and 0.18, respectively. Our approach is a powerful alternative to authoritative mapping of morphologically complex environmental phenomena and can provide independent reference data for supervised automatic remote sensing analysis.
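The root mean square distance used to score contributions can be sketched as the RMS of nearest-vertex distances between a digitised outline and the reference outline (a simplification: on densely sampled outlines this approximates the point-to-line distance, which is presumably what the paper computes):

```python
import numpy as np

def rms_distance(contribution, reference):
    """RMS of the distance from each vertex of a digitised outline to the
    nearest vertex of the reference outline.

    contribution, reference: arrays of (x, y) vertices.
    """
    c = np.asarray(contribution, dtype=float)
    r = np.asarray(reference, dtype=float)
    # for every contributed vertex, distance to its nearest reference vertex
    nearest = np.linalg.norm(c[:, None, :] - r[None, :, :], axis=2).min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))
```

Scoring each volunteer's line this way is what allows the low-quality contributions to be filtered out before the Kalman-filter integration step.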
Title: Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning (ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 12, Article 100059)
In recent years, there has been a growing emphasis on assessing and ensuring the quality of horticultural and agricultural produce. Traditional methods involving field measurements, investigations, and statistical analyses are labour-intensive, time-consuming, and costly. As a solution, Hyperspectral Imaging (HSI) has emerged as a non-destructive and environmentally friendly technology. HSI has gained significant popularity as a new technology, particularly for its promising applications in remote sensing, notably in agriculture. However, classifying HSI data is highly complex because it involves several challenges, such as the excessive redundancy of spectral bands, scarcity of training samples, and the intricate non-linear relationship between spatial positions and spectral bands. Notably, Deep Learning (DL) techniques have demonstrated remarkable efficacy in various HSI analysis tasks, including those within agriculture. As interest continues to surge in leveraging HSI data for agricultural applications through DL approaches, a pressing need exists for a comprehensive survey that can effectively navigate researchers through the significant strides achieved and the future promising research directions in this domain. This literature review diligently compiles, analyzes, and discusses recent endeavours employing DL methodologies. These methodologies encompass a spectrum of approaches, ranging from Autoencoders (AE) to Convolutional Neural Networks (CNN) (in 1D, 2D, and 3D configurations), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Generative Adversarial Networks (GAN), Transfer Learning (TL), Semi-Supervised Learning (SSL), Few-Shot Learning (FSL) and Active Learning (AL). These approaches are tailored to address the unique challenges posed by agricultural HSI analysis. This review evaluates and discusses the performance exhibited by these diverse approaches. 
To this end, the efficiency of these approaches is rigorously analyzed and discussed based on the results reported in state-of-the-art papers on widely recognized land cover datasets. A companion GitHub repository is also provided.
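Among the surveyed families, 1D CNNs operate directly on the spectral vector of each pixel. A minimal sketch of their core operation (a hand-rolled valid-mode 1D convolution; the spectrum and kernel are invented toy values, not taken from any surveyed work):

```python
import numpy as np

# Minimal sketch of the core operation of a 1D spectral CNN: a valid-mode
# convolution of a (normally learned) kernel along the spectral axis of a
# single hyperspectral pixel. Real networks stack many such filters with
# non-linearities; this shows only the sliding dot-product itself.
def conv1d_valid(spectrum, kernel):
    k = len(kernel)
    # slide the kernel over the spectral bands (no padding, stride 1)
    return np.array([np.dot(spectrum[i:i + k], kernel)
                     for i in range(len(spectrum) - k + 1)])

spectrum = np.array([0.1, 0.4, 0.9, 0.4, 0.1, 0.0])  # toy 6-band pixel
edge_kernel = np.array([-1.0, 0.0, 1.0])             # band-gradient filter
features = conv1d_valid(spectrum, edge_kernel)        # 4 feature values
```

The output length is `len(spectrum) - len(kernel) + 1`, which is why deep 1D CNNs progressively compress hundreds of redundant spectral bands into compact feature vectors.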
{"title":"Deep learning techniques for hyperspectral image analysis in agriculture: A review","authors":"Mohamed Fadhlallah Guerri , Cosimo Distante , Paolo Spagnolo , Fares Bougourzi , Abdelmalik Taleb-Ahmed","doi":"10.1016/j.ophoto.2024.100062","DOIUrl":"https://doi.org/10.1016/j.ophoto.2024.100062","url":null,"abstract":"<div><p>In recent years, there has been a growing emphasis on assessing and ensuring the quality of horticultural and agricultural produce. Traditional methods involving field measurements, investigations, and statistical analyses are labour-intensive, time-consuming, and costly. As a solution, Hyperspectral Imaging (HSI) has emerged as a non-destructive and environmentally friendly technology. HSI has gained significant popularity as a new technology, particularly for its promising applications in remote sensing, notably in agriculture. However, classifying HSI data is highly complex because it involves several challenges, such as the excessive redundancy of spectral bands, scarcity of training samples, and the intricate non-linear relationship between spatial positions and spectral bands. Notably, Deep Learning (DL) techniques have demonstrated remarkable efficacy in various HSI analysis tasks, including those within agriculture. As interest continues to surge in leveraging HSI data for agricultural applications through DL approaches, a pressing need exists for a comprehensive survey that can effectively navigate researchers through the significant strides achieved and the future promising research directions in this domain. This literature review diligently compiles, analyzes, and discusses recent endeavours employing DL methodologies. 
These methodologies encompass a spectrum of approaches, ranging from Autoencoders (AE) to Convolutional Neural Networks (CNN) (in 1D, 2D, and 3D configurations), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Generative Adversarial Networks (GAN), Transfer Learning (TL), Semi-Supervised Learning (SSL), Few-Shot Learning (FSL) and Active Learning (AL). These approaches are tailored to address the unique challenges posed by agricultural HSI analysis. This review evaluates and discusses the performance exhibited by these diverse approaches. To this end, the efficiency of these approaches has been rigorously analyzed and discussed based on the results of the state-of-the-art papers on widely recognized land cover datasets. <span>Github repository</span>.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100062"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266739322400005X/pdfft?md5=5a272b7d6066b8efe8bee784c28464f9&pid=1-s2.0-S266739322400005X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140331066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-04-01Epub Date: 2024-01-12DOI: 10.1016/j.ophoto.2023.100057
Kyriaki Mouzakidou, Aurélien Brun, Davide A. Cucci, Jan Skaloud
Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation by leveraging all available spatio-temporal constraints in a dynamic network: i) with a commercial IMU for drones and ii) with simultaneous processing of raw observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (image 2D and point-cloud 3D tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare them with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point-cloud. We further demonstrate that the use of image constraints (in addition to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of special calibration patterns or calibration targets.
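The intuition behind adding constraints to a common adjustment can be illustrated in the scalar case (a toy analogue of our own, not the paper's dynamic-network estimator): fusing two independent observations of the same quantity by inverse-variance weighting always yields a smaller variance than either input, which is why extra spatial constraints tighten the trajectory estimate.

```python
# Toy illustration (not the paper's method): inverse-variance fusion of two
# independent observations of the same attitude angle -- the scalar analogue
# of adding an extra constraint to a least-squares adjustment.
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)  # weighted mean
    var = 1.0 / (w1 + w2)                # fused variance < min(var1, var2)
    return x, var

# e.g. a hypothetical inertial attitude estimate fused with a tighter
# lidar-derived constraint: the result leans toward the better observation
x, var = fuse(1.02, 0.04, 0.98, 0.01)
```

The fused variance `1 / (1/var1 + 1/var2)` is strictly below both inputs, mirroring (in miniature) the reported factor 2–3 attitude-accuracy gain when spatial constraints augment the temporal ones.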
{"title":"Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment","authors":"Kyriaki Mouzakidou, Aurélien Brun, Davide A. Cucci, Jan Skaloud","doi":"10.1016/j.ophoto.2023.100057","DOIUrl":"10.1016/j.ophoto.2023.100057","url":null,"abstract":"<div><p><em>Tightly-coupled</em> sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network i) with a commercial IMU for drones and ii) with simultaneous processing of raw-observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (image 2D and point-cloud 3D tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare those with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point-cloud. 
We further demonstrate that the use of image constraints (additionally to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of special calibration patterns or calibration targets.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100057"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000285/pdfft?md5=0f7ab041b690c142ba3b35d6019ecf11&pid=1-s2.0-S2667393223000285-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139632413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}