
ISPRS Open Journal of Photogrammetry and Remote Sensing: Latest Publications

UseGeo - A UAV-based multi-sensor dataset for geospatial research
Pub Date: 2024-08-01 Epub Date: 2024-06-18 DOI: 10.1016/j.ophoto.2024.100070
F. Nex , E.K. Stathopoulou , F. Remondino , M.Y. Yang , L. Madhuanand , Y. Yogender , B. Alsadik , M. Weinmann , B. Jutzi , R. Qin

3D reconstruction is a long-standing research topic in the photogrammetric and computer vision communities; although a plethora of open-source and commercial solutions for 3D reconstruction have been released in the last few years, several open challenges and limitations still exist. Undoubtedly, deep learning algorithms have demonstrated great potential in several remote sensing tasks, including image-based 3D reconstruction. State-of-the-art monocular and stereo algorithms leverage deep learning techniques and achieve increased performance in depth estimation and 3D reconstruction. However, one of the limitations of such methods is that they rely heavily on large training sets that are often tedious to obtain; even when available, they typically refer to indoor, close-range scenarios and low-resolution images. Especially when considering UAV (Unmanned Aerial Vehicle) scenarios, such data are not available and domain adaptation is not a trivial challenge. To fill this gap, the UAV-based multi-sensor dataset for geospatial research (UseGeo - https://usegeo.fbk.eu/home) is introduced in this paper. It contains both image and LiDAR data and aims to support relevant research in photogrammetry and computer vision with a useful training set for both stereo and monocular 3D reconstruction algorithms. In this regard, the dataset provides ground truth data for both point clouds and depth maps. In addition, UseGeo can also be a valuable dataset for other tasks such as feature extraction and matching, aerial triangulation, or image and LiDAR co-registration. The paper introduces the UseGeo dataset and validates some state-of-the-art algorithms to assess their usability for both monocular and multi-view 3D reconstruction.
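Since the dataset ships ground-truth depth maps, a natural first use is scoring a predicted depth map against them. A minimal sketch (synthetic arrays stand in for real UseGeo depth maps; the convention that zero marks invalid pixels is an assumption for illustration, not the dataset's documented format):

```python
import numpy as np

def depth_rmse(pred, gt):
    """RMSE over pixels with valid ground-truth depth (gt > 0)."""
    mask = gt > 0
    return float(np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2)))

# toy arrays standing in for a predicted and a ground-truth depth map (metres)
gt = np.array([[10.0, 0.0], [12.0, 11.0]])    # 0 marks invalid pixels
pred = np.array([[10.5, 3.0], [11.0, 11.0]])
print(depth_rmse(pred, gt))
```

Masking out invalid pixels before computing the error matters in practice, since LiDAR-derived ground truth is rarely dense.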

Citations: 0
Erratum to “Principled bundle block adjustment with multi-head cameras” [ISPRS Open J. Photogram. Rem. Sens. 11 (2023) 100051]
Pub Date: 2024-08-01 Epub Date: 2024-05-31 DOI: 10.1016/j.ophoto.2024.100068
Eleonora Maset, Luca Magri, Andrea Fusiello
Citations: 0
Erratum to “Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data” [ISPRS Open J. Photogram. Rem. Sens. 4 (2022) 100012]
Pub Date: 2024-08-01 Epub Date: 2024-05-31 DOI: 10.1016/j.ophoto.2024.100066
Andras Balazs, Eero Liski, Sakari Tuominen, Annika Kangas
Citations: 0
Erratum to “Individual tree segmentation and species classification using high-density close-range multispectral laser scanning data” [ISPRS Open J. Photogram. Rem. Sens. 9 (2023) 100039]
Pub Date: 2024-08-01 Epub Date: 2024-05-31 DOI: 10.1016/j.ophoto.2024.100067
Aada Hakula, Lassi Ruoppa, Matti Lehtomäki, Xiaowei Yu, Antero Kukko, Harri Kaartinen, Josef Taher, Leena Matikainen, Eric Hyyppä, Ville Luoma, Markus Holopainen, Ville Kankare, Juha Hyyppä
Citations: 0
Depth estimation and 3D reconstruction from UAV-borne imagery: Evaluation on the UseGeo dataset
Pub Date: 2024-08-01 Epub Date: 2024-05-04 DOI: 10.1016/j.ophoto.2024.100065
M. Hermann , M. Weinmann , F. Nex , E.K. Stathopoulou , F. Remondino , B. Jutzi , B. Ruf

Depth estimation and 3D model reconstruction from aerial imagery is an important task in photogrammetry, remote sensing, and computer vision. To compare the performance of different image-based approaches, this study presents a benchmark for UAV-based aerial imagery using the UseGeo dataset. The contributions include the release of various evaluation routines on GitHub, as well as a comprehensive comparison of baseline approaches, such as methods for offline multi-view 3D reconstruction resulting in point clouds and triangle meshes, online multi-view depth estimation, as well as single-image depth estimation using self-supervised deep learning. With the release of our evaluation routines, we aim to provide a universal protocol for the evaluation of depth estimation and 3D reconstruction methods on the UseGeo dataset. The conducted experiments and analyses show that each method excels in a different category: the depth estimation from COLMAP outperforms that of the other approaches, ACMMP achieves the lowest error and highest completeness for point clouds, while OpenMVS produces triangle meshes with the lowest error. Among the online methods for depth estimation, the approach from the Plane-Sweep Library outperforms the FaSS-MVS approach, while the latter achieves the lowest processing time. And even though the particularly challenging nature of the dataset and the small amount of training data lead to a significantly higher error in the results of the self-supervised single-image depth estimation approach, it outperforms all other approaches in terms of processing time and frame rate. In our evaluation, we have also considered modern learning-based approaches that can be used for image-based 3D reconstruction, such as NeRFs. However, due to the significantly lower quality of the resulting 3D models, we have only included a qualitative comparison between NeRF-based and conventional approaches in the scope of this work.
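The point-cloud error and completeness figures reported in the abstract are the usual multi-view-stereo benchmark metrics; a minimal sketch of how such numbers can be computed with a k-d tree (toy coordinates and threshold chosen for illustration; this is not the paper's released evaluation routine):

```python
import numpy as np
from scipy.spatial import cKDTree

def completeness(gt_pts, rec_pts, tau):
    """Fraction of ground-truth points with a reconstructed point within tau."""
    d, _ = cKDTree(rec_pts).query(gt_pts)
    return float(np.mean(d <= tau))

def accuracy(gt_pts, rec_pts):
    """Mean distance from each reconstructed point to its nearest ground-truth point."""
    d, _ = cKDTree(gt_pts).query(rec_pts)
    return float(np.mean(d))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
rec = np.array([[0, 0.05, 0], [1, 0.2, 0]], dtype=float)
print(completeness(gt, rec, tau=0.1))
print(accuracy(gt, rec))
```

Accuracy and completeness pull in opposite directions, which is why benchmarks such as this one report both.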

Citations: 0
Natural color dispersion of corbicular pollen limits color-based classification
Pub Date: 2024-04-01 Epub Date: 2024-04-16 DOI: 10.1016/j.ophoto.2024.100063
Parzival Borlinghaus , Frederic Tausch , Richard Odemer

Various methods have been developed to assign pollen to its botanical origin. They range from technically complex approaches to the less precise but sophisticated chromatic assessment, in which the pollen colors are used for identification. However, a common challenge lies in the similarity of colors of pollen from different plant species. The advent of camera-based bee monitoring systems has sparked renewed interest in classifying pollen based on color and offers potential advances for honey bee biomonitoring. Despite the promise of improved sensor accuracy, a critical examination of whether color diversity within a single species may be the primary limiting factor has been lacking. Our comprehensive analysis, which includes over 85,000 corbicular pollen from 30 major pollen species, shows that the average color variation within each species is distinguishable to a human observer, similar to the difference between two dissimilar colors. From today's perspective, the considerable color variation within a single pollen source makes the use of color alone to classify pollen impractical. When picking a single pollen color from the entire dataset, we report a correct pollen type classification rate of 67 %. The accuracy was highly dependent on the type and ranged from 0 % for rare types with common colors to 99 % for distinct colors. The large color dispersion within species highlights the need for complementary methods to improve the accuracy and reliability of color-based pollen identification in biomonitoring applications.
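The limitation the study describes can be illustrated with the simplest possible colour classifier: nearest mean colour. The centroids below are hypothetical, not values from the paper; the point is that two pollen types with overlapping colour distributions are easily confused, while a distinct colour is trivial:

```python
import numpy as np

# hypothetical mean RGB colors for three pollen types (illustrative, not from the paper)
centroids = {"willow": (230, 210, 90), "rape": (215, 200, 80), "phacelia": (90, 80, 150)}

def classify_by_color(rgb):
    """Assign a pollen load to the type with the nearest mean color (Euclidean RGB)."""
    arr = np.array(list(centroids.values()), dtype=float)
    names = list(centroids)
    d = np.linalg.norm(arr - np.array(rgb, dtype=float), axis=1)
    return names[int(np.argmin(d))]

# a load whose color falls between the willow and rape centroids is easily misassigned
print(classify_by_color((222, 205, 85)))
```

With within-species colour variation on the order of the distance between centroids, as the paper measures, such a classifier cannot separate the overlapping types no matter how accurate the sensor is.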

Citations: 0
End-to-end simulations to optimize imaging spectroscopy mission requirements for seven scientific applications
Pub Date: 2024-04-01 Epub Date: 2024-03-06 DOI: 10.1016/j.ophoto.2024.100060
X. Briottet , K. Adeline , T. Bajjouk , V. Carrère , M. Chami , Y. Constans , Y. Derimian , A. Dupiau , M. Dumont , S. Doz , S. Fabre , P.Y. Foucher , H. Herbin , S. Jacquemoud , M. Lang , A. Le Bris , P. Litvinov , S. Loyer , R. Marion , A. Minghelli , B. Cheul

CNES is currently carrying out a Phase A study to assess the feasibility of a future hyperspectral imaging sensor (10 m spatial resolution) combined with a panchromatic camera (2.5 m spatial resolution). This mission focuses on both high spatial and spectral resolution requirements, as inherited from previous French studies such as HYPEX, HYPXIM, and BIODIVERSITY. To meet user requirements, cost, and instrument compactness constraints, CNES asked the French hyperspectral Mission Advisory Group (MAG), representing a broad French scientific community, to provide recommendations on spectral sampling, particularly in the Short Wave InfraRed (SWIR) for various applications.

This paper presents the tests carried out with the aim of defining the optimal spectral sampling and spectral resolution in the SWIR domain for quantitative estimation of physical variables and classification purposes. The targeted applications are geosciences (mineralogy, soil moisture content), forestry (tree species classification, leaf functional traits), coastal and inland waters (bathymetry, water column, bottom classification in shallow water, coastal habitat classification), urban areas (land cover), industrial plumes (aerosols, methane and carbon dioxide), cryosphere (specific surface area, equivalent black carbon concentration), and atmosphere (water vapor, carbon dioxide and aerosols). All the products simulated in this exercise used the same CNES end-to-end processing chain, with realistic instrument parameters, enabling easy comparison between applications. 648 simulations were carried out with different spectral strategies, radiometric calibration performances and signal-to-noise Ratios (SNR): 24 instrument configurations × 25 datasets (22 images + 3 spectral libraries).

The results show that spectral sampling up to 20 nm in the SWIR range is sufficient for most applications. However, 10 nm spectral sampling is recommended for applications based on specific absorption bands such as mineralogy, industrial plumes or atmospheric gases. In addition, a slight performance loss is generally observed when radiometric calibration accuracy decreases, with a few exceptions in bathymetry and in the cryosphere for which the observed performance is severely degraded. Finally, most applications can be achieved with a realistic SNR, with the exception of bathymetry, shallow water classification, as well as carbon dioxide and methane estimation, which require the optimistic SNR level tested. On the basis of these results, CNES is currently evaluating the best compromise for designing the future hyperspectral sensor to meet the objectives of priority applications.
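The trade-off the study explores, coarsening spectral sampling and varying SNR, can be sketched as a boxcar average over adjacent bands plus per-band Gaussian noise. This is a toy stand-in for the CNES end-to-end chain (synthetic spectrum; the SNR model here is a deliberate simplification):

```python
import numpy as np

def boxcar(wl, refl, factor):
    """Coarsen spectral sampling by averaging `factor` consecutive bands."""
    n = wl.size // factor * factor          # drop a trailing band that does not fill a group
    group = lambda a: a[:n].reshape(-1, factor).mean(axis=1)
    return group(wl), group(refl)

wl = np.arange(1000.0, 1100.0, 10.0)        # 10 SWIR bands at 10 nm sampling
refl = np.linspace(0.30, 0.39, wl.size)     # synthetic, smoothly varying reflectance
wl20, refl20 = boxcar(wl, refl, 2)          # degrade to 20 nm sampling (5 bands)

rng = np.random.default_rng(0)
snr = 200.0                                 # illustrative signal-to-noise ratio
noisy = refl + (refl / snr) * rng.standard_normal(refl.size)  # per-band Gaussian noise
```

Running an application's retrieval on `refl20` versus `refl`, across several SNR levels, is the shape of the 648-simulation experiment the paper describes.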

Citations: 0
Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning
Pub Date: 2024-04-01 Epub Date: 2024-02-10 DOI: 10.1016/j.ophoto.2024.100059
Miguel Vallejo Orti , Katharina Anders , Oluibukun Ajayi , Olaf Bubenzer , Bernhard Höfle

Scalable and transferable methods for generating reliable reference data for automated remote sensing approaches are crucial, especially for mapping complex Earth surface processes such as gully erosion in low-populated and inaccessible areas. As an alternative to labour-intensive in-situ authoritative mapping, collaborative approaches enable volunteers to generate redundant independent geoinformation by digitising Earth observation imagery. We face the challenge of mapping the complex gully outlines integrating multi-user contributions of the same gully network. Comparing Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto base maps, we examine the volunteered geographic information process and multi-contribution integration using Kalman filtering and machine learning to segment a gully border in a remote area in northwestern Namibia. The Kalman filtering integrates the different lines finding a smoothed solution, and a Random Forest model is used to identify mapping conditions and terrain features as key predictors for evaluating contributors' digitising quality. Assessing results with expert-based reference data, we identify ten contributions as optimal, yielding root mean square distance values of 19.1 m, 15.9 m and 16.6 m, and variability of 2.0 m, 4.2 m and 3.8 m (root mean square distance standard deviation) for Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto, respectively. Eliminating the lowest performing contributions for Sentinel 2 using a Random Forest regression-based quality indicator improves the accuracy by up to 35% in the root mean square distance compared to a random selection, and up to 54% compared to a supervised remote sensing classification. Results for Sentinel 2 show that low slope, low terrain ruggedness index, and high normalised difference vegetation index values are correlated to high spatial mapping deviations, with Pearson correlation coefficients of −0.61, −0.5, and 0.18, respectively. Our approach is a powerful alternative for authoritative mapping of morphologically complex environmental phenomena and can provide independent reference data for supervised automatic remote sensing analysis.
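The paper's Kalman filtering operates on whole digitised lines; as a much-reduced illustration of the same principle, a scalar Kalman filter can fuse several volunteers' placements of a single vertex into one smoothed estimate with shrinking uncertainty (all numbers below are hypothetical):

```python
def kalman_fuse(measurements, meas_var, init, init_var):
    """Sequentially fuse independent 1-D measurements of the same quantity."""
    x, p = init, init_var
    for z in measurements:
        k = p / (p + meas_var)        # Kalman gain: trust in the new measurement
        x = x + k * (z - x)           # updated estimate
        p = (1 - k) * p               # updated (reduced) estimate variance
    return x, p

# three volunteers place the same gully-border vertex at slightly different x (metres)
est, var = kalman_fuse([102.0, 98.0, 101.0], meas_var=4.0, init=100.0, init_var=100.0)
print(est, var)
```

Each additional contribution shrinks the variance, which is how redundant volunteer digitising converges on a smoothed outline.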

{"title":"Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning","authors":"Miguel Vallejo Orti ,&nbsp;Katharina Anders ,&nbsp;Oluibukun Ajayi ,&nbsp;Olaf Bubenzer ,&nbsp;Bernhard Höfle","doi":"10.1016/j.ophoto.2024.100059","DOIUrl":"10.1016/j.ophoto.2024.100059","url":null,"abstract":"<div><p>Scalable and transferable methods for generating reliable reference data for automated remote sensing approaches are crucial, especially for mapping complex Earth surface processes such as gully erosion in low-populated and inaccessible areas. As an alternative for the labour-intense in-situ authoritative mapping, collaborative approaches enable volunteers to generate redundant independent geoinformation by digitising Earth observation imagery. We face the challenge of mapping the complex gully outlines integrating multi-user contributions of the same gully network. Comparing Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto base maps, we examine the volunteered geographic information process and multi-contribution integration using Kalman filtering and machine learning to segment a gully border in a remote area in northwestern Namibia. The Kalman filtering integrates the different lines finding a smoothed solution, and a Random Forest model is used to identify mapping conditions and terrain features as key predictors for evaluating contributors' digitising quality. Assessing results with expert-based reference data, we identify ten contributions as optimal, yielding root mean square distance values of 19.1 m, 15.9 m and 16.6 m, and variability of 2.0 m, 4.2 m and 3.8 m (root mean square distance standard deviation) for Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto, respectively. 
Eliminating the lowest performing contributions for Sentinel 2 using a Random Forest regression-based quality indicator improves the accuracy by up to 35% in the root mean square distance compared to a random selection, and up to 54% compared to a supervised remote sensing classification. Results for Sentinel 2 show that low slope, low terrain ruggedness index, and high normalised difference vegetation index values are correlated to high spatial mapping deviations, with Pearson correlation coefficients of −0.61, −0.5, and 0.18, respectively. Our approach is a powerful alternative for authoritative mapping of morphologically complex environmental phenomena and can provide independent reference data for supervised automatic remote sensing analysis.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100059"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000024/pdfft?md5=48a1afef19ee80fc26305409481984b5&pid=1-s2.0-S2667393224000024-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139874969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning techniques for hyperspectral image analysis in agriculture: A review
Pub Date : 2024-04-01 Epub Date: 2024-03-30 DOI: 10.1016/j.ophoto.2024.100062
Mohamed Fadhlallah Guerri, Cosimo Distante, Paolo Spagnolo, Fares Bougourzi, Abdelmalik Taleb-Ahmed

In recent years, there has been a growing emphasis on assessing and ensuring the quality of horticultural and agricultural produce. Traditional methods involving field measurements, investigations, and statistical analyses are labour-intensive, time-consuming, and costly. As a solution, Hyperspectral Imaging (HSI) has emerged as a non-destructive and environmentally friendly technology. HSI has gained significant popularity as a new technology, particularly for its promising applications in remote sensing, notably in agriculture. However, classifying HSI data is highly complex because it involves several challenges, such as the excessive redundancy of spectral bands, the scarcity of training samples, and the intricate non-linear relationship between spatial positions and spectral bands. Notably, Deep Learning (DL) techniques have demonstrated remarkable efficacy in various HSI analysis tasks, including those within agriculture. As interest continues to surge in leveraging HSI data for agricultural applications through DL approaches, there is a pressing need for a comprehensive survey that can guide researchers through the significant strides already achieved and the promising future research directions in this domain. This literature review compiles, analyzes, and discusses recent endeavours employing DL methodologies. These methodologies encompass a spectrum of approaches, ranging from Autoencoders (AE) to Convolutional Neural Networks (CNN) (in 1D, 2D, and 3D configurations), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Generative Adversarial Networks (GAN), Transfer Learning (TL), Semi-Supervised Learning (SSL), Few-Shot Learning (FSL) and Active Learning (AL). These approaches are tailored to address the unique challenges posed by agricultural HSI analysis. This review evaluates and discusses the performance exhibited by these diverse approaches.
To this end, the efficiency of these approaches has been rigorously analyzed and discussed based on the results of the state-of-the-art papers on widely recognized land cover datasets. Github repository.
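The spectral-band redundancy named among the challenges is typically tackled with a dimensionality-reduction step before any of the listed DL classifiers (e.g. ahead of a 1D/2D/3D CNN). A minimal PCA band-reduction sketch in NumPy follows; the toy cube, its three latent spectra, and the component count are illustrative assumptions, not part of the review.

```python
import numpy as np

def pca_reduce_bands(cube, n_components=10):
    """Reduce the redundant spectral bands of an HSI cube with PCA.

    cube: (rows, cols, bands) hyperspectral image.
    Returns a (rows, cols, n_components) cube of principal-component scores.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # centre each spectral band
    cov = X.T @ X / (X.shape[0] - 1)          # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # strongest components first
    return (X @ top).reshape(rows, cols, n_components)

# Toy cube: 200 highly correlated bands generated from 3 latent spectra.
rng = np.random.default_rng(1)
latent = rng.normal(size=(32 * 32, 3))
mixing = rng.normal(size=(3, 200))
cube = (latent @ mixing + 0.01 * rng.normal(size=(32 * 32, 200))).reshape(32, 32, 200)
reduced = pca_reduce_bands(cube, n_components=3)
print(reduced.shape)  # (32, 32, 3)
```

Because the 200 bands are driven by only three latent spectra, three components retain essentially all of the signal, which is exactly the redundancy argument the review makes.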

{"title":"Deep learning techniques for hyperspectral image analysis in agriculture: A review","authors":"Mohamed Fadhlallah Guerri ,&nbsp;Cosimo Distante ,&nbsp;Paolo Spagnolo ,&nbsp;Fares Bougourzi ,&nbsp;Abdelmalik Taleb-Ahmed","doi":"10.1016/j.ophoto.2024.100062","DOIUrl":"https://doi.org/10.1016/j.ophoto.2024.100062","url":null,"abstract":"<div><p>In recent years, there has been a growing emphasis on assessing and ensuring the quality of horticultural and agricultural produce. Traditional methods involving field measurements, investigations, and statistical analyses are labour-intensive, time-consuming, and costly. As a solution, Hyperspectral Imaging (HSI) has emerged as a non-destructive and environmentally friendly technology. HSI has gained significant popularity as a new technology, particularly for its promising applications in remote sensing, notably in agriculture. However, classifying HSI data is highly complex because it involves several challenges, such as the excessive redundancy of spectral bands, scarcity of training samples, and the intricate non-linear relationship between spatial positions and spectral bands. Notably, Deep Learning (DL) techniques have demonstrated remarkable efficacy in various HSI analysis tasks, including those within agriculture. As interest continues to surge in leveraging HSI data for agricultural applications through DL approaches, a pressing need exists for a comprehensive survey that can effectively navigate researchers through the significant strides achieved and the future promising research directions in this domain. This literature review diligently compiles, analyzes, and discusses recent endeavours employing DL methodologies. 
These methodologies encompass a spectrum of approaches, ranging from Autoencoders (AE) to Convolutional Neural Networks (CNN) (in 1D, 2D, and 3D configurations), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Generative Adversarial Networks (GAN), Transfer Learning (TL), Semi-Supervised Learning (SSL), Few-Shot Learning (FSL) and Active Learning (AL). These approaches are tailored to address the unique challenges posed by agricultural HSI analysis. This review evaluates and discusses the performance exhibited by these diverse approaches. To this end, the efficiency of these approaches has been rigorously analyzed and discussed based on the results of the state-of-the-art papers on widely recognized land cover datasets. <span>Github repository</span><svg><path></path></svg>.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100062"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266739322400005X/pdfft?md5=5a272b7d6066b8efe8bee784c28464f9&pid=1-s2.0-S266739322400005X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140331066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment
Pub Date : 2024-04-01 Epub Date: 2024-01-12 DOI: 10.1016/j.ophoto.2023.100057
Kyriaki Mouzakidou, Aurélien Brun, Davide A. Cucci, Jan Skaloud

Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network: i) with a commercial IMU for drones and ii) with simultaneous processing of raw observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (2D image and 3D point-cloud tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare those with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and the low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point-cloud.

{"title":"Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment","authors":"Kyriaki Mouzakidou,&nbsp;Aurélien Brun,&nbsp;Davide A. Cucci,&nbsp;Jan Skaloud","doi":"10.1016/j.ophoto.2023.100057","DOIUrl":"10.1016/j.ophoto.2023.100057","url":null,"abstract":"<div><p><em>Tightly-coupled</em> sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network i) with a commercial IMU for drones and ii) with simultaneous processing of raw-observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (image 2D and point-cloud 3D tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare those with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point-cloud. 
We further demonstrate that the use of image constraints (additionally to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of special calibration patterns or calibration targets.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100057"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000285/pdfft?md5=0f7ab041b690c142ba3b35d6019ecf11&pid=1-s2.0-S2667393223000285-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139632413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
ISPRS Open Journal of Photogrammetry and Remote Sensing
Copyright © 2023 Book学术 All rights reserved.