
ISPRS Open Journal of Photogrammetry and Remote Sensing: Latest Publications

Comparing Deep Learning models for mapping rice cultivation area in Bhutan using high-resolution satellite imagery
Pub Date: 2025-01-01 | Epub Date: 2025-01-31 | DOI: 10.1016/j.ophoto.2025.100084
Biplov Bhandari, Timothy Mayer
Crop type and crop extent are critical information that helps policymakers make informed decisions on food security. As Bhutan's economy has grown at an annual rate of 7.5% over the last three decades, there is a need to provide geospatial products that local experts can leverage to support decision-making in the context of economic and population growth and their impacts on food security. To address these food-security concerns, the Bhutanese government is promoting, through various policies and their implementation, several drought-resilient, high-yielding, and disease-resistant crop varieties to actively combat environmental challenges and support higher crop yields. Simultaneously, the government is increasing its use of technological approaches, such as incorporating Remote Sensing-based knowledge and data products into its decision-making process. This study focuses on Paro, one of the top rice-yielding districts in Bhutan, and employs publicly available high-resolution satellite imagery from Planet Labs, provided through Norway's International Climate and Forest Initiative (NICFI). Two Deep Learning approaches, a point-based model (DNN) and a patch-based model (U-Net), were used in conjunction with cloud-computing platforms. Four models were trained per approach: (1) Red, Green, Blue, and Near-Infrared (RGBN) channels from Planet; (2) RGBN and Elevation data (RGBNE); (3) RGBN and Sentinel-1 data (RGBNS); and (4) RGBN with Elevation and Sentinel-1 data (RGBNES). Across this analysis, the U-Net displayed higher performance metrics in both model training and model validation. Among the U-Net models, the RGBN, RGBNE, RGBNS, and RGBNES variants had F1-scores of 0.8546, 0.8563, 0.8467, and 0.8500, respectively. An additional independent model evaluation found a high level of performance variation across all metrics (precision, recall, and F1-score), underscoring the need for practitioners to employ independent validation. In this independent evaluation, the U-Net-based RGBN, RGBNE, RGBNS, and RGBNES models displayed F1-scores of 0.5935, 0.6154, 0.5882, and 0.6582, respectively, suggesting U-Net RGBNES as the best model in the comparison. The study demonstrates that Deep Learning approaches can be used for mapping rice cultivation area, including in combination with the survey-based approaches currently utilized by the Department of Agriculture (DoA) in Bhutan. Further, the study demonstrated the use of regional land cover products, such as SERVIR's Regional Land Cover Monitoring System (RLCMS), as a weak-label approach to capture different strata, addressing the class-imbalance problem and improving the sampling design for Deep Learning applications. Finally, preliminary model testing and comparisons showed that adding further features such as NDVI, EVI, and NDWI did not significantly improve model performance.
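As a toy illustration of how the four input stacks above differ, and of the F1-score used to rank them, here is a hedged Python sketch; the array shapes, random data, and Sentinel-1 channel choice (VV/VH) are placeholder assumptions, not the study's pipeline.

    import numpy as np
    from sklearn.metrics import f1_score

    H, W = 256, 256                      # hypothetical tile size
    rgbn = np.random.rand(H, W, 4)       # Planet Red, Green, Blue, NIR
    elev = np.random.rand(H, W, 1)       # elevation layer
    s1 = np.random.rand(H, W, 2)         # Sentinel-1 backscatter (assumed VV, VH)

    stacks = {
        "RGBN": rgbn,
        "RGBNE": np.concatenate([rgbn, elev], axis=-1),
        "RGBNS": np.concatenate([rgbn, s1], axis=-1),
        "RGBNES": np.concatenate([rgbn, elev, s1], axis=-1),
    }
    for name, x in stacks.items():
        print(name, x.shape)             # e.g. RGBNES -> (256, 256, 7)

    # Per-pixel F1-score of a predicted rice mask against reference labels:
    y_true = np.random.randint(0, 2, H * W)
    y_pred = np.random.randint(0, 2, H * W)
    print(round(f1_score(y_true, y_pred), 4))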
Citations: 0
Minimal rotations in arbitrary dimensions with applications to hypothesis testing and parameter estimation
Pub Date: 2025-01-01 | Epub Date: 2025-02-07 | DOI: 10.1016/j.ophoto.2025.100085
Jochen Meidow, Horst Hammer
The rotation of a vector around the origin and in a plane constitutes a minimal rotation. Such a rotation is of vital importance in many applications. Examples are the re-orientation of spacecraft or antennas with minimal effort, the smooth interpolation between sensor poses, and the drawing of circular arcs in 2D and 3D. In numerical linear algebra, minimal rotations in different planes are used to manipulate matrices, e.g., to compute the QR decomposition of a matrix. This review compiles the concepts and formulas for minimal rotations in arbitrary dimensions for easy reference and provides a summary of the mathematical background necessary to understand the techniques described in this paper. The discussed concepts are accompanied by important examples in the context of photogrammetric image analysis. Hypothesis testing and parameter estimation for uncertain geometric entities are described in detail. In both applications, minimal rotations play an important role.
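As a concrete anchor for the QR example mentioned in the abstract, here is a minimal NumPy sketch of a Givens rotation, the elementary minimal rotation acting in a single coordinate plane; this is an illustration, not code from the paper.

    import numpy as np

    def givens(a, b):
        """Coefficients c, s with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
        r = np.hypot(a, b)
        if r == 0.0:
            return 1.0, 0.0
        return a / r, b / r

    # Zero A[1, 0] by a rotation acting only in the (row 0, row 1) plane:
    # the elementary step of a rotation-based QR decomposition.
    A = np.array([[3.0, 1.0],
                  [4.0, 2.0]])
    c, s = givens(A[0, 0], A[1, 0])
    G = np.array([[c, s],
                  [-s, c]])
    R = G @ A                  # upper triangular; R[1, 0] is numerically zero
    Q = G.T                    # orthogonal factor, so A = Q @ R
    print(np.allclose(A, Q @ R), np.round(R, 3))

In n dimensions the same two coefficients are embedded into an identity matrix at rows and columns (i, j), so the rotation leaves all other axes untouched.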
Citations: 0
Measuring nearshore waves at break point in 4D with Stereo-GoPro photogrammetry: A field comparison with multi-beam LiDAR and pressure sensors
Pub Date: 2024-12-01 | Epub Date: 2024-11-06 | DOI: 10.1016/j.ophoto.2024.100077
Marion Jaud, Stéphane Bertin, Emmanuel Augereau, France Floc’h
Measuring nearshore waves remains technically challenging, even though wave properties are used in a wide variety of applications. With the promise of high-resolution, remotely sensed measurements of water surfaces in four dimensions (spatially and temporally), stereo-photogrammetry applied to video imagery has grown into a viable solution over the last ten years. However, past deployments have essentially used costly cameras and optics, requiring fixed deployment platforms and hindering the applicability of the method in the field.
Focusing on close-range measurements of nearshore waves at break point, this paper presents a detailed evaluation of a field-oriented and cost-effective stereo-video system composed of two GoPro (Hero 7) cameras capable of collecting 12-megapixel imagery at 24 frames per second. The ‘Stereo-GoPro’ system was deployed in the surf zone during energetic conditions at a macrotidal field site using a custom-assembled mobile tower. Deployed concurrently with the stereo-video, a 16-beam LiDAR (Light Detection and Ranging) and two pressure sensors provided independent data to assess stereo-GoPro performance. All three methods were compared with respect to the evolution of the free-surface elevation over 25 min of recording at high tide and the wave parameters derived from spectral analysis. We show that stereo-GoPro can produce digital elevation models (DEMs) of the water surface over large areas (250 m²) at high spatial resolution (0.2 m grid size), which the LiDAR could not match. From instrument inter-comparisons at the location of the pressure transducers, free-surface elevation root-mean-square errors of 0.11 m and 0.18 m were obtained for LiDAR and stereo-GoPro, respectively. This translated into maximum relative errors of 3.9% and 12.5% on spectral wave parameters for LiDAR and stereo-GoPro, respectively. Optical distortion in the imagery, which could not be completely corrected through calibration, was the main source of error. Whilst the stereo-video processing workflow remains complex, cost-effective stereo-photogrammetry already opens new opportunities for deriving wave parameters in coastal regions, as well as for various other practical applications. Further tests should specifically address challenges associated with variable ambient conditions and acquisition configurations, which affect measurement performance, to guarantee a wider uptake of the technique.
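As a hedged sketch of deriving a spectral wave parameter of the kind compared in the study from a free-surface elevation record; the synthetic signal, band limits, and Welch settings are illustrative assumptions, not the authors' processing chain.

    import numpy as np
    from scipy.signal import welch

    fs = 24.0                                  # Hz, matching the cameras' frame rate
    t = np.arange(0, 25 * 60, 1 / fs)          # 25 min record, as in the comparison
    eta = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # synthetic 10 s swell as a stand-in

    f, S = welch(eta, fs=fs, nperseg=4096)     # elevation power spectral density
    band = (f >= 0.04) & (f <= 1.0)            # assumed sea-swell frequency band
    m0 = np.sum(S[band]) * (f[1] - f[0])       # zeroth spectral moment (variance)
    Hm0 = 4.0 * np.sqrt(m0)                    # spectral significant wave height
    print(f"Hm0 = {Hm0:.2f} m")                # ~1.41 m for a 0.5 m amplitude sine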
Citations: 0
Domain adaptation of deep neural networks for tree part segmentation using synthetic forest trees
Pub Date: 2024-12-01 | Epub Date: 2024-11-16 | DOI: 10.1016/j.ophoto.2024.100078
Mitch Bryson, Ahalya Ravendran, Celine Mercier, Tancred Frickey, Sadeepa Jayathunga, Grant Pearse, Robin J.L. Hartley
Supervised deep learning algorithms have recently achieved state-of-the-art performance in the classification, segmentation and analysis of 3D LiDAR point cloud data in a wide range of applications and environments. One of the main downsides of deep learning-based approaches is the need for extensive training datasets, i.e. LiDAR point clouds that have been annotated for target tasks by human experts. One strategy for addressing this issue is the use of simulated/synthetic data (with automatically generated annotations) for training models that can then be deployed on real target data and environments. This paper explores using synthetic data of realistic forest trees to train deep learning models for tree part segmentation from real forest LiDAR data. We develop a new pipeline for generating high-fidelity simulated LiDAR scans of synthetic forest trees and combine this with an unsupervised domain adaptation strategy to adapt models trained on synthetic data to LiDAR data captured in real forest environments.
Models were trained for semantic segmentation of tree parts using a PointNet++ architecture and evaluated across a range of medium- to high-resolution laser scanning datasets collected from both ground-based and aerial platforms in multiple forest environments. Our results indicated that models trained on our synthetic data pipeline were competitive with models trained on real data, in particular when the real data came from non-target sites, and our unsupervised domain adaptation method further improved performance. Our approach reduces the burden of manual expert annotation of the large LiDAR datasets required to achieve high performance with deep learning methods for forest analysis. The use of synthetically trained models shown here provides a potential way to lower the barriers to the use of deep learning in large-scale forest analysis, with implications for applications ranging from forest inventories to scaling up in-situ forest phenotyping.
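The abstract does not spell out the adaptation mechanism, so as a generic, clearly assumed illustration of unsupervised domain adaptation on feature statistics, here is a CORAL-style alignment (Sun et al., 2016) of synthetic-domain features to real-scan statistics; it is not the authors' method.

    import numpy as np

    def coral(source_feats, target_feats, eps=1e-6):
        """Align second-order statistics of source features to the target
        domain (CORAL). Rows are samples, columns are feature channels."""
        cs = np.cov(source_feats, rowvar=False) + eps * np.eye(source_feats.shape[1])
        ct = np.cov(target_feats, rowvar=False) + eps * np.eye(target_feats.shape[1])
        # Whiten with the source covariance, then recolor with the target's.
        ws = np.linalg.cholesky(np.linalg.inv(cs))
        wt = np.linalg.cholesky(ct)
        centered = source_feats - source_feats.mean(0)
        return centered @ ws @ wt.T + target_feats.mean(0)

    rng = np.random.default_rng(0)
    synthetic = rng.normal(size=(500, 32))          # features from simulated scans
    real = 2.0 * rng.normal(size=(400, 32)) + 1.0   # shifted/rescaled real features
    aligned = coral(synthetic, real)
    print(np.allclose(np.cov(aligned, rowvar=False),
                      np.cov(real, rowvar=False), atol=1e-4))   # True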
Citations: 0
Colour guided ground-to-UAV fire segmentation
Pub Date: 2024-12-01 | Epub Date: 2024-11-07 | DOI: 10.1016/j.ophoto.2024.100076
Rui Zhou, Tardi Tjahjadi
Leveraging ground-annotated data for scene analysis on unmanned aerial vehicles (UAVs) can lead to valuable real-world applications. However, existing unsupervised domain adaptive (UDA) methods primarily focus on domain confusion, which raises conflicts among training data when there is a huge domain shift caused by variations in observation perspective or location. To illustrate this problem, we present a ground-to-UAV fire segmentation task as a novel benchmark for verifying typical UDA methods, and propose an effective framework, Colour-Mix, that boosts segmentation performance to a level equivalent to fully supervised training. First, we identify domain-invariant fire features by deriving fire-discriminating components (u*VS) defined in the colour spaces Lu*v*, YUV, and HSV. Notably, we devise criteria for combining components that are beneficial for integrating colour signals into deep-learning training, thus significantly improving the generalisation abilities of the framework without resorting to UDA techniques. Second, we perform class-specific mixing to eliminate irrelevant background content in the ground scenario and to enrich annotated fire samples for the UAV imagery. Third, we propose to disentangle the feature encoding for the different domains and use class-specific mixing as a robust training signal for the target domain. The framework is validated on the drone-captured dataset Flame, using the combined ground-level source datasets Street Fire and Corsica Wildfires. The code is available at https://github.com/Rui-Zhou-2/Colour-Mix.
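A rough sketch of assembling the u*VS representation named above, assuming u* comes from CIE Lu*v*, V from YUV, and S from HSV; this reading of the abbreviation, and the scaling, are assumptions, with the exact recipe being the paper's.

    import cv2
    import numpy as np

    bgr = cv2.imread("frame.png")        # hypothetical UAV/ground frame
    if bgr is None:                      # fall back to a synthetic frame so the sketch runs
        bgr = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

    u_star = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv)[..., 1]   # u* channel of CIE Lu*v*
    v_yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)[..., 2]    # V channel of YUV
    s_hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 1]    # S channel of HSV
    uvs = np.dstack([u_star, v_yuv, s_hsv]).astype(np.float32) / 255.0
    print(uvs.shape)   # (H, W, 3) colour prior, usable alongside RGB in training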
Citations: 0
Low-cost real-time aerial object detection and GPS location tracking pipeline
Pub Date: 2024-08-01 | Epub Date: 2024-06-03 | DOI: 10.1016/j.ophoto.2024.100069
Allan Lago, Sahaj Patel, Aditya Singh

Real-time object detection and tracking is an active area of aerial remote sensing research that enables many environmental and ecological monitoring and preservation applications. Despite the development of several solutions tailored to these specific applications, trade-offs between cost efficiency and feature richness persist. This paper proposes a lightweight, low-cost, and modular approach to real-time object detection and instance tracking, enabling a wide gamut of use cases. By integrating real-time object detection models with affordable embedded hardware, we present a system that uses image metadata to geolocate detected objects, enabling real-time applications thanks to minimal computational overhead. The algorithm generates cleaner ‘areas of interest’ from the geolocated detections, which are filtered by a clustering algorithm to remove false positives. In our findings, this proved a viable solution, with real-time processing speeds and GPS positioning accurate to within a meter. While there is room for improvement, our proposed pipeline represents a significant step forward in lowering the costs of applying computer vision to conservation applications.
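A simplified sketch of the two steps described above: geolocating a detection from image metadata under an assumed nadir, flat-ground pinhole model, then clustering repeated detections with DBSCAN so isolated false positives drop out; all numbers and parameter names are illustrative.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def pixel_to_offset_m(px, py, img_w, img_h, altitude_m, hfov_deg, vfov_deg):
        """Metric ground offset of a pixel from the image centre for a nadir
        camera over flat terrain (pinhole model)."""
        dx = (px - img_w / 2) / (img_w / 2) * altitude_m * np.tan(np.radians(hfov_deg / 2))
        dy = (py - img_h / 2) / (img_h / 2) * altitude_m * np.tan(np.radians(vfov_deg / 2))
        return dx, dy

    # Illustrative detections: (pixel_x, pixel_y) from several frames at 60 m.
    dets_px = np.array([[2010, 1510], [2015, 1498], [2003, 1522], [400, 300]])
    offsets = np.array([pixel_to_offset_m(x, y, 4000, 3000, 60.0, 78.0, 63.0)
                        for x, y in dets_px])
    # In a full pipeline the offsets would be rotated by heading and added to
    # the drone's GPS fix; here we cluster the metric offsets directly.
    labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(offsets)
    print(labels)   # e.g. [0 0 0 -1]: three consistent hits, one false positive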

Citations: 0
Individual tree detection and crown delineation in the Harz National Park from 2009 to 2022 using mask R–CNN and aerial imagery
Pub Date: 2024-08-01 | Epub Date: 2024-07-05 | DOI: 10.1016/j.ophoto.2024.100071
Moritz Lucas, Maren Pukrop, Philip Beckschäfer, Björn Waske

Forest diebacks pose a major threat to global ecosystems. Identifying and mapping both living and dead trees is crucial for understanding the causes and implementing effective management strategies. This study explores the efficacy of Mask R–CNN for automated forest dieback monitoring. The method detects individual trees, delineates their crowns, and classifies them as alive or dead. We evaluated the approach using aerial imagery and canopy height models in the Harz Mountains, Germany, a region severely affected by forest dieback. To assess the model's ability to track changes over time, we applied it to images from three separate flight campaigns (2009, 2016, and 2022). This evaluation accounted for variations in acquisition dates, cameras, post-processing techniques, and image tilt. Forest changes were analyzed based on the number, spatial distribution, and height of the detected trees. A comprehensive accuracy assessment demonstrated the Mask R–CNN's robust performance, with precision scores ranging from 0.80 to 0.88 and F1-scores from 0.88 to 0.91. These results confirm the model's ability to generalize across diverse image acquisition conditions. While only minor changes were observed between 2009 and 2016, the period between 2016 and 2022 witnessed substantial dieback, with a 64.57% loss of living trees. Notably, taller trees appeared to be particularly affected. This study highlights Mask R–CNN's potential as a valuable tool for automated forest dieback monitoring. It enables efficient detection, delineation, and classification of both living and dead trees, providing crucial data for informed forest management practices.
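As a quick sanity check on how the reported metrics relate, F1 is the harmonic mean of precision and recall, so recall can be backed out of a (precision, F1) pair; the pairing of the lowest values below is assumed purely for illustration.

    # F1 = 2PR / (P + R)  =>  R = F1 * P / (2P - F1)
    P, F1 = 0.80, 0.88
    R = F1 * P / (2 * P - F1)
    print(round(R, 4))   # 0.9778: the recall implied by that (P, F1) pair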

Citations: 0
Robust marker detection and identification using deep learning in underwater images for close range photogrammetry
Pub Date: 2024-08-01 | Epub Date: 2024-08-14 | DOI: 10.1016/j.ophoto.2024.100072
Jost Wittmann, Sangam Chatterjee, Thomas Sure

The progressing industrialization of the oceans demands reliable, accurate, and automatable subsea survey methods. Close-range photogrammetry is a promising discipline that is frequently applied by archaeologists, fish farmers, and the offshore energy industry. This paper presents a robust approach for the reliable detection and identification of photogrammetric markers in subsea images. The proposed method is robust to severe image degradation, which is frequently observed in underwater images due to turbidity, light absorption, and optical aberrations. This is the first step towards a highly automated workflow for single-camera underwater photogrammetry. The newly developed approach comprises several machine learning models, trained on 10,122 real-world subsea images showing a total of 338,301 photogrammetric markers. Performance is evaluated using object detection metrics and through a comparison with the commercially available software Metashape by Agisoft. Metashape delivers satisfactory results when image quality is good. In images with strong noise, haze, or little light, only the novel approach retrieves sufficient information for a high degree of automation of the subsequent bundle adjustment. While the need for offshore personnel and the time-to-results decrease, the robustness of the survey increases.

Citations: 0
Automated extrinsic calibration of solid-state frame LiDAR sensors with non-overlapping field of view for monitoring indoor stockpile storage facilities
Pub Date: 2024-08-01 | Epub Date: 2024-08-26 | DOI: 10.1016/j.ophoto.2024.100073
Mina Joseph, Haydn Malackowski, Hazem Hanafy, Jidong Liu, Zach DeLoach, Darcy Bullock, Ayman Habib

Several industrial and commercial bulk material management applications rely on accurate, up-to-date stockpile volume estimates. Proximal imaging and LiDAR sensing modalities can be used to derive stockpile volume estimates in outdoor and indoor storage facilities. Among the available modalities, LiDAR is more advantageous for indoor storage facilities owing to its ability to capture scans under poor lighting conditions. Evaluating volumes from such sensing modalities requires the pose (i.e., position and orientation) parameters of the sensors relative to a common reference frame. For outdoor facilities, a Global Navigation Satellite System (GNSS) combined with an Inertial Navigation System (INS) can be used to derive the sensors' pose relative to a global reference frame. For indoor facilities, GNSS signal outages rule out such a capability. Prior research has developed strategies for establishing the sensor position and orientation for stockpile volume estimation while relying on multi-beam spinning LiDAR units. These approaches are feasible owing to the large range and Field of View (FOV) of such systems, which can capture the internal surfaces of indoor storage facilities.

The mechanical movement of multi-beam spinning LiDAR units, together with the harsh conditions within indoor facilities (e.g., excessive humidity, a wide range of temperature variation, dust, and the corrosive environment in deicing salt storage facilities), limits the use of such systems. With the increasing availability of solid-state LiDAR units, there is interest in exploring their potential for stockpile volume estimation. Despite their higher robustness to harsh conditions, solid-state LiDAR units have a shorter distance measurement range and a more limited FOV than multi-beam spinning LiDAR. This research presents a strategy for the extrinsic calibration (i.e., estimating the relative pose parameters) of solid-state LiDAR units installed inside stockpile storage facilities. The extrinsic calibration is made possible by deployed spherical targets and a complete reference scan of the facility from another LiDAR sensing modality. The proposed research introduces strategies for: 1) automated extraction of the spherical targets; 2) automated matching of these targets between the solid-state LiDAR and reference scans using invariant relationships among them; and 3) coarse-to-fine estimation of the calibration parameters. Experimental results in several facilities have shown the feasibility of using the proposed methodology to conduct the extrinsic calibration and volume evaluation with an error percentage below 3.5%, even with occlusion percentages reaching up to 50%.
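As a hedged sketch of step 1 above, extracting a spherical target typically reduces to fitting a sphere to candidate points; a linear least-squares fit on a synthetic scan is shown below, as an illustration rather than the paper's algorithm.

    import numpy as np

    def fit_sphere(pts):
        """Linear least-squares sphere fit: returns (center, radius).
        Rewrites ||p - c||^2 = r^2 as a linear system in (c, r^2 - ||c||^2)."""
        A = np.hstack([2 * pts, np.ones((len(pts), 1))])
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        return center, radius

    # Synthetic scan of a 0.15 m target sphere centred at (2, -1, 0.5), with noise.
    rng = np.random.default_rng(1)
    d = rng.normal(size=(500, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    pts = np.array([2.0, -1.0, 0.5]) + 0.15 * d + rng.normal(scale=0.002, size=(500, 3))
    print(fit_sphere(pts))   # ~(array([ 2., -1., 0.5]), 0.15)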

Citations: 0
Target-based georeferencing of terrestrial radar images using TLS point clouds and multi-modal corner reflectors in geomonitoring applications
Pub Date: 2024-08-01 | Epub Date: 2024-09-18 | DOI: 10.1016/j.ophoto.2024.100074
Lorenz Schmid, Tomislav Medic, Othmar Frey, Andreas Wieser
Terrestrial Radar Interferometry (TRI) is widely adopted in geomonitoring applications due to its capability to precisely observe surface displacements along the line of sight, among other key characteristics. As its deployment grows, TRI is increasingly used to monitor smaller and more dispersed geological phenomena, where the challenge is their precise localization in 3d space if the pose of the radar interferometer is not known beforehand. To tackle this challenge, we introduce a semi-automatic target-based georeferencing method for precisely aligning TRI data with 3d point clouds obtained using long-range Terrestrial Laser Scanning (TLS). To facilitate this, we developed a multi-modal corner reflector (mmCR) that serves as a common reference point recognizable by both technologies, accompanied by a semi-automatic data-processing pipeline, including algorithms for precise center estimation. Experimental validation demonstrated that the corner reflector can be localized within the TLS data with a precision of 3–5 cm and within the TRI data with a precision of 1–2 dm. The targets were deployed in a realistic geomonitoring scenario to evaluate the implemented workflow and the achievable quality of georeferencing. The post-georeferencing mapping uncertainty was found to be at the decimeter level, matching state-of-the-art results obtained with dedicated targets and achieving more than an order of magnitude lower uncertainty than existing data-driven approaches. In contrast to existing target-based approaches, our results were achieved without laborious visual data inspection and manual target detection, and at significantly larger distances, surpassing 2 km. The use of the developed mmCR and its associated data-processing pipeline extends beyond precise georeferencing of TRI imagery to TLS point clouds, allowing alternatively for georeferencing with total stations, for mapping-quality evaluation, and for on-site testing and calibration of TRI systems within the application environment.
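Once reflector centres are matched across the two modalities, the georeferencing itself amounts to a rigid-body alignment; below is a minimal SVD-based (Kabsch/Umeyama, no scale) sketch on synthetic correspondences, illustrative rather than the authors' pipeline.

    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) with dst ≈ R @ src + t."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
        R = Vt.T @ D @ U.T
        t = cd - R @ cs
        return R, t

    # Matched reflector centres: src in the radar frame, dst in the TLS frame.
    rng = np.random.default_rng(2)
    src = rng.uniform(-50, 50, size=(6, 3))
    theta = np.radians(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    dst = src @ R_true.T + np.array([100.0, -20.0, 5.0])
    R, t = rigid_transform(src, dst)
    print(np.allclose(R, R_true), np.round(t, 3))   # True [100. -20.  5.]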
Citations: 0