
The Photogrammetric Record: Latest Publications

59th Photogrammetric Week: Advancement in photogrammetry, remote sensing and Geoinformatics
Pub Date : 2024-09-11 DOI: 10.1111/phor.7_12515
Citations: 0
Registration‐based point cloud deskewing and dynamic lidar simulation
Pub Date : 2024-08-16 DOI: 10.1111/phor.12516
Yuan Zhao, Kourosh Khoshelham, Amir Khodabandeh
Point clouds captured using laser scanners mounted on mobile platforms contain errors at the centimetre to decimetre level due to motion distortion. In applications such as lidar odometry or SLAM, this motion distortion is often ignored. However, in applications such as HD mapping or precise vehicle localisation, it is necessary to correct the effect of motion distortion, or 'deskew' the point clouds, before using them. Existing methods for deskewing point clouds mostly rely on a high-frequency IMU, which may not always be available. In this paper, we propose a straightforward approach that uses the registration of consecutive point clouds to estimate the motion of the scanner and deskew the point clouds. We introduce a novel surface-based method to evaluate the performance of the proposed deskewing. Furthermore, we develop a lidar simulator that reverses the proposed deskewing method and can produce synthetic point clouds with realistic motion distortion.
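The core deskewing idea — use the motion estimated from registering consecutive scans to re-express each point at its own timestamp — can be sketched as follows. This is a minimal illustration under a constant-velocity, planar-motion assumption, not the authors' implementation; `deskew` and its parameters are hypothetical names.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deskew(points, times, v, omega_z):
    """Deskew one lidar sweep under a constant-velocity motion model.

    points  : (N, 3) points, each expressed in the scanner frame at the
              instant it was measured.
    times   : (N,) per-point timestamps, normalised to [0, 1] over the sweep.
    v       : (3,) linear velocity per sweep, e.g. estimated by registering
              consecutive sweeps.
    omega_z : yaw rate (rad per sweep); planar motion assumed for brevity.

    Returns the points expressed in the scanner frame at sweep start.
    """
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, times)):
        # Pose of the scanner at time t relative to sweep start:
        # rotation rot_z(omega_z * t), translation v * t.
        out[i] = rot_z(omega_z * t) @ p + np.asarray(v) * t
    return out

# A point measured at the end of the sweep, while the scanner moved 1 m
# along x, maps 1 m further along x in the sweep-start frame.
pts = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
ts = np.array([0.0, 1.0])
corrected = deskew(pts, ts, v=[1.0, 0.0, 0.0], omega_z=0.0)
```

Running the forward model instead (adding the distortion rather than removing it) is essentially the simulation direction the abstract describes.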
Citations: 0
Coarse‐to‐fine adjustment for multi‐platform point cloud fusion
Pub Date : 2024-07-24 DOI: 10.1111/phor.12513
Xin Zhao, Jianping Li, Yuhao Li, Bisheng Yang, Sihan Sun, Yongfeng Lin, Zhen Dong
Leveraging multi-platform laser scanning systems offers a complete solution for 3D modelling of large-scale urban scenes. However, the spatial inconsistency of point clouds collected by heterogeneous platforms with different viewpoints makes seamless fusion challenging. To tackle this challenge, this paper proposes a coarse-to-fine adjustment for multi-platform point cloud fusion. First, in the preprocessing stage, the bounding box of each point cloud block is used to identify potential constraint associations. Second, the proposed local optimisation performs preliminary pairwise alignment from these potential constraint relationships and provides an initial guess for a comprehensive global optimisation. Finally, the proposed global optimisation incorporates all the local constraints in a tightly coupled optimisation with raw point correspondences. We chose two study areas for the experiments: study area 1 is a fast-road scene with a significant amount of vegetation, while study area 2 is an urban scene with many buildings. Extensive experimental evaluations indicate that the proposed method increased the accuracy in study area 1 by 50.6% and in study area 2 by 44.7%.
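The first stage — using bounding boxes of point cloud blocks to find potential constraint associations — amounts to an axis-aligned box intersection test over all block pairs. A minimal sketch (function names and the optional margin are illustrative, not from the paper):

```python
from itertools import combinations

def boxes_overlap(a, b, margin=0.0):
    """Axis-aligned bounding-box intersection test in 3D.

    a, b   : ((xmin, ymin, zmin), (xmax, ymax, zmax))
    margin : optional buffer so nearly touching blocks still pair up.
    """
    return all(a[0][k] - margin <= b[1][k] and b[0][k] - margin <= a[1][k]
               for k in range(3))

def candidate_pairs(boxes, margin=0.0):
    """Indices of block pairs whose boxes overlap — the potential
    registration constraints that seed the pairwise alignment stage."""
    return [(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes_overlap(boxes[i], boxes[j], margin)]

blocks = [((0, 0, 0), (10, 10, 5)),    # block 0
          ((8, 0, 0), (20, 10, 5)),    # block 1: overlaps block 0
          ((50, 50, 0), (60, 60, 5))]  # block 2: isolated
pairs = candidate_pairs(blocks)
```

Each surviving pair would then get a pairwise registration, whose result feeds the global adjustment as an initial guess.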
Citations: 0
Optimisation of real‐scene 3D building models based on straight‐line constraints
Pub Date : 2024-07-24 DOI: 10.1111/phor.12514
Kaiyun Lv, Longyu Chen, Haiqing He, Fuyang Zhou, Shixun Yu
Due to repeated textures and edge perspective transformations on building facades, building models based on unmanned aerial vehicle (UAV) photogrammetry often suffer geometric deformation and distortion when produced with existing methods or commercial software. To address this issue, a real-scene three-dimensional (3D) building model optimisation method based on straight-line constraints is proposed. First, point clouds generated by UAV photogrammetry are down-sampled based on local curvature characteristics, and structural point clouds located at the edges of buildings are extracted. Subsequently, an improved random sample consensus (RANSAC) algorithm that imposes distance and angle constraints on lines, termed co-constrained RANSAC, is applied to further extract point clouds with straight-line features from the structural point clouds. Finally, point clouds with straight-line features are optimised and updated using sampled points on the fitted straight lines. Experimental results demonstrate that the proposed method can effectively eliminate redundant 3D points and noise while retaining the fundamental structure of buildings. Compared with popular methods and commercial software, the proposed method significantly enhances the accuracy of building modelling, with an average error reduction of 59.2%, including the optimisation of deviations in the original model's contour projection.
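The idea of a RANSAC variant that rejects candidate lines violating geometric constraints can be sketched in 2D as follows. The single angle constraint against a reference direction and the threshold values are simplifications standing in for the paper's distance-and-angle co-constraints, not its actual formulation.

```python
import math
import random

def ransac_line(points, dist_thresh=0.1, max_angle_deg=None,
                ref_dir=(1.0, 0.0), iters=200, seed=0):
    """RANSAC fit of a 2D line with an optional angular constraint.

    Candidate lines whose direction deviates from ref_dir by more than
    max_angle_deg are discarded before inlier counting.
    Returns (point_on_line, unit_direction, inlier_indices).
    """
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        dx, dy = q[0] - p[0], q[1] - p[1]
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            continue  # degenerate sample
        d = (dx / norm, dy / norm)
        if max_angle_deg is not None:
            cosang = abs(d[0] * ref_dir[0] + d[1] * ref_dir[1])
            if math.degrees(math.acos(min(1.0, cosang))) > max_angle_deg:
                continue  # candidate violates the angle constraint
        # Perpendicular point-to-line distance via the 2D cross product.
        inliers = [i for i, (x, y) in enumerate(points)
                   if abs((x - p[0]) * d[1] - (y - p[1]) * d[0]) < dist_thresh]
        if len(inliers) > len(best[2]):
            best = (p, d, inliers)
    return best

# 20 collinear points on y = 0 plus two gross outliers.
pts = [(float(i), 0.0) for i in range(20)] + [(3.0, 5.0), (7.0, -4.0)]
_, direction, inliers = ransac_line(pts, dist_thresh=0.1, max_angle_deg=10.0)
```

A 3D version would sample two points per line in the structural point cloud and apply the same reject-then-count loop.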
Citations: 0
Urban hyperspectral reference data availability and reuse: State‐of‐the‐practice review
Pub Date : 2024-07-22 DOI: 10.1111/phor.12508
Jessica M. O. Salcido, Debra F. Laefer
Hyperspectral remote sensing is currently underutilized in urban environments due to significant barriers concerning the existence, availability, and quality of urban hyperspectral reference spectra. This paper exposes these barriers by identifying, cataloging, and characterizing the contents of 23 spectral libraries, developing metrics to assess compliance with the Principles of Findability, Accessibility, Interoperability, and Reusability (FAIR), and evaluating existing resources using these criteria. Only 2931 urban spectral records were found within the 4 Global Spectral Libraries (0.61% of 476,592 published spectra). Within a further 19 Local Urban Spectral Libraries, 3862 additional urban spectra were found, but only 1662 (43%) were accessible without restriction. Content analysis revealed insufficient representation of urban material heterogeneity, imbalanced categories, and limited library interoperability, all of which further hinder effective data utilization. In response, this paper proposes a 14‐category metadataset, with specific considerations to overcome environmentally induced and inherent, intra‐material variability. In addition, material‐based spectral groupings and data resampling to common hyperspectral equipment specifications are recommended. These measures aim to enhance the utility of urban spectral libraries by improving FAIR compliance, thereby contributing to a more cohesive and enduring framework for hyperspectral reference data.
Citations: 0
ADGEO: A new shore‐based approach to improving spatial accuracy when mapping water bodies using low‐cost drones
Pub Date : 2024-07-02 DOI: 10.1111/phor.12512
Bernard Essel, Michael Bolger, John McDonald, Conor Cahalane
Over the last three decades, satellite imagery has been instrumental in mapping and monitoring water quality. However, satellites are often limited by image availability and cloud cover, and the spatial resolution of satellite images does not provide the finer detail essential for small-scale water pollution management. Drones offer a complementary platform capable of operating below cloud cover and acquiring very high spatial resolution datasets in near real time. Studies have shown that drone mapping over water can be done via the Direct Georeferencing approach. However, this method is only suitable for high-end drones with an accurate GNSS/IMU. This limitation is exacerbated by the difficulty of placing targets over water, which would otherwise be used to improve accuracy after the survey. This study explored a new method, called Assisted Direct Georeferencing, which combines the benefits of traditional Bundle Adjustment with Direct Georeferencing. The performance of the approach was evaluated over a variety of scenarios, demonstrating significant improvement in planimetric accuracy: the method reduced the XY error of the drone imagery from an MAE of 18.9 m to 3.4 m. The result shows the potential of low-cost drones with Assisted Direct Georeferencing to close the gap to high-end drones.
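One way to picture a shore-based correction is to estimate a 2D similarity (Helmert) transform from shore control points and apply it to the directly georeferenced coordinates. This is a generic sketch of that idea under a four-parameter model, not the published ADGEO algorithm; all names are illustrative.

```python
import numpy as np

def fit_helmert_2d(src, dst):
    """Least-squares 4-parameter similarity transform mapping src -> dst.

    src, dst : (N, 2) matched planimetric coordinates, e.g. direct-
               georeferencing output vs. surveyed shore control points.
    Returns (a, b, tx, ty) with x' = a*x - b*y + tx, y' = b*x + a*y + ty.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]   # x equations: a*x - b*y + tx
    A[0::2, 1] = -src[:, 1]
    A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]   # y equations: b*x + a*y + ty
    A[1::2, 1] = src[:, 0]
    A[1::2, 3] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params

def apply_helmert_2d(params, pts):
    """Apply the fitted similarity transform to (N, 2) points."""
    a, b, tx, ty = params
    pts = np.asarray(pts, float)
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])

# Direct-georeferenced points offset by a constant 2 m shift in x:
measured = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
surveyed = measured + np.array([2.0, 0.0])
p = fit_helmert_2d(measured, surveyed)
corrected = apply_helmert_2d(p, measured)
```

With real data the residuals after this fit would be summarised by the MAE, the metric the abstract reports.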
Citations: 0
Optical flow matching with automatically correcting the scale difference of tunnel parallel photogrammetry
Pub Date : 2024-06-27 DOI: 10.1111/phor.12511
Hao Li, Bohao Gao, Xiufeng He, Pengfei Yu
Using parallel photography to model tunnels is an efficient method for real-scene modelling. To address the problem that the accuracy of optical flow matching in tunnel parallel-photography image sequences is severely affected by the scale deformation of stereo images, a novel optical flow matching method that automatically corrects the scale difference of tunnel parallel-photography stereo images is proposed from the perspective of imaging relationships. By analysing the distribution pattern of the scale difference in stereo images, a model is obtained in which the scale difference of image points is distributed radially and symmetrically on the image and grows as a power function. This model is introduced into traditional optical flow matching to correct image scale differences and improve matching accuracy. In the experiments, the mean square error of optical flow matching after correcting the scale difference is less than 0.3 pixels, an improvement of at least 34.3% over the uncorrected result and of up to 45.5% at best. These results indicate that the proposed method significantly improves the accuracy of image matching and modelling in tunnel parallel photogrammetry.
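A radially symmetric, power-function scale model implies a per-pixel radial correction of the kind sketched below. The coefficients `k` and `n` are placeholders, not values estimated in the paper, and dividing by the modelled factor is a first-order inverse of the forward model.

```python
import math

def correct_radial_scale(u, v, cx, cy, k, n):
    """Undo a radially symmetric scale difference that grows as a power
    function of the distance from the image centre (cx, cy).

    Forward model assumed: r_obs = r * (1 + k * r**n).
    First-order inverse: divide the observed radius by the modelled
    scale factor evaluated at r_obs.
    """
    du, dv = u - cx, v - cy
    r = math.hypot(du, dv)
    if r == 0.0:
        return (u, v)  # the centre is unaffected by radial scaling
    scale = 1.0 + k * r ** n
    return (cx + du / scale, cy + dv / scale)

# A point 100 px right of the centre with k=1e-4, n=1 sees a 1% scale
# inflation and is pulled back toward the centre accordingly.
u2, v2 = correct_radial_scale(1060.0, 540.0, 960.0, 540.0, k=1e-4, n=1.0)
```

In the matching pipeline this correction would be applied to one image of the stereo pair before (or inside) the optical flow iteration.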
Citations: 0
MoLO: Drift‐free lidar odometry using a 3D model
Pub Date : 2024-06-14 DOI: 10.1111/phor.12509
H. Zhao, Y. Zhao, M. Tomko, K. Khoshelham
LiDAR odometry enables the localisation of vehicles and robots in environments where global navigation satellite systems (GNSS) are not available. An inherent limitation of LiDAR odometry is the accumulation of local motion-estimation errors. Current approaches rely heavily on loop closure to optimise the estimated sensor poses and eliminate the drift of the estimated trajectory; consequently, these systems cannot perform real-time localisation and are therefore impractical for navigation tasks. This paper presents MoLO, a novel model-based LiDAR odometry approach that achieves real-time, drift-free localisation using a 3D model of the environment containing planar surfaces, namely the structural elements of buildings. The proposed approach uses the 3D model to initialise the LiDAR pose and employs scan-to-scan registration to estimate the pose of consecutive LiDAR scans. Re-registering LiDAR scans to the 3D model at a certain frequency provides the global sensor pose and eliminates trajectory drift. Pose graphs are built frequently to obtain a smooth and accurate trajectory. A geometry-based method and a learning-based method for registering LiDAR scans with the 3D model are tested and compared. Experimental results show that MoLO can eliminate drift and achieve real-time localisation while providing an accuracy equivalent to loop-closure optimisation.
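Registering a scan to a planar 3D model typically means minimising point-to-plane residuals. A minimal sketch of that residual computation — the plane parameterisation and the nearest-plane data association are illustrative assumptions, not MoLO's actual registration:

```python
def point_to_plane_residuals(points, planes):
    """Signed distance from each point to its nearest model plane.

    points : list of (x, y, z) scan points.
    planes : list of ((nx, ny, nz), d) with unit normal n and offset d,
             the plane being n . p + d = 0 (e.g. building walls/floors
             extracted from the 3D model).
    A model-based registration would adjust the sensor pose to drive
    these residuals toward zero.
    """
    res = []
    for p in points:
        dists = [nx * p[0] + ny * p[1] + nz * p[2] + d
                 for (nx, ny, nz), d in planes]
        res.append(min(dists, key=abs))  # associate with the nearest plane
    return res

# Two walls of a room: x = 0 and y = 5.
walls = [((1.0, 0.0, 0.0), 0.0), ((0.0, 1.0, 0.0), -5.0)]
residuals = point_to_plane_residuals([(0.2, 2.0, 1.0), (3.0, 4.9, 0.0)], walls)
```

Re-registering against the model at intervals then replaces the accumulated scan-to-scan drift with these globally anchored residuals.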
Citations: 0
Forest canopy height modelling based on photogrammetric data and machine learning methods
Pub Date : 2024-06-04 DOI: 10.1111/phor.12507
Xingsheng Deng, Yujing Liu, Xingdong Cheng
Topographic surveying in forests is a long-standing problem that photogrammetry has yet to solve. Forest canopy height is a crucial biophysical parameter used to derive essential information about forest ecosystems. To construct a canopy height model of forest areas, this study extracts spectral feature factors from a digital orthophoto map and geometric feature factors from a digital surface model, both generated through aerial photogrammetry and LiDAR (light detection and ranging). The maximum information coefficient, the Pearson, Kendall and Spearman correlation coefficients, and a newly proposed index of relative importance are employed to assess the correlation between each feature factor and forest vertical height. Gradient boosting decision tree regression is introduced to construct a canopy height model, which enables the prediction of unknown canopy heights in forest areas. Two additional machine learning techniques, random forest regression and support vector machine regression, are employed to construct canopy height models for comparative analysis. The datasets from two study areas were processed for model training and prediction, yielding encouraging results: the canopy height model achieves prediction accuracies of 0.3 m in forested areas with 50% vegetation coverage and 0.8 m in areas with 99% vegetation coverage, even when only 10% of the available data are selected as training data. These approaches provide feasible and reliable techniques for modelling canopy height in forested areas under varying conditions.
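The feature-screening step — correlating each candidate feature factor with reference canopy heights before training the regressor — can be illustrated with Pearson correlation. The feature names and synthetic data below are hypothetical, not the study's actual factors.

```python
import numpy as np

def rank_features(X, y, names):
    """Rank candidate predictors by |Pearson r| against reference
    canopy heights, the kind of screening done before model training.

    X     : (N, F) feature matrix, y : (N,) reference heights.
    Returns [(name, r), ...] sorted by decreasing |r|.
    """
    scores = {}
    for j, name in enumerate(names):
        scores[name] = np.corrcoef(X[:, j], y)[0, 1]
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

rng = np.random.default_rng(42)
heights = rng.uniform(5.0, 30.0, size=200)           # reference heights (m)
ndvi = 0.02 * heights + rng.normal(0.0, 0.01, 200)   # strongly height-related
slope = rng.uniform(0.0, 45.0, size=200)             # unrelated terrain slope
X = np.column_stack([ndvi, slope])
ranked = rank_features(X, heights, ["ndvi", "slope"])
```

Only the top-ranked factors would then feed the gradient boosting regression; Kendall or Spearman coefficients could be substituted for `np.corrcoef` to capture monotone, non-linear relations.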
Citations: 0
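The workflow described in the abstract above — screening candidate feature factors with several correlation coefficients, then training gradient boosting, random forest and support vector regressors on only 10% of the data — can be sketched as follows. This is a minimal illustration on synthetic data: the feature names (`ndvi`, `dsm_roughness`, `slope`) are hypothetical stand-ins, not the paper's actual spectral and geometric factors, and the paper's maximum information coefficient and relative importance index are omitted.

```python
# Sketch of the abstract's comparison workflow on synthetic data.
import numpy as np
from scipy.stats import pearsonr, kendalltau, spearmanr
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 2000
# Hypothetical feature factors (assumed names, not from the paper).
features = {
    "ndvi": rng.uniform(0.1, 0.9, n),           # spectral factor from a DOM
    "dsm_roughness": rng.uniform(0.0, 5.0, n),  # geometric factor from a DSM
    "slope": rng.uniform(0.0, 30.0, n),         # geometric factor from a DSM
}
# Synthetic canopy height driven mainly by the first two factors.
y = 10 * features["ndvi"] + 2 * features["dsm_roughness"] + rng.normal(0, 0.5, n)

# Correlation screening with Pearson, Kendall and Spearman coefficients.
for name, x in features.items():
    print(name,
          round(pearsonr(x, y)[0], 2),
          round(kendalltau(x, y)[0], 2),
          round(spearmanr(x, y)[0], 2))

X = np.column_stack(list(features.values()))
# Only 10% of the samples are used for training, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, random_state=0)

models = {
    "GBDT": GradientBoostingRegressor(random_state=0),
    "RF": RandomForestRegressor(random_state=0),
    "SVR": SVR(),
}
rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = model.predict(X_te) - y_te
    rmse[name] = float(np.sqrt(np.mean(err ** 2)))
print(rmse)
```

On real data the accuracy figures would of course depend on the extracted factors and terrain; the sketch only shows the train-on-a-small-subset, compare-three-regressors structure.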
ISPRS WG IV/9: 3D GeoInfo and EG‐ICE joint conference 2024 国际摄影测量和遥感学会第 IV/9 工作组:3D GeoInfo 和 EG-ICE 联席会议 2024 年
Pub Date : 2024-06-01 DOI: 10.1111/phor.12501
{"title":"ISPRS WG IV/9: 3D GeoInfo and EG‐ICE joint conference 2024","authors":"","doi":"10.1111/phor.12501","DOIUrl":"https://doi.org/10.1111/phor.12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141410606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0