
Latest publications in The Photogrammetric Record

Adaptive region aggregation for multi-view stereo matching using deformable convolutional networks
Pub Date : 2023-08-21 DOI: 10.1111/phor.12459
Han Hu, Liupeng Su, Shunfu Mao, Min Chen, Guoqiang Pan, Bo Xu, Qing Zhu
Deep-learning methods have demonstrated promising performance in multi-view stereo (MVS) applications. However, it remains challenging to apply a geometric prior to the adaptive matching windows to achieve efficient three-dimensional reconstruction. To address this problem, this paper proposes a learnable adaptive region aggregation method based on deformable convolutional networks (DCNs), integrated into the feature extraction workflow of an MVSNet method with a coarse-to-fine structure. Following the conventional MVSNet pipeline, a DCN densely estimates and applies sampling transformations in the feature extractor, forming a deformable feature pyramid network (DFPN). Furthermore, a dedicated offset regulariser is introduced to promote the convergence of the DCN's learnable offsets. The effectiveness of the proposed DFPN is validated through quantitative and qualitative evaluations on the BlendedMVS and Tanks and Temples benchmark datasets in a cross-dataset evaluation setting.
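The core idea of deformable aggregation (a fixed convolution grid displaced by learnable per-sample offsets, plus a penalty that keeps those offsets small) can be illustrated in a few lines of NumPy. This is a hypothetical sketch, not the paper's DFPN: `bilinear_sample`, `deformable_aggregate` and `offset_regulariser` are illustrative names, and a real DCN learns the offsets jointly with the network weights.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at fractional coordinates (y, x) with bilinear interpolation."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

def deformable_aggregate(feat, center, offsets):
    """Aggregate a 3x3 neighbourhood around `center`, with each tap displaced by a learned offset."""
    cy, cx = center
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    vals = [bilinear_sample(feat, cy + dy + oy, cx + dx + ox)
            for (dy, dx), (oy, ox) in zip(grid, offsets)]
    return float(np.mean(vals))

def offset_regulariser(offsets, weight=0.01):
    """L2 penalty discouraging large offsets, the role played by the paper's offset regulariser."""
    return weight * float(np.sum(np.asarray(offsets) ** 2))
```

With all offsets zero the aggregation reduces to a plain 3x3 mean, and the regulariser vanishes; non-zero offsets shift the sampled region adaptively.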
A survey on conventional and learning-based methods for multi-view stereo
Pub Date : 2023-08-13 DOI: 10.1111/phor.12456
Elisavet (Ellie) Konstantina Stathopoulou, F. Remondino
3D reconstruction of scenes from multiple images, relying on robust correspondence search and depth estimation, has been thoroughly studied for the two-view and multi-view scenarios in recent years. Multi-view stereo (MVS) algorithms aim to generate a rich, dense 3D model of the scene in the form of a dense point cloud or a triangulated mesh. In a typical MVS pipeline, the robust estimates of the camera poses, along with the sparse points obtained from structure from motion (SfM), are used as input. During this process, the depth of essentially every pixel of the scene is to be calculated. Several methods, either conventional or, more recently, learning-based, have been developed for solving the correspondence search problem. A vast amount of research exists in the literature using local, global or semi-global stereo matching approaches, with the PatchMatch algorithm being among the most popular and efficient conventional ones of the last decade. Yet, despite the widespread evolution of these algorithms, yielding complete, accurate and aesthetically pleasing 3D representations of a scene remains an open issue in real-world and large-scale photogrammetric applications. This work aims to provide a concrete survey of the most widely used MVS methods, investigating underlying concepts and challenges. To this end, the theoretical background and relevant literature are discussed for both conventional and learning-based approaches, with a particular focus on close-range 3D reconstruction applications.
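Conventional MVS and PatchMatch-style methods score candidate depths or disparities by the photo-consistency between image patches. A minimal sketch of that idea, using zero-mean normalised cross-correlation (NCC) on a 1-D scanline; the function names are illustrative and not taken from any surveyed system:

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalised cross-correlation: ~+1 for identical patches (up to gain/bias)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def best_disparity(left, right, x, patch=3, max_d=5):
    """Pick the disparity whose right-scanline patch maximises NCC with the left patch."""
    half = patch // 2
    ref = left[x - half: x + half + 1]
    scores = []
    for d in range(max_d + 1):
        if x - d - half < 0:
            break  # candidate patch would leave the scanline
        scores.append((ncc(ref, right[x - d - half: x - d + half + 1]), d))
    return max(scores)[1]
```

Real pipelines replace the exhaustive loop with PatchMatch's randomised propagation and work on 2-D windows under a slanted-plane hypothesis, but the cost function is the same in spirit.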
Automatic calibration of terrestrial laser scanners using intensity features
Pub Date : 2023-07-18 DOI: 10.1111/phor.12454
Jing Qiao, Tomislav Medic, Andreas Baumann-Ouyang
We propose an in situ self-calibration method that detects and matches intensity features on local planes in overlapping point clouds based on the Förstner operator. We successfully matched intensity features from scans at different locations by feature matching on common local planes, rather than on the rasterised grids of horizontal and vertical angles adopted by the established keypoint-based algorithm. The capability of extracting features from different stations makes comprehensive scanner calibration possible, overcoming the limitation that existing keypoint-based methods can only estimate the two-face-sensitive model parameters. The proposed algorithm has been tested with a high-precision panoramic scanner, the Leica RTC360, using datasets from a calibration hall and a general working scenario. The proposed approach calibrates the two-face-sensitive model parameters consistently with the established keypoint-based method. For comprehensive calibration, with the offset estimated and some angular parameters separated, a case in which the previous keypoint-based method failed, the proposed algorithm achieves an accuracy of 0.16 mm, 2.7″ and 2.1″ in range, azimuth and elevation for the estimated target centres. The proposed algorithm can accurately calibrate two-face-sensitive and more comprehensive model parameters without any on-site preparation, such as mounting targets.
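A simplified illustration of why some parameters are called "two-face-sensitive": errors such as the collimation axis error and the vertical index error flip sign between the scanner's two faces, so the half-difference of paired observations isolates them while the half-sum cancels them. This toy model is hypothetical and far simpler than the paper's calibration model; it only demonstrates the sign-flip mechanism.

```python
import math

def simulate_two_face(hz_true, v_true, c, i):
    """Simulate face-left and (already reduced) face-right angle readings of one target.
    Collimation error c and vertical index error i flip sign between the two faces."""
    left = (hz_true + c / math.cos(v_true), v_true + i)
    right = (hz_true - c / math.cos(v_true), v_true - i)
    return left, right

def estimate_two_face_errors(left, right, v_approx):
    """Half-difference of the faces recovers c and i; half-sum gives the error-free angles."""
    hz_l, v_l = left
    hz_r, v_r = right
    c = 0.5 * (hz_l - hz_r) * math.cos(v_approx)
    i = 0.5 * (v_l - v_r)
    return c, i, 0.5 * (hz_l + hz_r), 0.5 * (v_l + v_r)
```

Errors that affect both faces identically (such as a rangefinder offset) do not cancel this way, which is why keypoint methods restricted to two-face comparisons cannot estimate them.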
Real-time mosaic of multiple fisheye surveillance videos based on geo-registration and rectification
Pub Date : 2023-07-18 DOI: 10.1111/phor.12455
Jiongli Gao, Jun Wu, Mingyi Huang, Gang Xu
A distributed fisheye video surveillance system (DFVSS) can monitor a wide area without blind spots, but it is often affected by the viewpoint discontinuity and spatial inconsistency of the multiple videos in the area. This paper proposes a novel real-time fisheye video mosaic algorithm for wide-area surveillance. First, by extending line photogrammetry theory from central projection to spherical projection, a fisheye video geo-registration model is established and estimated using orthogonal parallel lines on the ground, so that all videos of the DFVSS share a unified reference system, eliminating the spatial inconsistency between them. Second, by combining the photogrammetric orthorectification technique with a thin-plate spline transformation, a fisheye video rectification model is established to eliminate serious distortion in geo-registered fisheye videos and align them accurately. Third, a viewport-dependent video selection strategy and a video look-up table computation technique are adopted to create a high-resolution panorama from the input fisheye videos in real time. A parking lot of about 0.4 km² monitored by eight fisheye cameras was selected as the test area. The experimental results show that the line re-projection error in the fisheye videos is about 0.5 pixels, and the overall frame rate, including panorama creation and mapping to the ground as texture, is no less than 30 fps. This indicates that the proposed algorithm achieves a good balance between the limited video transmission bandwidth and the smooth panorama observation requirements of computer equipment, which is of great value for the construction and application of DFVSS.
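The geo-registration model above is built on a spherical projection of the fisheye image. As a hedged illustration, the widely used equidistant fisheye model (image radius r = f·θ, where θ is the angle from the optical axis) maps a 3-D ray to pixel coordinates as follows; the paper's actual projection model and parameters may differ:

```python
import numpy as np

def fisheye_project(ray, f, cx, cy):
    """Project a 3-D ray (camera frame, z forward) with the equidistant model r = f * theta."""
    x, y, z = ray
    theta = np.arctan2(np.hypot(x, y), z)  # angle between the ray and the optical axis
    phi = np.arctan2(y, x)                 # azimuth of the ray around the axis
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Unlike the central (pinhole) model, r grows linearly with θ rather than with tan θ, which is what lets a fisheye cover close to a full hemisphere in one frame.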
Learning-based encoded target detection on iteratively orthorectified images for accurate fisheye calibration
Pub Date : 2023-06-19 DOI: 10.1111/phor.12453
Haonan Dong, Jian Yao, Ye Gong, Li Li, Shaosheng Cao, Yuxuan Li
Fisheye camera calibration is an essential task in photogrammetry. However, previous calibration patterns, and the robustness of the associated processing methods, are limited by fisheye distortion and varying lighting, which leads to additional manual intervention during data collection. Moreover, it is difficult to accurately detect the board target under fisheye distortion. To increase robustness in this task, we present a novel encoded board, “Meta-Board”, and a learning-based target detection method. Additionally, an automatic image orthorectification step is integrated to alleviate the distortion effect on the target iteratively until convergence. A low-cost control field with the proposed boards was built for the experiments. Results on both virtual and real camera lenses and multi-camera rigs show that our method can be used robustly for calibrating fisheye cameras and reaches state-of-the-art accuracy.
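Iterative rectification of this kind typically inverts a forward distortion model that has no closed-form inverse, repeating until the solution converges. A minimal sketch under an assumed polynomial model r = f(θ + kθ³), which is not necessarily the model used in the paper, solved by fixed-point iteration:

```python
def undistort_theta(r, f, k, iters=10):
    """Invert the assumed forward model r = f * (theta + k * theta**3) by fixed-point
    iteration theta <- r/f - k*theta**3, starting from the distortion-free guess r/f."""
    theta = r / f
    for _ in range(iters):
        theta = r / f - k * theta ** 3
    return theta
```

For moderate distortion the update is a contraction, so a handful of iterations already recovers θ to well below a pixel's worth of angle; stronger distortion models may need Newton iterations instead.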
Exploring the performance of spectral and textural information for leaf area index estimation with homogeneous and heterogeneous surfaces
Pub Date : 2023-06-04 DOI: 10.1111/phor.12450
Yangyang Zhang, Xu Han, Jian Yang
Leaf area index (LAI) is one of the key parameters of vegetation structure and can be applied to monitor vegetation growth status. Abundant spatial information (e.g., textural information) provided by developing remote sensing satellite techniques can boost the accuracy of LAI estimation. The performance of spectral and textural information must therefore be evaluated for LAI estimation across different vegetation and surface types. In this study, different spectral vegetation indices (SVIs) and grey-level co-occurrence matrix-based textural variables under different moving window sizes were extracted from Landsat TM satellite data. First, the ability of different types of SVIs to estimate LAI in different surface types was analysed. Subsequently, the effect of different texture variables with different moving window sizes on LAI estimation accuracy in different vegetation types was explored. Lastly, the performance of SVIs combined with textural information for LAI estimation in different vegetation types was evaluated. Results indicated that SVIs performed better for LAI estimation in the homogeneous region than in the heterogeneous region, and the difference vegetation index was more effective for LAI estimation across vegetation types than the other SVIs. In addition, variations in texture variables and moving window sizes had a large influence on LAI estimation for natural vegetation with high canopy heterogeneity. SVIs combined with textural information efficiently improved the accuracy of LAI estimation in different vegetation types (R² = 0.672, 0.455 and 0.523 for meadow, shrub and cantaloupe, respectively) compared with SVIs alone (R² = 0.189, 0.064 and 0.431, respectively). Especially for natural vegetation (meadow, shrub), adding textural information can greatly improve the accuracy of LAI estimation.
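The grey-level co-occurrence matrix (GLCM) variables mentioned above reduce, for one displacement vector, to a normalised histogram of co-occurring grey levels plus statistics over it. A small sketch computing one such statistic (contrast) for a quantised image; moving-window handling and the paper's full variable set are omitted:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one displacement (dx, dy), normalised to probabilities."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast = sum over (i, j) of p(i, j) * (i - j)**2; zero for a perfectly uniform texture."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))
```

A flat patch yields contrast 0, while a 1-pixel checkerboard (every horizontal neighbour differing by one level) yields contrast 1, which is the kind of signal that separates heterogeneous from homogeneous canopies.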
View-graph key-subset extraction for efficient and robust structure from motion
Pub Date : 2023-06-04 DOI: 10.1111/phor.12451
Ye Gong, Pengwei Zhou, Yu‐ye Liu, Haonan Dong, Li Li, Jian Yao
Structure from motion (SfM) recovers camera poses and the sparse structure of real scenes from multi-view images. SfM methods construct a view-graph from the matching relationships of images, and redundancy and incorrect edges are usually present in it. Redundancy reduces efficiency, and incorrect edges result in misaligned structures. In addition, an uneven distribution of vertices usually affects global accuracy. To address these problems, we propose a coarse-to-fine approach in which the poses of an extracted key-subset of images are computed first, and then all remaining images are oriented. The core of this approach is view-graph key-subset extraction, which not only prunes redundant data and incorrect edges but also obtains properly distributed key-subset vertices. The extraction is based on a replaceability score and an iteration-update strategy, so that only vertices with high SfM importance are preserved in the key-subset. Different public datasets are used to evaluate our approach. Because large-scale datasets lack ground-truth camera poses, we present new datasets with accurate camera poses and point clouds. The results demonstrate that our approach greatly increases the efficiency of SfM while improving robustness and accuracy.
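As a hedged sketch of the idea (not the paper's actual replaceability score or iteration-update strategy), a greedy pruning can repeatedly drop the most "replaceable" vertex while keeping the view-graph connected, here using degree as a crude stand-in for replaceability:

```python
from collections import deque

def connected(adj, skip=frozenset()):
    """BFS connectivity check on the graph minus the `skip` vertices."""
    nodes = [v for v in adj if v not in skip]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in skip and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

def key_subset(adj, target):
    """Greedily remove the highest-degree (most redundantly covered) vertex whose removal
    leaves the remaining view-graph connected, until only `target` vertices remain."""
    removed = set()
    while len(adj) - len(removed) > target:
        order = sorted((v for v in adj if v not in removed),
                       key=lambda v: -sum(1 for w in adj[v] if w not in removed))
        for v in order:
            if connected(adj, removed | {v}):
                removed.add(v)
                break
        else:
            break  # nothing can be removed without disconnecting the graph
    return set(adj) - removed
```

A real system would score vertices by how well their matches are covered by neighbours (and update scores iteratively), but the invariant is the same: the surviving key-subset must still span the scene as one connected component.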
2023 Asian Conference on Remote Sensing (ACRS)
Pub Date : 2023-06-01 DOI: 10.1111/phor.10_12449
The wide variety of sensors and systems available on the market for collecting spatial data makes the evaluation of the provided information, the calibration of sensors and the benchmarking of systems a critical task, and an important scientific issue for many professionals. In daily work, the assessment of algorithms and sensors for collecting and generating spatial data resources is a crucial issue for academic institutions, research centres, national mapping and cadastral agencies, and all professionals handling geospatial data. The GEOBENCH workshop is therefore appropriate for those wishing to extend their knowledge in the fields of photogrammetry and remote sensing, presenting evaluations of algorithms and sensors in the sector as well as new benchmarks. The workshop is a follow-up to the first successful event held in Warsaw, Poland, in 2019 and will be held at the AGH University of Science and Technology in Krakow, Poland, on 23–24 October 2023.
市场上用于收集空间数据的传感器和系统种类繁多,这使得对所提供的信息进行评估、对传感器进行校准和对系统进行基准测试成为一项关键任务。对于许多专业人士来说,这也是一个重要的科学问题。在日常工作中,对收集和生成空间数据资源的算法和传感器的评估是学术机构、研究中心、国家测绘和地籍机构以及所有处理地理空间数据的专业人员的关键问题。因此,gebench讲习班适合那些愿意扩展其在摄影测量和遥感领域知识的人- -目前对该部门的算法和传感器的评价以及新的基准。该研讨会是2019年在波兰华沙成功举办的第一次研讨会的后续活动,将于2023年10月23日至24日在波兰克拉科夫AGH科技大学举行。
{"title":"2023 Asian Conference on Remote Sensing (ACRS)","authors":"","doi":"10.1111/phor.10_12449","DOIUrl":"https://doi.org/10.1111/phor.10_12449","url":null,"abstract":"The wide variety of sensors and systems available on the market for collecting spatial data makes the evaluation of provided information, calibration of sensors and benchmarking of systems a critical task. It is also an important scientific issue for many professionals. In daily work, the assessment of algorithms and sensors for collecting and generating spatial data resources is a crucial issue for academic institutions, research centres, national mapping and cadastral agencies, and all professionals handling geospatial data. The GEOBENCH workshop is therefore appropriate for those willing to extend their knowledge in the fields of photogrammetry and remote sensing – present evaluations of algorithms and sensors in the sector as well as new benchmarks. The workshop is a followup of the first successful event held in Warsaw, Poland, in 2019 and will be held in the AGH University of Science and Technology in Krakow, Poland, on 23– 24 October 2023.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83324682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
42nd EARSeL Symposium 2023
Pub Date: 2023-06-01 DOI: 10.1111/phor.4_12449
Citations: 0
Academic Track of FOSS4G (Free and Open Source Software for Geospatial) 2023
Pub Date: 2023-06-01 DOI: 10.1111/phor.3_12449
Citations: 0