The Photogrammetric Record: Latest Publications

Automatic extraction of multiple morphological parameters of lunar impact craters
Pub Date : 2024-03-26 DOI: 10.1111/phor.12483
Meng Xiao, Teng Hu, Zhizhong Kang, Haifeng Zhao, Feng Liu
Impact craters are geomorphological features widely distributed on the lunar surface. Their morphological parameters are crucial for studying the reasons for their formation, the thickness of the lunar regolith at the impact site and the age of the crater. However, current research on extracting multiple morphological parameters from large numbers of impact craters across extensive geographical regions faces several challenges, including coordinate offsets in heterogeneous data, insufficient interpretation of impact crater profile morphology and incomplete extraction of morphological parameters. To address these challenges, this paper proposes an automatic morphological parameter extraction method based on a digital elevation model (DEM) and an impact crater database. It involves correcting the coordinate offsets of heterogeneous data, simulating impact crater profile morphology and automatically extracting multiple impact crater morphological parameters. The method is designed to handle large numbers of impact craters over wide areas, which makes it particularly useful for studies involving regional-scale impact crater analysis. Experiments were carried out in geological units of different ages and the accuracy of the method was analysed. The results show that, first, the proposed method corrects the impact crater centre position offset relatively effectively. Second, the impact crater profile shape fitting is relatively accurate: the R-squared value (R2) ranges from 0.97 to 1 and the mean absolute percentage error (MAPE) is between 0.032% and 0.568%, reflecting a high goodness of fit. Finally, the eight morphological parameters extracted automatically by this method, such as depth, depth–diameter ratio, and internal and external slope, are basically consistent with those extracted manually. A comparison of the proposed method with a similar approach demonstrates that it is effective and can provide data support for relevant lunar surface research.
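To make the profile-fitting step concrete, here is a minimal NumPy sketch, not the authors' implementation: it assumes a DEM patch centred on a catalogued crater (the radius and pixel spacing are hypothetical inputs), fits a polynomial to the mean radial elevation profile and derives the depth, depth–diameter ratio and the fit statistics (R2, MAPE) quoted in the abstract.

```python
import numpy as np

def crater_parameters(dem, centre_rc, radius_m, pixel_size, poly_deg=6):
    """Fit the mean radial elevation profile of one crater and derive
    basic morphological parameters. Illustrative sketch only; assumes
    the DEM patch fully covers a neighbourhood of twice the radius."""
    rows, cols = np.indices(dem.shape)
    r = np.hypot(rows - centre_rc[0], cols - centre_rc[1]) * pixel_size

    # Mean elevation in concentric rings out to twice the catalogue radius.
    nbins = 60
    edges = np.linspace(0.0, 2.0 * radius_m, nbins + 1)
    ring = np.digitize(r.ravel(), edges) - 1
    z = dem.ravel()
    prof = np.array([z[ring == i].mean() for i in range(nbins)])
    rc = 0.5 * (edges[:-1] + edges[1:])              # ring-centre distances

    # Polynomial simulation of the profile morphology, plus fit statistics.
    fit = np.polyval(np.polyfit(rc, prof, poly_deg), rc)
    r2 = 1.0 - np.sum((prof - fit) ** 2) / np.sum((prof - prof.mean()) ** 2)
    mape = np.mean(np.abs((prof - fit) / prof)) * 100.0

    # Depth = rim height minus floor height; d/D uses the rim distance.
    rim = int(np.argmax(fit))
    depth = fit[rim] - fit.min()
    return {"depth_m": depth, "depth_diameter_ratio": depth / (2.0 * rc[rim]),
            "R2": r2, "MAPE_percent": mape}

# Toy usage: a synthetic bowl-shaped crater with a raised rim, sitting on a
# 1735 m reference elevation; 10 m pixels, 500 m catalogue radius.
yy, xx = np.indices((201, 201))
rr = np.hypot(yy - 100, xx - 100) * 10.0
dem = (1735.0 - 80.0 * np.exp(-(rr / 400.0) ** 2)
       + 15.0 * np.exp(-(((rr - 500.0) / 150.0) ** 2)))
print(crater_parameters(dem, (100, 100), radius_m=500.0, pixel_size=10.0))
```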
Indoor point cloud semantic segmentation based on direction perception and hole sampling
Pub Date : 2024-03-06 DOI: 10.1111/phor.12482
Xijiang Chen, Peng Li, Bufan Zhao, Tieding Lu, Xunqiang Gong, Hui Deng
Most existing point cloud segmentation methods ignore directional information when extracting neighbourhood features. These methods are ineffective at extracting point cloud neighbourhood features because the point cloud data is not uniformly distributed and the methods are restricted by the size of the convolution kernel. We therefore take both multiple directions and hole sampling (MDHS) into account. First, for every point in the data we execute spherically sparse sampling with directional encoding in the surrounding domain to enlarge the local perceptual field, with basic geometric features as the data input. A graph convolutional neural network then maximises the point cloud characteristics in a local neighbourhood, and the more representative local point features are automatically weighted and fused by an attention pooling layer. Finally, spatial attention is added to strengthen the connections between distant points, which further improves the segmentation accuracy. Experimental results show that the OA and mIoU are 1.3% and 4.0% higher than those of PointWeb and 0.6% and 0.7% higher than those of the baseline method RandLA-Net. For indoor point cloud semantic segmentation, the segmentation performance of the proposed network is superior to that of other methods.
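The attention pooling layer described here reduces, in its simplest form, to score-weighted fusion of neighbourhood features. The following NumPy sketch is not the MDHS implementation; the scoring matrix is a hypothetical stand-in for the learned weights:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(neigh_feats, w_score):
    """Weight and fuse K neighbour features (K x C) into one C-vector.

    neigh_feats : (K, C) features of one point's neighbourhood
    w_score     : (C, C) hypothetical learned scoring weights
    """
    scores = neigh_feats @ w_score           # (K, C) per-channel scores
    attn = softmax(scores, axis=0)           # normalise over the K neighbours
    return (attn * neigh_feats).sum(axis=0)  # attention-weighted fusion

# Toy usage: 16 neighbours with 8-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))
w = rng.normal(size=(8, 8))
pooled = attention_pool(feats, w)   # -> shape (8,)
```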
Mono-MVS: textureless-aware multi-view stereo assisted by monocular prediction
Pub Date : 2024-02-29 DOI: 10.1111/phor.12480
Yuanhao Fu, Maoteng Zheng, Peiyu Chen, Xiuguo Liu
Learning-based multi-view stereo (MVS) methods have made remarkable progress in recent years. However, these methods exhibit limited robustness when faced with occlusion or weak or repetitive texture regions in the image; the resulting excessive pixel-matching errors often leave holes in the final point cloud model. To address these challenges, we propose a novel MVS network assisted by monocular prediction for 3D reconstruction. Our approach combines the strengths of the monocular and multi-view branches, leveraging the internal semantic information extracted from a single image through monocular prediction together with the strict geometric relationships between multiple images. Moreover, we adopt a coarse-to-fine strategy that gradually reduces the number of assumed depth planes and narrows the interval between them as the resolution of the input images increases during the network iteration. This strategy balances computational resource consumption against the effectiveness of the model. Experiments on the DTU, Tanks and Temples, and BlendedMVS datasets demonstrate that our method achieves outstanding results, particularly in textureless regions.
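The coarse-to-fine depth hypothesis schedule can be sketched independently of the network itself. In the illustration below the stage sizes and the fixed centre are invented for the example, not taken from the paper; each stage halves the search interval while using fewer planes than the last:

```python
import numpy as np

def depth_hypotheses(d_min, d_max, stages=(48, 24, 8)):
    """Yield per-stage depth-plane sets for a coarse-to-fine MVS sweep.

    Stage 0 spans the full [d_min, d_max] range; each later stage keeps
    the same centre (a placeholder here) but halves the interval, giving
    fewer, denser hypotheses as image resolution grows. The stage sizes
    are illustrative assumptions.
    """
    centre = 0.5 * (d_min + d_max)
    half = 0.5 * (d_max - d_min)
    for n in stages:
        yield np.linspace(centre - half, centre + half, n)
        # In a real pipeline `centre` would be re-estimated from the
        # current depth map before the interval is halved.
        half *= 0.5

for i, planes in enumerate(depth_hypotheses(425.0, 935.0)):
    print(f"stage {i}: {planes.size} planes, "
          f"spacing {planes[1] - planes[0]:.2f}")
```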
PL-Pose: robust camera localisation based on combined point and line features using control images
Pub Date : 2024-02-28 DOI: 10.1111/phor.12481
Zhihua Xu, Yiru Niu, Yan Cui, Rongjun Qin, Wenbin Sun
Camera localisation is an essential task in the field of computer vision. The objective is to determine the precise position and orientation of a newly introduced camera station based on a collection of geographically referenced control images. Traditional feature-based approaches face difficulties when localising images that exhibit significant disparities in viewpoint. Modern deep learning approaches, by contrast, aim to regress camera poses directly from the input image content, taking a holistic view to remedy the problem of viewpoint disparities. This paper posits that although deep networks can learn robust and invariant visual features, incorporating geometry models provides rigorous constraints on the pose estimation process. Following the classic structure-from-motion (SfM) pipeline, we propose a PL-Pose framework for camera localisation. First, to improve feature correlations for images with large viewpoint disparities, we combine point and line features based on a deep learning framework and the geometric relations of wireframes. Then, a cost function is constructed from the combined point and line features to impose constraints on the bundle adjustment process. Finally, the camera pose parameters and 3D points are estimated through an iterative optimisation process. We verify the accuracy of the PL-Pose approach on two datasets: the publicly available S3DIS dataset and the self-collected CUMTB_Campus dataset. The experimental results demonstrate that in both indoor and outdoor scenes our PL-Pose method achieves localisation errors of less than 1 m for 82% of the test points, whereas the best of the four comparison methods achieves merely 72%. Meanwhile, the PL-Pose method successfully obtains the camera pose parameters in all scenes, whether the viewpoint disparities are small or large, indicating good stability and adaptability.
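A combined point-and-line cost of the kind used to constrain the bundle adjustment can be written compactly. The residual below is a hedged sketch rather than the paper's exact formulation: it assumes a calibrated pinhole model, measures the point term as squared reprojection error and the line term as the squared distances of the projected 3D segment endpoints from the observed 2D line; the weighting is an assumption.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points to Nx2 pixels."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def combined_cost(K, R, t, pts3d, pts2d, seg3d, lines2d, w_line=1.0):
    """Point reprojection error plus point-to-line error for segments.

    pts3d   : (N, 3), pts2d: (N, 2) matched point features
    seg3d   : (M, 2, 3) 3D segment endpoints
    lines2d : (M, 3) observed image lines, homogeneous (a, b, c), a^2+b^2=1
    The weight `w_line` is set to 1 purely for illustration.
    """
    # Point term: squared reprojection residuals.
    e_pt = project(K, R, t, pts3d) - pts2d
    cost = np.sum(e_pt ** 2)

    # Line term: distance of each projected endpoint to the observed line.
    for (A, B), l in zip(seg3d, lines2d):
        for P in project(K, R, t, np.stack([A, B])):
            d = l[0] * P[0] + l[1] * P[1] + l[2]   # signed point-line distance
            cost += w_line * d ** 2
    return cost
```

In a full pipeline this scalar cost would be replaced by a residual vector handed to a non-linear least-squares solver, with R and t parametrised minimally.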
Associating UAS images through a graph-based guiding strategy for boosting structure from motion
Pub Date : 2024-02-01 DOI: 10.1111/phor.12479
Min-Lung Cheng, Yuji Fujita, Yasutaka Kuramoto, Hiroyuki Miura, Masashi Matsuoka
Structure from motion (SfM) using optical images is an important prerequisite for reconstructing three-dimensional (3D) landforms. Although various algorithms have been developed, they suffer from the large number of image pairs required for feature matching and from the recursive search for the most suitable image to add to the reconstruction, which makes SfM computationally costly. This research proposes a boosting SfM (B-SfM) pipeline containing two phases, an indexing graph network (IGN) and graph tracking, to accelerate SfM reconstruction. The IGN forms image pairs with desirable spatial correlation to reduce the time spent on feature matching. Building on the IGN, graph tracking integrates ant colony optimisation and greedy sorting algorithms to encode an optimum image sequence that boosts SfM reconstruction. The experimental results show that, compared with other available means, the proposed approach accelerates the two phases, feature matching and 3D reconstruction, by a factor of up to 14, while the quality of the recovered camera poses is retained or even slightly improved. As a result, the developed B-SfM achieves efficient SfM reconstruction by suppressing the time cost of image pair selection for feature matching and image order determination.
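The greedy-sorting half of graph tracking is straightforward to illustrate. The sketch below is an assumed simplification (the paper couples it with ant colony optimisation, which is omitted here): given a pairwise similarity matrix over images, it starts from the strongest edge and repeatedly appends the unvisited image most similar to the current tail, producing an ordered sequence for incremental SfM.

```python
import numpy as np

def greedy_image_order(sim):
    """Order n images from an n x n pairwise-similarity matrix.

    Simplified stand-in for B-SfM's graph tracking: no ant colony
    optimisation, just nearest-neighbour chaining on the graph.
    """
    n = sim.shape[0]
    sim = sim.copy()
    np.fill_diagonal(sim, -np.inf)

    # Seed with the most similar pair, then extend greedily.
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    order, visited = [int(i), int(j)], {int(i), int(j)}
    while len(order) < n:
        row = sim[order[-1]].copy()
        row[list(visited)] = -np.inf       # never revisit an image
        nxt = int(np.argmax(row))
        order.append(nxt)
        visited.add(nxt)
    return order

# Toy usage with a random symmetric similarity matrix.
rng = np.random.default_rng(1)
s = rng.random((6, 6))
s = (s + s.T) / 2
print(greedy_image_order(s))
```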
Digital surface model generation from high-resolution satellite stereos based on hybrid feature fusion network
Pub Date : 2024-01-09 DOI: 10.1111/phor.12471
Zhi Zheng, Yi Wan, Yongjun Zhang, Zhonghua Hu, Dong Wei, Yongxiang Yao, Chenming Zhu, Kun Yang, Rang Xiao
Recent studies have demonstrated that deep learning-based stereo matching methods (DLSMs) can far exceed conventional ones on most benchmark datasets, both improving visual performance and decreasing the mismatching rate. However, applying DLSMs to high-resolution satellite stereos with broad image coverage and wide terrain variety is still challenging. First, the broad coverage of satellite stereos brings a wide disparity range, while DLSMs are in most cases limited to a narrow disparity range, resulting in incorrect disparity estimation in areas whose disparities fall outside that range. Second, high-resolution satellite stereos always comprise various terrain types, which is more complicated than carefully prepared datasets, so the performance of DLSMs on satellite stereos is unstable, especially in intractable regions such as texture-less and occluded regions. Third, generating DSMs requires occlusion-aware disparity maps, while traditional occlusion detection methods are not always applicable to DLSMs with continuous disparity. To tackle these problems, this paper proposes a novel DLSM-based DSM generation workflow comprising three steps: pre-processing, disparity estimation and post-processing. The pre-processing step introduces low-resolution terrain to shift unmatched disparity ranges into a fixed scope and crops the satellite stereos into regular patches. The disparity estimation step proposes a hybrid feature fusion network (HF2Net) to improve the matching performance. In detail, HF2Net designs a cross-scale feature extractor (CSF) and a multi-scale cost filter. The feature extractor differentiates structural-context features in complex scenes and thus enhances HF2Net's robustness to satellite stereos, especially in intractable regions, while the cost filter removes most matching errors to ensure accurate disparity estimation. The post-processing step generates initial DSM patches from the estimated disparity maps and then refines them into the final large-scale DSMs. Initial experiments on the public US3D dataset showed better accuracy than state-of-the-art methods, indicating HF2Net's superiority. We then created a self-made Gaofen-7 dataset to train HF2Net and conducted DSM generation experiments on two Gaofen-7 stereos to further demonstrate the effectiveness and practical capability of the proposed workflow.
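The pre-processing idea of shifting disparities with low-resolution terrain can be illustrated with a small sketch. This is an assumption-level illustration, not the paper's code: it presumes epipolar-rectified imagery in which disparity varies roughly linearly with elevation (the scale factor is hypothetical) and computes a per-patch offset so the residual disparities fall inside a fixed network search window.

```python
import numpy as np

def patch_disparity_offsets(coarse_dem, metres_per_disparity, patch=512):
    """Per-patch disparity offsets from a low-resolution terrain model.

    Assumes epipolar-rectified imagery where disparity varies roughly
    linearly with elevation; `metres_per_disparity` is that assumed
    scale. Shifting each patch by its offset maps the remaining
    disparities into the network's fixed search window.
    """
    h, w = coarse_dem.shape
    ny, nx = int(np.ceil(h / patch)), int(np.ceil(w / patch))
    offsets = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            block = coarse_dem[iy*patch:(iy+1)*patch, ix*patch:(ix+1)*patch]
            # Expected disparity of the patch's mean terrain height.
            offsets[iy, ix] = block.mean() / metres_per_disparity
    return np.round(offsets)

# Toy usage: a ramp-shaped terrain, 2 m of height per disparity unit.
dem = np.linspace(0, 400, 1024)[None, :].repeat(1024, axis=0)
print(patch_disparity_offsets(dem, metres_per_disparity=2.0))
```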
Real-time generation of spherical panoramic video using an omnidirectional multi-camera system
Pub Date : 2024-01-08 DOI: 10.1111/phor.12474
Jiongli Gao, Jun Wu, Mingyi Huang, Gang Xu
This paper presents a novel method for the real-time generation of seamless spherical panoramic video from an omnidirectional multi-camera system (OMS). First, a multi-view video alignment model called the spherical projection constrained thin-plate spline (SP-TPS) is established and estimated using an approximately symmetrical seam-line, accounting for the structural inconsistency around the seam-line. Then, a look-up table is designed to support real-time video re-projection, video dodging and seam-line updates. The table pre-stores, as a whole, the overlapping areas in the OMS multi-view videos, the seam-lines between the spherical panorama and the OMS multi-view videos, and the pixel coordinate mapping between the spherical panorama and the OMS multi-view videos. Finally, a spherical panoramic video is output in real time through look-up table computation on an ordinary GPU. The experiments were conducted on multi-view video taken by "1 + 4" and "1 + 7" OMS, respectively. The results demonstrate that, compared with four state-of-the-art methods reported in the literature and two commercial video-stitching packages, the proposed method excels at eliminating visual artefacts and adapts better to scenes with varying depths of field. Provided the OMS does not move within the scene, the method can generate seamless spherical panoramic video at 8 K resolution in real time, which is of great value to the surveillance field.
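The look-up table amounts to a precomputed per-pixel remap, which is why the per-frame cost stays low. The snippet below is a minimal single-camera, nearest-neighbour sketch; `mapping_fn` stands in for the calibrated OMS-to-sphere projection, which is assumed to be known:

```python
import numpy as np

def build_lut(pano_shape, mapping_fn):
    """Precompute per-pixel source coordinates for a panorama.

    `mapping_fn(u, v)` -> (x, y) source-pixel arrays; here it stands in
    for the calibrated camera-to-sphere projection, assumed known.
    """
    v, u = np.indices(pano_shape)
    x, y = mapping_fn(u, v)
    return y.astype(np.int32), x.astype(np.int32)

def remap(frame, lut):
    """Apply the LUT to one video frame: pure array indexing, with no
    re-projection maths at run time."""
    ys, xs = lut
    return frame[ys.clip(0, frame.shape[0] - 1),
                 xs.clip(0, frame.shape[1] - 1)]

# Toy usage: a fake transpose mapping on a 240 x 320 frame.
frame = np.arange(240 * 320, dtype=np.uint8).reshape(240, 320)
lut = build_lut((320, 240), lambda u, v: (v, u))
pano = remap(frame, lut)   # -> shape (320, 240)
```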
A 3D urban scene reconstruction enhancement approach based on adaptive viewpoint selection of panoramic videos
Pub Date : 2024-01-07 DOI: 10.1111/phor.12467
Xujie Zhang, Zhenbiao Hu, Qingwu Hu, Jun Zhao, Mingyao Ai, Pengcheng Zhao, Jiayuan Li, Xiaojie Zhou, Zongqiang Chen
The widely used unmanned aerial vehicle oblique photogrammetry often suffers from information loss in complex urban environments, leading to geometric and textural defects in the resulting models. In this study, a 3D urban scene reconstruction enhancement method assisted by close-range panoramic optimal viewpoint selection is proposed for areas prone to defects. We first introduce the ground panoramic data acquisition equipment and strategy, which differ from those of the single-lens supplementary photography method: data acquisition is accomplished in a single, continuous surround-style collection pass, and the full space-time coverage of the panoramic video enables texture details to be captured without camera station planning. Then, a panoramic multiview image generation approach is proposed. Adaptive viewpoint selection is achieved using unbiased sampling points from the rough scene model, and viewpoint optimisation is adopted to ensure sufficient image overlap and intersection geometry, thus improving the scene's reconstructability. Finally, the 3D model is generated by photogrammetric processing of the panoramic multiview images, resulting in an enhanced modelling effect. To validate the proposed method, we conducted experiments using real data from Qingdao, China. Both the qualitative and quantitative results demonstrate a significant improvement in the quality of geometric and textural reconstruction: the tie-point reprojection errors are less than 1 pixel, and the registration accuracy with the model from oblique photogrammetry is comparable to that of optimised-view photography. By eliminating the need for on-site camera station planning or manual flight operations and by effectively minimising the redundancy of panoramic video, our approach significantly reduces the photography and computation costs associated with reconstruction enhancement, presenting a feasible technical solution for generating fine urban 3D models.
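Adaptive viewpoint selection of this kind is commonly realised as a greedy cover over scene sample points. The sketch below is an assumed simplification rather than the authors' algorithm: each candidate panorama-derived view "sees" a subset of sample points, and views are chosen until every point is observed by at least two of them, a minimal condition for stereo intersection.

```python
import numpy as np

def select_viewpoints(visibility, min_views=2):
    """Greedy selection of candidate views from a visibility matrix.

    visibility : (n_views, n_points) boolean, True if view i sees point j.
    Picks the view covering the most still-undersatisfied points until
    every point is seen by at least `min_views` selected views.
    """
    need = np.full(visibility.shape[1], min_views)
    chosen = []
    while need.max() > 0:
        gain = (visibility & (need > 0)).sum(axis=1)
        gain[chosen] = -1                  # do not pick a view twice
        best = int(np.argmax(gain))
        if gain[best] <= 0:
            break                          # remaining points are uncoverable
        chosen.append(best)
        need = np.maximum(need - visibility[best], 0)
    return chosen

# Toy usage: 8 candidate views, 20 scene sample points.
rng = np.random.default_rng(2)
vis = rng.random((8, 20)) > 0.5
print(select_viewpoints(vis))
```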
Building extraction from oblique photogrammetry point clouds based on PointNet++ with attention mechanism
Pub Date : 2024-01-05 DOI: 10.1111/phor.12476
Hong Hu, Qing Tan, Ruihong Kang, Yanlan Wu, Hui Liu, Baoguo Wang
Unmanned aerial vehicles (UAVs) capture oblique point clouds of outdoor scenes that contain considerable building information. Building features extracted from images are affected by the viewing point, illumination, occlusion, noise and image conditions, which makes them difficult to extract. Ground elevation changes, by contrast, provide a powerful aid for extraction, and point cloud data precisely reflects this information, so oblique photogrammetry point clouds have significant research value. Traditional building extraction methods involve filtering and sorting the raw data to separate buildings, which causes the point clouds to lose spatial information and reduces the building extraction accuracy. We therefore develop an intelligent building extraction method based on deep learning that incorporates an attention mechanism module into the sampling and PointNet operations within the set abstraction layer of the PointNet++ network. To assess the efficacy of our approach, we train the network and extract buildings on a dataset created from UAV oblique point clouds of five regions in the city of Bengbu, China. Impressive performance metrics are achieved, including 95.7% intersection over union, 96.5% accuracy, 96.5% precision, 98.7% recall and a 97.8% F1 score, and the attention mechanism improves the overall training accuracy of the model by about 3%. This method shows potential for advancing the accuracy and efficiency of digital urbanisation construction projects.
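The sampling operation inside PointNet++'s set abstraction layer is farthest point sampling (FPS), which is easy to show in isolation. The sketch below implements plain FPS; the paper's attention module, whose exact design is not reproduced here, would act later on the features grouped around each seed.

```python
import numpy as np

def farthest_point_sampling(pts, k):
    """Select k well-spread seed points from an (N, 3) cloud.

    Standard FPS as used by PointNet++'s set abstraction; the attention
    re-weighting described in the paper would be applied afterwards, to
    the features grouped around each seed, before pooling.
    """
    n = pts.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = 0                        # arbitrary first seed
    for i in range(1, k):
        d = np.linalg.norm(pts - pts[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)       # distance to nearest chosen seed
        chosen[i] = int(np.argmax(dist)) # farthest remaining point
    return chosen

# Toy usage: 4 seeds from 1000 random points.
rng = np.random.default_rng(3)
cloud = rng.random((1000, 3))
print(farthest_point_sampling(cloud, 4))
```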
Improvement of the spaceborne synthetic aperture radar stereo positioning accuracy without ground control points
Pub Date : 2024-01-05 DOI: 10.1111/phor.12475
Yu Wei, Ruishan Zhao, Qiang Fan, Jiguang Dai, Bing Zhang
Compared with optical remote sensing satellites, the geometric positioning accuracy of a synthetic aperture radar (SAR) satellite is not affected by satellite attitude or weather conditions. SAR satellites can achieve relatively high positioning accuracy without ground control points, which is particularly important in global surveying and mapping. However, the stereo positioning accuracy of SAR satellites is mainly affected by the SAR systematic delay and the atmospheric propagation delay of the radar signals. An iterative compensation method for the SAR systematic time delay, based on a digital elevation model, is proposed to improve the stereo positioning accuracy of SAR satellites without control points. In addition, to address the non-real-time updating of external reference atmospheric parameters, an iterative compensation method is proposed that estimates the atmospheric propagation delay of the radar signals from standard atmospheric models. In this study, SAR images from the Gaofen-3 (GF-3) satellite with 5 m resolution were used as experimental data to verify the effectiveness of the proposed method. With the compensation applied, the 2D positioning accuracy was better than 3 m, an improvement of 42.9%, and the elevation positioning accuracy was better than 3 m, an improvement of 90.2%.
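The iterative compensation can be pictured as a simple fixed-point loop. The code below is a deliberately reduced one-dimensional illustration, not the paper's geolocation model: it assumes the measured slant range equals the true geometric range plus the speed of light times an unknown systematic delay, and recovers that delay by repeatedly comparing the corrected range with a DEM-consistent geometric range (here a stand-in function).

```python
C = 299_792_458.0  # speed of light, m/s

def estimate_systematic_delay(measured_range, geometric_range_fn,
                              delay0=0.0, iters=10):
    """Iteratively estimate a systematic time delay.

    measured_range     : slant range implied by the recorded echo time (m)
    geometric_range_fn : maps a delay-corrected range to the geometric
                         sensor-to-ground range after re-intersecting the
                         positioning solution with the DEM (assumed given)
    The loop updates the delay until the corrected range matches the
    DEM-consistent geometric range.
    """
    delay = delay0
    for _ in range(iters):
        corrected = measured_range - C * delay
        geometric = geometric_range_fn(corrected)
        delay += (corrected - geometric) / C   # residual range -> delay
    return delay

# Toy usage: the "DEM intersection" is a stand-in that always returns
# the true range, so the loop should recover the injected 2 us delay.
true_range = 880_000.0
measured = true_range + C * 2.0e-6
print(estimate_systematic_delay(measured, lambda r: true_range))
```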