Point clouds captured using laser scanners mounted on mobile platforms contain errors at the centimetre to decimetre level due to motion distortion. In applications such as lidar odometry or SLAM, this motion distortion is often ignored. However, in applications such as HD mapping or precise vehicle localisation, it is necessary to correct the effect of motion distortion, or ‘deskew’ the point clouds, before using them. Existing methods for deskewing point clouds mostly rely on a high‐frequency IMU, which may not always be available. In this paper, we propose a straightforward approach that uses the registration of consecutive point clouds to estimate the motion of the scanner and deskew the point clouds. We introduce a novel surface‐based method to evaluate the performance of the proposed deskewing approach. Furthermore, we develop a lidar simulator that reverses the proposed deskewing method and can produce synthetic point clouds with realistic motion distortion.
{"title":"Registration‐based point cloud deskewing and dynamic lidar simulation","authors":"Yuan Zhao, Kourosh Khoshelham, Amir Khodabandeh","doi":"10.1111/phor.12516","DOIUrl":"https://doi.org/10.1111/phor.12516","url":null,"abstract":"Point clouds captured using laser scanners mounted on mobile platforms contain errors at the centimetre to decimetre level due to motion distortion. In applications such as lidar odometry or SLAM, this motion distortion is often ignored. However, in applications such as HD mapping or precise vehicle localisation, it is necessary to correct the effect of motion distortion or ‘deskew’ the point clouds before using them. Existing methods for deskewing point clouds mostly rely on high frequency IMU, which may not always be available. In this paper, we propose a straightforward approach that uses the registration of consecutive point clouds to estimate the motion of the scanner and deskew the point clouds. We introduce a novel surface‐based evaluation method to evaluate the performance of the proposed deskewing method. Furthermore, we develop a lidar simulator using the reverse of the proposed deskewing method which can produce synthetic point clouds with realistic motion distortion.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142213020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging multi‐platform laser scanning systems offers a complete solution for 3D modelling of large‐scale urban scenes. However, the spatial inconsistency of point clouds collected by heterogeneous platforms with different viewpoints makes seamless fusion challenging. To tackle this challenge, this paper proposes a coarse‐to‐fine adjustment for multi‐platform point cloud fusion. First, in the preprocessing stage, the bounding box of each point cloud block is used to identify potential constraint associations. Second, the proposed local optimisation performs preliminary pairwise alignment using these potential constraints and provides an initial guess for the global optimisation. Finally, the proposed global optimisation incorporates all the local constraints in a tightly coupled optimisation with raw point correspondences. We conducted experiments in two study areas: study area 1 is a fast road scene with substantial vegetation, while study area 2 is an urban scene with many buildings. Extensive experimental evaluation indicates that the proposed method improves accuracy by 50.6% in study area 1 and by 44.7% in study area 2.
{"title":"Coarse‐to‐fine adjustment for multi‐platform point cloud fusion","authors":"Xin Zhao, Jianping Li, Yuhao Li, Bisheng Yang, Sihan Sun, Yongfeng Lin, Zhen Dong","doi":"10.1111/phor.12513","DOIUrl":"https://doi.org/10.1111/phor.12513","url":null,"abstract":"Leveraging multi‐platform laser scanning systems offers a complete solution for 3D modelling of large‐scale urban scenes. However, the spatial inconsistency of point clouds collected by heterogeneous platforms with different viewpoints presents challenges in achieving seamless fusion. To tackle this challenge, this paper proposes a coarse‐to‐fine adjustment for multi‐platform point cloud fusion. First, in the preprocessing stage, the bounding box of each point cloud block is employed to identify potential constraint association. Second, the proposed local optimisation facilitates preliminary pairwise alignment with these potential constraint relationships, and obtaining initial guess for a comprehensive global optimisation. At last, the proposed global optimisation incorporates all the local constraints for tightly coupled optimisation with raw point correspondences. We choose two study areas to conduct experiments. Study area 1 represents a fast road scene with a significant amount of vegetation, while study area 2 represents an urban scene with many buildings. Extensive experimental evaluations indicate the proposed method has increased the accuracy of study area 1 by 50.6% and the accuracy of study area 2 by 44.7%.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to repeated textures and edge perspective transformations on building facades, building models derived from unmanned aerial vehicle (UAV) photogrammetry often suffer geometric deformation and distortion when produced with existing methods or commercial software. To address this issue, a real‐scene three‐dimensional (3D) building model optimisation method based on straight‐line constraints is proposed. First, point clouds generated by UAV photogrammetry are down‐sampled based on local curvature characteristics, and structural point clouds located at the edges of buildings are extracted. Subsequently, an improved random sample consensus (RANSAC) algorithm that imposes distance and angle constraints on candidate lines, termed co‐constrained RANSAC, is applied to extract point clouds with straight‐line features from the structural point clouds. Finally, the points with straight‐line features are optimised and updated using points sampled on the fitted straight lines. Experimental results demonstrate that the proposed method effectively eliminates redundant 3D points and noise while retaining the fundamental structure of buildings. Compared with popular methods and commercial software, the proposed method significantly improves the accuracy of building modelling, reducing error by 59.2% on average, including deviations in the original model's contour projection.
{"title":"Optimisation of real‐scene 3D building models based on straight‐line constraints","authors":"Kaiyun Lv, Longyu Chen, Haiqing He, Fuyang Zhou, Shixun Yu","doi":"10.1111/phor.12514","DOIUrl":"https://doi.org/10.1111/phor.12514","url":null,"abstract":"Due to the influence of repeated textures or edge perspective transformations on building facades, building modelling based on unmanned aerial vehicle (UAV) photogrammetry often suffers geometric deformation and distortion when using existing methods or commercial software. To address this issue, a real‐scene three‐dimensional (3D) building model optimisation method based on straight‐line constraints is proposed. First, point clouds generated by unmanned aerial vehicle (UAV) photogrammetry are down‐sampled based on local curvature characteristics, and structural point clouds located at the edges of buildings are extracted. Subsequently, an improved random sample consensus (RANSAC) algorithm, considering distance and angle constraints on lines, known as co‐constrained RANSAC, is applied to further extract point clouds with straight‐line features from the structural point clouds. Finally, point clouds with straight‐line features are optimised and updated using sampled points on the fitted straight lines. Experimental results demonstrate that the proposed method can effectively eliminate redundant 3D points or noise while retaining the fundamental structure of buildings. Compared to popular methods and commercial software, the proposed method significantly enhances the accuracy of building modelling. The average reduction in error is 59.2%, including the optimisation of deviations in the original model's contour projection.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"890 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyperspectral remote sensing is currently underutilized in urban environments due to significant barriers concerning the existence, availability, and quality of urban hyperspectral reference spectra. This paper exposes these barriers by identifying, cataloging, and characterizing the contents of 23 spectral libraries, developing metrics to assess compliance with the Principles of Findability, Accessibility, Interoperability, and Reusability (FAIR), and evaluating existing resources against these criteria. Only 2931 urban spectral records were found within the four Global Spectral Libraries (0.61% of 476,592 published spectra). A further 19 Local Urban Spectral Libraries contained 3862 additional urban spectra, but only 1662 (43%) were accessible without restriction. Content analysis revealed insufficient representation of urban material heterogeneity, imbalanced categories, and limited library interoperability, all of which further hinder effective data utilization. In response, this paper proposes a 14‐category metadataset, with specific considerations for both environmentally induced and inherent intra‐material variability. In addition, material‐based spectral groupings and resampling of data to common hyperspectral equipment specifications are recommended. These measures aim to enhance the utility of urban spectral libraries by improving FAIR compliance, thereby contributing to a more cohesive and enduring framework for hyperspectral reference data.
{"title":"Urban hyperspectral reference data availability and reuse: State‐of‐the‐practice review","authors":"Jessica M. O. Salcido, Debra F. Laefer","doi":"10.1111/phor.12508","DOIUrl":"https://doi.org/10.1111/phor.12508","url":null,"abstract":"Hyperspectral remote sensing is currently underutilized in urban environments due to significant barriers concerning the existence, availability, and quality of urban hyperspectral reference spectra. This paper exposes these barriers by identifying, cataloging, and characterizing the contents of 23 spectral libraries, developing metrics to assess compliance with the Principles of Findability, Accessibility, Interoperability, and Reusability (FAIR), and evaluating existing resources using these criteria. Only 2931 urban spectral records were found within the 4 Global Spectral Libraries (0.61% of 476,592 published spectra). Within a further 19 Local Urban Spectral Libraries, 3862 additional urban spectra were found, but only 1662 (43%) were accessible without restriction. Content analysis revealed insufficient representation of urban material heterogeneity, imbalanced categories, and limited library interoperability, all of which further hinder effective data utilization. In response, this paper proposes a 14‐category metadataset, with specific considerations to overcome environmentally induced and inherent, intra‐material variability. In addition, material‐based spectral groupings and data resampling to common hyperspectral equipment specifications are recommended. These measures aim to enhance the utility of urban spectral libraries by improving FAIR compliance, thereby contributing to a more cohesive and enduring framework for hyperspectral reference data.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last three decades, satellite imagery has been instrumental in mapping and monitoring water quality. However, satellites are often limited by image availability and cloud cover, and the spatial resolution of satellite images does not capture the fine detail essential for small‐scale water pollution management. Drones offer a complementary platform capable of operating below cloud cover and acquiring very high spatial resolution datasets in near real‐time. Studies have shown that drone mapping over water can be done via the Direct Georeferencing approach. However, this method is only suitable for high‐end drones with an accurate GNSS/IMU, a limitation exacerbated by the difficulty of placing targets over water, which would otherwise allow accuracy to be improved after the survey. This study explored a new method called Assisted Direct Georeferencing, which combines the benefits of traditional Bundle Adjustment with Direct Georeferencing. The performance of the approach was evaluated over a variety of scenarios, demonstrating significant improvement in planimetric accuracy: the method reduced the planimetric (XY) error of drone imagery from a mean absolute error (MAE) of 18.9 m to 3.4 m. These results show the potential of low‐cost drones with Assisted Direct Georeferencing to close the gap to high‐end drones.
{"title":"ADGEO: A new shore‐based approach to improving spatial accuracy when mapping water bodies using low‐cost drones","authors":"Bernard Essel, Michael Bolger, John McDonald, Conor Cahalane","doi":"10.1111/phor.12512","DOIUrl":"https://doi.org/10.1111/phor.12512","url":null,"abstract":"Over the last three decades, satellite imagery has been instrumental in mapping and monitoring water quality. However, satellites often have limitations due to image availability and cloud cover. Today, the spatial resolution of satellite images does not provide finer detail measurements essential for small‐scale water pollution management. Drones offer a complimentary platform capable of operating below cloud cover and acquiring very high spatial resolution datasets in near real‐time. Studies have shown that drone mapping over water can be done via the Direct Georeferencing approach. However, this method is only suitable for high‐end drones with accurate GNSS/IMU. Importantly, this limitation is exacerbated because of the difficulty in placing targets over water, which can be used to improve the accuracy after the survey. This study explored a new method called Assisted Direct Georeferencing which combines the benefits of traditional Bundle Adjustment with Direct Georeferencing. The performance of the approach was evaluated over a variety of different scenarios, demonstrating significant improvement in the planimetric accuracy. From the results, the method reduced the error in XY of drone imagery from MAE of 18.9 to 3.4 m. The result shows the potential of low‐cost drones with Assisted Direct Georeferencing in closing the gap to high‐end drones.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"191 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141515686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelling tunnels with parallel photography is an efficient approach to real‐scene modelling. However, the accuracy of optical flow matching in tunnel parallel photography image sequences is severely affected by the scale deformation between stereo images. To address this, a novel optical flow matching method is proposed that automatically corrects the scale difference of tunnel parallel photography stereo images, derived from the imaging geometry. Analysis of the distribution of scale differences across stereo images yields a model in which the scale difference of image points is radially symmetric about the image centre and grows as a power function of the radius. This model is introduced into traditional optical flow matching to correct image scale differences and thereby improve matching accuracy. In experiments, the mean square error of optical flow matching after scale correction is below 0.3 pixels, an improvement of at least 34.3% and at most 45.5% over matching without correction. The results indicate that the proposed method significantly improves the accuracy of image matching and modelling in tunnel parallel photography.
{"title":"Optical flow matching with automatically correcting the scale difference of tunnel parallel photogrammetry","authors":"Hao Li, Bohao Gao, Xiufeng He, Pengfei Yu","doi":"10.1111/phor.12511","DOIUrl":"https://doi.org/10.1111/phor.12511","url":null,"abstract":"Using parallel photography to model tunnels is an efficient method for real scene modelling. Aiming at the problem that the accuracy of optical flow matching in tunnel parallel photography sequence photos is severely affected by the scale deformation of stereo images, a novel optical flow matching method with automatically correcting the scale difference of tunnel parallel photography stereo images is proposed from the perspective of imaging relationships. By analysing the distribution pattern of scale difference in stereo images, a model is obtained in which the scale difference of image points is symmetrically distributed radially on the image and follows a power function growth. Introduce it into traditional optical flow matching to correct image scale differences based on the model to improve matching accuracy. The mean square error of the optical flow matching after correcting scale difference in the experiment is less than 0.3 pixels, which is at least 34.3% higher than before correction and a maximum improvement of 45.5% in the experimental results. The research result indicates that the proposed optical flow matching method with automatically correcting the scale difference has a significant effect on improving the accuracy of tunnel parallel photography image matching and modelling.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"84 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LiDAR odometry enables localisation of vehicles and robots in environments where global navigation satellite systems (GNSS) are unavailable. An inherent limitation of LiDAR odometry is the accumulation of local motion estimation errors. Current approaches rely heavily on loop closure to optimise the estimated sensor poses and eliminate trajectory drift; because loop closure corrects poses retrospectively, such systems cannot localise in real time and are therefore impractical for navigation tasks. This paper presents MoLO, a novel model‐based LiDAR odometry approach that achieves real‐time, drift‐free localisation using a 3D model of the environment containing planar surfaces, namely the structural elements of buildings. The proposed approach uses the 3D model to initialise the LiDAR pose and applies scan‐to‐scan registration to estimate the pose of consecutive LiDAR scans. Re‐registering LiDAR scans to the 3D model at a certain frequency provides the global sensor pose and eliminates trajectory drift. Pose graphs are built frequently to obtain a smooth and accurate trajectory. A geometry‐based and a learning‐based method for registering LiDAR scans with the 3D model are tested and compared. Experimental results show that MoLO eliminates drift and achieves real‐time localisation with accuracy equivalent to loop closure optimisation.
{"title":"MoLO: Drift‐free lidar odometry using a 3D model","authors":"H. Zhao, Y. Zhao, M. Tomko, K. Khoshelham","doi":"10.1111/phor.12509","DOIUrl":"https://doi.org/10.1111/phor.12509","url":null,"abstract":"LiDAR odometry enables localising vehicles and robots in the environments where global navigation satellite systems (GNSS) are not available. An inherent limitation of LiDAR odometry is the accumulation of local motion estimation errors. Current approaches heavily rely on loop closure to optimise the estimated sensor poses and to eliminate the drift of the estimated trajectory. Consequently, these systems cannot perform real‐time localization and are therefore not practical for a navigation task. This paper presents MoLO, a novel model‐based LiDAR odometry approach to achieve real‐time and drift‐free localization using a 3D model of the environment containing planar surfaces, namely the structural elements of buildings. The proposed approach uses a 3D model of the environment to initialise the LiDAR pose and includes a scan‐to‐scan registration to estimate the pose for consecutive LiDAR scans. Re‐registering LiDAR scans to the 3D model at a certain frequency provides the global sensor pose and eliminates the drift of the trajectory. Pose graphs are built frequently to acquire a smooth and accurate trajectory. A geometry‐based method and a learning‐based method to register LiDAR scans with the 3D model are tested and compared. Experimental results show that MoLO can eliminate drift and achieve real‐time localization while providing an accuracy equivalent to loop closure optimization.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"25 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141340457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topographic surveying of forests is a problem that photogrammetry has long left unsolved. Forest canopy height is a crucial biophysical parameter from which essential information about forest ecosystems is derived. To construct a canopy height model for forest areas, this study extracts spectral feature factors from a digital orthophoto map and geometric feature factors from a digital surface model, both generated through aerial photogrammetry and LiDAR (light detection and ranging). The maximum information coefficient, the Pearson, Kendall and Spearman correlation coefficients, and a newly proposed index of relative importance are employed to assess the correlation between each feature factor and forest vertical height. Gradient boosting decision tree regression is used to construct a canopy height model that predicts unknown canopy heights in forest areas. Two further machine learning techniques, random forest regression and support vector machine regression, are employed to construct canopy height models for comparison. Data sets from two study areas were processed for model training and prediction. The results are encouraging: the canopy height model achieves prediction accuracies of 0.3 m in forested areas with 50% vegetation coverage and 0.8 m in areas with 99% vegetation coverage, even when only 10% of the available data are used for training. These approaches provide feasible and reliable techniques for modelling canopy height in forested areas under varying conditions.
{"title":"Forest canopy height modelling based on photogrammetric data and machine learning methods","authors":"Xingsheng Deng, Yujing Liu, Xingdong Cheng","doi":"10.1111/phor.12507","DOIUrl":"https://doi.org/10.1111/phor.12507","url":null,"abstract":"Forest topographic survey is a problem that photogrammetry has not solved for a long time. Forest canopy height is a crucial forest biophysical parameter which is used to derive essential information about forest ecosystems. In order to construct a canopy height model in forest areas, this study extracts spectral feature factors from digital orthophoto map and geometric feature factors from digital surface model, which are generated through aerial photogrammetry and LiDAR (light detection and ranging). The maximum information coefficient, Pearson, Kendall, Spearman correlation coefficients, and a new proposed index of relative importance are employed to assess the correlation between each feature factor and forest vertical heights. Gradient boosting decision tree regression is introduced and utilised to construct a canopy height model, which enables the prediction of unknown canopy height in forest areas. Two additional machine learning techniques, namely random forest regression and support vector machine regression, are employed to construct canopy height model for comparative analysis. The data sets from two study areas have been processed for model training and prediction, yielding encouraging experimental results that demonstrate the potential of canopy height model to achieve prediction accuracies of 0.3 m in forested areas with 50% vegetation coverage and 0.8 m in areas with 99% vegetation coverage, even when only a mere 10% of the available data sets are selected as model training data. The above approaches present techniques for modelling canopy height in forested areas with varying conditions, which have been shown to be both feasible and reliable.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ISPRS WG IV/9: 3D GeoInfo and EG‐ICE joint conference 2024","authors":"","doi":"10.1111/phor.12501","DOIUrl":"https://doi.org/10.1111/phor.12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141410606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}