Linear target change detection from a single image based on three-dimensional real scene
Yang Liu, Zheng Ji, Lingfeng Chen and Yuchen Liu. The Photogrammetric Record, 26 December 2023. https://doi.org/10.1111/phor.12470

Change detection is a critical component of remote sensing, with significant implications for resource management and land monitoring. Most conventional methods for remote sensing change detection rely on qualitative monitoring, which usually requires data collection from the entire scene over multiple time periods; this process can be computationally intensive and lacks reusability, especially when dealing with large datasets. In this paper, we propose a novel methodology that leverages the texture features and geometric structure information derived from three-dimensional (3D) real scenes. By establishing a two-dimensional (2D)–3D geometric relationship between a single observational image and the corresponding 3D scene, we obtain more accurate positional information for the image. This relationship allows us to transfer depth information from the 3D model to the observational image, thereby facilitating precise geometric change measurements for specific planar targets. Experimental results indicate that our approach enables millimetre-level change detection of minuscule targets from a single image. Compared with conventional methods, our technique offers enhanced efficiency and reusability, making it a valuable tool for fine-grained change detection of small targets based on 3D real scenes.
Rapid 3D modelling: Clustering method based on dynamic load balancing strategy
Yingwei Ge, Bingxuan Guo, Guozheng Xu, Yawen Liu, Xiao Jiang and Zhe Peng. The Photogrammetric Record, 11 December 2023. https://doi.org/10.1111/phor.12473

Three-dimensional (3D) reconstruction is a pivotal research area within computer vision and photogrammetry, offering a valuable data foundation for the development of smart cities. However, existing methods for constructing 3D models from unmanned aerial vehicle (UAV) images often suffer from slow processing speeds and low central processing unit (CPU)/graphics processing unit (GPU) utilization. Furthermore, cluster-based distributed computing for 3D modelling frequently results in inefficient resource allocation across nodes. To address these challenges, this paper presents a novel approach to 3D modelling on clusters that incorporates a dynamic load-balancing strategy. The method divides the 3D reconstruction process into multiple stages, laying the groundwork for distributing tasks efficiently across multiple nodes. Instead of traditional traversal-based communication, the approach employs gossip communication to reduce network overhead. To boost modelling efficiency, the dynamic load-balancing strategy prevents nodes from becoming overloaded, optimizing resource usage during the modelling process and alleviating resource waste in multidevice clusters. Experimental results indicate that in small-scale data environments the approach improves CPU/GPU utilization by 35.8%/23.4% compared with single-machine processing, while in large-scale cluster-based 3D modelling tests it improves average efficiency by 61.4% compared with traditional 3D modelling software while maintaining comparable model accuracy.
Learning point cloud context information based on 3D transformer for more accurate and efficient classification
Yiping Chen, Shuai Zhang, Weisheng Lin, Shuhang Zhang and Wuming Zhang. The Photogrammetric Record, 10 December 2023. https://doi.org/10.1111/phor.12469

Point cloud semantic understanding has made remarkable progress along with the development of 3D deep learning. However, aggregating spatial information to improve the local feature learning capability of a network remains a major challenge. Many methods improve local information learning, such as constructing multi-area structures to capture information from different areas, but they lose some local information because each point's features are learned independently. To solve this problem, a new network is proposed that considers the importance of the differences between points in a neighbourhood: capturing local feature information is enhanced by highlighting the differing importance of point features within the neighbourhood. First, a T-Net is constructed to learn a transformation matrix that handles the unordered nature of point clouds. Second, a transformer is used to counter the loss of local information caused by the independence of each point in the neighbourhood. Experiments show an overall accuracy of 92.2% on the ModelNet40 dataset and 93.8% on the ModelNet10 dataset.
Weakly supervised semantic segmentation of mobile laser scanning point clouds via category balanced random annotation and deep consistency-guided self-distillation mechanism
Jiacheng Liu, Haiyan Guan, Xiangda Lei and Yongtao Yu. The Photogrammetric Record, 1 December 2023. https://doi.org/10.1111/phor.12468

Scene understanding of mobile laser scanning (MLS) point clouds is vital in autonomous driving and virtual reality. Most existing semantic segmentation methods rely on a large number of accurately labelled points, which are time-consuming and labour-intensive to obtain. To cope with this issue, this paper explores a weakly supervised learning (WSL) framework for MLS data. Specifically, a category balanced random annotation (CBRA) strategy is employed to obtain balanced labels and enhance model performance. Next, with KPConv-Net as a backbone network, a WSL semantic segmentation framework is developed for MLS point clouds via a deep consistency-guided self-distillation (DCS) mechanism. The DCS mechanism consists of a deep consistency-guided self-distillation branch and an entropy regularisation branch. The self-distillation branch constructs an auxiliary network and maintains consistency between the predicted distributions of the auxiliary and original networks, while the entropy regularisation branch increases the confidence of the network's predictions. The proposed WSL framework was evaluated on the WHU-MLS, NPM3D and Toronto3D datasets. Using only 0.1% labelled points, it achieved competitive MLS point cloud semantic segmentation performance, with mean Intersection over Union (mIoU) scores of 60.08%, 72.0% and 67.42% on the three datasets, respectively.
The impact of oblique images and flight-planning scenarios on the accuracy of UAV 3D mapping
Ebadat Ghanbari Parmehr, Mohammad Savadkouhi and Meghdad Nopour. The Photogrammetric Record, 9 October 2023. https://doi.org/10.1111/phor.12466

Developments in lightweight unmanned aerial vehicles (UAVs) and structure-from-motion (SfM) software have opened a new era in 3D mapping that is notably cost-effective and fast. However, photogrammetric blocks suffer from systematic height error caused by inaccurate camera calibration parameters, particularly when the ground control points (GCPs) are few and unevenly distributed. The use of onboard Global Navigation Satellite System (GNSS) receivers (such as RTK- or PPK-based devices using the DGNSS technique) to obtain accurate coordinates of the camera perspective centres has reduced the need for ground surveys; nevertheless, the same systematic error has been reported in UAV photogrammetric blocks. In this research, three flight-planning scenarios with oblique imagery, in addition to the traditional nadir block, were evaluated and processed with four different processing cases. In total, 16 blocks with different overlaps, direct and indirect georeferencing approaches and flight-planning scenarios were tested to identify the best imaging network. The results show that combining oblique images located on a circle at the centre of the block with the nadir block provides the best self-calibration and improves the final accuracy by 50% (from 0.163 to 0.085 m) for direct-georeferenced blocks and by 40% (from 0.042 to 0.026 m) for indirect-georeferenced blocks.
High-resolution optical remote sensing image change detection based on dense connection and attention feature fusion network
Daifeng Peng, Chenchen Zhai, Yongjun Zhang and Haiyan Guan. The Photogrammetric Record, 27 September 2023. https://doi.org/10.1111/phor.12462

The detection of ground-object changes from bi-temporal images is of great significance for urban planning, land-use/land-cover monitoring and natural disaster assessment. To overcome incomplete change detection (CD) entities and inaccurate edges caused by the loss of detailed information, this paper proposes a network based on dense connections and attention feature fusion, namely the Siamese NestedUNet with Attention Feature Fusion (SNAFF). First, multi-level bi-temporal features are extracted through a Siamese network. Dense connections between the sub-nodes of the decoder compensate for missing location information and weaken the semantic differences between features. Then, an attention mechanism combines global and local information to achieve feature fusion. Finally, a deep supervision strategy suppresses vanishing gradients and slow convergence. During testing, a test-time augmentation (TTA) strategy further improves CD performance. The method was verified on two datasets with different change types. Compared with the comparison methods, SNAFF achieves the best quantitative results on both: F1, IoU and OA reach 91.47%, 84.28% and 99.13% on the LEVIR-CD dataset and 96.91%, 94.01% and 99.27% on the CDD dataset, respectively. In addition, qualitative results show that SNAFF effectively retains the global and edge information of detected entities, achieving the best visual performance.
Weak texture remote sensing image matching based on hybrid domain features and adaptive description method
Wupeng Yang, Yongxiang Yao, Yongjun Zhang and Yi Wan. The Photogrammetric Record, 26 September 2023. https://doi.org/10.1111/phor.12464

Weak-texture remote sensing images (WTRSIs) exhibit low reflectivity, high similarity between neighbouring pixels and insignificant differences between regions. These factors hinder feature extraction and description, leading to failed matching. This paper therefore proposes a novel hybrid-domain features and adaptive description (HFAD) approach for WTRSI matching. The approach makes two main contributions: (1) a new feature extractor that combines a spatial-domain scale space with a frequency-domain scale space, where a weighted least squares filter combined with a phase-consistency filter establishes the frequency-domain scale space; and (2) a new log-polar descriptor of adaptive neighbourhood (LDAN), where the neighbourhood window size of each descriptor is calculated from the log-normalised intensity value at the feature point. A dataset of 50 typical image pairs covering weak-texture scenes (deserts, dense forests, water, ice and snow, and shadows) was prepared, on which HFAD was demonstrated and compared with state-of-the-art matching algorithms (RIFT, HOWP, KAZE, POS-SIFT and SIFT). The statistical results show that HFAD achieves matching accuracy within two pixels and confirm that the proposed algorithm is robust and effective.
Floor plan creation using a low-cost 360° camera
Jakub Vynikal and David Zahradník. The Photogrammetric Record, 25 September 2023. https://doi.org/10.1111/phor.12463

The creation of a 2D floor plan is an integral part of completing a building's construction. Legal obligations in many jurisdictions include submitting a precise floor plan for ownership purposes, as the building must be divided among new residents with reasonable precision. Common practice for floor plan generation includes manual measurement (tape or laser) and laser scanning (static or SLAM). This paper proposes a novel approach using spherical photogrammetry, which is becoming increasingly popular owing to its versatility, low cost and unexplored possibilities. The workflow is also noticeably faster than other methods, as video acquisition is rapid, on a par with SLAM. The accuracy and reliability of the measurements are verified experimentally by comparing the results with established methods.
2023 International Conference on Metrology for Archaeology and Cultural Heritage
Notes. The Photogrammetric Record, Volume 38, Issue 183, pp. 451–452. First published 28 September 2023. https://doi.org/10.1111/phor.3_12458. No abstract is available for this article.
3D Computer Vision and Photogrammetry
Notes. The Photogrammetric Record, Volume 38, Issue 183, p. 450. First published 28 September 2023. https://doi.org/10.1111/phor.12458. No abstract is available for this article.