
The Photogrammetric Record — Latest Publications

Linear target change detection from a single image based on three-dimensional real scene
Pub Date : 2023-12-26 DOI: 10.1111/phor.12470
Yang Liu, Zheng Ji, Lingfeng Chen, Yuchen Liu
Change detection is a critical component in the field of remote sensing, with significant implications for resource management and land monitoring. Currently, most conventional methods for remote sensing change detection rely on qualitative monitoring, which usually requires data collection from the entire scene over multiple time periods; this can be computationally intensive and lacks reusability, especially when dealing with large datasets. In this paper, we propose a novel methodology that leverages the texture features and geometric structure information derived from three-dimensional (3D) real scenes. By establishing a two-dimensional (2D)–3D geometric relationship between a single observational image and the corresponding 3D scene, we can obtain more accurate positional information for the image. This relationship allows us to transfer the depth information from the 3D model to the observational image, thereby facilitating precise geometric change measurements for specific planar targets. Experimental results indicate that our approach enables millimetre-level change detection of minuscule targets from a single image. Compared with conventional methods, our technique offers enhanced efficiency and reusability, making it a valuable tool for fine-grained change detection of small targets based on 3D real scenes.
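The depth-transfer step described above rests on standard projective geometry: project model points into the image with a pinhole camera, and intersect a pixel's viewing ray with the known target plane to recover its 3D position. The sketch below is a minimal illustration of those two operations under textbook assumptions (the intrinsics K, pose (R, t) and plane parameters are placeholders), not the authors' implementation:

```python
import numpy as np

def project(K, R, t, X):
    """Project a world point X into pixel coordinates (pinhole model)."""
    x_cam = R @ X + t                      # world -> camera frame
    uvw = K @ x_cam
    return uvw[:2] / uvw[2]                # perspective divide

def backproject_to_plane(K, R, t, pixel, n, d):
    """Intersect the viewing ray through `pixel` with the plane n.X + d = 0;
    this is the step that transfers model depth to a single image."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam              # ray direction in world frame
    centre = -R.T @ t                      # camera centre in world frame
    s = -(n @ centre + d) / (n @ ray_world)
    return centre + s * ray_world

# Example: a pixel back-projected onto the plane z = 5 (n = (0,0,1), d = -5).
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
P = backproject_to_plane(K, np.eye(3), np.zeros(3), (700, 500),
                         np.array([0.0, 0.0, 1.0]), -5.0)
```

Measuring a planar target across epochs then reduces to back-projecting its image edges onto the plane and differencing the resulting 3D coordinates.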
Citations: 0
Rapid 3D modelling: Clustering method based on dynamic load balancing strategy
Pub Date : 2023-12-11 DOI: 10.1111/phor.12473
Yingwei Ge, Bingxuan Guo, Guozheng Xu, Yawen Liu, Xiao Jiang, Zhe Peng
Three-dimensional (3D) reconstruction is a pivotal research area within computer vision and photogrammetry, offering a valuable foundation of data for the development of smart cities. However, existing methods for constructing 3D models from unmanned aerial vehicle (UAV) images often suffer from slow processing speeds and low central processing unit (CPU)/graphics processing unit (GPU) utilization rates. Furthermore, the utilization of cluster-based distributed computing for 3D modelling frequently results in inefficient resource allocation across nodes. To address these challenges, this paper presents a novel approach to 3D modelling in clusters, incorporating a dynamic load-balancing strategy. The method divides the 3D reconstruction process into multiple stages to lay the groundwork for distributing tasks across multiple nodes efficiently. Instead of traditional traversal-based communication, this approach employs gossip communication techniques to reduce the network overhead. To boost the modelling efficiency, a dynamic load-balancing strategy is introduced that prevents nodes from becoming overloaded, thus optimizing resource usage during the modelling process and alleviating resource waste issues in multidevice clusters. The experimental results indicate that in small-scale data environments, this approach improves CPU/GPU utilization by 35.8%/23.4% compared with single-machine utilization. In large-scale data environments for cluster-based 3D modelling tests, this method enhances the average efficiency by 61.4% compared with traditional 3D modelling software while maintaining a comparable model accuracy.
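The least-loaded-node idea at the heart of a dynamic load-balancing strategy can be sketched in a few lines: keep nodes in a priority queue ordered by normalised load and always hand the next task to the lightest one. This is an illustrative sketch only — task names, costs and capacities are hypothetical, and the paper's gossip-based state exchange is not modelled:

```python
import heapq

def assign_tasks(tasks, capacities):
    """Greedily assign each reconstruction task to the node with the
    lowest current load, normalised by that node's capacity."""
    heap = [(0.0, node) for node in capacities]      # (load, node id)
    heapq.heapify(heap)
    plan = {node: [] for node in capacities}
    for task, cost in tasks:
        load, node = heapq.heappop(heap)             # least-loaded node
        plan[node].append(task)
        heapq.heappush(heap, (load + cost / capacities[node], node))
    return plan

# e.g. two nodes, one with twice the capacity of the other:
plan = assign_tasks([("tile-1", 4.0), ("tile-2", 2.0), ("tile-3", 3.0)],
                    {"node-a": 1.0, "node-b": 2.0})
```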
Citations: 0
Learning point cloud context information based on 3D transformer for more accurate and efficient classification
Pub Date : 2023-12-10 DOI: 10.1111/phor.12469
Yiping Chen, Shuai Zhang, Weisheng Lin, Shuhang Zhang, Wuming Zhang
The point cloud semantic understanding task has made remarkable progress along with the development of 3D deep learning. However, aggregating spatial information to improve the local feature learning capability of the network remains a major challenge. Many methods have been used to improve local information learning, such as constructing a multi-area structure to capture information from different areas. However, such structures lose some local information because point features are learned independently. To solve this problem, a new network is proposed that considers the importance of the differences between points in a neighbourhood: the capture of local feature information is enhanced by highlighting the differing importance of the points within each neighbourhood. First, T-Net is constructed to learn a point cloud transformation matrix that handles point cloud disorder. Second, a transformer is used to mitigate the loss of local information caused by the independence of each point in the neighbourhood. The experimental results show an overall accuracy of 92.2% on the ModelNet40 dataset and 93.8% on the ModelNet10 dataset.
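The neighbourhood weighting the abstract describes — letting each point attend to its neighbours instead of learning point features independently — is ordinary scaled dot-product attention applied per neighbourhood. A minimal NumPy sketch follows; the projection matrices Wq, Wk, Wv stand in for learned weights and this is not the paper's network:

```python
import numpy as np

def neighbourhood_attention(feats, Wq, Wk, Wv):
    """Single-head self-attention over the k neighbours of a point.
    feats: (k, d) neighbour features; Wq/Wk/Wv: (d, d_h) projections.
    The softmax weights encode how important each neighbour is."""
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (k, k) affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ V                                     # (k, d_h) attended feats
```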
Citations: 0
Weakly supervised semantic segmentation of mobile laser scanning point clouds via category balanced random annotation and deep consistency-guided self-distillation mechanism
Pub Date : 2023-12-01 DOI: 10.1111/phor.12468
Jiacheng Liu, Haiyan Guan, Xiangda Lei, Yongtao Yu
Scene understanding of mobile laser scanning (MLS) point clouds is vital in autonomous driving and virtual reality. Most existing semantic segmentation methods rely on a large number of accurately labelled points, which are time-consuming and labour-intensive to produce. To cope with this issue, this paper explores a weakly supervised learning (WSL) framework for MLS data. Specifically, a category balanced random annotation (CBRA) strategy is employed to obtain balanced labels and enhance model performance. Next, with KPConv-Net as the backbone network, a WSL semantic segmentation framework is developed for MLS point clouds via a deep consistency-guided self-distillation (DCS) mechanism. The DCS mechanism consists of a deep consistency-guided self-distillation branch and an entropy regularisation branch. The self-distillation branch constructs an auxiliary network and maintains the consistency of the predicted distributions between the auxiliary network and the original network, while the entropy regularisation branch increases the confidence of the network's predictions. The proposed WSL framework was evaluated on the WHU-MLS, NPM3D and Toronto3D datasets. Using only 0.1% of labelled points, it achieved competitive performance in MLS point cloud semantic segmentation, with mean Intersection over Union (mIoU) scores of 60.08%, 72.0% and 67.42% on the three datasets, respectively.
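Two of the ingredients above are simple enough to sketch directly: category-balanced random annotation splits a tiny labelling budget evenly across classes, and the entropy regulariser penalises uncertain predictions. The following is an illustrative NumPy sketch written from those descriptions, not the authors' code:

```python
import numpy as np

def balanced_annotation(labels, budget, seed=0):
    """CBRA-style sampling: spend the labelling budget evenly per class
    rather than sampling points uniformly at random."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    per_class = max(1, budget // len(classes))
    picked = [rng.choice(np.flatnonzero(labels == c),
                         size=min(per_class, int((labels == c).sum())),
                         replace=False)
              for c in classes]
    return np.concatenate(picked)        # indices of points to annotate

def entropy_regulariser(probs, eps=1e-8):
    """Mean prediction entropy over points; minimising it pushes the
    network towards confident predictions on unlabelled points."""
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())
```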
Citations: 0
The impact of oblique images and flight-planning scenarios on the accuracy of UAV 3D mapping
Pub Date : 2023-10-09 DOI: 10.1111/phor.12466
Ebadat Ghanbari Parmehr, Mohammad Savadkouhi, Meghdad Nopour
The developments in lightweight unmanned aerial vehicles (UAVs) and structure-from-motion (SfM)-based software have opened a new era in 3D mapping that is notably cost-effective and fast. However, photogrammetric blocks suffer from systematic height errors caused by inaccurate camera calibration parameters, particularly when the ground control points (GCPs) are few and unevenly distributed. The use of onboard Global Navigation Satellite System (GNSS) receivers (such as RTK- or PPK-based devices, which use the DGNSS technique) to obtain accurate coordinates of the camera perspective centres has reduced the need for ground surveys; nevertheless, the aforementioned systematic error has still been reported in UAV photogrammetric blocks. In this research, three flight-planning scenarios with oblique imagery, in addition to the traditional nadir block, were evaluated and processed under four different processing cases. In total, 16 blocks with different overlaps, direct and indirect georeferencing approaches and flight-planning scenarios were tested to identify the best imaging network. The results show that combining oblique images located on a circle at the centre of the block with the nadir block provides the best self-calibration performance and improves the final accuracy by 50% (from 0.163 to 0.085 m) for directly georeferenced blocks and by 40% (from 0.042 to 0.026 m) for indirectly georeferenced blocks.
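As a quick check on the quoted accuracy gains, the relative improvements implied by the reported RMSE values can be computed directly. The helper below is a generic check-point height RMSE, not the authors' processing pipeline:

```python
import numpy as np

def height_rmse(estimated, reference):
    """Root-mean-square error of reconstructed heights at check points."""
    d = np.asarray(estimated) - np.asarray(reference)
    return float(np.sqrt(np.mean(d ** 2)))

# Improvements implied by the reported errors:
direct = 1 - 0.085 / 0.163      # ~0.48 -> the ~50% quoted for direct blocks
indirect = 1 - 0.026 / 0.042    # ~0.38 -> the ~40% quoted for indirect blocks
```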
Citations: 0
High-resolution optical remote sensing image change detection based on dense connection and attention feature fusion network
Pub Date : 2023-09-27 DOI: 10.1111/phor.12462
Daifeng Peng, Chenchen Zhai, Yongjun Zhang, Haiyan Guan
The detection of ground object changes from bi-temporal images is of great significance for urban planning, land-use/land-cover monitoring and natural disaster assessment. To address the problem of incomplete change detection (CD) entities and inaccurate edges caused by the loss of detailed information, this paper proposes a network based on dense connections and attention feature fusion, namely Siamese NestedUNet with Attention Feature Fusion (SNAFF). First, multi-level bi-temporal features are extracted through a Siamese network. The dense connections between the sub-nodes of the decoder compensate for missing location information and weaken the semantic differences between features. Then, an attention mechanism is introduced to combine global and local information and achieve feature fusion. Finally, a deep supervision strategy is used to suppress gradient vanishing and slow convergence. During the testing phase, a test-time augmentation (TTA) strategy is adopted to further improve CD performance. To verify the effectiveness of the proposed method, two datasets with different change types are used. The experimental results indicate that, compared with the comparison methods, the proposed SNAFF achieves the best quantitative results on both datasets: F1, IoU and OA on the LEVIR-CD dataset are 91.47%, 84.28% and 99.13%, respectively, and on the CDD dataset 96.91%, 94.01% and 99.27%, respectively. In addition, the qualitative results show that SNAFF effectively retains the global and edge information of the detected entities, achieving the best visual performance.
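The test-time augmentation step is the most self-contained part of the pipeline: predictions are averaged over the symmetries of the input so the change map becomes less sensitive to orientation. A minimal sketch follows; the `predict` callable is a placeholder for the trained model, and the bi-temporal input pair is folded into one array for brevity:

```python
import numpy as np

def tta_change_map(predict, image):
    """Average the change-probability map over the eight flip/rotation
    variants of the input. `predict` maps an (H, W, C) array to an
    (H, W) probability map."""
    outputs = []
    for k in range(4):                                # 0/90/180/270 degrees
        rotated = np.rot90(image, k)
        outputs.append(np.rot90(predict(rotated), -k))
        flipped = rotated[:, ::-1]                    # horizontal flip
        outputs.append(np.rot90(predict(flipped)[:, ::-1], -k))
    return np.mean(outputs, axis=0)
```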
Citations: 2
Weak texture remote sensing image matching based on hybrid domain features and adaptive description method
Pub Date : 2023-09-26 DOI: 10.1111/phor.12464
Wupeng Yang, Yongxiang Yao, Yongjun Zhang, Yi Wan
Weak texture remote sensing images (WTRSIs) have characteristics such as low reflectivity, high similarity between neighbouring pixels and insignificant differences between regions. These factors make feature extraction and description difficult, which leads to unsuccessful matching. This paper therefore proposes a novel hybrid-domain features and adaptive description (HFAD) approach to WTRSI matching. The approach makes two main contributions: (1) a new feature extractor that combines the spatial-domain scale space and the frequency-domain scale space, in which a weighted least squares filter combined with a phase consistency filter is used to build the frequency-domain scale space; and (2) a new log-polar descriptor of adaptive neighbourhood (LDAN), in which the neighbourhood window size of each descriptor is computed from the log-normalised intensity value of the feature point. We prepared remote sensing images of weak-texture scenes including deserts, dense forests, water, ice and snow, and shadows. The dataset contains 50 typical image pairs, on which the proposed HFAD was demonstrated and compared with state-of-the-art matching algorithms (RIFT, HOWP, KAZE, POS-SIFT and SIFT). The statistical results of the comparative experiment show that HFAD achieves matching accuracy within two pixels, confirming that the proposed algorithm is robust and effective.
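The adaptive-neighbourhood idea in contribution (2) can be stated compactly: the descriptor's support window is scaled by the log-normalised response of the feature point, so stronger features are described over a wider neighbourhood. The sketch below illustrates only that scaling rule; `base` and `gain` are hypothetical tuning constants, and responses are assumed non-negative:

```python
import numpy as np

def adaptive_window_radius(responses, base=8, gain=16):
    """LDAN-style rule: window radius grows with the log-normalised
    response of each feature point."""
    norm = np.log1p(responses) / np.log1p(responses.max())
    return np.rint(base + gain * norm).astype(int)
```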
Citations: 0
Floor plan creation using a low-cost 360° camera
Pub Date : 2023-09-25 DOI: 10.1111/phor.12463
Jakub Vynikal, David Zahradník
The creation of a 2D floor plan is an integral part of finishing a building's construction. Legal obligations in different states often include submitting a precise floor plan for ownership purposes, as the building needs to be divided between new residents with reasonable precision. Common practice for floor plan generation includes manual measurements (tape or laser) and laser scanning (static or SLAM). In this paper, a novel approach using spherical photogrammetry is proposed; the technique is becoming increasingly popular due to its versatility, low cost and unexplored possibilities. The workflow is also noticeably faster than other methods, as video acquisition is rapid, on a par with SLAM. The accuracy and reliability of the measurements are experimentally verified by comparing the results with established methods.
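In spherical photogrammetry the pinhole model is replaced by a mapping from equirectangular pixels to viewing directions on the unit sphere, which is what makes a low-cost 360° camera usable for measurement. A minimal sketch of that mapping follows; the axis convention is an assumption, not taken from the paper:

```python
import numpy as np

def equirect_ray(u, v, width, height):
    """Unit viewing ray for pixel (u, v) of an equirectangular image."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [pi/2, -pi/2]
    return np.array([np.cos(lat) * np.sin(lon),    # x: right
                     np.sin(lat),                  # y: up
                     np.cos(lat) * np.cos(lon)])   # z: forward
```

Rays from two or more station points can then be intersected to recover wall corners for the 2D plan.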
Citations: 0
2023 International Conference on Metrology for Archaeology and Cultural Heritage
Pub Date : 2023-09-01 DOI: 10.1111/phor.3_12458
The Photogrammetric Record, Volume 38, Issue 183, pp. 451–452 (Notes). No abstract is available for this article.
Citations: 0
3D Computer Vision and Photogrammetry
Pub Date : 2023-09-01 DOI: 10.1111/phor.12458
The Photogrammetric Record, Volume 38, Issue 183, p. 450 (Notes). No abstract is available for this article.
Citations: 0