Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data

V. Zahs, K. Anders, Julia Kohns, Alexander Stark, B. Höfle
{"title":"Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data","authors":"V. Zahs, K. Anders, Julia Kohns, Alexander Stark, B. Höfle","doi":"10.48550/arXiv.2302.12591","DOIUrl":null,"url":null,"abstract":"Automatic damage assessment based on UAV-derived 3D point clouds can provide fast information on the damage situation after an earthquake. However, the assessment of multiple damage grades is challenging due to the variety in damage patterns and limited transferability of existing methods to other geographic regions or data sources. We present a novel approach to automatically assess multi-class building damage from real-world multi-temporal point clouds using a machine learning model trained on virtual laser scanning (VLS) data. We (1) identify object-specific change features, (2) separate changed and unchanged building parts, (3) train a random forest machine learning model with VLS data based on object-specific change features, and (4) use the classifier to assess building damage in real-world point clouds from photogrammetry-based dense image matching (DIM). We evaluate classifiers trained on different input data with respect to their capacity to classify three damage grades (heavy, extreme, destruction) in pre- and post-event DIM point clouds of a real earthquake event. Our approach is transferable with respect to multi-source input point clouds used for training (VLS) and application (DIM) of the model. We further achieve geographic transferability of the model by training it on simulated data of geometric change which characterises relevant damage grades across different geographic regions. The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%). Its performance improves only slightly when using real-world region-specific training data (<3% higher overall accuracies) and when using real-world region-specific training data (<2% higher overall accuracies). We consider our approach relevant for applications where timely information on the damage situation is required and sufficient real-world training data is not available.","PeriodicalId":13664,"journal":{"name":"Int. J. Appl. Earth Obs. Geoinformation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Appl. Earth Obs. Geoinformation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2302.12591","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Automatic damage assessment based on UAV-derived 3D point clouds can provide fast information on the damage situation after an earthquake. However, the assessment of multiple damage grades is challenging due to the variety of damage patterns and the limited transferability of existing methods to other geographic regions or data sources. We present a novel approach to automatically assess multi-class building damage from real-world multi-temporal point clouds using a machine learning model trained on virtual laser scanning (VLS) data. We (1) identify object-specific change features, (2) separate changed and unchanged building parts, (3) train a random forest machine learning model on VLS data using the object-specific change features, and (4) use the classifier to assess building damage in real-world point clouds from photogrammetry-based dense image matching (DIM). We evaluate classifiers trained on different input data with respect to their capacity to classify three damage grades (heavy, extreme, destruction) in pre- and post-event DIM point clouds of a real earthquake event. Our approach is transferable with respect to multi-source input point clouds used for training (VLS) and application (DIM) of the model. We further achieve geographic transferability of the model by training it on simulated data of geometric change which characterises relevant damage grades across different geographic regions. The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%). Its performance improves only slightly when it is instead trained on real-world region-specific data (<3% higher overall accuracies) or on real-world data from the same source as the application data (<2% higher overall accuracies). We consider our approach relevant for applications where timely information on the damage situation is required and sufficient real-world training data is not available.
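For illustration, the following minimal Python sketch (not the authors' implementation) shows how steps (3) and (4) of such a workflow could be wired up with scikit-learn: a random forest is trained on per-building change features derived from simulated VLS data and then applied to features derived from real-world DIM point clouds. The file names, feature contents and the loader function are hypothetical placeholders; the sketch assumes the object-specific change features of steps (1) and (2) have already been extracted per building.

# Minimal sketch (assumptions noted above), not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# The three target damage grades considered in the paper.
DAMAGE_GRADES = ("heavy", "extreme", "destruction")

def load_change_features(path):
    """Hypothetical loader: one row of object-specific change features per
    building (e.g. change magnitudes, changed-area ratios between epochs),
    plus reference damage grades if they are available."""
    data = np.load(path)
    labels = data["labels"] if "labels" in data.files else None
    return data["features"], labels

# (3) Train the classifier on simulated VLS change features.
X_vls, y_vls = load_change_features("vls_training_features.npz")
model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_vls, y_vls)

# (4) Assess damage grades of real buildings from pre-/post-event DIM point clouds.
X_dim, y_dim = load_change_features("dim_event_features.npz")
predicted_grades = model.predict(X_dim)

# Evaluate against reference labels where available
# (the paper reports overall accuracies of 92.0% - 95.1% for this setup).
if y_dim is not None:
    print(f"Overall accuracy: {accuracy_score(y_dim, predicted_grades):.1%}")

The decisive ingredient in the paper is the feature extraction and change/no-change separation feeding this classifier, which the sketch deliberately leaves abstract.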