{"title":"Building Damage Evaluation from Satellite Imagery using Deep Learning","authors":"Fei Zhao, Chengcui Zhang","doi":"10.1109/IRI49571.2020.00020","DOIUrl":null,"url":null,"abstract":"In recent decades, millions of people are killed by natural disasters such as wildfire, landslide, tsunami, and volcanic eruption. The efficiency of post-disaster emergency responses and humanitarian assistance has become crucial in minimizing the expected casualties. This paper focuses on the task of building damage level evaluation, which is a key step for maximizing the deployment efficiency of post-event rescue activities. In this paper, we implement a Mask R-CNN based building damage evaluation model with a practical two-stage training strategy. The motivation of Stage-l is to train a ResNet 101 backbone in Mask R-CNN as a Building Feature Extractor. In Stage-2, we further build on top the model trained in Stage-l a deep learning architecture that performs more sophisticated tasks and is able to classify buildings with different damage levels from satellite images. In particular, in order to take advantage of pre-disaster satellite images, we extract the ResNet 101 backbone from the Mask R-CNN trained on pre-disaster images in Stage-l and utilize it to build a Siamese based semantic segmentation model for classifying the building damage level at the pixel level. The pre- and post-disaster satellite images are simultaneously fed into the proposed Siamese based model during the training and inference process. The output of these two models own the same size as input satellite images. Buildings with different damage levels, i.e., ‘no damage’, ‘minor damage’, ‘major damage’, and ‘destroyed’, are represented as segments of different damage classes in the output. Comparative experiments are conducted on the xBD satellite imagery dataset and compared with multiple state-of-the-art methods. The experimental results indicate that the proposed Siamese based method is capable to improve the damage evaluation accuracy by 16 times and 80%, compared with a baseline model implemented by xBD team and the Mask-RCNN framework, respectively.","PeriodicalId":93159,"journal":{"name":"2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science : IRI 2020 : proceedings : virtual conference, 11-13 August 2020. IEEE International Conference on Information Reuse and Integration (21st : 2...","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science : IRI 2020 : proceedings : virtual conference, 11-13 August 2020. IEEE International Conference on Information Reuse and Integration (21st : 2...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRI49571.2020.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
In recent decades, millions of people have been killed by natural disasters such as wildfires, landslides, tsunamis, and volcanic eruptions. The efficiency of post-disaster emergency response and humanitarian assistance has become crucial in minimizing the expected casualties. This paper focuses on the task of building damage level evaluation, which is a key step for maximizing the deployment efficiency of post-event rescue activities. In this paper, we implement a Mask R-CNN based building damage evaluation model with a practical two-stage training strategy. The motivation of Stage-1 is to train a ResNet-101 backbone in Mask R-CNN as a Building Feature Extractor. In Stage-2, we build, on top of the model trained in Stage-1, a deep learning architecture that performs more sophisticated tasks and is able to classify buildings with different damage levels from satellite images. In particular, in order to take advantage of pre-disaster satellite images, we extract the ResNet-101 backbone from the Mask R-CNN trained on pre-disaster images in Stage-1 and use it to build a Siamese-based semantic segmentation model that classifies building damage levels at the pixel level. The pre- and post-disaster satellite images are fed simultaneously into the proposed Siamese-based model during training and inference. The outputs of these two models have the same spatial size as the input satellite images. Buildings with different damage levels, i.e., 'no damage', 'minor damage', 'major damage', and 'destroyed', are represented as segments of different damage classes in the output. Comparative experiments are conducted on the xBD satellite imagery dataset against multiple state-of-the-art methods. The experimental results indicate that the proposed Siamese-based method improves the damage evaluation accuracy by 16 times and by 80%, compared with the baseline model implemented by the xBD team and with the Mask R-CNN framework, respectively.
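The abstract describes a Siamese-based semantic segmentation design: a shared ResNet-101 encoder (taken from the Stage-1 Mask R-CNN) processes the pre- and post-disaster images, and a classification head assigns each pixel a damage class. The sketch below illustrates one way such a model could be wired up in PyTorch. It is a reconstruction from the abstract only, not the authors' implementation; the fusion by channel concatenation, the bilinear upsampling head, and the five-class output (four damage levels plus background) are assumptions.

```python
# Minimal sketch of a Siamese segmentation model for pixel-level damage classification.
# Assumptions: shared ResNet-101 encoder, feature fusion by concatenation, simple
# convolutional head, bilinear upsampling to the input resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101


class SiameseDamageSegmenter(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Shared backbone: in the paper this would be the ResNet-101 trained in
        # Stage-1 inside a Mask R-CNN building detector; here we simply take the
        # torchvision ResNet-101 up to its last convolutional stage.
        backbone = resnet101(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        # Classification head over the concatenated pre/post feature maps.
        self.head = nn.Sequential(
            nn.Conv2d(2 * 2048, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, pre_img: torch.Tensor, post_img: torch.Tensor) -> torch.Tensor:
        # The same encoder (shared weights) processes both the pre- and post-disaster image.
        f_pre = self.encoder(pre_img)
        f_post = self.encoder(post_img)
        fused = torch.cat([f_pre, f_post], dim=1)
        logits = self.head(fused)
        # Upsample back to the input resolution so the output has the same spatial
        # size as the input satellite images, as stated in the abstract.
        return F.interpolate(logits, size=pre_img.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = SiameseDamageSegmenter(num_classes=5)
    pre = torch.randn(1, 3, 512, 512)   # pre-disaster tile
    post = torch.randn(1, 3, 512, 512)  # post-disaster tile
    out = model(pre, post)
    print(out.shape)  # torch.Size([1, 5, 512, 512]) -> per-pixel damage logits
```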