Improving damage classification via hybrid deep learning feature representations derived from post-earthquake aerial images
Tarablesse Settou, M. Kholladi, Abdelkamel Ben Ali
International Journal of Image and Data Fusion, vol. 13, no. 1, pp. 1-20, 2020. DOI: 10.1080/19479832.2020.1864787
ABSTRACT One of the crucial problems after an earthquake is how to quickly and accurately detect and identify damaged areas. Several automated methods have been developed to analyse remote sensing (RS) images for earthquake damage classification. The performance of damage classification depends mainly on powerful learned feature representations. Although hand-crafted features can achieve satisfactory performance to some extent, their gains are small and do not generalise well. Recently, the convolutional neural network (CNN) has demonstrated its capability of deriving more powerful feature representations than hand-crafted features in many domains. Our main contribution in this paper is the investigation of hybrid feature representations derived from several pre-trained CNN models for earthquake damage classification. In addition, in contrast to previous work, we explore combining the feature representations extracted from the last two fully connected layers of a particular CNN model. We validated our proposals on two large datasets captured from different earthquake events and several geographic locations, whose images vary widely in scene characteristics, lighting conditions, and image characteristics. Extensive experiments showed that our proposals significantly improve classification performance.
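To make the two ideas in the abstract concrete, the following is a minimal sketch, not the authors' exact pipeline: features are extracted from two pre-trained CNNs and fused, and for one of them the outputs of its last two fully connected layers are combined. VGG16 and ResNet50 are used here purely as illustrative stand-ins for the pre-trained models studied in the paper, and the image shapes, label variables, and choice of a linear SVM classifier are assumptions for illustration.

```python
# Sketch of hybrid CNN feature extraction for damage classification.
# Assumes TensorFlow/Keras; VGG16/ResNet50 are stand-ins, not the paper's exact models.
import numpy as np
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_pre
from tensorflow.keras.applications.resnet50 import preprocess_input as resnet_pre
from tensorflow.keras.models import Model
from sklearn.svm import SVC

# Pre-trained backbones with their classification heads, so the FC layers exist.
vgg = VGG16(weights="imagenet", include_top=True)
resnet = ResNet50(weights="imagenet", include_top=True)

# Idea 1: hybrid representation from several CNNs.
# Idea 2: for VGG16, take both of its last two fully connected layers (fc1, fc2).
vgg_fc = Model(inputs=vgg.input,
               outputs=[vgg.get_layer("fc1").output, vgg.get_layer("fc2").output])
resnet_pool = Model(inputs=resnet.input,
                    outputs=resnet.get_layer("avg_pool").output)

def hybrid_features(images_224):
    """images_224: float array of shape (N, 224, 224, 3), RGB values in 0-255."""
    fc1, fc2 = vgg_fc.predict(vgg_pre(images_224.copy()))      # (N, 4096) each
    pooled = resnet_pool.predict(resnet_pre(images_224.copy())) # (N, 2048)
    # Concatenate the per-image vectors from both networks and both FC layers.
    return np.concatenate([fc1, fc2, pooled], axis=1)

# Hypothetical usage: X_train/X_test are image patches, y_train holds damage labels.
# clf = SVC(kernel="linear").fit(hybrid_features(X_train), y_train)
# y_pred = clf.predict(hybrid_features(X_test))
```

The concatenated vector is then fed to a conventional classifier; in practice any classifier (softmax layer, SVM, random forest) could sit on top of the fused features.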
About the journal:
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground based imaging systems, and integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets for improved information extraction, as well as to increase the reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence based management. The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:
• Automatic registration/geometric aspects of fusing images with different spatial, spectral, temporal resolutions; phase information; or acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)