{"title":"When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification","authors":"Peifang Deng, Kejie Xu, Hong Huang","doi":"10.1109/lgrs.2021.3109061","DOIUrl":null,"url":null,"abstract":"Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, discriminative ability of extracted features is still limited. In this letter, a high-performance joint framework combined CNNs and vision transformer (ViT) (CTNet) is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules, including the stream of ViT (T-stream) and the stream of CNNs (C-stream). For the T-stream, flattened image patches are sent into pretrained ViT model to mine semantic features in HRRS images. To complement with T-stream, pretrained CNN is transferred to extract local structural features in the C-stream. Then, semantic features and structural features are concatenated to predict labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase the intraclass aggregation. The highest accuracies on the aerial image dataset (AID) and Northwestern Polytechnical University (NWPU)-RESISC45 datasets obtained by the CTNet method are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.0000,"publicationDate":"2021-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"68","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Geoscience and Remote Sensing Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/lgrs.2021.3109061","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 68
Abstract
Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, the discriminative ability of the extracted features is still limited. In this letter, a high-performance joint framework combining CNNs and a vision transformer (ViT), named CTNet, is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules: the ViT stream (T-stream) and the CNN stream (C-stream). In the T-stream, flattened image patches are fed into a pretrained ViT model to mine semantic features in HRRS images. To complement the T-stream, a pretrained CNN is transferred to extract local structural features in the C-stream. Then, the semantic and structural features are concatenated to predict the labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase intraclass aggregation. The highest accuracies obtained by CTNet on the Aerial Image Dataset (AID) and the Northwestern Polytechnical University (NWPU)-RESISC45 dataset are 97.70% and 95.49%, respectively. The classification results show that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.
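To make the two-stream design described in the abstract concrete, the sketch below shows one plausible way to wire a CNN C-stream and a ViT T-stream together in PyTorch: features from both backbones are concatenated and passed to a single classifier, and a joint objective combines cross-entropy with an intraclass-compactness penalty. The backbone stand-ins, feature dimensions, and the exact form of the joint loss are illustrative assumptions, not the authors' released implementation.

```python
# Minimal, hypothetical sketch of the two-stream idea: a CNN C-stream and a
# ViT T-stream whose features are concatenated before classification, trained
# with a joint loss. Backbones, dimensions, and the compactness term are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamSceneClassifier(nn.Module):
    """Fuses a CNN stream and a ViT stream for HRRS scene classification."""

    def __init__(self, cnn_backbone: nn.Module, vit_backbone: nn.Module,
                 cnn_dim: int, vit_dim: int, num_classes: int):
        super().__init__()
        self.cnn_backbone = cnn_backbone  # C-stream: local structural features
        self.vit_backbone = vit_backbone  # T-stream: semantic features
        self.classifier = nn.Linear(cnn_dim + vit_dim, num_classes)

    def forward(self, images: torch.Tensor):
        c_feat = self.cnn_backbone(images)           # (B, cnn_dim)
        t_feat = self.vit_backbone(images)           # (B, vit_dim)
        fused = torch.cat([c_feat, t_feat], dim=1)   # concatenate both streams
        return self.classifier(fused), fused


def joint_loss(logits, fused, labels, lam: float = 0.1):
    """Cross-entropy plus a simple intraclass-compactness penalty.

    Pulling each fused feature toward its class mean is one plausible way to
    'increase intraclass aggregation'; the paper's formulation may differ.
    """
    ce = F.cross_entropy(logits, labels)
    compact = fused.new_zeros(())
    for c in labels.unique():
        class_feats = fused[labels == c]
        center = class_feats.mean(dim=0, keepdim=True)
        compact = compact + ((class_feats - center) ** 2).sum(dim=1).mean()
    compact = compact / labels.unique().numel()
    return ce + lam * compact


if __name__ == "__main__":
    # Toy stand-ins for the backbones; in practice one would use a pretrained
    # CNN (e.g. a ResNet with its final fc removed) and a pretrained ViT encoder.
    cnn = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 64))
    vit = nn.Sequential(nn.AdaptiveAvgPool2d(16), nn.Flatten(),
                        nn.Linear(3 * 16 * 16, 64))  # placeholder, not a real ViT
    model = TwoStreamSceneClassifier(cnn, vit, cnn_dim=64, vit_dim=64, num_classes=45)

    images = torch.randn(4, 3, 224, 224)   # dummy batch of HRRS image patches
    labels = torch.randint(0, 45, (4,))    # 45 classes, as in NWPU-RESISC45
    logits, fused = model(images)
    loss = joint_loss(logits, fused, labels)
    print(logits.shape, float(loss))
```

In the actual CTNet pipeline both backbones would be pretrained and fine-tuned jointly; the sketch only illustrates how the two feature streams are combined and supervised by a single joint objective.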
About the Journal
IEEE Geoscience and Remote Sensing Letters (GRSL) is a monthly publication for short papers (maximum length 5 pages) addressing new ideas and formative concepts in remote sensing, as well as important new and timely results and concepts. Papers should relate to the theory, concepts, and techniques of science and engineering as applied to sensing the Earth, oceans, atmosphere, and space, and to the processing, interpretation, and dissemination of this information. The technical content of papers must be both new and significant. Experimental data must be complete and include a sufficient description of the experimental apparatus, methods, and relevant experimental conditions. GRSL encourages the incorporation of "extended objects" or "multimedia" such as animations to enhance these shorter papers.