When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification

IF 4.0 · CAS Tier 3 (Earth Science) · JCR Q2 (Engineering, Electrical & Electronic) · IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5 · Pub Date: 2021-09-09 · DOI: 10.1109/lgrs.2021.3109061
Peifang Deng, Kejie Xu, Hong Huang
{"title":"When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification","authors":"Peifang Deng, Kejie Xu, Hong Huang","doi":"10.1109/lgrs.2021.3109061","DOIUrl":null,"url":null,"abstract":"Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, discriminative ability of extracted features is still limited. In this letter, a high-performance joint framework combined CNNs and vision transformer (ViT) (CTNet) is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules, including the stream of ViT (T-stream) and the stream of CNNs (C-stream). For the T-stream, flattened image patches are sent into pretrained ViT model to mine semantic features in HRRS images. To complement with T-stream, pretrained CNN is transferred to extract local structural features in the C-stream. Then, semantic features and structural features are concatenated to predict labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase the intraclass aggregation. The highest accuracies on the aerial image dataset (AID) and Northwestern Polytechnical University (NWPU)-RESISC45 datasets obtained by the CTNet method are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.0000,"publicationDate":"2021-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"68","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Geoscience and Remote Sensing Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/lgrs.2021.3109061","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 68

Abstract

Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, the discriminative ability of the extracted features is still limited. In this letter, a high-performance joint framework combining CNNs and a vision transformer (ViT), termed CTNet, is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules: a ViT stream (T-stream) and a CNN stream (C-stream). In the T-stream, flattened image patches are fed into a pretrained ViT model to mine semantic features in HRRS images. To complement the T-stream, a pretrained CNN is transferred to extract local structural features in the C-stream. The semantic and structural features are then concatenated to predict the labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase intraclass aggregation. The highest accuracies obtained by CTNet on the Aerial Image Dataset (AID) and the Northwestern Polytechnical University (NWPU)-RESISC45 dataset are 97.70% and 95.49%, respectively. The classification results show that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.
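The two-stream design described in the abstract maps naturally onto a small amount of deep-learning code. Below is a minimal PyTorch sketch of the idea, assuming `timm`-style pretrained backbones; the specific backbone names (`vit_base_patch16_224`, `resnet50`), the center-loss-style intraclass term, and the `weight` hyperparameter are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the CTNet two-stream idea (illustrative, not the
# authors' released code). Assumes the `timm` library for pretrained
# backbones; any backbones with a compatible input size would do.
import torch
import torch.nn as nn
import timm


class TwoStreamNet(nn.Module):
    """T-stream (ViT) and C-stream (CNN) with concatenated features."""

    def __init__(self, num_classes: int):
        super().__init__()
        # T-stream: a pretrained ViT mines semantic features.
        # num_classes=0 makes timm return pooled features, not logits.
        self.t_stream = timm.create_model(
            "vit_base_patch16_224", pretrained=True, num_classes=0)
        # C-stream: a pretrained CNN extracts local structural features.
        self.c_stream = timm.create_model(
            "resnet50", pretrained=True, num_classes=0)
        feat_dim = self.t_stream.num_features + self.c_stream.num_features
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor):
        semantic = self.t_stream(x)    # patch flattening happens inside ViT
        structural = self.c_stream(x)  # globally pooled CNN features
        fused = torch.cat([semantic, structural], dim=1)
        return self.classifier(fused), fused


class JointLoss(nn.Module):
    """Cross-entropy plus a center-loss-style intraclass term.

    The intraclass term is one plausible reading of "increase intraclass
    aggregation"; the paper's exact formulation may differ.
    """

    def __init__(self, num_classes: int, feat_dim: int, weight: float = 0.01):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.centers = nn.Parameter(torch.zeros(num_classes, feat_dim))
        self.weight = weight  # hypothetical balancing hyperparameter

    def forward(self, logits, features, labels):
        # Pull each sample's fused feature toward its class center.
        intra = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        return self.ce(logits, labels) + self.weight * intra
```

For example, TwoStreamNet(num_classes=45) would match the NWPU-RESISC45 setting; a forward pass on a batch of 224×224 RGB images returns both the logits for classification and the fused features that JointLoss consumes.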
Source Journal
IEEE Geoscience and Remote Sensing Letters (Engineering & Technology - Geochemistry & Geophysics)
CiteScore: 7.60
Self-citation rate: 12.50%
Annual publication volume: 1113
Review time: 3.4 months
Journal Description: IEEE Geoscience and Remote Sensing Letters (GRSL) is a monthly publication for short papers (maximum length 5 pages) addressing new ideas and formative concepts in remote sensing as well as important new and timely results and concepts. Papers should relate to the theory, concepts and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space, and the processing, interpretation, and dissemination of this information. The technical content of papers must be both new and significant. Experimental data must be complete and include sufficient description of experimental apparatus, methods, and relevant experimental conditions. GRSL encourages the incorporation of "extended objects" or "multimedia" such as animations to enhance the shorter papers.
Latest Articles in This Journal
A "Difference In Difference" based method for unsupervised change detection in season-varying images
AccuLiteFastNet: A Remote Sensing Object Detection Model Combining High Accuracy, Lightweight Design, and Fast Inference Speed
Monitoring ten insect pests in selected orchards in three Azorean Islands: The project CUARENTAGRI
Maritime Radar Target Detection in Sea Clutter Based on CNN With Dual-Perspective Attention
A Semantics-Geometry Framework for Road Extraction From Remote Sensing Images