MCAFNet: Multi-Channel Attention Fusion Network-Based CNN For Remote Sensing Scene Classification

Jingming Xia, Yao Zhou, Ling Tan, Yue Ding
Journal: Photogrammetric Engineering & Remote Sensing
DOI: 10.14358/pers.22-00121r2
Published: March 1, 2023

Abstract

Remote sensing scene images are characterized by intra-class diversity and inter-class similarity. When recognizing remote sensing images, traditional deep-learning-based classification algorithms extract only the global features of scene images, ignoring the important role of local key features in classification; this limits feature expressiveness and restricts classification accuracy. This paper therefore presents a multi-channel attention fusion network (MCAFNet). First, three channels are used to extract image features: a spatial attention module is added after the max-pooling layer in two of the channels to capture the global and local key features of the image, while the third channel uses the original model to extract the image's deep features. Second, the features extracted from the different channels are fused by a fusion module. Finally, an adaptive weight loss function is designed to automatically balance the contributions of the different loss terms. Three challenging data sets, the UC Merced Land-Use Dataset (UCM), the Aerial Image Dataset (AID), and the Northwestern Polytechnical University Dataset (NWPU), are selected for the experiments. Experimental results show that the proposed algorithm can effectively recognize scenes and obtain competitive classification results.
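The abstract does not specify the internals of the spatial attention module or the adaptive weight loss. As a rough illustration only, the sketch below shows two common formulations that match the description: a CBAM-style spatial attention gate (channel-wise max and mean pooling followed by a sigmoid over spatial locations, with CBAM's learned convolution omitted to keep the sketch dependency-free) and an uncertainty-style adaptive loss weighting in the spirit of Kendall et al. (2018). Both are assumptions, not the paper's actual design.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def spatial_attention(fmap):
    """CBAM-style spatial attention on a (C, H, W) feature map.

    Pools across the channel axis (max and mean) to get two (H, W)
    maps, sums them, and squashes to (0, 1) attention weights that
    gate every spatial location. CBAM applies a learned 7x7 conv to
    the pooled maps; it is omitted here for a dependency-free sketch.
    """
    max_pool = fmap.max(axis=0)            # (H, W)
    mean_pool = fmap.mean(axis=0)          # (H, W)
    attn = sigmoid(max_pool + mean_pool)   # (H, W), values in (0, 1)
    return fmap * attn[None, :, :]         # broadcast gate over channels


def adaptive_weighted_loss(losses, log_vars):
    """Uncertainty-style adaptive weighting of multiple loss terms.

    Each loss L_i is scaled by exp(-s_i) plus a regularizing penalty
    s_i, where s_i is a learnable log-variance; during training the
    s_i would be optimized jointly with the network, so the weights
    adjust automatically as the losses evolve.
    """
    return sum(np.exp(-s) * L + s for L, s in zip(losses, log_vars))


# Toy usage: gate a 4-channel 8x8 feature map, then combine two losses.
fmap = np.random.default_rng(0).standard_normal((4, 8, 8))
out = spatial_attention(fmap)
print(out.shape)                                    # (4, 8, 8)
print(adaptive_weighted_loss([0.5, 2.0], [0.0, 0.0]))  # 2.5
```

Because the attention weights lie strictly in (0, 1), the gated map never exceeds the input in magnitude; the loss weighting reduces to a plain sum when all log-variances are zero.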