E2TNet: Efficient enhancement Transformer network for hyperspectral image classification

IF 3.1 | CAS Tier 3 (Physics & Astronomy) | JCR Q2 (Instruments & Instrumentation) | Infrared Physics & Technology, Vol. 142, Article 105569 | Pub Date: 2024-10-05 | DOI: 10.1016/j.infrared.2024.105569
Yunji Zhao, Wenming Bao, Xiaozhuo Xu, Yuhang Zhou
Cited by: 0

Abstract

Recently, Convolutional Transformer-based models have become popular in hyperspectral image (HSI) classification tasks and gained competitive classification performance. However, some Convolutional Transformer-based models fail to effectively mine the global correlations of coarse-grained and fine-grained features, which is adverse to recognizing the refined scale variation information of land-cover. The combination of convolution operations and multihead self-attention mechanisms also increases the computational cost, leading to low classification efficiency. In addition, shallow spectral–spatial features are directly input into the encoder, which inevitably incurs redundant spectral information. Therefore, this paper proposes an efficient enhancement Transformer network (E2TNet) for HSI classification. Specifically, this paper first designs a spectral–spatial feature fusion module to extract spectral and spatial features from HSI cubes and fuse them. Second, considering that redundant spectral information has a negative impact on classification performance, this paper designs a spectral–spatial feature weighted module to improve the feature representation of critical spectral information. Finally, to explore the global correlations of coarse-grained and fine-grained features and improve classification efficiency, an efficient multigranularity information fusion module is embedded in the encoder of E2TNet. The experiment is conducted on four benchmark hyperspectral datasets, and the experimental results demonstrate that the proposed E2TNet is better than several Convolutional Transformer-based classification models in terms of classification accuracy and classification efficiency.
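The abstract's spectral–spatial feature weighted module is described only at a high level: it suppresses redundant spectral information and emphasises critical bands before the encoder. The sketch below illustrates one common way such band re-weighting is done, in the style of squeeze-and-excitation gating over the spectral dimension. The paper does not specify its design, so the gating scheme, bottleneck sizes, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spectral_weighting(cube, w1, w2):
    """Re-weight the spectral bands of an HSI patch (hypothetical sketch).

    cube : (H, W, B) hyperspectral patch with B spectral bands.
    w1   : (B, B // r) squeeze projection (r is a reduction ratio).
    w2   : (B // r, B) excitation projection.
    """
    # Squeeze: global average pooling over the spatial dimensions
    # yields one scalar descriptor per spectral band.
    z = cube.mean(axis=(0, 1))                 # shape (B,)
    # Excitation: a small bottleneck MLP scores each band's importance.
    s = sigmoid(np.maximum(z @ w1, 0) @ w2)    # shape (B,), values in (0, 1)
    # Scale: attenuate redundant bands, keep critical ones near full weight.
    return cube * s                            # broadcasts over (H, W, B)

# Toy usage: a 9x9 patch with 16 bands and reduction ratio r = 4.
rng = np.random.default_rng(0)
cube = rng.random((9, 9, 16))
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 16)) * 0.1
out = spectral_weighting(cube, w1, w2)
print(out.shape)  # (9, 9, 16)
```

Because the gate values lie in (0, 1), each band is scaled down in proportion to its estimated redundancy while the patch's spatial layout is left untouched, which matches the module's stated goal of improving the representation of critical spectral information before the encoder.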
Journal metrics:
CiteScore: 5.70
Self-citation rate: 12.10%
Annual articles: 400
Review time: 67 days
Journal description: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.
Latest articles in this journal:
Breaking dimensional barriers in hyperspectral target detection: Atrous convolution with Gramian Angular field representations
Multi-Scale convolutional neural network for finger vein recognition
Temporal denoising and deep feature learning for enhanced defect detection in thermography using stacked denoising convolution autoencoder
Detection of black tea fermentation quality based on optimized deep neural network and hyperspectral imaging
Hyperspectral and multispectral images fusion based on pyramid swin transformer